feat: Sensing-only UI mode with Gaussian splat visualization and Rust migration ADR
- Add Python WebSocket sensing server (ws_server.py) with ESP32 UDP CSI and Windows RSSI auto-detect collectors on port 8765 - Add Three.js Gaussian splat renderer with custom GLSL shaders for real-time WiFi signal field visualization (blue→green→red gradient) - Add SensingTab component with RSSI sparkline, feature meters, and motion classification badge - Add sensing.service.js WebSocket client with reconnect and simulation fallback - Implement sensing-only mode: suppress all DensePose API calls when FastAPI backend (port 8000) is not running, clean console output - ADR-019: Document sensing-only UI architecture and data flow - ADR-020: Migrate AI/model inference to Rust with RuVector ONNX Runtime, replacing ~2.7GB Python stack with ~50MB static binary - Add ruvnet/ruvector as upstream remote for RuVector crate ecosystem Co-Authored-By: claude-flow <ruv@ruv.net>
This commit is contained in:
@@ -1,7 +1,7 @@
|
||||
# ADR-013: Feature-Level Sensing on Commodity Gear (Option 3)
|
||||
|
||||
## Status
|
||||
Proposed
|
||||
Accepted — Implemented (36/36 unit tests pass, see `v1/src/sensing/` and `v1/tests/unit/test_sensing.py`)
|
||||
|
||||
## Date
|
||||
2026-02-28
|
||||
@@ -373,6 +373,24 @@ class CommodityBackend(SensingBackend):
|
||||
- **Not a "pose estimation" demo**: This module honestly cannot do what the project name implies
|
||||
- **Lower credibility ceiling**: RSSI sensing is well-known; less impressive than CSI
|
||||
|
||||
### Implementation Status
|
||||
|
||||
The full commodity sensing pipeline is implemented in `v1/src/sensing/`:
|
||||
|
||||
| Module | File | Description |
|
||||
|--------|------|-------------|
|
||||
| RSSI Collector | `rssi_collector.py` | `LinuxWifiCollector` (live hardware) + `SimulatedCollector` (deterministic testing) with ring buffer |
|
||||
| Feature Extractor | `feature_extractor.py` | `RssiFeatureExtractor` with Hann-windowed FFT, band power (breathing 0.1-0.5 Hz, motion 0.5-3 Hz), CUSUM change-point detection |
|
||||
| Classifier | `classifier.py` | `PresenceClassifier` with ABSENT/PRESENT_STILL/ACTIVE levels, confidence scoring |
|
||||
| Backend | `backend.py` | `CommodityBackend` wiring collector → extractor → classifier, reports PRESENCE + MOTION capabilities |
|
||||
|
||||
**Test coverage**: 36 tests in `v1/tests/unit/test_sensing.py` — all passing:
|
||||
- `TestRingBuffer` (4), `TestSimulatedCollector` (5), `TestFeatureExtractor` (8), `TestCusum` (4), `TestPresenceClassifier` (7), `TestCommodityBackend` (6), `TestBandPower` (2)
|
||||
|
||||
**Dependencies**: `numpy`, `scipy` (for FFT and spectral analysis)
|
||||
|
||||
**Note**: `LinuxWifiCollector` requires a connected Linux WiFi interface (`/proc/net/wireless` or `iw`). On Windows or disconnected interfaces, use `SimulatedCollector` for development and testing.
|
||||
|
||||
## References
|
||||
|
||||
- [Youssef et al. - Challenges in Device-Free Passive Localization](https://doi.org/10.1145/1287853.1287880)
|
||||
|
||||
122
docs/adr/ADR-019-sensing-only-ui-mode.md
Normal file
122
docs/adr/ADR-019-sensing-only-ui-mode.md
Normal file
@@ -0,0 +1,122 @@
|
||||
# ADR-019: Sensing-Only UI Mode with Gaussian Splat Visualization
|
||||
|
||||
| Field | Value |
|
||||
|-------|-------|
|
||||
| **Status** | Accepted |
|
||||
| **Date** | 2026-02-28 |
|
||||
| **Deciders** | ruv |
|
||||
| **Relates to** | ADR-013 (Feature-Level Sensing), ADR-018 (ESP32 Dev Implementation) |
|
||||
|
||||
## Context
|
||||
|
||||
The WiFi-DensePose UI was originally built to require the full FastAPI DensePose backend (`localhost:8000`) for all functionality. This backend depends on heavy Python packages (PyTorch ~2GB, torchvision, OpenCV, SQLAlchemy, Redis) making it impractical for lightweight sensing-only deployments where the user simply wants to visualize live WiFi signal data from ESP32 CSI or Windows RSSI collectors.
|
||||
|
||||
A Rust port exists (`rust-port/wifi-densepose-rs`) using Axum with lighter runtime footprint (~10MB binary, ~5MB RAM), but it still requires libtorch C++ bindings and OpenBLAS for compilation—a non-trivial build.
|
||||
|
||||
Users need a way to run the UI with **only the sensing pipeline** active, without installing the full DensePose backend stack.
|
||||
|
||||
## Decision
|
||||
|
||||
Implement a **sensing-only UI mode** that:
|
||||
|
||||
1. **Decouples the sensing pipeline** from the DensePose API backend. The sensing WebSocket server (`ws_server.py` on port 8765) operates independently of the FastAPI backend (port 8000).
|
||||
|
||||
2. **Auto-detects sensing-only mode** at startup. When the DensePose backend is unreachable, the UI sets `backendDetector.sensingOnlyMode = true` and:
|
||||
- Suppresses all API requests to `localhost:8000` at the `ApiService.request()` level
|
||||
- Skips initialization of DensePose-dependent tabs (Dashboard, Hardware, Live Demo)
|
||||
- Shows a green "Sensing mode" status toast instead of error banners
|
||||
- Silences health monitoring polls
|
||||
|
||||
3. **Adds a new "Sensing" tab** with Three.js Gaussian splat visualization:
|
||||
- Custom GLSL `ShaderMaterial` rendering point-cloud splats on a 20×20 floor grid
|
||||
- Signal field splats colored by intensity (blue → green → red)
|
||||
- Body disruption blob at estimated motion position
|
||||
- Breathing ring modulation when breathing-band power detected
|
||||
- Side panel with RSSI sparkline, feature meters, and classification badge
|
||||
|
||||
4. **Python WebSocket bridge** (`v1/src/sensing/ws_server.py`) that:
|
||||
- Auto-detects ESP32 UDP CSI stream on port 5005 (ADR-018 binary frames)
|
||||
- Falls back to `WindowsWifiCollector` → `SimulatedCollector`
|
||||
- Runs `RssiFeatureExtractor` → `PresenceClassifier` pipeline
|
||||
- Broadcasts JSON sensing updates every 500ms on `ws://localhost:8765`
|
||||
|
||||
5. **Client-side fallback**: `sensing.service.js` generates simulated data when the WebSocket server is unreachable, so the visualization always works.
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
ESP32 (UDP :5005) ──┐
|
||||
├──▶ ws_server.py (:8765) ──▶ sensing.service.js ──▶ SensingTab.js
|
||||
Windows WiFi RSSI ───┘ │ │ │
|
||||
Feature extraction WebSocket client gaussian-splats.js
|
||||
+ Classification + Reconnect (Three.js ShaderMaterial)
|
||||
+ Sim fallback
|
||||
```
|
||||
|
||||
### Data flow
|
||||
|
||||
| Source | Collector | Feature Extraction | Output |
|
||||
|--------|-----------|-------------------|--------|
|
||||
| ESP32 CSI (ADR-018) | `Esp32UdpCollector` (UDP :5005) | Amplitude mean → pseudo-RSSI → `RssiFeatureExtractor` | `sensing_update` JSON |
|
||||
| Windows WiFi | `WindowsWifiCollector` (netsh) | RSSI + signal% → `RssiFeatureExtractor` | `sensing_update` JSON |
|
||||
| Simulated | `SimulatedCollector` | Synthetic RSSI patterns | `sensing_update` JSON |
|
||||
|
||||
### Sensing update JSON schema
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "sensing_update",
|
||||
"timestamp": 1234567890.123,
|
||||
"source": "esp32",
|
||||
"nodes": [{ "node_id": 1, "rssi_dbm": -39, "position": [2,0,1.5], "amplitude": [...], "subcarrier_count": 56 }],
|
||||
"features": { "mean_rssi": -39.0, "variance": 2.34, "motion_band_power": 0.45, ... },
|
||||
"classification": { "motion_level": "active", "presence": true, "confidence": 0.87 },
|
||||
"signal_field": { "grid_size": [20,1,20], "values": [...] }
|
||||
}
|
||||
```
|
||||
|
||||
## Files
|
||||
|
||||
### Created
|
||||
| File | Purpose |
|
||||
|------|---------|
|
||||
| `v1/src/sensing/ws_server.py` | Python asyncio WebSocket server with auto-detect collectors |
|
||||
| `ui/components/SensingTab.js` | Sensing tab UI with Three.js integration |
|
||||
| `ui/components/gaussian-splats.js` | Custom GLSL Gaussian splat renderer |
|
||||
| `ui/services/sensing.service.js` | WebSocket client with reconnect + simulation fallback |
|
||||
|
||||
### Modified
|
||||
| File | Change |
|
||||
|------|--------|
|
||||
| `ui/index.html` | Added Sensing nav tab button and content section |
|
||||
| `ui/app.js` | Sensing-only mode detection, conditional tab init |
|
||||
| `ui/style.css` | Sensing tab layout and component styles |
|
||||
| `ui/config/api.config.js` | `AUTO_DETECT: false` (sensing uses own WS) |
|
||||
| `ui/services/api.service.js` | Short-circuit requests in sensing-only mode |
|
||||
| `ui/services/health.service.js` | Skip polling when backend unreachable |
|
||||
| `ui/components/DashboardTab.js` | Graceful failure in sensing-only mode |
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- UI works with zero heavy dependencies—only `pip install websockets` (+ numpy/scipy already installed)
|
||||
- ESP32 CSI data flows end-to-end without PyTorch, OpenCV, or database
|
||||
- Existing DensePose tabs still work when the full backend is running
|
||||
- Clean console output—no `ERR_CONNECTION_REFUSED` spam in sensing-only mode
|
||||
|
||||
### Negative
|
||||
- Two separate WebSocket endpoints: `:8765` (sensing) and `:8000/api/v1/stream/pose` (DensePose)
|
||||
- Pose estimation, zone occupancy, and historical data features unavailable in sensing-only mode
|
||||
- Client-side simulation fallback may mislead users if they don't notice the "Simulated" badge
|
||||
|
||||
### Neutral
|
||||
- Rust Axum backend remains a future option for a unified lightweight server
|
||||
- The sensing pipeline reuses the existing `RssiFeatureExtractor` and `PresenceClassifier` classes unchanged
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
1. **Install minimal FastAPI** (`pip install fastapi uvicorn pydantic`): Starts the server but pose endpoints return errors without PyTorch.
|
||||
2. **Build Rust backend**: Single binary, but requires libtorch + OpenBLAS build toolchain.
|
||||
3. **Merge sensing into FastAPI**: Would require FastAPI installed even for sensing-only use.
|
||||
|
||||
Option 1 was rejected because it still shows broken tabs. The chosen approach cleanly separates concerns.
|
||||
157
docs/adr/ADR-020-rust-ruvector-ai-model-migration.md
Normal file
157
docs/adr/ADR-020-rust-ruvector-ai-model-migration.md
Normal file
@@ -0,0 +1,157 @@
|
||||
# ADR-020: Migrate AI/Model Inference to Rust with RuVector and ONNX Runtime
|
||||
|
||||
| Field | Value |
|
||||
|-------|-------|
|
||||
| **Status** | Accepted |
|
||||
| **Date** | 2026-02-28 |
|
||||
| **Deciders** | ruv |
|
||||
| **Relates to** | ADR-016 (RuVector Integration), ADR-017 (RuVector-Signal-MAT), ADR-019 (Sensing-Only UI) |
|
||||
|
||||
## Context
|
||||
|
||||
The current Python DensePose backend requires ~2GB+ of dependencies:
|
||||
|
||||
| Python Dependency | Size | Purpose |
|
||||
|-------------------|------|---------|
|
||||
| PyTorch | ~2.0 GB | Neural network inference |
|
||||
| torchvision | ~500 MB | Model loading, transforms |
|
||||
| OpenCV | ~100 MB | Image processing |
|
||||
| SQLAlchemy + asyncpg | ~20 MB | Database |
|
||||
| scikit-learn | ~50 MB | Classification |
|
||||
| **Total** | **~2.7 GB** | |
|
||||
|
||||
This makes the DensePose backend impractical for edge deployments, CI pipelines, and developer laptops where users only need WiFi sensing + pose estimation.
|
||||
|
||||
Meanwhile, the Rust port at `rust-port/wifi-densepose-rs/` already has:
|
||||
|
||||
- **12 workspace crates** covering core, signal, nn, api, db, config, hardware, wasm, cli, mat, train
|
||||
- **5 RuVector crates** (v2.0.4, published on crates.io) integrated into signal, mat, and train crates
|
||||
- **3 NN backends**: ONNX Runtime (default), tch (PyTorch C++), Candle (pure Rust)
|
||||
- **Axum web framework** with WebSocket support in the MAT crate
|
||||
- **Signal processing pipeline**: CSI processor, BVP, Fresnel geometry, spectrogram, subcarrier selection, motion detection, Hampel filter, phase sanitizer
|
||||
|
||||
## Decision
|
||||
|
||||
Adopt the Rust workspace as the **primary backend** for AI/model inference and signal processing, replacing the Python FastAPI stack for production deployments.
|
||||
|
||||
### Phase 1: ONNX Runtime Default (No libtorch)
|
||||
|
||||
Use the `wifi-densepose-nn` crate with `default-features = ["onnx"]` only. This avoids the libtorch C++ dependency entirely.
|
||||
|
||||
| Component | Rust Crate | Replaces Python |
|
||||
|-----------|-----------|-----------------|
|
||||
| CSI processing | `wifi-densepose-signal::csi_processor` | `v1/src/sensing/feature_extractor.py` |
|
||||
| Motion detection | `wifi-densepose-signal::motion` | `v1/src/sensing/classifier.py` |
|
||||
| BVP extraction | `wifi-densepose-signal::bvp` | N/A (new capability) |
|
||||
| Fresnel geometry | `wifi-densepose-signal::fresnel` | N/A (new capability) |
|
||||
| Subcarrier selection | `wifi-densepose-signal::subcarrier_selection` | N/A (new capability) |
|
||||
| Spectrogram | `wifi-densepose-signal::spectrogram` | N/A (new capability) |
|
||||
| Pose inference | `wifi-densepose-nn::onnx` | PyTorch + torchvision |
|
||||
| DensePose mapping | `wifi-densepose-nn::densepose` | Python DensePose |
|
||||
| REST API | `wifi-densepose-mat::api` (Axum) | FastAPI |
|
||||
| WebSocket stream | `wifi-densepose-mat::api::websocket` | `ws_server.py` |
|
||||
| Survivor detection | `wifi-densepose-mat::detection` | N/A (new capability) |
|
||||
| Vital signs | `wifi-densepose-mat::ml` | N/A (new capability) |
|
||||
|
||||
### Phase 2: RuVector Signal Intelligence
|
||||
|
||||
The 5 RuVector crates provide subpolynomial algorithms already wired into the Rust signal pipeline:
|
||||
|
||||
| Crate | Algorithm | Use in Pipeline |
|
||||
|-------|-----------|-----------------|
|
||||
| `ruvector-mincut` | Subpolynomial min-cut | Dynamic subcarrier partitioning (sensitive vs insensitive) |
|
||||
| `ruvector-attn-mincut` | Attention-gated min-cut | Noise-suppressed spectrogram generation |
|
||||
| `ruvector-attention` | Sensitivity-weighted attention | Body velocity profile extraction |
|
||||
| `ruvector-solver` | Sparse Fresnel solver | TX-body-RX distance estimation |
|
||||
| `ruvector-temporal-tensor` | Compressed temporal buffers | Breathing + heartbeat spectrogram storage |
|
||||
|
||||
These replace the Python `RssiFeatureExtractor` with hardware-aware, subcarrier-level feature extraction.
|
||||
|
||||
### Phase 3: Unified Axum Server
|
||||
|
||||
Replace both the Python FastAPI backend (port 8000) and the Python sensing WebSocket (port 8765) with a single Rust Axum server:
|
||||
|
||||
```
|
||||
ESP32 (UDP :5005) ──▶ Rust Axum server (:8000) ──▶ UI (browser)
|
||||
├── /health/* (health checks)
|
||||
├── /api/v1/pose/* (pose estimation)
|
||||
├── /api/v1/stream/* (WebSocket pose stream)
|
||||
├── /ws/sensing (sensing WebSocket — replaces :8765)
|
||||
└── /ws/mat/stream (MAT domain events)
|
||||
```
|
||||
|
||||
### Build Configuration
|
||||
|
||||
```toml
|
||||
# Lightweight build — no libtorch, no OpenBLAS
|
||||
cargo build --release -p wifi-densepose-mat --no-default-features --features "std,api,onnx"
|
||||
|
||||
# Full build with all backends
|
||||
cargo build --release --features "all-backends"
|
||||
```
|
||||
|
||||
### Dependency Comparison
|
||||
|
||||
| | Python Backend | Rust Backend (ONNX only) |
|
||||
|---|---|---|
|
||||
| Install size | ~2.7 GB | ~50 MB binary |
|
||||
| Runtime memory | ~500 MB | ~20 MB |
|
||||
| Startup time | 3-5s | <100ms |
|
||||
| Dependencies | 30+ pip packages | Single static binary |
|
||||
| GPU support | CUDA via PyTorch | CUDA via ONNX Runtime |
|
||||
| Model format | .pt/.pth (PyTorch) | .onnx (portable) |
|
||||
| Cross-compile | Difficult | `cargo build --target` |
|
||||
| WASM target | No | Yes (`wifi-densepose-wasm`) |
|
||||
|
||||
### Model Conversion
|
||||
|
||||
Export existing PyTorch models to ONNX for the Rust backend:
|
||||
|
||||
```python
|
||||
# One-time conversion (Python)
|
||||
import torch
|
||||
model = torch.load("model.pth")
|
||||
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)
|
||||
```
|
||||
|
||||
The `wifi-densepose-nn::onnx` module loads `.onnx` files directly.
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- Single ~50MB static binary replaces ~2.7GB Python environment
|
||||
- ~20MB runtime memory vs ~500MB
|
||||
- Sub-100ms startup vs 3-5 seconds
|
||||
- Single port serves all endpoints (API, WebSocket sensing, WebSocket pose)
|
||||
- RuVector subpolynomial algorithms run natively (no FFI overhead)
|
||||
- WASM build target enables browser-side inference
|
||||
- Cross-compilation for ARM (Raspberry Pi), ESP32-S3, etc.
|
||||
|
||||
### Negative
|
||||
- ONNX model conversion required (one-time step per model)
|
||||
- Developers need Rust toolchain for backend changes
|
||||
- Python sensing pipeline (`ws_server.py`) remains useful for rapid prototyping
|
||||
- `ndarray-linalg` requires OpenBLAS or system LAPACK for some signal crates
|
||||
|
||||
### Migration Path
|
||||
1. Keep Python `ws_server.py` as fallback for development/prototyping
|
||||
2. Build Rust binary with `cargo build --release -p wifi-densepose-mat`
|
||||
3. UI detects which backend is running and adapts (existing `sensingOnlyMode` logic)
|
||||
4. Deprecate Python backend once Rust API reaches feature parity
|
||||
|
||||
## Verification
|
||||
|
||||
```bash
|
||||
# Build the Rust workspace (ONNX-only, no libtorch)
|
||||
cd rust-port/wifi-densepose-rs
|
||||
cargo check --workspace 2>&1
|
||||
|
||||
# Build release binary
|
||||
cargo build --release -p wifi-densepose-mat --no-default-features --features "std,api"
|
||||
|
||||
# Run tests
|
||||
cargo test --workspace
|
||||
|
||||
# Binary size
|
||||
ls -lh target/release/wifi-densepose-mat
|
||||
```
|
||||
44
ui/app.js
44
ui/app.js
@@ -4,6 +4,7 @@ import { TabManager } from './components/TabManager.js';
|
||||
import { DashboardTab } from './components/DashboardTab.js';
|
||||
import { HardwareTab } from './components/HardwareTab.js';
|
||||
import { LiveDemoTab } from './components/LiveDemoTab.js';
|
||||
import { SensingTab } from './components/SensingTab.js';
|
||||
import { apiService } from './services/api.service.js';
|
||||
import { wsService } from './services/websocket.service.js';
|
||||
import { healthService } from './services/health.service.js';
|
||||
@@ -65,16 +66,17 @@ class WiFiDensePoseApp {
|
||||
this.showBackendStatus('Mock server active - testing mode', 'warning');
|
||||
} else {
|
||||
console.log('🔌 Initializing with real backend');
|
||||
|
||||
|
||||
// Verify backend is actually working
|
||||
try {
|
||||
const health = await healthService.checkLiveness();
|
||||
console.log('✅ Backend is available and responding:', health);
|
||||
this.showBackendStatus('Connected to real backend', 'success');
|
||||
} catch (error) {
|
||||
console.error('❌ Backend check failed:', error);
|
||||
this.showBackendStatus('Backend connection failed', 'error');
|
||||
// Don't throw - let the app continue and retry later
|
||||
// DensePose API backend not running — sensing-only mode
|
||||
backendDetector.sensingOnlyMode = true;
|
||||
console.log('ℹ️ DensePose API not running — sensing-only mode via WebSocket on :8765');
|
||||
this.showBackendStatus('Sensing mode — live WiFi data via WebSocket', 'success');
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -101,33 +103,44 @@ class WiFiDensePoseApp {
|
||||
|
||||
// Initialize individual tab components
|
||||
initializeTabComponents() {
|
||||
// Skip DensePose-dependent tabs in sensing-only mode
|
||||
const sensingOnly = backendDetector.sensingOnlyMode;
|
||||
|
||||
// Dashboard tab
|
||||
const dashboardContainer = document.getElementById('dashboard');
|
||||
if (dashboardContainer) {
|
||||
this.components.dashboard = new DashboardTab(dashboardContainer);
|
||||
this.components.dashboard.init().catch(error => {
|
||||
console.error('Failed to initialize dashboard:', error);
|
||||
});
|
||||
if (!sensingOnly) {
|
||||
this.components.dashboard.init().catch(error => {
|
||||
console.error('Failed to initialize dashboard:', error);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Hardware tab
|
||||
const hardwareContainer = document.getElementById('hardware');
|
||||
if (hardwareContainer) {
|
||||
this.components.hardware = new HardwareTab(hardwareContainer);
|
||||
this.components.hardware.init();
|
||||
if (!sensingOnly) this.components.hardware.init();
|
||||
}
|
||||
|
||||
// Live demo tab
|
||||
const demoContainer = document.getElementById('demo');
|
||||
if (demoContainer) {
|
||||
this.components.demo = new LiveDemoTab(demoContainer);
|
||||
this.components.demo.init();
|
||||
if (!sensingOnly) this.components.demo.init();
|
||||
}
|
||||
|
||||
// Sensing tab
|
||||
const sensingContainer = document.getElementById('sensing');
|
||||
if (sensingContainer) {
|
||||
this.components.sensing = new SensingTab(sensingContainer);
|
||||
}
|
||||
|
||||
// Architecture tab - static content, no component needed
|
||||
|
||||
|
||||
// Performance tab - static content, no component needed
|
||||
|
||||
|
||||
// Applications tab - static content, no component needed
|
||||
}
|
||||
|
||||
@@ -153,6 +166,15 @@ class WiFiDensePoseApp {
|
||||
case 'demo':
|
||||
// Demo starts manually
|
||||
break;
|
||||
|
||||
case 'sensing':
|
||||
// Lazy-init sensing tab on first visit
|
||||
if (this.components.sensing && !this.components.sensing.splatRenderer) {
|
||||
this.components.sensing.init().catch(error => {
|
||||
console.error('Failed to initialize sensing tab:', error);
|
||||
});
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -51,8 +51,8 @@ export class DashboardTab {
|
||||
this.updateStats(stats);
|
||||
|
||||
} catch (error) {
|
||||
console.error('Failed to load dashboard data:', error);
|
||||
this.showError('Failed to load dashboard data');
|
||||
// DensePose API may not be running (sensing-only mode) — fail silently
|
||||
console.log('Dashboard: DensePose API not available (sensing-only mode)');
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
302
ui/components/SensingTab.js
Normal file
302
ui/components/SensingTab.js
Normal file
@@ -0,0 +1,302 @@
|
||||
/**
|
||||
* SensingTab — Live WiFi Sensing Visualization
|
||||
*
|
||||
* Connects to the sensing WebSocket service and renders:
|
||||
* 1. A 3D Gaussian-splat signal field (via gaussian-splats.js)
|
||||
* 2. An overlay HUD with real-time metrics (RSSI, variance, bands, classification)
|
||||
*/
|
||||
|
||||
import { sensingService } from '../services/sensing.service.js';
|
||||
import { GaussianSplatRenderer } from './gaussian-splats.js';
|
||||
|
||||
export class SensingTab {
|
||||
/** @param {HTMLElement} container - the #sensing section element */
|
||||
constructor(container) {
|
||||
this.container = container;
|
||||
this.splatRenderer = null;
|
||||
this._unsubData = null;
|
||||
this._unsubState = null;
|
||||
this._resizeObserver = null;
|
||||
this._threeLoaded = false;
|
||||
}
|
||||
|
||||
async init() {
|
||||
this._buildDOM();
|
||||
await this._loadThree();
|
||||
this._initSplatRenderer();
|
||||
this._connectService();
|
||||
this._setupResize();
|
||||
}
|
||||
|
||||
// ---- DOM construction --------------------------------------------------
|
||||
|
||||
_buildDOM() {
|
||||
this.container.innerHTML = `
|
||||
<h2>Live WiFi Sensing</h2>
|
||||
<div class="sensing-layout">
|
||||
<!-- 3D viewport -->
|
||||
<div class="sensing-viewport" id="sensingViewport">
|
||||
<div class="sensing-loading">Loading 3D engine...</div>
|
||||
</div>
|
||||
|
||||
<!-- Side panel -->
|
||||
<div class="sensing-panel">
|
||||
<!-- Connection -->
|
||||
<div class="sensing-card">
|
||||
<div class="sensing-card-title">Connection</div>
|
||||
<div class="sensing-connection">
|
||||
<span class="sensing-dot" id="sensingDot"></span>
|
||||
<span id="sensingState">Connecting...</span>
|
||||
<span class="sensing-source" id="sensingSource"></span>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- RSSI -->
|
||||
<div class="sensing-card">
|
||||
<div class="sensing-card-title">RSSI</div>
|
||||
<div class="sensing-big-value" id="sensingRssi">-- dBm</div>
|
||||
<canvas id="sensingSparkline" width="200" height="40"></canvas>
|
||||
</div>
|
||||
|
||||
<!-- Signal Features -->
|
||||
<div class="sensing-card">
|
||||
<div class="sensing-card-title">Signal Features</div>
|
||||
<div class="sensing-meters">
|
||||
<div class="sensing-meter">
|
||||
<label>Variance</label>
|
||||
<div class="sensing-bar"><div class="sensing-bar-fill" id="barVariance"></div></div>
|
||||
<span class="sensing-meter-val" id="valVariance">0</span>
|
||||
</div>
|
||||
<div class="sensing-meter">
|
||||
<label>Motion Band</label>
|
||||
<div class="sensing-bar"><div class="sensing-bar-fill motion" id="barMotion"></div></div>
|
||||
<span class="sensing-meter-val" id="valMotion">0</span>
|
||||
</div>
|
||||
<div class="sensing-meter">
|
||||
<label>Breathing Band</label>
|
||||
<div class="sensing-bar"><div class="sensing-bar-fill breath" id="barBreath"></div></div>
|
||||
<span class="sensing-meter-val" id="valBreath">0</span>
|
||||
</div>
|
||||
<div class="sensing-meter">
|
||||
<label>Spectral Power</label>
|
||||
<div class="sensing-bar"><div class="sensing-bar-fill spectral" id="barSpectral"></div></div>
|
||||
<span class="sensing-meter-val" id="valSpectral">0</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Classification -->
|
||||
<div class="sensing-card">
|
||||
<div class="sensing-card-title">Classification</div>
|
||||
<div class="sensing-classification" id="sensingClassification">
|
||||
<div class="sensing-class-label" id="classLabel">ABSENT</div>
|
||||
<div class="sensing-confidence">
|
||||
<label>Confidence</label>
|
||||
<div class="sensing-bar"><div class="sensing-bar-fill confidence" id="barConfidence"></div></div>
|
||||
<span class="sensing-meter-val" id="valConfidence">0%</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Extra info -->
|
||||
<div class="sensing-card">
|
||||
<div class="sensing-card-title">Details</div>
|
||||
<div class="sensing-details">
|
||||
<div class="sensing-detail-row">
|
||||
<span>Dominant Freq</span><span id="valDomFreq">0 Hz</span>
|
||||
</div>
|
||||
<div class="sensing-detail-row">
|
||||
<span>Change Points</span><span id="valChangePoints">0</span>
|
||||
</div>
|
||||
<div class="sensing-detail-row">
|
||||
<span>Sample Rate</span><span id="valSampleRate">--</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
`;
|
||||
}
|
||||
|
||||
// ---- Three.js loading --------------------------------------------------
|
||||
|
||||
async _loadThree() {
|
||||
if (window.THREE) {
|
||||
this._threeLoaded = true;
|
||||
return;
|
||||
}
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const script = document.createElement('script');
|
||||
script.src = 'https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js';
|
||||
script.onload = () => {
|
||||
this._threeLoaded = true;
|
||||
resolve();
|
||||
};
|
||||
script.onerror = () => reject(new Error('Failed to load Three.js'));
|
||||
document.head.appendChild(script);
|
||||
});
|
||||
}
|
||||
|
||||
// ---- Splat renderer ----------------------------------------------------
|
||||
|
||||
_initSplatRenderer() {
|
||||
const viewport = this.container.querySelector('#sensingViewport');
|
||||
if (!viewport) return;
|
||||
|
||||
// Remove loading message
|
||||
viewport.innerHTML = '';
|
||||
|
||||
try {
|
||||
this.splatRenderer = new GaussianSplatRenderer(viewport, {
|
||||
width: viewport.clientWidth,
|
||||
height: viewport.clientHeight || 500,
|
||||
});
|
||||
} catch (e) {
|
||||
console.error('[SensingTab] Failed to init splat renderer:', e);
|
||||
viewport.innerHTML = '<div class="sensing-loading">3D rendering unavailable</div>';
|
||||
}
|
||||
}
|
||||
|
||||
// ---- Service connection ------------------------------------------------
|
||||
|
||||
_connectService() {
|
||||
sensingService.start();
|
||||
|
||||
this._unsubData = sensingService.onData((data) => this._onSensingData(data));
|
||||
this._unsubState = sensingService.onStateChange((state) => this._onStateChange(state));
|
||||
}
|
||||
|
||||
_onSensingData(data) {
|
||||
// Update 3D view
|
||||
if (this.splatRenderer) {
|
||||
this.splatRenderer.update(data);
|
||||
}
|
||||
|
||||
// Update HUD
|
||||
this._updateHUD(data);
|
||||
}
|
||||
|
||||
_onStateChange(state) {
|
||||
const dot = this.container.querySelector('#sensingDot');
|
||||
const text = this.container.querySelector('#sensingState');
|
||||
if (!dot || !text) return;
|
||||
|
||||
const labels = {
|
||||
disconnected: 'Disconnected',
|
||||
connecting: 'Connecting...',
|
||||
connected: 'Connected',
|
||||
simulated: 'Simulated',
|
||||
};
|
||||
|
||||
dot.className = 'sensing-dot ' + state;
|
||||
text.textContent = labels[state] || state;
|
||||
}
|
||||
|
||||
// ---- HUD update --------------------------------------------------------
|
||||
|
||||
_updateHUD(data) {
|
||||
const f = data.features || {};
|
||||
const c = data.classification || {};
|
||||
|
||||
// RSSI
|
||||
this._setText('sensingRssi', `${(f.mean_rssi || -80).toFixed(1)} dBm`);
|
||||
this._setText('sensingSource', data.source || '');
|
||||
|
||||
// Bars (scale to 0-100%)
|
||||
this._setBar('barVariance', f.variance, 10, 'valVariance', f.variance);
|
||||
this._setBar('barMotion', f.motion_band_power, 0.5, 'valMotion', f.motion_band_power);
|
||||
this._setBar('barBreath', f.breathing_band_power, 0.3, 'valBreath', f.breathing_band_power);
|
||||
this._setBar('barSpectral', f.spectral_power, 2.0, 'valSpectral', f.spectral_power);
|
||||
|
||||
// Classification
|
||||
const label = this.container.querySelector('#classLabel');
|
||||
if (label) {
|
||||
const level = (c.motion_level || 'absent').toUpperCase();
|
||||
label.textContent = level;
|
||||
label.className = 'sensing-class-label ' + (c.motion_level || 'absent');
|
||||
}
|
||||
|
||||
const confPct = ((c.confidence || 0) * 100).toFixed(0);
|
||||
this._setBar('barConfidence', c.confidence, 1.0, 'valConfidence', confPct + '%');
|
||||
|
||||
// Details
|
||||
this._setText('valDomFreq', (f.dominant_freq_hz || 0).toFixed(3) + ' Hz');
|
||||
this._setText('valChangePoints', String(f.change_points || 0));
|
||||
this._setText('valSampleRate', data.source === 'simulated' ? 'sim' : 'live');
|
||||
|
||||
// Sparkline
|
||||
this._drawSparkline();
|
||||
}
|
||||
|
||||
_setText(id, text) {
|
||||
const el = this.container.querySelector('#' + id);
|
||||
if (el) el.textContent = text;
|
||||
}
|
||||
|
||||
_setBar(barId, value, maxVal, valId, displayVal) {
|
||||
const bar = this.container.querySelector('#' + barId);
|
||||
if (bar) {
|
||||
const pct = Math.min(100, Math.max(0, ((value || 0) / maxVal) * 100));
|
||||
bar.style.width = pct + '%';
|
||||
}
|
||||
if (valId && displayVal != null) {
|
||||
const el = this.container.querySelector('#' + valId);
|
||||
if (el) el.textContent = typeof displayVal === 'number' ? displayVal.toFixed(3) : displayVal;
|
||||
}
|
||||
}
|
||||
|
||||
_drawSparkline() {
|
||||
const canvas = this.container.querySelector('#sensingSparkline');
|
||||
if (!canvas) return;
|
||||
const ctx = canvas.getContext('2d');
|
||||
const history = sensingService.getRssiHistory();
|
||||
if (history.length < 2) return;
|
||||
|
||||
const w = canvas.width;
|
||||
const h = canvas.height;
|
||||
ctx.clearRect(0, 0, w, h);
|
||||
|
||||
const min = Math.min(...history) - 2;
|
||||
const max = Math.max(...history) + 2;
|
||||
const range = max - min || 1;
|
||||
|
||||
ctx.beginPath();
|
||||
ctx.strokeStyle = '#32b8c6';
|
||||
ctx.lineWidth = 1.5;
|
||||
|
||||
for (let i = 0; i < history.length; i++) {
|
||||
const x = (i / (history.length - 1)) * w;
|
||||
const y = h - ((history[i] - min) / range) * h;
|
||||
if (i === 0) ctx.moveTo(x, y);
|
||||
else ctx.lineTo(x, y);
|
||||
}
|
||||
ctx.stroke();
|
||||
}
|
||||
|
||||
// ---- Resize ------------------------------------------------------------
|
||||
|
||||
_setupResize() {
|
||||
const viewport = this.container.querySelector('#sensingViewport');
|
||||
if (!viewport || !window.ResizeObserver) return;
|
||||
|
||||
this._resizeObserver = new ResizeObserver((entries) => {
|
||||
for (const entry of entries) {
|
||||
if (this.splatRenderer) {
|
||||
this.splatRenderer.resize(entry.contentRect.width, entry.contentRect.height);
|
||||
}
|
||||
}
|
||||
});
|
||||
this._resizeObserver.observe(viewport);
|
||||
}
|
||||
|
||||
// ---- Cleanup -----------------------------------------------------------
|
||||
|
||||
dispose() {
|
||||
if (this._unsubData) this._unsubData();
|
||||
if (this._unsubState) this._unsubState();
|
||||
if (this._resizeObserver) this._resizeObserver.disconnect();
|
||||
if (this.splatRenderer) this.splatRenderer.dispose();
|
||||
sensingService.stop();
|
||||
}
|
||||
}
|
||||
412
ui/components/gaussian-splats.js
Normal file
412
ui/components/gaussian-splats.js
Normal file
@@ -0,0 +1,412 @@
|
||||
/**
|
||||
* Gaussian Splat Renderer for WiFi Sensing Visualization
|
||||
*
|
||||
* Renders a 3D signal field using Three.js Points with custom ShaderMaterial.
|
||||
* Each "splat" is a screen-space disc whose size, color and opacity are driven
|
||||
* by the sensing data:
|
||||
* - Size : signal variance / disruption magnitude
|
||||
* - Color : blue (quiet) -> green (presence) -> red (active motion)
|
||||
* - Opacity: classification confidence
|
||||
*/
|
||||
|
||||
// Use global THREE from CDN (loaded in SensingTab)
|
||||
const getThree = () => window.THREE;
|
||||
|
||||
// ---- Custom Splat Shaders ------------------------------------------------
|
||||
|
||||
const SPLAT_VERTEX = `
|
||||
attribute float splatSize;
|
||||
attribute vec3 splatColor;
|
||||
attribute float splatOpacity;
|
||||
|
||||
varying vec3 vColor;
|
||||
varying float vOpacity;
|
||||
|
||||
void main() {
|
||||
vColor = splatColor;
|
||||
vOpacity = splatOpacity;
|
||||
|
||||
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
|
||||
gl_PointSize = splatSize * (300.0 / -mvPosition.z);
|
||||
gl_Position = projectionMatrix * mvPosition;
|
||||
}
|
||||
`;
|
||||
|
||||
const SPLAT_FRAGMENT = `
|
||||
varying vec3 vColor;
|
||||
varying float vOpacity;
|
||||
|
||||
void main() {
|
||||
// Circular soft-edge disc
|
||||
float dist = length(gl_PointCoord - vec2(0.5));
|
||||
if (dist > 0.5) discard;
|
||||
float alpha = smoothstep(0.5, 0.2, dist) * vOpacity;
|
||||
gl_FragColor = vec4(vColor, alpha);
|
||||
}
|
||||
`;
|
||||
|
||||
// ---- Color helpers -------------------------------------------------------
|
||||
|
||||
/** Map a scalar 0-1 to blue -> green -> red gradient */
|
||||
function valueToColor(v) {
|
||||
const clamped = Math.max(0, Math.min(1, v));
|
||||
// blue(0) -> cyan(0.25) -> green(0.5) -> yellow(0.75) -> red(1)
|
||||
let r, g, b;
|
||||
if (clamped < 0.5) {
|
||||
const t = clamped * 2;
|
||||
r = 0;
|
||||
g = t;
|
||||
b = 1 - t;
|
||||
} else {
|
||||
const t = (clamped - 0.5) * 2;
|
||||
r = t;
|
||||
g = 1 - t;
|
||||
b = 0;
|
||||
}
|
||||
return [r, g, b];
|
||||
}
|
||||
|
||||
// ---- GaussianSplatRenderer -----------------------------------------------
|
||||
|
||||
export class GaussianSplatRenderer {
|
||||
/**
|
||||
* @param {HTMLElement} container - DOM element to attach the renderer to
|
||||
* @param {object} [opts]
|
||||
* @param {number} [opts.width] - canvas width (default container width)
|
||||
* @param {number} [opts.height] - canvas height (default 500)
|
||||
*/
|
||||
constructor(container, opts = {}) {
|
||||
const THREE = getThree();
|
||||
if (!THREE) throw new Error('Three.js not loaded');
|
||||
|
||||
this.container = container;
|
||||
this.width = opts.width || container.clientWidth || 800;
|
||||
this.height = opts.height || 500;
|
||||
|
||||
// Scene
|
||||
this.scene = new THREE.Scene();
|
||||
this.scene.background = new THREE.Color(0x0a0a12);
|
||||
|
||||
// Camera — perspective looking down at the room
|
||||
this.camera = new THREE.PerspectiveCamera(55, this.width / this.height, 0.1, 200);
|
||||
this.camera.position.set(0, 14, 14);
|
||||
this.camera.lookAt(0, 0, 0);
|
||||
|
||||
// Renderer
|
||||
this.renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
|
||||
this.renderer.setSize(this.width, this.height);
|
||||
this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
|
||||
container.appendChild(this.renderer.domElement);
|
||||
|
||||
// Grid & room
|
||||
this._createRoom(THREE);
|
||||
|
||||
// Signal field splats (20x20 = 400 points on the floor plane)
|
||||
this.gridSize = 20;
|
||||
this._createFieldSplats(THREE);
|
||||
|
||||
// Node markers (ESP32 / router positions)
|
||||
this._createNodeMarkers(THREE);
|
||||
|
||||
// Body disruption blob
|
||||
this._createBodyBlob(THREE);
|
||||
|
||||
// Simple orbit-like mouse rotation
|
||||
this._setupMouseControls();
|
||||
|
||||
// Animation state
|
||||
this._animFrame = null;
|
||||
this._lastData = null;
|
||||
|
||||
// Start render loop
|
||||
this._animate();
|
||||
}
|
||||
|
||||
// ---- Scene setup -------------------------------------------------------
|
||||
|
||||
_createRoom(THREE) {
|
||||
// Floor grid
|
||||
const grid = new THREE.GridHelper(20, 20, 0x1a3a4a, 0x0d1f28);
|
||||
this.scene.add(grid);
|
||||
|
||||
// Room boundary wireframe
|
||||
const boxGeo = new THREE.BoxGeometry(20, 6, 20);
|
||||
const edges = new THREE.EdgesGeometry(boxGeo);
|
||||
const line = new THREE.LineSegments(
|
||||
edges,
|
||||
new THREE.LineBasicMaterial({ color: 0x1a4a5a, opacity: 0.3, transparent: true })
|
||||
);
|
||||
line.position.y = 3;
|
||||
this.scene.add(line);
|
||||
}
|
||||
|
||||
_createFieldSplats(THREE) {
|
||||
const count = this.gridSize * this.gridSize;
|
||||
|
||||
const positions = new Float32Array(count * 3);
|
||||
const sizes = new Float32Array(count);
|
||||
const colors = new Float32Array(count * 3);
|
||||
const opacities = new Float32Array(count);
|
||||
|
||||
// Lay splats on the floor plane (y = 0.05 to sit just above grid)
|
||||
for (let iz = 0; iz < this.gridSize; iz++) {
|
||||
for (let ix = 0; ix < this.gridSize; ix++) {
|
||||
const idx = iz * this.gridSize + ix;
|
||||
positions[idx * 3 + 0] = (ix - this.gridSize / 2) + 0.5; // x
|
||||
positions[idx * 3 + 1] = 0.05; // y
|
||||
positions[idx * 3 + 2] = (iz - this.gridSize / 2) + 0.5; // z
|
||||
|
||||
sizes[idx] = 1.5;
|
||||
colors[idx * 3] = 0.1;
|
||||
colors[idx * 3 + 1] = 0.2;
|
||||
colors[idx * 3 + 2] = 0.6;
|
||||
opacities[idx] = 0.15;
|
||||
}
|
||||
}
|
||||
|
||||
const geo = new THREE.BufferGeometry();
|
||||
geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
|
||||
geo.setAttribute('splatSize', new THREE.BufferAttribute(sizes, 1));
|
||||
geo.setAttribute('splatColor', new THREE.BufferAttribute(colors, 3));
|
||||
geo.setAttribute('splatOpacity',new THREE.BufferAttribute(opacities, 1));
|
||||
|
||||
const mat = new THREE.ShaderMaterial({
|
||||
vertexShader: SPLAT_VERTEX,
|
||||
fragmentShader: SPLAT_FRAGMENT,
|
||||
transparent: true,
|
||||
depthWrite: false,
|
||||
blending: THREE.AdditiveBlending,
|
||||
});
|
||||
|
||||
this.fieldPoints = new THREE.Points(geo, mat);
|
||||
this.scene.add(this.fieldPoints);
|
||||
}
|
||||
|
||||
_createNodeMarkers(THREE) {
|
||||
// Router at center — green sphere
|
||||
const routerGeo = new THREE.SphereGeometry(0.3, 16, 16);
|
||||
const routerMat = new THREE.MeshBasicMaterial({ color: 0x00ff88, transparent: true, opacity: 0.8 });
|
||||
this.routerMarker = new THREE.Mesh(routerGeo, routerMat);
|
||||
this.routerMarker.position.set(0, 0.5, 0);
|
||||
this.scene.add(this.routerMarker);
|
||||
|
||||
// ESP32 node — cyan sphere (default position, updated from data)
|
||||
const nodeGeo = new THREE.SphereGeometry(0.25, 16, 16);
|
||||
const nodeMat = new THREE.MeshBasicMaterial({ color: 0x00ccff, transparent: true, opacity: 0.8 });
|
||||
this.nodeMarker = new THREE.Mesh(nodeGeo, nodeMat);
|
||||
this.nodeMarker.position.set(2, 0.5, 1.5);
|
||||
this.scene.add(this.nodeMarker);
|
||||
}
|
||||
|
||||
_createBodyBlob(THREE) {
|
||||
// A cluster of splats representing body disruption
|
||||
const count = 64;
|
||||
const positions = new Float32Array(count * 3);
|
||||
const sizes = new Float32Array(count);
|
||||
const colors = new Float32Array(count * 3);
|
||||
const opacities = new Float32Array(count);
|
||||
|
||||
for (let i = 0; i < count; i++) {
|
||||
// Random sphere distribution
|
||||
const theta = Math.random() * Math.PI * 2;
|
||||
const phi = Math.acos(2 * Math.random() - 1);
|
||||
const r = Math.random() * 1.5;
|
||||
positions[i * 3] = r * Math.sin(phi) * Math.cos(theta);
|
||||
positions[i * 3 + 1] = r * Math.cos(phi) + 2;
|
||||
positions[i * 3 + 2] = r * Math.sin(phi) * Math.sin(theta);
|
||||
|
||||
sizes[i] = 2 + Math.random() * 3;
|
||||
colors[i * 3] = 0.2;
|
||||
colors[i * 3 + 1] = 0.8;
|
||||
colors[i * 3 + 2] = 0.3;
|
||||
opacities[i] = 0.0; // hidden until presence detected
|
||||
}
|
||||
|
||||
const geo = new THREE.BufferGeometry();
|
||||
geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
|
||||
geo.setAttribute('splatSize', new THREE.BufferAttribute(sizes, 1));
|
||||
geo.setAttribute('splatColor', new THREE.BufferAttribute(colors, 3));
|
||||
geo.setAttribute('splatOpacity',new THREE.BufferAttribute(opacities, 1));
|
||||
|
||||
const mat = new THREE.ShaderMaterial({
|
||||
vertexShader: SPLAT_VERTEX,
|
||||
fragmentShader: SPLAT_FRAGMENT,
|
||||
transparent: true,
|
||||
depthWrite: false,
|
||||
blending: THREE.AdditiveBlending,
|
||||
});
|
||||
|
||||
this.bodyBlob = new THREE.Points(geo, mat);
|
||||
this.scene.add(this.bodyBlob);
|
||||
}
|
||||
|
||||
// ---- Mouse controls (simple orbit) -------------------------------------
|
||||
|
||||
_setupMouseControls() {
|
||||
let isDragging = false;
|
||||
let prevX = 0, prevY = 0;
|
||||
let azimuth = 0, elevation = 55;
|
||||
const radius = 20;
|
||||
|
||||
const updateCamera = () => {
|
||||
const phi = (elevation * Math.PI) / 180;
|
||||
const theta = (azimuth * Math.PI) / 180;
|
||||
this.camera.position.set(
|
||||
radius * Math.sin(phi) * Math.sin(theta),
|
||||
radius * Math.cos(phi),
|
||||
radius * Math.sin(phi) * Math.cos(theta)
|
||||
);
|
||||
this.camera.lookAt(0, 0, 0);
|
||||
};
|
||||
|
||||
const canvas = this.renderer.domElement;
|
||||
canvas.addEventListener('mousedown', (e) => {
|
||||
isDragging = true;
|
||||
prevX = e.clientX;
|
||||
prevY = e.clientY;
|
||||
});
|
||||
canvas.addEventListener('mousemove', (e) => {
|
||||
if (!isDragging) return;
|
||||
azimuth += (e.clientX - prevX) * 0.4;
|
||||
elevation -= (e.clientY - prevY) * 0.4;
|
||||
elevation = Math.max(15, Math.min(85, elevation));
|
||||
prevX = e.clientX;
|
||||
prevY = e.clientY;
|
||||
updateCamera();
|
||||
});
|
||||
canvas.addEventListener('mouseup', () => { isDragging = false; });
|
||||
canvas.addEventListener('mouseleave',() => { isDragging = false; });
|
||||
|
||||
// Scroll to zoom
|
||||
canvas.addEventListener('wheel', (e) => {
|
||||
e.preventDefault();
|
||||
const delta = e.deltaY > 0 ? 1.05 : 0.95;
|
||||
this.camera.position.multiplyScalar(delta);
|
||||
this.camera.position.clampLength(8, 40);
|
||||
}, { passive: false });
|
||||
|
||||
updateCamera();
|
||||
}
|
||||
|
||||
// ---- Data update -------------------------------------------------------
|
||||
|
||||
/**
|
||||
* Update the visualization with new sensing data.
|
||||
* @param {object} data - sensing_update JSON from ws_server
|
||||
*/
|
||||
update(data) {
|
||||
this._lastData = data;
|
||||
if (!data) return;
|
||||
|
||||
const features = data.features || {};
|
||||
const classification = data.classification || {};
|
||||
const signalField = data.signal_field || {};
|
||||
const nodes = data.nodes || [];
|
||||
|
||||
// -- Update signal field splats ----------------------------------------
|
||||
if (signalField.values && this.fieldPoints) {
|
||||
const geo = this.fieldPoints.geometry;
|
||||
const clr = geo.attributes.splatColor.array;
|
||||
const sizes = geo.attributes.splatSize.array;
|
||||
const opac = geo.attributes.splatOpacity.array;
|
||||
const vals = signalField.values;
|
||||
const count = Math.min(vals.length, this.gridSize * this.gridSize);
|
||||
|
||||
for (let i = 0; i < count; i++) {
|
||||
const v = vals[i];
|
||||
const [r, g, b] = valueToColor(v);
|
||||
clr[i * 3] = r;
|
||||
clr[i * 3 + 1] = g;
|
||||
clr[i * 3 + 2] = b;
|
||||
sizes[i] = 1.0 + v * 4.0;
|
||||
opac[i] = 0.1 + v * 0.6;
|
||||
}
|
||||
|
||||
geo.attributes.splatColor.needsUpdate = true;
|
||||
geo.attributes.splatSize.needsUpdate = true;
|
||||
geo.attributes.splatOpacity.needsUpdate = true;
|
||||
}
|
||||
|
||||
// -- Update body blob --------------------------------------------------
|
||||
if (this.bodyBlob) {
|
||||
const bGeo = this.bodyBlob.geometry;
|
||||
const bOpac = bGeo.attributes.splatOpacity.array;
|
||||
const bClr = bGeo.attributes.splatColor.array;
|
||||
const bSize = bGeo.attributes.splatSize.array;
|
||||
const bPos = bGeo.attributes.position.array;
|
||||
|
||||
const presence = classification.presence || false;
|
||||
const motionLvl = classification.motion_level || 'absent';
|
||||
const confidence = classification.confidence || 0;
|
||||
const breathing = features.breathing_band_power || 0;
|
||||
|
||||
// Breathing pulsation
|
||||
const breathPulse = 1.0 + Math.sin(Date.now() * 0.004) * Math.min(breathing * 3, 0.4);
|
||||
|
||||
for (let i = 0; i < bOpac.length; i++) {
|
||||
if (presence) {
|
||||
bOpac[i] = confidence * 0.4;
|
||||
|
||||
// Color by motion level
|
||||
if (motionLvl === 'active') {
|
||||
bClr[i * 3] = 1.0;
|
||||
bClr[i * 3 + 1] = 0.2;
|
||||
bClr[i * 3 + 2] = 0.1;
|
||||
} else {
|
||||
bClr[i * 3] = 0.1;
|
||||
bClr[i * 3 + 1] = 0.8;
|
||||
bClr[i * 3 + 2] = 0.4;
|
||||
}
|
||||
|
||||
bSize[i] = (2 + Math.random() * 2) * breathPulse;
|
||||
} else {
|
||||
bOpac[i] = 0.0;
|
||||
}
|
||||
}
|
||||
|
||||
bGeo.attributes.splatOpacity.needsUpdate = true;
|
||||
bGeo.attributes.splatColor.needsUpdate = true;
|
||||
bGeo.attributes.splatSize.needsUpdate = true;
|
||||
}
|
||||
|
||||
// -- Update node positions ---------------------------------------------
|
||||
if (nodes.length > 0 && nodes[0].position) {
|
||||
const pos = nodes[0].position;
|
||||
this.nodeMarker.position.set(pos[0], 0.5, pos[2]);
|
||||
}
|
||||
}
|
||||
|
||||
// ---- Render loop -------------------------------------------------------
|
||||
|
||||
_animate() {
|
||||
this._animFrame = requestAnimationFrame(() => this._animate());
|
||||
|
||||
// Gentle router glow pulse
|
||||
if (this.routerMarker) {
|
||||
const pulse = 0.6 + 0.3 * Math.sin(Date.now() * 0.003);
|
||||
this.routerMarker.material.opacity = pulse;
|
||||
}
|
||||
|
||||
this.renderer.render(this.scene, this.camera);
|
||||
}
|
||||
|
||||
// ---- Resize / cleanup --------------------------------------------------
|
||||
|
||||
resize(width, height) {
|
||||
this.width = width;
|
||||
this.height = height;
|
||||
this.camera.aspect = width / height;
|
||||
this.camera.updateProjectionMatrix();
|
||||
this.renderer.setSize(width, height);
|
||||
}
|
||||
|
||||
dispose() {
|
||||
if (this._animFrame) {
|
||||
cancelAnimationFrame(this._animFrame);
|
||||
}
|
||||
this.renderer.dispose();
|
||||
if (this.renderer.domElement.parentNode) {
|
||||
this.renderer.domElement.parentNode.removeChild(this.renderer.domElement);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -9,7 +9,7 @@ export const API_CONFIG = {
|
||||
// Mock server configuration (only for testing)
|
||||
MOCK_SERVER: {
|
||||
ENABLED: false, // Set to true only for testing without backend
|
||||
AUTO_DETECT: true, // Automatically detect if backend is available
|
||||
AUTO_DETECT: false, // Disabled — sensing tab uses its own WebSocket on :8765
|
||||
},
|
||||
|
||||
// API Endpoints
|
||||
|
||||
@@ -27,6 +27,7 @@
|
||||
<button class="nav-tab" data-tab="architecture">Architecture</button>
|
||||
<button class="nav-tab" data-tab="performance">Performance</button>
|
||||
<button class="nav-tab" data-tab="applications">Applications</button>
|
||||
<button class="nav-tab" data-tab="sensing">Sensing</button>
|
||||
</nav>
|
||||
|
||||
<!-- Dashboard Tab -->
|
||||
@@ -478,6 +479,9 @@
|
||||
<p>While WiFi DensePose offers revolutionary capabilities, successful implementation requires careful consideration of environment setup, data privacy regulations, and system calibration for optimal performance.</p>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<!-- Sensing Tab -->
|
||||
<section id="sensing" class="tab-content"></section>
|
||||
</div>
|
||||
|
||||
<!-- Error Toast -->
|
||||
|
||||
@@ -67,9 +67,14 @@ export class ApiService {
|
||||
// Generic request method
|
||||
async request(url, options = {}) {
|
||||
try {
|
||||
// In sensing-only mode, skip all DensePose API calls
|
||||
if (backendDetector.sensingOnlyMode) {
|
||||
throw new Error('DensePose API unavailable (sensing-only mode)');
|
||||
}
|
||||
|
||||
// Process request through interceptors
|
||||
const processed = await this.processRequest(url, options);
|
||||
|
||||
|
||||
// Determine the correct base URL (real backend vs mock)
|
||||
let finalUrl = processed.url;
|
||||
if (processed.url.startsWith(API_CONFIG.BASE_URL)) {
|
||||
@@ -99,7 +104,10 @@ export class ApiService {
|
||||
return data;
|
||||
|
||||
} catch (error) {
|
||||
console.error('API Request Error:', error);
|
||||
// Only log if not a connection refusal (expected when DensePose API is down)
|
||||
if (error.message && !error.message.includes('Failed to fetch')) {
|
||||
console.error('API Request Error:', error);
|
||||
}
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -55,15 +55,16 @@ export class HealthService {
|
||||
return;
|
||||
}
|
||||
|
||||
// Initial check
|
||||
this.getSystemHealth().catch(error => {
|
||||
console.error('Initial health check failed:', error);
|
||||
// Initial check (silent on failure — DensePose API may not be running)
|
||||
this.getSystemHealth().catch(() => {
|
||||
// DensePose API not running — sensing-only mode, skip polling
|
||||
this._backendUnavailable = true;
|
||||
});
|
||||
|
||||
// Set up periodic checks
|
||||
// Set up periodic checks only if backend was reachable
|
||||
this.healthCheckInterval = setInterval(() => {
|
||||
if (this._backendUnavailable) return;
|
||||
this.getSystemHealth().catch(error => {
|
||||
console.error('Health check failed:', error);
|
||||
this.notifySubscribers({
|
||||
status: 'error',
|
||||
error: error.message,
|
||||
|
||||
271
ui/services/sensing.service.js
Normal file
271
ui/services/sensing.service.js
Normal file
@@ -0,0 +1,271 @@
|
||||
/**
|
||||
* Sensing WebSocket Service
|
||||
*
|
||||
* Manages the connection to the Python sensing WebSocket server
|
||||
* (ws://localhost:8765) and provides a callback-based API for the UI.
|
||||
*
|
||||
* Falls back to simulated data if the server is unreachable so the UI
|
||||
* always shows something.
|
||||
*/
|
||||
|
||||
const SENSING_WS_URL = 'ws://localhost:8765';
|
||||
const RECONNECT_DELAYS = [1000, 2000, 4000, 8000, 16000];
|
||||
const MAX_RECONNECT_ATTEMPTS = 10;
|
||||
const SIMULATION_INTERVAL = 500; // ms
|
||||
|
||||
class SensingService {
|
||||
constructor() {
|
||||
/** @type {WebSocket|null} */
|
||||
this._ws = null;
|
||||
this._listeners = new Set();
|
||||
this._stateListeners = new Set();
|
||||
this._reconnectAttempt = 0;
|
||||
this._reconnectTimer = null;
|
||||
this._simTimer = null;
|
||||
this._state = 'disconnected'; // disconnected | connecting | connected | simulated
|
||||
this._lastMessage = null;
|
||||
|
||||
// Ring buffer of recent RSSI values for sparkline
|
||||
this._rssiHistory = [];
|
||||
this._maxHistory = 60;
|
||||
}
|
||||
|
||||
// ---- Public API --------------------------------------------------------
|
||||
|
||||
/** Start the service (connect or simulate). */
|
||||
start() {
|
||||
this._connect();
|
||||
}
|
||||
|
||||
/** Stop the service entirely. */
|
||||
stop() {
|
||||
this._clearTimers();
|
||||
if (this._ws) {
|
||||
this._ws.close(1000, 'client stop');
|
||||
this._ws = null;
|
||||
}
|
||||
this._setState('disconnected');
|
||||
}
|
||||
|
||||
/** Register a callback for sensing data updates. Returns unsubscribe fn. */
|
||||
onData(callback) {
|
||||
this._listeners.add(callback);
|
||||
// Immediately push last known data if available
|
||||
if (this._lastMessage) callback(this._lastMessage);
|
||||
return () => this._listeners.delete(callback);
|
||||
}
|
||||
|
||||
/** Register a callback for connection state changes. Returns unsubscribe fn. */
|
||||
onStateChange(callback) {
|
||||
this._stateListeners.add(callback);
|
||||
callback(this._state);
|
||||
return () => this._stateListeners.delete(callback);
|
||||
}
|
||||
|
||||
/** Get the RSSI sparkline history (array of floats). */
|
||||
getRssiHistory() {
|
||||
return [...this._rssiHistory];
|
||||
}
|
||||
|
||||
/** Current connection state. */
|
||||
get state() {
|
||||
return this._state;
|
||||
}
|
||||
|
||||
// ---- Connection --------------------------------------------------------
|
||||
|
||||
_connect() {
|
||||
if (this._ws && this._ws.readyState <= WebSocket.OPEN) return;
|
||||
|
||||
this._setState('connecting');
|
||||
|
||||
try {
|
||||
this._ws = new WebSocket(SENSING_WS_URL);
|
||||
} catch (err) {
|
||||
console.warn('[Sensing] WebSocket constructor failed:', err.message);
|
||||
this._fallbackToSimulation();
|
||||
return;
|
||||
}
|
||||
|
||||
this._ws.onopen = () => {
|
||||
console.info('[Sensing] Connected to', SENSING_WS_URL);
|
||||
this._reconnectAttempt = 0;
|
||||
this._stopSimulation();
|
||||
this._setState('connected');
|
||||
};
|
||||
|
||||
this._ws.onmessage = (evt) => {
|
||||
try {
|
||||
const data = JSON.parse(evt.data);
|
||||
this._handleData(data);
|
||||
} catch (e) {
|
||||
console.warn('[Sensing] Invalid message:', e.message);
|
||||
}
|
||||
};
|
||||
|
||||
this._ws.onerror = () => {
|
||||
// onerror is always followed by onclose, so we handle reconnect there
|
||||
};
|
||||
|
||||
this._ws.onclose = (evt) => {
|
||||
console.info('[Sensing] Connection closed (code=%d)', evt.code);
|
||||
this._ws = null;
|
||||
if (evt.code !== 1000) {
|
||||
this._scheduleReconnect();
|
||||
} else {
|
||||
this._setState('disconnected');
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
_scheduleReconnect() {
|
||||
if (this._reconnectAttempt >= MAX_RECONNECT_ATTEMPTS) {
|
||||
console.warn('[Sensing] Max reconnect attempts reached, switching to simulation');
|
||||
this._fallbackToSimulation();
|
||||
return;
|
||||
}
|
||||
|
||||
const delay = RECONNECT_DELAYS[Math.min(this._reconnectAttempt, RECONNECT_DELAYS.length - 1)];
|
||||
this._reconnectAttempt++;
|
||||
console.info('[Sensing] Reconnecting in %dms (attempt %d)', delay, this._reconnectAttempt);
|
||||
|
||||
this._reconnectTimer = setTimeout(() => {
|
||||
this._reconnectTimer = null;
|
||||
this._connect();
|
||||
}, delay);
|
||||
|
||||
// Start simulation while waiting
|
||||
if (this._state !== 'simulated') {
|
||||
this._fallbackToSimulation();
|
||||
}
|
||||
}
|
||||
|
||||
// ---- Simulation fallback -----------------------------------------------
|
||||
|
||||
_fallbackToSimulation() {
|
||||
this._setState('simulated');
|
||||
if (this._simTimer) return; // already running
|
||||
console.info('[Sensing] Running in simulation mode');
|
||||
|
||||
this._simTimer = setInterval(() => {
|
||||
const data = this._generateSimulatedData();
|
||||
this._handleData(data);
|
||||
}, SIMULATION_INTERVAL);
|
||||
}
|
||||
|
||||
_stopSimulation() {
|
||||
if (this._simTimer) {
|
||||
clearInterval(this._simTimer);
|
||||
this._simTimer = null;
|
||||
}
|
||||
}
|
||||
|
||||
_generateSimulatedData() {
|
||||
const t = Date.now() / 1000;
|
||||
const baseRssi = -45;
|
||||
const variance = 1.5 + Math.sin(t * 0.1) * 1.0;
|
||||
const motionBand = 0.05 + Math.abs(Math.sin(t * 0.3)) * 0.15;
|
||||
const breathBand = 0.03 + Math.abs(Math.sin(t * 0.05)) * 0.08;
|
||||
const isPresent = variance > 0.8;
|
||||
const isActive = motionBand > 0.12;
|
||||
|
||||
// Generate signal field
|
||||
const gridSize = 20;
|
||||
const values = [];
|
||||
for (let iz = 0; iz < gridSize; iz++) {
|
||||
for (let ix = 0; ix < gridSize; ix++) {
|
||||
const cx = gridSize / 2, cy = gridSize / 2;
|
||||
const dist = Math.sqrt((ix - cx) ** 2 + (iz - cy) ** 2);
|
||||
let v = Math.max(0, 1 - dist / (gridSize * 0.7)) * 0.3;
|
||||
// Body blob
|
||||
const bx = cx + 3 * Math.sin(t * 0.2);
|
||||
const by = cy + 2 * Math.cos(t * 0.15);
|
||||
const bodyDist = Math.sqrt((ix - bx) ** 2 + (iz - by) ** 2);
|
||||
if (isPresent) {
|
||||
v += Math.exp(-bodyDist * bodyDist / 8) * (0.3 + motionBand * 3);
|
||||
}
|
||||
values.push(Math.min(1, Math.max(0, v + Math.random() * 0.05)));
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
type: 'sensing_update',
|
||||
timestamp: t,
|
||||
source: 'simulated',
|
||||
nodes: [{
|
||||
node_id: 1,
|
||||
rssi_dbm: baseRssi + Math.sin(t * 0.5) * 3,
|
||||
position: [2, 0, 1.5],
|
||||
amplitude: [],
|
||||
subcarrier_count: 0,
|
||||
}],
|
||||
features: {
|
||||
mean_rssi: baseRssi + Math.sin(t * 0.5) * 3,
|
||||
variance,
|
||||
std: Math.sqrt(variance),
|
||||
motion_band_power: motionBand,
|
||||
breathing_band_power: breathBand,
|
||||
dominant_freq_hz: 0.3 + Math.sin(t * 0.02) * 0.1,
|
||||
change_points: Math.floor(Math.random() * 3),
|
||||
spectral_power: motionBand + breathBand + Math.random() * 0.1,
|
||||
range: variance * 3,
|
||||
iqr: variance * 1.5,
|
||||
skewness: (Math.random() - 0.5) * 0.5,
|
||||
kurtosis: Math.random() * 2,
|
||||
},
|
||||
classification: {
|
||||
motion_level: isActive ? 'active' : (isPresent ? 'present_still' : 'absent'),
|
||||
presence: isPresent,
|
||||
confidence: isPresent ? 0.75 + Math.random() * 0.2 : 0.5 + Math.random() * 0.3,
|
||||
},
|
||||
signal_field: {
|
||||
grid_size: [gridSize, 1, gridSize],
|
||||
values,
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
// ---- Data handling -----------------------------------------------------
|
||||
|
||||
_handleData(data) {
|
||||
this._lastMessage = data;
|
||||
|
||||
// Update RSSI history for sparkline
|
||||
if (data.features && data.features.mean_rssi != null) {
|
||||
this._rssiHistory.push(data.features.mean_rssi);
|
||||
if (this._rssiHistory.length > this._maxHistory) {
|
||||
this._rssiHistory.shift();
|
||||
}
|
||||
}
|
||||
|
||||
// Notify all listeners
|
||||
for (const cb of this._listeners) {
|
||||
try {
|
||||
cb(data);
|
||||
} catch (e) {
|
||||
console.error('[Sensing] Listener error:', e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ---- State management --------------------------------------------------
|
||||
|
||||
_setState(newState) {
|
||||
if (newState === this._state) return;
|
||||
this._state = newState;
|
||||
for (const cb of this._stateListeners) {
|
||||
try { cb(newState); } catch (e) { /* ignore */ }
|
||||
}
|
||||
}
|
||||
|
||||
_clearTimers() {
|
||||
this._stopSimulation();
|
||||
if (this._reconnectTimer) {
|
||||
clearTimeout(this._reconnectTimer);
|
||||
this._reconnectTimer = null;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Singleton
|
||||
export const sensingService = new SensingService();
|
||||
251
ui/style.css
251
ui/style.css
@@ -1654,3 +1654,254 @@ canvas {
|
||||
font-weight: var(--font-weight-semibold);
|
||||
color: var(--color-primary);
|
||||
}
|
||||
|
||||
/* ===== Sensing Tab Styles ===== */
|
||||
|
||||
.sensing-layout {
|
||||
display: grid;
|
||||
grid-template-columns: 1fr 320px;
|
||||
gap: var(--space-16);
|
||||
min-height: 550px;
|
||||
}
|
||||
|
||||
@media (max-width: 900px) {
|
||||
.sensing-layout {
|
||||
grid-template-columns: 1fr;
|
||||
}
|
||||
}
|
||||
|
||||
.sensing-viewport {
|
||||
background: #0a0a12;
|
||||
border-radius: var(--radius-lg);
|
||||
border: 1px solid var(--color-card-border);
|
||||
overflow: hidden;
|
||||
min-height: 500px;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.sensing-viewport canvas {
|
||||
display: block;
|
||||
width: 100% !important;
|
||||
height: 100% !important;
|
||||
}
|
||||
|
||||
.sensing-loading {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
height: 100%;
|
||||
color: var(--color-text-secondary);
|
||||
font-size: var(--font-size-lg);
|
||||
}
|
||||
|
||||
/* Side panel */
|
||||
.sensing-panel {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--space-12);
|
||||
overflow-y: auto;
|
||||
max-height: 600px;
|
||||
}
|
||||
|
||||
.sensing-card {
|
||||
background: var(--color-surface);
|
||||
border: 1px solid var(--color-card-border);
|
||||
border-radius: var(--radius-md);
|
||||
padding: var(--space-12);
|
||||
}
|
||||
|
||||
.sensing-card-title {
|
||||
font-size: var(--font-size-xs);
|
||||
font-weight: var(--font-weight-semibold);
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.05em;
|
||||
color: var(--color-text-secondary);
|
||||
margin-bottom: var(--space-8);
|
||||
}
|
||||
|
||||
/* Connection status */
|
||||
.sensing-connection {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: var(--space-8);
|
||||
font-size: var(--font-size-sm);
|
||||
}
|
||||
|
||||
.sensing-dot {
|
||||
width: 8px;
|
||||
height: 8px;
|
||||
border-radius: 50%;
|
||||
background: var(--color-info);
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.sensing-dot.connected {
|
||||
background: #00cc88;
|
||||
box-shadow: 0 0 6px #00cc88;
|
||||
}
|
||||
|
||||
.sensing-dot.simulated {
|
||||
background: var(--color-warning);
|
||||
box-shadow: 0 0 6px var(--color-warning);
|
||||
}
|
||||
|
||||
.sensing-dot.connecting {
|
||||
background: var(--color-info);
|
||||
animation: pulse 1.5s infinite;
|
||||
}
|
||||
|
||||
.sensing-dot.disconnected {
|
||||
background: var(--color-error);
|
||||
}
|
||||
|
||||
.sensing-source {
|
||||
margin-left: auto;
|
||||
font-size: var(--font-size-xs);
|
||||
color: var(--color-text-secondary);
|
||||
font-family: var(--font-family-mono);
|
||||
}
|
||||
|
||||
/* Big RSSI value */
|
||||
.sensing-big-value {
|
||||
font-size: var(--font-size-3xl);
|
||||
font-weight: var(--font-weight-bold);
|
||||
color: var(--color-primary);
|
||||
font-family: var(--font-family-mono);
|
||||
margin-bottom: var(--space-4);
|
||||
}
|
||||
|
||||
#sensingSparkline {
|
||||
width: 100%;
|
||||
height: 40px;
|
||||
display: block;
|
||||
}
|
||||
|
||||
/* Meter bars */
|
||||
.sensing-meters {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--space-8);
|
||||
}
|
||||
|
||||
.sensing-meter {
|
||||
display: grid;
|
||||
grid-template-columns: 90px 1fr 50px;
|
||||
align-items: center;
|
||||
gap: var(--space-8);
|
||||
font-size: var(--font-size-sm);
|
||||
}
|
||||
|
||||
.sensing-meter label {
|
||||
color: var(--color-text-secondary);
|
||||
white-space: nowrap;
|
||||
}
|
||||
|
||||
.sensing-bar {
|
||||
height: 6px;
|
||||
background: var(--color-secondary);
|
||||
border-radius: var(--radius-full);
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.sensing-bar-fill {
|
||||
height: 100%;
|
||||
border-radius: var(--radius-full);
|
||||
transition: width 0.3s ease;
|
||||
background: var(--color-primary);
|
||||
width: 0%;
|
||||
}
|
||||
|
||||
.sensing-bar-fill.motion {
|
||||
background: linear-gradient(90deg, #ff6633, #ff3333);
|
||||
}
|
||||
|
||||
.sensing-bar-fill.breath {
|
||||
background: linear-gradient(90deg, #33ccff, #3366ff);
|
||||
}
|
||||
|
||||
.sensing-bar-fill.spectral {
|
||||
background: linear-gradient(90deg, #aa66ff, #ff66aa);
|
||||
}
|
||||
|
||||
.sensing-bar-fill.confidence {
|
||||
background: linear-gradient(90deg, #33cc88, #00ff88);
|
||||
}
|
||||
|
||||
.sensing-meter-val {
|
||||
font-family: var(--font-family-mono);
|
||||
font-size: var(--font-size-xs);
|
||||
text-align: right;
|
||||
color: var(--color-text-secondary);
|
||||
}
|
||||
|
||||
/* Classification */
|
||||
.sensing-classification {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--space-8);
|
||||
}
|
||||
|
||||
.sensing-class-label {
|
||||
font-size: var(--font-size-xl);
|
||||
font-weight: var(--font-weight-bold);
|
||||
text-align: center;
|
||||
padding: var(--space-8);
|
||||
border-radius: var(--radius-base);
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.05em;
|
||||
}
|
||||
|
||||
.sensing-class-label.absent {
|
||||
background: rgba(var(--color-info-rgb), 0.15);
|
||||
color: var(--color-info);
|
||||
}
|
||||
|
||||
.sensing-class-label.present_still {
|
||||
background: rgba(var(--color-success-rgb), 0.15);
|
||||
color: var(--color-success);
|
||||
}
|
||||
|
||||
.sensing-class-label.active {
|
||||
background: rgba(var(--color-error-rgb), 0.15);
|
||||
color: var(--color-error);
|
||||
}
|
||||
|
||||
.sensing-confidence {
|
||||
display: grid;
|
||||
grid-template-columns: 70px 1fr 40px;
|
||||
align-items: center;
|
||||
gap: var(--space-8);
|
||||
font-size: var(--font-size-sm);
|
||||
}
|
||||
|
||||
.sensing-confidence label {
|
||||
color: var(--color-text-secondary);
|
||||
}
|
||||
|
||||
/* Details */
|
||||
.sensing-details {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: var(--space-4);
|
||||
}
|
||||
|
||||
.sensing-detail-row {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
font-size: var(--font-size-sm);
|
||||
padding: var(--space-4) 0;
|
||||
border-bottom: 1px solid var(--color-card-border-inner);
|
||||
}
|
||||
|
||||
.sensing-detail-row:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
.sensing-detail-row span:first-child {
|
||||
color: var(--color-text-secondary);
|
||||
}
|
||||
|
||||
.sensing-detail-row span:last-child {
|
||||
font-family: var(--font-family-mono);
|
||||
font-weight: var(--font-weight-medium);
|
||||
}
|
||||
|
||||
@@ -7,6 +7,7 @@ export class BackendDetector {
|
||||
this.isBackendAvailable = null;
|
||||
this.lastCheck = 0;
|
||||
this.checkInterval = 30000; // Check every 30 seconds
|
||||
this.sensingOnlyMode = false; // True when DensePose API is down, sensing WS is the only backend
|
||||
}
|
||||
|
||||
// Check if the real backend is available
|
||||
|
||||
@@ -24,6 +24,7 @@ are required.
|
||||
from v1.src.sensing.rssi_collector import (
|
||||
LinuxWifiCollector,
|
||||
SimulatedCollector,
|
||||
WindowsWifiCollector,
|
||||
WifiSample,
|
||||
)
|
||||
from v1.src.sensing.feature_extractor import (
|
||||
@@ -44,6 +45,7 @@ from v1.src.sensing.backend import (
|
||||
__all__ = [
|
||||
"LinuxWifiCollector",
|
||||
"SimulatedCollector",
|
||||
"WindowsWifiCollector",
|
||||
"WifiSample",
|
||||
"RssiFeatureExtractor",
|
||||
"RssiFeatures",
|
||||
|
||||
@@ -20,6 +20,7 @@ from v1.src.sensing.feature_extractor import RssiFeatureExtractor, RssiFeatures
|
||||
from v1.src.sensing.rssi_collector import (
|
||||
LinuxWifiCollector,
|
||||
SimulatedCollector,
|
||||
WindowsWifiCollector,
|
||||
WifiCollector,
|
||||
WifiSample,
|
||||
)
|
||||
@@ -89,7 +90,7 @@ class CommodityBackend:
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
collector: LinuxWifiCollector | SimulatedCollector,
|
||||
collector: LinuxWifiCollector | SimulatedCollector | WindowsWifiCollector,
|
||||
extractor: Optional[RssiFeatureExtractor] = None,
|
||||
classifier: Optional[PresenceClassifier] = None,
|
||||
) -> None:
|
||||
@@ -98,7 +99,7 @@ class CommodityBackend:
|
||||
self._classifier = classifier or PresenceClassifier()
|
||||
|
||||
@property
|
||||
def collector(self) -> LinuxWifiCollector | SimulatedCollector:
|
||||
def collector(self) -> LinuxWifiCollector | SimulatedCollector | WindowsWifiCollector:
|
||||
return self._collector
|
||||
|
||||
@property
|
||||
|
||||
@@ -444,3 +444,161 @@ class SimulatedCollector:
|
||||
retry_count=max(0, index // 100),
|
||||
interface="sim0",
|
||||
)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Windows WiFi collector (real hardware via netsh)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
class WindowsWifiCollector:
|
||||
"""
|
||||
Collects real RSSI data from a Windows WiFi interface.
|
||||
|
||||
Data source: ``netsh wlan show interfaces`` which provides RSSI in dBm,
|
||||
signal quality percentage, channel, band, and connection state.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
interface : str
|
||||
WiFi interface name (default ``"Wi-Fi"``). Must match the ``Name``
|
||||
field shown by ``netsh wlan show interfaces``.
|
||||
sample_rate_hz : float
|
||||
Target sampling rate in Hz (default 2.0). Windows ``netsh`` is slow
|
||||
(~200-400ms per call) so rates above 2 Hz may not be achievable.
|
||||
buffer_seconds : int
|
||||
Ring buffer capacity in seconds (default 120).
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
interface: str = "Wi-Fi",
|
||||
sample_rate_hz: float = 2.0,
|
||||
buffer_seconds: int = 120,
|
||||
) -> None:
|
||||
self._interface = interface
|
||||
self._rate = sample_rate_hz
|
||||
self._buffer = RingBuffer(max_size=int(sample_rate_hz * buffer_seconds))
|
||||
self._running = False
|
||||
self._thread: Optional[threading.Thread] = None
|
||||
self._cumulative_tx: int = 0
|
||||
self._cumulative_rx: int = 0
|
||||
|
||||
# -- public API ----------------------------------------------------------
|
||||
|
||||
@property
|
||||
def sample_rate_hz(self) -> float:
|
||||
return self._rate
|
||||
|
||||
def start(self) -> None:
|
||||
if self._running:
|
||||
return
|
||||
self._validate_interface()
|
||||
self._running = True
|
||||
self._thread = threading.Thread(
|
||||
target=self._sample_loop, daemon=True, name="win-rssi-collector"
|
||||
)
|
||||
self._thread.start()
|
||||
logger.info(
|
||||
"WindowsWifiCollector started on '%s' at %.1f Hz",
|
||||
self._interface,
|
||||
self._rate,
|
||||
)
|
||||
|
||||
def stop(self) -> None:
|
||||
self._running = False
|
||||
if self._thread is not None:
|
||||
self._thread.join(timeout=2.0)
|
||||
self._thread = None
|
||||
logger.info("WindowsWifiCollector stopped")
|
||||
|
||||
def get_samples(self, n: Optional[int] = None) -> List[WifiSample]:
|
||||
if n is not None:
|
||||
return self._buffer.get_last_n(n)
|
||||
return self._buffer.get_all()
|
||||
|
||||
def collect_once(self) -> WifiSample:
|
||||
return self._read_sample()
|
||||
|
||||
# -- internals -----------------------------------------------------------
|
||||
|
||||
def _validate_interface(self) -> None:
|
||||
try:
|
||||
result = subprocess.run(
|
||||
["netsh", "wlan", "show", "interfaces"],
|
||||
capture_output=True, text=True, timeout=5.0,
|
||||
)
|
||||
if self._interface not in result.stdout:
|
||||
raise RuntimeError(
|
||||
f"WiFi interface '{self._interface}' not found. "
|
||||
f"Check 'netsh wlan show interfaces' for the correct name."
|
||||
)
|
||||
if "disconnected" in result.stdout.lower().split(self._interface.lower())[1][:200]:
|
||||
raise RuntimeError(
|
||||
f"WiFi interface '{self._interface}' is disconnected. "
|
||||
f"Connect to a WiFi network first."
|
||||
)
|
||||
except FileNotFoundError:
|
||||
raise RuntimeError(
|
||||
"netsh not found. This collector requires Windows."
|
||||
)
|
||||
|
||||
def _sample_loop(self) -> None:
|
||||
interval = 1.0 / self._rate
|
||||
while self._running:
|
||||
t0 = time.monotonic()
|
||||
try:
|
||||
sample = self._read_sample()
|
||||
self._buffer.append(sample)
|
||||
except Exception:
|
||||
logger.exception("Error reading WiFi sample")
|
||||
elapsed = time.monotonic() - t0
|
||||
sleep_time = max(0.0, interval - elapsed)
|
||||
if sleep_time > 0:
|
||||
time.sleep(sleep_time)
|
||||
|
||||
def _read_sample(self) -> WifiSample:
|
||||
result = subprocess.run(
|
||||
["netsh", "wlan", "show", "interfaces"],
|
||||
capture_output=True, text=True, timeout=5.0,
|
||||
)
|
||||
rssi = -80.0
|
||||
signal_pct = 0.0
|
||||
|
||||
for line in result.stdout.splitlines():
|
||||
stripped = line.strip()
|
||||
# "Rssi" line contains the raw dBm value (available on Win10+)
|
||||
if stripped.lower().startswith("rssi"):
|
||||
try:
|
||||
rssi = float(stripped.split(":")[1].strip())
|
||||
except (IndexError, ValueError):
|
||||
pass
|
||||
# "Signal" line contains percentage (always available)
|
||||
elif stripped.lower().startswith("signal"):
|
||||
try:
|
||||
pct_str = stripped.split(":")[1].strip().rstrip("%")
|
||||
signal_pct = float(pct_str)
|
||||
# If RSSI line was missing, estimate from percentage
|
||||
# Signal% roughly maps: 100% ≈ -30 dBm, 0% ≈ -90 dBm
|
||||
except (IndexError, ValueError):
|
||||
pass
|
||||
|
||||
# Normalise link quality from signal percentage
|
||||
link_quality = signal_pct / 100.0
|
||||
|
||||
# Estimate noise floor (Windows doesn't expose it directly)
|
||||
noise_dbm = -95.0
|
||||
|
||||
# Track cumulative bytes (not available from netsh; increment synthetic counter)
|
||||
self._cumulative_tx += 1500
|
||||
self._cumulative_rx += 3000
|
||||
|
||||
return WifiSample(
|
||||
timestamp=time.time(),
|
||||
rssi_dbm=rssi,
|
||||
noise_dbm=noise_dbm,
|
||||
link_quality=link_quality,
|
||||
tx_bytes=self._cumulative_tx,
|
||||
rx_bytes=self._cumulative_rx,
|
||||
retry_count=0,
|
||||
interface=self._interface,
|
||||
)
|
||||
|
||||
528
v1/src/sensing/ws_server.py
Normal file
528
v1/src/sensing/ws_server.py
Normal file
@@ -0,0 +1,528 @@
|
||||
"""
|
||||
WebSocket sensing server.
|
||||
|
||||
Lightweight asyncio server that bridges the WiFi sensing pipeline to the
|
||||
browser UI. Runs the RSSI feature extractor + classifier on a 500 ms
|
||||
tick and broadcasts JSON frames to all connected WebSocket clients on
|
||||
``ws://localhost:8765``.
|
||||
|
||||
Usage
|
||||
-----
|
||||
pip install websockets
|
||||
python -m v1.src.sensing.ws_server # or python v1/src/sensing/ws_server.py
|
||||
|
||||
Data sources (tried in order):
|
||||
1. ESP32 CSI over UDP port 5005 (ADR-018 binary frames)
|
||||
2. Windows WiFi RSSI via netsh
|
||||
3. Linux WiFi RSSI via /proc/net/wireless
|
||||
4. Simulated collector (fallback)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import math
|
||||
import platform
|
||||
import signal
|
||||
import socket
|
||||
import struct
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
from collections import deque
|
||||
from typing import Dict, List, Optional, Set
|
||||
|
||||
import numpy as np
|
||||
|
||||
# Sensing pipeline imports
|
||||
from v1.src.sensing.rssi_collector import (
|
||||
LinuxWifiCollector,
|
||||
SimulatedCollector,
|
||||
WindowsWifiCollector,
|
||||
WifiSample,
|
||||
RingBuffer,
|
||||
)
|
||||
from v1.src.sensing.feature_extractor import RssiFeatureExtractor, RssiFeatures
|
||||
from v1.src.sensing.classifier import MotionLevel, PresenceClassifier, SensingResult
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Configuration
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
HOST = "localhost"
|
||||
PORT = 8765
|
||||
TICK_INTERVAL = 0.5 # seconds between broadcasts
|
||||
SIGNAL_FIELD_GRID = 20 # NxN grid for signal field visualization
|
||||
ESP32_UDP_PORT = 5005
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# ESP32 UDP Collector — reads ADR-018 binary frames
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
class Esp32UdpCollector:
|
||||
"""
|
||||
Collects real CSI data from ESP32 nodes via UDP (ADR-018 binary format).
|
||||
|
||||
Parses I/Q pairs, computes mean amplitude per frame, and stores it as
|
||||
an RSSI-equivalent value in the standard WifiSample ring buffer so the
|
||||
existing feature extractor and classifier work unchanged.
|
||||
|
||||
Also keeps the last parsed CSI frame for the UI to show subcarrier data.
|
||||
"""
|
||||
|
||||
# ADR-018 header: magic(4) node_id(1) n_ant(1) n_sc(2) freq(4) seq(4) rssi(1) noise(1) reserved(2)
|
||||
MAGIC = 0xC5110001
|
||||
HEADER_SIZE = 20
|
||||
HEADER_FMT = '<IBBHIIBB2x'
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
bind_addr: str = "0.0.0.0",
|
||||
port: int = ESP32_UDP_PORT,
|
||||
sample_rate_hz: float = 10.0,
|
||||
buffer_seconds: int = 120,
|
||||
) -> None:
|
||||
self._bind = bind_addr
|
||||
self._port = port
|
||||
self._rate = sample_rate_hz
|
||||
self._buffer = RingBuffer(max_size=int(sample_rate_hz * buffer_seconds))
|
||||
self._running = False
|
||||
self._thread: Optional[threading.Thread] = None
|
||||
self._sock: Optional[socket.socket] = None
|
||||
|
||||
# Last CSI frame for enhanced UI
|
||||
self.last_csi: Optional[Dict] = None
|
||||
self._frames_received = 0
|
||||
|
||||
@property
|
||||
def sample_rate_hz(self) -> float:
|
||||
return self._rate
|
||||
|
||||
@property
|
||||
def frames_received(self) -> int:
|
||||
return self._frames_received
|
||||
|
||||
def start(self) -> None:
|
||||
if self._running:
|
||||
return
|
||||
self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
|
||||
self._sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
self._sock.settimeout(1.0)
|
||||
self._sock.bind((self._bind, self._port))
|
||||
self._running = True
|
||||
self._thread = threading.Thread(
|
||||
target=self._recv_loop, daemon=True, name="esp32-udp-collector"
|
||||
)
|
||||
self._thread.start()
|
||||
logger.info("Esp32UdpCollector listening on %s:%d", self._bind, self._port)
|
||||
|
||||
def stop(self) -> None:
|
||||
self._running = False
|
||||
if self._thread:
|
||||
self._thread.join(timeout=2.0)
|
||||
self._thread = None
|
||||
if self._sock:
|
||||
self._sock.close()
|
||||
self._sock = None
|
||||
logger.info("Esp32UdpCollector stopped (%d frames received)", self._frames_received)
|
||||
|
||||
def get_samples(self, n: Optional[int] = None) -> List[WifiSample]:
|
||||
if n is not None:
|
||||
return self._buffer.get_last_n(n)
|
||||
return self._buffer.get_all()
|
||||
|
||||
def _recv_loop(self) -> None:
|
||||
while self._running:
|
||||
try:
|
||||
data, addr = self._sock.recvfrom(4096)
|
||||
self._parse_and_store(data, addr)
|
||||
except socket.timeout:
|
||||
continue
|
||||
except Exception:
|
||||
if self._running:
|
||||
logger.exception("Error receiving ESP32 UDP packet")
|
||||
|
||||
def _parse_and_store(self, raw: bytes, addr) -> None:
|
||||
if len(raw) < self.HEADER_SIZE:
|
||||
return
|
||||
|
||||
magic, node_id, n_ant, n_sc, freq_mhz, seq, rssi_u8, noise_u8 = \
|
||||
struct.unpack_from(self.HEADER_FMT, raw, 0)
|
||||
|
||||
if magic != self.MAGIC:
|
||||
return
|
||||
|
||||
rssi = rssi_u8 if rssi_u8 < 128 else rssi_u8 - 256
|
||||
noise = noise_u8 if noise_u8 < 128 else noise_u8 - 256
|
||||
|
||||
# Parse I/Q data if available
|
||||
iq_count = n_ant * n_sc
|
||||
iq_bytes_needed = self.HEADER_SIZE + iq_count * 2
|
||||
amplitude_list = []
|
||||
|
||||
if len(raw) >= iq_bytes_needed and iq_count > 0:
|
||||
iq_raw = struct.unpack_from(f'<{iq_count * 2}b', raw, self.HEADER_SIZE)
|
||||
i_vals = np.array(iq_raw[0::2], dtype=np.float64)
|
||||
q_vals = np.array(iq_raw[1::2], dtype=np.float64)
|
||||
amplitudes = np.sqrt(i_vals ** 2 + q_vals ** 2)
|
||||
mean_amp = float(np.mean(amplitudes))
|
||||
amplitude_list = amplitudes.tolist()
|
||||
else:
|
||||
mean_amp = 0.0
|
||||
|
||||
# Store enhanced CSI info for UI
|
||||
self.last_csi = {
|
||||
"node_id": node_id,
|
||||
"n_antennas": n_ant,
|
||||
"n_subcarriers": n_sc,
|
||||
"freq_mhz": freq_mhz,
|
||||
"sequence": seq,
|
||||
"rssi_dbm": rssi,
|
||||
"noise_floor_dbm": noise,
|
||||
"mean_amplitude": mean_amp,
|
||||
"amplitude": amplitude_list[:56], # cap for JSON size
|
||||
"source_addr": f"{addr[0]}:{addr[1]}",
|
||||
}
|
||||
|
||||
# Use RSSI from the ESP32 frame header as the primary signal metric.
|
||||
# If RSSI is the default -80 placeholder, derive a pseudo-RSSI from
|
||||
# mean amplitude to keep the feature extractor meaningful.
|
||||
effective_rssi = float(rssi)
|
||||
if rssi == -80 and mean_amp > 0:
|
||||
# Map amplitude (typically 1-20) to dBm range (-70 to -30)
|
||||
effective_rssi = -70.0 + min(mean_amp, 20.0) * 2.0
|
||||
|
||||
sample = WifiSample(
|
||||
timestamp=time.time(),
|
||||
rssi_dbm=effective_rssi,
|
||||
noise_dbm=float(noise),
|
||||
link_quality=max(0.0, min(1.0, (effective_rssi + 100.0) / 60.0)),
|
||||
tx_bytes=seq * 1500,
|
||||
rx_bytes=seq * 3000,
|
||||
retry_count=0,
|
||||
interface=f"esp32-node{node_id}",
|
||||
)
|
||||
self._buffer.append(sample)
|
||||
self._frames_received += 1
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Probe for ESP32 UDP
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def probe_esp32_udp(port: int = ESP32_UDP_PORT, timeout: float = 2.0) -> bool:
|
||||
"""Return True if an ESP32 is actively streaming on the UDP port."""
|
||||
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
|
||||
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
sock.settimeout(timeout)
|
||||
try:
|
||||
sock.bind(("0.0.0.0", port))
|
||||
data, _ = sock.recvfrom(256)
|
||||
if len(data) >= 20:
|
||||
magic = struct.unpack_from('<I', data, 0)[0]
|
||||
return magic == 0xC5110001
|
||||
return False
|
||||
except (socket.timeout, OSError):
|
||||
return False
|
||||
finally:
|
||||
sock.close()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Signal field generator
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def generate_signal_field(
|
||||
features: RssiFeatures,
|
||||
result: SensingResult,
|
||||
grid_size: int = SIGNAL_FIELD_GRID,
|
||||
csi_data: Optional[Dict] = None,
|
||||
) -> Dict:
|
||||
"""
|
||||
Generate a 2-D signal-strength field for the Gaussian splat visualization.
|
||||
When real CSI amplitude data is available, it modulates the field.
|
||||
"""
|
||||
field = np.zeros((grid_size, grid_size), dtype=np.float64)
|
||||
|
||||
# Base noise floor
|
||||
rng = np.random.default_rng(int(abs(features.mean * 100)) % (2**31))
|
||||
field += rng.uniform(0.02, 0.08, size=(grid_size, grid_size))
|
||||
|
||||
cx, cy = grid_size // 2, grid_size // 2
|
||||
|
||||
# Radial attenuation from router
|
||||
for y in range(grid_size):
|
||||
for x in range(grid_size):
|
||||
dist = math.sqrt((x - cx) ** 2 + (y - cy) ** 2)
|
||||
attenuation = max(0.0, 1.0 - dist / (grid_size * 0.7))
|
||||
field[y, x] += attenuation * 0.3
|
||||
|
||||
# If we have real CSI subcarrier amplitudes, paint them along one axis
|
||||
if csi_data and csi_data.get("amplitude"):
|
||||
amps = np.array(csi_data["amplitude"][:grid_size], dtype=np.float64)
|
||||
if len(amps) > 0:
|
||||
max_a = np.max(amps) if np.max(amps) > 0 else 1.0
|
||||
norm_amps = amps / max_a
|
||||
# Spread subcarrier energy as vertical stripes
|
||||
for ix, a in enumerate(norm_amps):
|
||||
col = int(ix * grid_size / len(norm_amps))
|
||||
col = min(col, grid_size - 1)
|
||||
field[:, col] += a * 0.4
|
||||
|
||||
if result.presence_detected:
|
||||
body_x = cx + int(3 * math.sin(time.time() * 0.2))
|
||||
body_y = cy + int(2 * math.cos(time.time() * 0.15))
|
||||
sigma = 2.0 + features.variance * 0.5
|
||||
|
||||
for y in range(grid_size):
|
||||
for x in range(grid_size):
|
||||
dx = x - body_x
|
||||
dy = y - body_y
|
||||
blob = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
|
||||
intensity = 0.3 + 0.7 * min(1.0, features.motion_band_power * 5)
|
||||
field[y, x] += blob * intensity
|
||||
|
||||
if features.breathing_band_power > 0.01:
|
||||
breath_phase = math.sin(2 * math.pi * 0.3 * time.time())
|
||||
breath_radius = 3.0 + breath_phase * 0.8
|
||||
for y in range(grid_size):
|
||||
for x in range(grid_size):
|
||||
dist_body = math.sqrt((x - body_x) ** 2 + (y - body_y) ** 2)
|
||||
ring = math.exp(-((dist_body - breath_radius) ** 2) / 1.5)
|
||||
field[y, x] += ring * features.breathing_band_power * 2
|
||||
|
||||
field = np.clip(field, 0.0, 1.0)
|
||||
|
||||
return {
|
||||
"grid_size": [grid_size, 1, grid_size],
|
||||
"values": field.flatten().tolist(),
|
||||
}
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# WebSocket server
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
class SensingWebSocketServer:
|
||||
"""Async WebSocket server that broadcasts sensing updates."""
|
||||
|
||||
def __init__(self) -> None:
|
||||
self.clients: Set = set()
|
||||
self.collector = None
|
||||
self.extractor = RssiFeatureExtractor(window_seconds=10.0)
|
||||
self.classifier = PresenceClassifier()
|
||||
self.source: str = "unknown"
|
||||
self._running = False
|
||||
|
||||
def _create_collector(self):
|
||||
"""Auto-detect data source: ESP32 UDP > Windows WiFi > Linux WiFi > simulated."""
|
||||
# 1. Try ESP32 UDP first
|
||||
print(" Probing for ESP32 on UDP :5005 ...")
|
||||
if probe_esp32_udp(ESP32_UDP_PORT, timeout=2.0):
|
||||
logger.info("ESP32 CSI stream detected on UDP :%d", ESP32_UDP_PORT)
|
||||
self.source = "esp32"
|
||||
return Esp32UdpCollector(port=ESP32_UDP_PORT, sample_rate_hz=10.0)
|
||||
|
||||
# 2. Platform-specific WiFi
|
||||
system = platform.system()
|
||||
if system == "Windows":
|
||||
try:
|
||||
collector = WindowsWifiCollector(sample_rate_hz=2.0)
|
||||
collector.collect_once() # test that it works
|
||||
logger.info("Using WindowsWifiCollector")
|
||||
self.source = "windows_wifi"
|
||||
return collector
|
||||
except Exception as e:
|
||||
logger.warning("Windows WiFi unavailable (%s), falling back", e)
|
||||
elif system == "Linux":
|
||||
try:
|
||||
collector = LinuxWifiCollector(sample_rate_hz=10.0)
|
||||
self.source = "linux_wifi"
|
||||
return collector
|
||||
except RuntimeError:
|
||||
logger.warning("Linux WiFi unavailable, falling back")
|
||||
|
||||
# 3. Simulated
|
||||
logger.info("Using SimulatedCollector")
|
||||
self.source = "simulated"
|
||||
return SimulatedCollector(seed=42, sample_rate_hz=10.0)
|
||||
|
||||
def _build_message(self, features: RssiFeatures, result: SensingResult) -> str:
|
||||
"""Build the JSON message to broadcast."""
|
||||
# Get CSI-specific data if available
|
||||
csi_data = None
|
||||
if isinstance(self.collector, Esp32UdpCollector):
|
||||
csi_data = self.collector.last_csi
|
||||
|
||||
signal_field = generate_signal_field(features, result, csi_data=csi_data)
|
||||
|
||||
node_info = {
|
||||
"node_id": 1,
|
||||
"rssi_dbm": features.mean,
|
||||
"position": [2.0, 0.0, 1.5],
|
||||
"amplitude": [],
|
||||
"subcarrier_count": 0,
|
||||
}
|
||||
|
||||
# Enrich with real CSI data
|
||||
if csi_data:
|
||||
node_info["node_id"] = csi_data.get("node_id", 1)
|
||||
node_info["rssi_dbm"] = csi_data.get("rssi_dbm", features.mean)
|
||||
node_info["amplitude"] = csi_data.get("amplitude", [])
|
||||
node_info["subcarrier_count"] = csi_data.get("n_subcarriers", 0)
|
||||
node_info["mean_amplitude"] = csi_data.get("mean_amplitude", 0)
|
||||
node_info["freq_mhz"] = csi_data.get("freq_mhz", 0)
|
||||
node_info["sequence"] = csi_data.get("sequence", 0)
|
||||
node_info["source_addr"] = csi_data.get("source_addr", "")
|
||||
|
||||
msg = {
|
||||
"type": "sensing_update",
|
||||
"timestamp": time.time(),
|
||||
"source": self.source,
|
||||
"nodes": [node_info],
|
||||
"features": {
|
||||
"mean_rssi": features.mean,
|
||||
"variance": features.variance,
|
||||
"std": features.std,
|
||||
"motion_band_power": features.motion_band_power,
|
||||
"breathing_band_power": features.breathing_band_power,
|
||||
"dominant_freq_hz": features.dominant_freq_hz,
|
||||
"change_points": features.n_change_points,
|
||||
"spectral_power": features.total_spectral_power,
|
||||
"range": features.range,
|
||||
"iqr": features.iqr,
|
||||
"skewness": features.skewness,
|
||||
"kurtosis": features.kurtosis,
|
||||
},
|
||||
"classification": {
|
||||
"motion_level": result.motion_level.value,
|
||||
"presence": result.presence_detected,
|
||||
"confidence": round(result.confidence, 3),
|
||||
},
|
||||
"signal_field": signal_field,
|
||||
}
|
||||
return json.dumps(msg)
|
||||
|
||||
async def _handler(self, websocket):
|
||||
"""Handle a single WebSocket client connection."""
|
||||
self.clients.add(websocket)
|
||||
remote = websocket.remote_address
|
||||
logger.info("Client connected: %s", remote)
|
||||
try:
|
||||
async for _ in websocket:
|
||||
pass
|
||||
finally:
|
||||
self.clients.discard(websocket)
|
||||
logger.info("Client disconnected: %s", remote)
|
||||
|
||||
async def _broadcast(self, message: str) -> None:
|
||||
"""Send message to all connected clients."""
|
||||
if not self.clients:
|
||||
return
|
||||
disconnected = set()
|
||||
for ws in self.clients:
|
||||
try:
|
||||
await ws.send(message)
|
||||
except Exception:
|
||||
disconnected.add(ws)
|
||||
self.clients -= disconnected
|
||||
|
||||
async def _tick_loop(self) -> None:
|
||||
"""Main sensing loop."""
|
||||
while self._running:
|
||||
try:
|
||||
window = self.extractor.window_seconds
|
||||
sample_rate = self.collector.sample_rate_hz
|
||||
n_needed = int(window * sample_rate)
|
||||
samples = self.collector.get_samples(n=n_needed)
|
||||
|
||||
if len(samples) >= 4:
|
||||
features = self.extractor.extract(samples)
|
||||
result = self.classifier.classify(features)
|
||||
message = self._build_message(features, result)
|
||||
await self._broadcast(message)
|
||||
|
||||
# Print status every few ticks
|
||||
if isinstance(self.collector, Esp32UdpCollector):
|
||||
csi = self.collector.last_csi
|
||||
if csi and self.collector.frames_received % 20 == 0:
|
||||
print(
|
||||
f" [{csi['source_addr']}] node:{csi['node_id']} "
|
||||
f"seq:{csi['sequence']} sc:{csi['n_subcarriers']} "
|
||||
f"rssi:{csi['rssi_dbm']}dBm amp:{csi['mean_amplitude']:.1f} "
|
||||
f"=> {result.motion_level.value} ({result.confidence:.0%})"
|
||||
)
|
||||
else:
|
||||
logger.debug("Waiting for samples (%d/%d)", len(samples), n_needed)
|
||||
except Exception:
|
||||
logger.exception("Error in sensing tick")
|
||||
|
||||
await asyncio.sleep(TICK_INTERVAL)
|
||||
|
||||
async def run(self) -> None:
|
||||
"""Start the server and run until interrupted."""
|
||||
try:
|
||||
import websockets
|
||||
except ImportError:
|
||||
print("ERROR: 'websockets' package not found.")
|
||||
print("Install it with: pip install websockets")
|
||||
sys.exit(1)
|
||||
|
||||
self.collector = self._create_collector()
|
||||
self.collector.start()
|
||||
self._running = True
|
||||
|
||||
print(f"\n Sensing WebSocket server on ws://{HOST}:{PORT}")
|
||||
print(f" Source: {self.source}")
|
||||
print(f" Tick: {TICK_INTERVAL}s | Window: {self.extractor.window_seconds}s")
|
||||
print(" Press Ctrl+C to stop\n")
|
||||
|
||||
async with websockets.serve(self._handler, HOST, PORT):
|
||||
await self._tick_loop()
|
||||
|
||||
def stop(self) -> None:
|
||||
"""Stop the server gracefully."""
|
||||
self._running = False
|
||||
if self.collector:
|
||||
self.collector.stop()
|
||||
logger.info("Sensing server stopped")
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Entry point
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def main():
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
|
||||
)
|
||||
|
||||
server = SensingWebSocketServer()
|
||||
|
||||
loop = asyncio.new_event_loop()
|
||||
asyncio.set_event_loop(loop)
|
||||
|
||||
def _shutdown(sig, frame):
|
||||
print("\nShutting down...")
|
||||
server.stop()
|
||||
loop.stop()
|
||||
|
||||
signal.signal(signal.SIGINT, _shutdown)
|
||||
|
||||
try:
|
||||
loop.run_until_complete(server.run())
|
||||
except KeyboardInterrupt:
|
||||
pass
|
||||
finally:
|
||||
server.stop()
|
||||
loop.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
113
v1/tests/integration/live_sense_monitor.py
Normal file
113
v1/tests/integration/live_sense_monitor.py
Normal file
@@ -0,0 +1,113 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Live WiFi sensing monitor — collects RSSI from Windows WiFi and classifies
|
||||
presence/motion in real-time using the ADR-013 commodity sensing pipeline.
|
||||
|
||||
Usage:
|
||||
python v1/tests/integration/live_sense_monitor.py
|
||||
|
||||
Walk around the room (especially between laptop and router) to trigger detection.
|
||||
Press Ctrl+C to stop.
|
||||
"""
|
||||
import sys
|
||||
import time
|
||||
|
||||
from v1.src.sensing.rssi_collector import WindowsWifiCollector
|
||||
from v1.src.sensing.feature_extractor import RssiFeatureExtractor
|
||||
from v1.src.sensing.classifier import PresenceClassifier
|
||||
|
||||
SAMPLE_RATE = 2.0 # Hz (netsh is slow, 2 Hz is practical max)
|
||||
WINDOW_SEC = 15.0 # Analysis window
|
||||
REPORT_INTERVAL = 3.0 # Print classification every N seconds
|
||||
|
||||
|
||||
def main():
|
||||
collector = WindowsWifiCollector(interface="Wi-Fi", sample_rate_hz=SAMPLE_RATE)
|
||||
extractor = RssiFeatureExtractor(window_seconds=WINDOW_SEC)
|
||||
classifier = PresenceClassifier(
|
||||
presence_variance_threshold=0.3, # Lower threshold for netsh quantization
|
||||
motion_energy_threshold=0.05,
|
||||
)
|
||||
|
||||
print("=" * 65)
|
||||
print(" WiFi-DensePose Live Sensing Monitor (ADR-013)")
|
||||
print(" Pipeline: WindowsWifiCollector -> Extractor -> Classifier")
|
||||
print("=" * 65)
|
||||
print(f" Sample rate: {SAMPLE_RATE} Hz")
|
||||
print(f" Window: {WINDOW_SEC}s")
|
||||
print(f" Report every: {REPORT_INTERVAL}s")
|
||||
print()
|
||||
print(" Collecting baseline... walk around after 15s to test detection.")
|
||||
print(" Press Ctrl+C to stop.")
|
||||
print("-" * 65)
|
||||
|
||||
collector.start()
|
||||
|
||||
try:
|
||||
last_report = 0.0
|
||||
while True:
|
||||
time.sleep(0.5)
|
||||
now = time.time()
|
||||
if now - last_report < REPORT_INTERVAL:
|
||||
continue
|
||||
last_report = now
|
||||
|
||||
samples = collector.get_samples()
|
||||
n = len(samples)
|
||||
if n < 4:
|
||||
print(f" [{time.strftime('%H:%M:%S')}] Buffering... ({n} samples)")
|
||||
continue
|
||||
|
||||
rssi_vals = [s.rssi_dbm for s in samples]
|
||||
features = extractor.extract(samples)
|
||||
result = classifier.classify(features)
|
||||
|
||||
# Motion bar visualization
|
||||
bar_len = min(40, max(0, int(features.variance * 20)))
|
||||
bar = "#" * bar_len + "." * (40 - bar_len)
|
||||
|
||||
level_icon = {
|
||||
"absent": " ",
|
||||
"present_still": "🧍",
|
||||
"active": "🏃",
|
||||
}.get(result.motion_level.value, "??")
|
||||
|
||||
print(
|
||||
f" [{time.strftime('%H:%M:%S')}] "
|
||||
f"RSSI: {features.mean:6.1f} dBm | "
|
||||
f"var: {features.variance:6.3f} | "
|
||||
f"motion_e: {features.motion_band_power:7.4f} | "
|
||||
f"breath_e: {features.breathing_band_power:7.4f} | "
|
||||
f"{result.motion_level.value:14s} {level_icon} "
|
||||
f"({result.confidence:.0%})"
|
||||
)
|
||||
print(f" [{bar}] n={n} rssi=[{min(rssi_vals):.0f}..{max(rssi_vals):.0f}]")
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print()
|
||||
print("-" * 65)
|
||||
print(" Stopped. Final sample count:", len(collector.get_samples()))
|
||||
|
||||
# Print summary
|
||||
samples = collector.get_samples()
|
||||
if len(samples) >= 4:
|
||||
features = extractor.extract(samples)
|
||||
result = classifier.classify(features)
|
||||
rssi_vals = [s.rssi_dbm for s in samples]
|
||||
print()
|
||||
print(" SUMMARY")
|
||||
print(f" Duration: {samples[-1].timestamp - samples[0].timestamp:.1f}s")
|
||||
print(f" Total samples: {len(samples)}")
|
||||
print(f" RSSI range: {min(rssi_vals):.1f} to {max(rssi_vals):.1f} dBm")
|
||||
print(f" RSSI variance: {features.variance:.4f}")
|
||||
print(f" Motion energy: {features.motion_band_power:.4f}")
|
||||
print(f" Breath energy: {features.breathing_band_power:.4f}")
|
||||
print(f" Change points: {features.n_change_points}")
|
||||
print(f" Final verdict: {result.motion_level.value} ({result.confidence:.0%})")
|
||||
print("=" * 65)
|
||||
finally:
|
||||
collector.stop()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
156
v1/tests/integration/test_windows_live_sensing.py
Normal file
156
v1/tests/integration/test_windows_live_sensing.py
Normal file
@@ -0,0 +1,156 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Live integration test: WindowsWifiCollector → FeatureExtractor → Classifier.
|
||||
|
||||
Runs the full ADR-013 commodity sensing pipeline against a real Windows WiFi
|
||||
interface using ``netsh wlan show interfaces`` as the RSSI source.
|
||||
|
||||
Usage:
|
||||
python -m pytest v1/tests/integration/test_windows_live_sensing.py -v -o "addopts=" -s
|
||||
|
||||
Requirements:
|
||||
- Windows with connected WiFi
|
||||
- scipy, numpy installed
|
||||
"""
|
||||
import platform
|
||||
import subprocess
|
||||
import sys
|
||||
import time
|
||||
|
||||
import pytest
|
||||
|
||||
# Skip the entire module on non-Windows or when WiFi is disconnected
|
||||
_IS_WINDOWS = platform.system() == "Windows"
|
||||
|
||||
def _wifi_connected() -> bool:
|
||||
if not _IS_WINDOWS:
|
||||
return False
|
||||
try:
|
||||
r = subprocess.run(
|
||||
["netsh", "wlan", "show", "interfaces"],
|
||||
capture_output=True, text=True, timeout=5,
|
||||
)
|
||||
return "connected" in r.stdout.lower() and "disconnected" not in r.stdout.lower().split("state")[1][:30]
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
pytestmark = pytest.mark.skipif(
|
||||
not (_IS_WINDOWS and _wifi_connected()),
|
||||
reason="Requires Windows with connected WiFi",
|
||||
)
|
||||
|
||||
from v1.src.sensing.rssi_collector import WindowsWifiCollector, WifiSample
|
||||
from v1.src.sensing.feature_extractor import RssiFeatureExtractor, RssiFeatures
|
||||
from v1.src.sensing.classifier import PresenceClassifier, MotionLevel, SensingResult
|
||||
from v1.src.sensing.backend import CommodityBackend, Capability
|
||||
|
||||
|
||||
class TestWindowsWifiCollectorLive:
|
||||
"""Live tests against real Windows WiFi hardware."""
|
||||
|
||||
def test_collect_once_returns_valid_sample(self):
|
||||
collector = WindowsWifiCollector(interface="Wi-Fi", sample_rate_hz=1.0)
|
||||
sample = collector.collect_once()
|
||||
|
||||
assert isinstance(sample, WifiSample)
|
||||
assert -100 <= sample.rssi_dbm <= 0, f"RSSI {sample.rssi_dbm} out of range"
|
||||
assert sample.noise_dbm <= 0
|
||||
assert 0.0 <= sample.link_quality <= 1.0
|
||||
assert sample.interface == "Wi-Fi"
|
||||
print(f"\n Single sample: RSSI={sample.rssi_dbm} dBm, "
|
||||
f"quality={sample.link_quality:.0%}, ts={sample.timestamp:.3f}")
|
||||
|
||||
def test_collect_multiple_samples_over_time(self):
|
||||
collector = WindowsWifiCollector(interface="Wi-Fi", sample_rate_hz=2.0)
|
||||
collector.start()
|
||||
time.sleep(6) # Collect ~12 samples at 2 Hz
|
||||
collector.stop()
|
||||
|
||||
samples = collector.get_samples()
|
||||
assert len(samples) >= 5, f"Expected >= 5 samples, got {len(samples)}"
|
||||
|
||||
rssi_values = [s.rssi_dbm for s in samples]
|
||||
print(f"\n Collected {len(samples)} samples over ~6s")
|
||||
print(f" RSSI range: {min(rssi_values):.1f} to {max(rssi_values):.1f} dBm")
|
||||
print(f" RSSI values: {[f'{v:.1f}' for v in rssi_values]}")
|
||||
|
||||
# All RSSI values should be in valid range
|
||||
for s in samples:
|
||||
assert -100 <= s.rssi_dbm <= 0
|
||||
|
||||
def test_rssi_varies_between_samples(self):
|
||||
"""RSSI should show at least slight natural variation."""
|
||||
collector = WindowsWifiCollector(interface="Wi-Fi", sample_rate_hz=2.0)
|
||||
collector.start()
|
||||
time.sleep(8) # Collect ~16 samples
|
||||
collector.stop()
|
||||
|
||||
samples = collector.get_samples()
|
||||
rssi_values = [s.rssi_dbm for s in samples]
|
||||
|
||||
# With real hardware, we expect some variation (even if small)
|
||||
# But netsh may quantize RSSI so identical values are possible
|
||||
unique_count = len(set(rssi_values))
|
||||
print(f"\n {len(rssi_values)} samples, {unique_count} unique RSSI values")
|
||||
print(f" Values: {rssi_values}")
|
||||
|
||||
|
||||
class TestFullPipelineLive:
|
||||
"""End-to-end: WindowsWifiCollector → Extractor → Classifier."""
|
||||
|
||||
def test_full_pipeline_produces_sensing_result(self):
|
||||
collector = WindowsWifiCollector(interface="Wi-Fi", sample_rate_hz=2.0)
|
||||
extractor = RssiFeatureExtractor(window_seconds=10.0)
|
||||
classifier = PresenceClassifier()
|
||||
|
||||
collector.start()
|
||||
time.sleep(10) # Collect ~20 samples
|
||||
collector.stop()
|
||||
|
||||
samples = collector.get_samples()
|
||||
assert len(samples) >= 5, f"Need >= 5 samples, got {len(samples)}"
|
||||
|
||||
features = extractor.extract(samples)
|
||||
assert isinstance(features, RssiFeatures)
|
||||
assert features.n_samples >= 5
|
||||
print(f"\n Features from {features.n_samples} samples:")
|
||||
print(f" mean={features.mean:.2f} dBm")
|
||||
print(f" variance={features.variance:.4f}")
|
||||
print(f" std={features.std:.4f}")
|
||||
print(f" range={features.range:.2f}")
|
||||
print(f" dominant_freq={features.dominant_freq_hz:.3f} Hz")
|
||||
print(f" breathing_band={features.breathing_band_power:.4f}")
|
||||
print(f" motion_band={features.motion_band_power:.4f}")
|
||||
print(f" spectral_power={features.total_spectral_power:.4f}")
|
||||
print(f" change_points={features.n_change_points}")
|
||||
|
||||
result = classifier.classify(features)
|
||||
assert isinstance(result, SensingResult)
|
||||
assert isinstance(result.motion_level, MotionLevel)
|
||||
assert 0.0 <= result.confidence <= 1.0
|
||||
print(f"\n Classification:")
|
||||
print(f" motion_level={result.motion_level.value}")
|
||||
print(f" presence={result.presence_detected}")
|
||||
print(f" confidence={result.confidence:.2%}")
|
||||
print(f" details: {result.details}")
|
||||
|
||||
def test_commodity_backend_with_windows_collector(self):
|
||||
collector = WindowsWifiCollector(interface="Wi-Fi", sample_rate_hz=2.0)
|
||||
backend = CommodityBackend(collector=collector)
|
||||
|
||||
assert backend.get_capabilities() == {Capability.PRESENCE, Capability.MOTION}
|
||||
|
||||
backend.start()
|
||||
time.sleep(10)
|
||||
result = backend.get_result()
|
||||
backend.stop()
|
||||
|
||||
assert isinstance(result, SensingResult)
|
||||
print(f"\n CommodityBackend result:")
|
||||
print(f" motion={result.motion_level.value}")
|
||||
print(f" presence={result.presence_detected}")
|
||||
print(f" confidence={result.confidence:.2%}")
|
||||
print(f" rssi_variance={result.rssi_variance:.4f}")
|
||||
print(f" motion_energy={result.motion_band_energy:.4f}")
|
||||
print(f" breathing_energy={result.breathing_band_energy:.4f}")
|
||||
Reference in New Issue
Block a user