feat: Sensing-only UI mode with Gaussian splat visualization and Rust migration ADR

- Add Python WebSocket sensing server (ws_server.py) with ESP32 UDP CSI
  and Windows RSSI auto-detect collectors on port 8765
- Add Three.js Gaussian splat renderer with custom GLSL shaders for
  real-time WiFi signal field visualization (blue→green→red gradient)
- Add SensingTab component with RSSI sparkline, feature meters, and
  motion classification badge
- Add sensing.service.js WebSocket client with reconnect and simulation fallback
- Implement sensing-only mode: suppress all DensePose API calls when
  FastAPI backend (port 8000) is not running, clean console output
- ADR-019: Document sensing-only UI architecture and data flow
- ADR-020: Migrate AI/model inference to Rust with RuVector ONNX Runtime,
  replacing ~2.7GB Python stack with ~50MB static binary
- Add ruvnet/ruvector as upstream remote for RuVector crate ecosystem

Co-Authored-By: claude-flow <ruv@ruv.net>
Author: ruv
Date: 2026-02-28 14:37:29 -05:00
Commit: b7e0f07e6e (parent 6e4cb0ad5b)
20 changed files with 2551 additions and 24 deletions


@@ -1,7 +1,7 @@
# ADR-013: Feature-Level Sensing on Commodity Gear (Option 3)
## Status
Proposed → Accepted — Implemented (36/36 unit tests pass, see `v1/src/sensing/` and `v1/tests/unit/test_sensing.py`)
## Date
2026-02-28
@@ -373,6 +373,24 @@ class CommodityBackend(SensingBackend):
- **Not a "pose estimation" demo**: This module honestly cannot do what the project name implies
- **Lower credibility ceiling**: RSSI sensing is well-known; less impressive than CSI
### Implementation Status
The full commodity sensing pipeline is implemented in `v1/src/sensing/`:
| Module | File | Description |
|--------|------|-------------|
| RSSI Collector | `rssi_collector.py` | `LinuxWifiCollector` (live hardware) + `SimulatedCollector` (deterministic testing) with ring buffer |
| Feature Extractor | `feature_extractor.py` | `RssiFeatureExtractor` with Hann-windowed FFT, band power (breathing 0.1-0.5 Hz, motion 0.5-3 Hz), CUSUM change-point detection |
| Classifier | `classifier.py` | `PresenceClassifier` with ABSENT/PRESENT_STILL/ACTIVE levels, confidence scoring |
| Backend | `backend.py` | `CommodityBackend` wiring collector → extractor → classifier, reports PRESENCE + MOTION capabilities |
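The band-power step in `RssiFeatureExtractor` can be sketched as follows. This is a minimal self-contained version using numpy only; the exact windowing, normalization, and buffer handling in `feature_extractor.py` may differ.

```python
import numpy as np

def band_power(rssi: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Power of an RSSI trace within [f_lo, f_hi] Hz via a Hann-windowed FFT."""
    x = rssi - rssi.mean()                      # remove DC so the static offset doesn't dominate
    w = np.hanning(len(x))                      # Hann window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(x * w)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spectrum[mask].sum())

# Synthetic trace: a 0.3 Hz "breathing" ripple on -40 dBm, sampled at 10 Hz for 30 s
fs = 10.0
t = np.arange(0, 30, 1 / fs)
trace = -40 + 0.5 * np.sin(2 * np.pi * 0.3 * t)

breathing = band_power(trace, fs, 0.1, 0.5)   # breathing band (0.1-0.5 Hz)
motion = band_power(trace, fs, 0.5, 3.0)      # motion band (0.5-3 Hz)
```

For this synthetic input, essentially all the power lands in the breathing band, which is how the classifier distinguishes a still-but-present person from an empty room.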
**Test coverage**: 36 tests in `v1/tests/unit/test_sensing.py` — all passing:
- `TestRingBuffer` (4), `TestSimulatedCollector` (5), `TestFeatureExtractor` (8), `TestCusum` (4), `TestPresenceClassifier` (7), `TestCommodityBackend` (6), `TestBandPower` (2)
**Dependencies**: `numpy`, `scipy` (for FFT and spectral analysis)
**Note**: `LinuxWifiCollector` requires a connected Linux WiFi interface (`/proc/net/wireless` or `iw`). On Windows or disconnected interfaces, use `SimulatedCollector` for development and testing.
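The CUSUM change-point detection mentioned above can be sketched as a one-sided detector; the `threshold` and `drift` values below are illustrative assumptions, not the tuned constants in `feature_extractor.py`.

```python
def cusum(samples, target, threshold=5.0, drift=0.5):
    """One-sided CUSUM: return the index of the first upward change, or -1.

    Accumulates deviations above `target` (minus `drift` slack) and fires
    when the running sum exceeds `threshold`.
    """
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target) - drift)
        if s > threshold:
            return i
    return -1

# Stable RSSI around -40 dBm, then a 7 dB jump at index 20 (e.g. a person entering)
data = [-40.0] * 20 + [-33.0] * 10
idx = cusum(data, target=-40.0)
```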
## References
- [Youssef et al. - Challenges in Device-Free Passive Localization](https://doi.org/10.1145/1287853.1287880)


@@ -0,0 +1,122 @@
# ADR-019: Sensing-Only UI Mode with Gaussian Splat Visualization
| Field | Value |
|-------|-------|
| **Status** | Accepted |
| **Date** | 2026-02-28 |
| **Deciders** | ruv |
| **Relates to** | ADR-013 (Feature-Level Sensing), ADR-018 (ESP32 Dev Implementation) |
## Context
The WiFi-DensePose UI was originally built to require the full FastAPI DensePose backend (`localhost:8000`) for all functionality. This backend depends on heavy Python packages (PyTorch ~2 GB, torchvision, OpenCV, SQLAlchemy, Redis), making it impractical for lightweight sensing-only deployments where the user simply wants to visualize live WiFi signal data from ESP32 CSI or Windows RSSI collectors.
A Rust port exists (`rust-port/wifi-densepose-rs`) using Axum with lighter runtime footprint (~10MB binary, ~5MB RAM), but it still requires libtorch C++ bindings and OpenBLAS for compilation—a non-trivial build.
Users need a way to run the UI with **only the sensing pipeline** active, without installing the full DensePose backend stack.
## Decision
Implement a **sensing-only UI mode** that:
1. **Decouples the sensing pipeline** from the DensePose API backend. The sensing WebSocket server (`ws_server.py` on port 8765) operates independently of the FastAPI backend (port 8000).
2. **Auto-detects sensing-only mode** at startup. When the DensePose backend is unreachable, the UI sets `backendDetector.sensingOnlyMode = true` and:
- Suppresses all API requests to `localhost:8000` at the `ApiService.request()` level
- Skips initialization of DensePose-dependent tabs (Dashboard, Hardware, Live Demo)
- Shows a green "Sensing mode" status toast instead of error banners
- Silences health monitoring polls
3. **Adds a new "Sensing" tab** with Three.js Gaussian splat visualization:
- Custom GLSL `ShaderMaterial` rendering point-cloud splats on a 20×20 floor grid
- Signal field splats colored by intensity (blue → green → red)
- Body disruption blob at estimated motion position
- Breathing ring modulation when breathing-band power detected
- Side panel with RSSI sparkline, feature meters, and classification badge
4. **Python WebSocket bridge** (`v1/src/sensing/ws_server.py`) that:
- Auto-detects ESP32 UDP CSI stream on port 5005 (ADR-018 binary frames)
- Falls back to `WindowsWifiCollector` → `SimulatedCollector`
- Runs the `RssiFeatureExtractor` → `PresenceClassifier` pipeline
- Broadcasts JSON sensing updates every 500ms on `ws://localhost:8765`
5. **Client-side fallback**: `sensing.service.js` generates simulated data when the WebSocket server is unreachable, so the visualization always works.
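The auto-detect cascade in step 4 can be sketched as a probe for live ESP32 UDP traffic. This is a simplified stand-in: the collector classes themselves are not reproduced, and the Windows and simulated fallbacks are collapsed into a single result.

```python
import socket

def detect_source(udp_port: int = 5005, timeout: float = 0.3) -> str:
    """Return 'esp32' if CSI datagrams arrive on the UDP port, else 'fallback'.

    The real server would then try WindowsWifiCollector before
    SimulatedCollector; here both branches are collapsed into 'fallback'.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.bind(("0.0.0.0", udp_port))
        sock.recvfrom(4096)       # any datagram counts as a live ESP32 stream
        return "esp32"
    except OSError:               # timeout, port in use, or no traffic
        return "fallback"
    finally:
        sock.close()

source = detect_source()
```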
## Architecture
```
ESP32 (UDP :5005) ──┐
                    ├──▶ ws_server.py (:8765) ──▶ sensing.service.js ──▶ SensingTab.js
Windows WiFi RSSI ──┘             │                        │                   │
                        Feature extraction        WebSocket client     gaussian-splats.js
                        + Classification          + Reconnect       (Three.js ShaderMaterial)
                                                  + Sim fallback
```
### Data flow
| Source | Collector | Feature Extraction | Output |
|--------|-----------|-------------------|--------|
| ESP32 CSI (ADR-018) | `Esp32UdpCollector` (UDP :5005) | Amplitude mean → pseudo-RSSI → `RssiFeatureExtractor` | `sensing_update` JSON |
| Windows WiFi | `WindowsWifiCollector` (netsh) | RSSI + signal% → `RssiFeatureExtractor` | `sensing_update` JSON |
| Simulated | `SimulatedCollector` | Synthetic RSSI patterns | `sensing_update` JSON |
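The ESP32 row's amplitude-mean → pseudo-RSSI step can be sketched as below. The log-scaling and the `offset_db` calibration constant are illustrative assumptions, not the exact formula in `ws_server.py`.

```python
import math

def pseudo_rssi(amplitudes, offset_db: float = 60.0) -> float:
    """Collapse per-subcarrier CSI amplitudes into a single dBm-like value.

    20*log10(mean amplitude) puts linear amplitude on a dB scale; offset_db
    shifts it into a typical RSSI range (both are assumed calibration choices).
    """
    mean_amp = sum(amplitudes) / len(amplitudes)
    return 20.0 * math.log10(max(mean_amp, 1e-9)) - offset_db

# 56 subcarriers (ESP32 CSI frame), all near amplitude 10
rssi = pseudo_rssi([10.0] * 56)
```

The point of the collapse is that the downstream `RssiFeatureExtractor` only needs one scalar per frame, so CSI and plain RSSI sources share the same pipeline.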
### Sensing update JSON schema
```json
{
  "type": "sensing_update",
  "timestamp": 1234567890.123,
  "source": "esp32",
  "nodes": [{ "node_id": 1, "rssi_dbm": -39, "position": [2, 0, 1.5], "amplitude": [...], "subcarrier_count": 56 }],
  "features": { "mean_rssi": -39.0, "variance": 2.34, "motion_band_power": 0.45, ... },
  "classification": { "motion_level": "active", "presence": true, "confidence": 0.87 },
  "signal_field": { "grid_size": [20, 1, 20], "values": [...] }
}
```
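A server-side helper that assembles this message might look like the following; the field names follow the schema above, while the helper name and example values are illustrative.

```python
import json
import time

def make_sensing_update(source, nodes, features, classification, signal_field):
    """Assemble one sensing_update frame matching the JSON schema above."""
    return {
        "type": "sensing_update",
        "timestamp": time.time(),
        "source": source,
        "nodes": nodes,
        "features": features,
        "classification": classification,
        "signal_field": signal_field,
    }

msg = make_sensing_update(
    source="simulated",
    nodes=[{"node_id": 1, "rssi_dbm": -39, "position": [2, 0, 1.5]}],
    features={"mean_rssi": -39.0, "variance": 2.34},
    classification={"motion_level": "active", "presence": True, "confidence": 0.87},
    signal_field={"grid_size": [20, 1, 20], "values": []},
)
payload = json.dumps(msg)   # what ws_server.py would broadcast every 500 ms
```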
## Files
### Created
| File | Purpose |
|------|---------|
| `v1/src/sensing/ws_server.py` | Python asyncio WebSocket server with auto-detect collectors |
| `ui/components/SensingTab.js` | Sensing tab UI with Three.js integration |
| `ui/components/gaussian-splats.js` | Custom GLSL Gaussian splat renderer |
| `ui/services/sensing.service.js` | WebSocket client with reconnect + simulation fallback |
### Modified
| File | Change |
|------|--------|
| `ui/index.html` | Added Sensing nav tab button and content section |
| `ui/app.js` | Sensing-only mode detection, conditional tab init |
| `ui/style.css` | Sensing tab layout and component styles |
| `ui/config/api.config.js` | `AUTO_DETECT: false` (sensing uses its own WebSocket) |
| `ui/services/api.service.js` | Short-circuit requests in sensing-only mode |
| `ui/services/health.service.js` | Skip polling when backend unreachable |
| `ui/components/DashboardTab.js` | Graceful failure in sensing-only mode |
## Consequences
### Positive
- UI works with zero heavy dependencies: only `pip install websockets` is needed (numpy and scipy are already installed for the sensing pipeline)
- ESP32 CSI data flows end-to-end without PyTorch, OpenCV, or database
- Existing DensePose tabs still work when the full backend is running
- Clean console output—no `ERR_CONNECTION_REFUSED` spam in sensing-only mode
### Negative
- Two separate WebSocket endpoints: `:8765` (sensing) and `:8000/api/v1/stream/pose` (DensePose)
- Pose estimation, zone occupancy, and historical data features unavailable in sensing-only mode
- Client-side simulation fallback may mislead users if they don't notice the "Simulated" badge
### Neutral
- Rust Axum backend remains a future option for a unified lightweight server
- The sensing pipeline reuses the existing `RssiFeatureExtractor` and `PresenceClassifier` classes unchanged
## Alternatives Considered
1. **Install minimal FastAPI** (`pip install fastapi uvicorn pydantic`): Starts the server but pose endpoints return errors without PyTorch.
2. **Build Rust backend**: Single binary, but requires libtorch + OpenBLAS build toolchain.
3. **Merge sensing into FastAPI**: Would require FastAPI installed even for sensing-only use.
Option 1 was rejected because it still shows broken tabs. The chosen approach cleanly separates concerns.


@@ -0,0 +1,157 @@
# ADR-020: Migrate AI/Model Inference to Rust with RuVector and ONNX Runtime
| Field | Value |
|-------|-------|
| **Status** | Accepted |
| **Date** | 2026-02-28 |
| **Deciders** | ruv |
| **Relates to** | ADR-016 (RuVector Integration), ADR-017 (RuVector-Signal-MAT), ADR-019 (Sensing-Only UI) |
## Context
The current Python DensePose backend requires ~2.7 GB of dependencies:
| Python Dependency | Size | Purpose |
|-------------------|------|---------|
| PyTorch | ~2.0 GB | Neural network inference |
| torchvision | ~500 MB | Model loading, transforms |
| OpenCV | ~100 MB | Image processing |
| SQLAlchemy + asyncpg | ~20 MB | Database |
| scikit-learn | ~50 MB | Classification |
| **Total** | **~2.7 GB** | |
This makes the DensePose backend impractical for edge deployments, CI pipelines, and developer laptops where users only need WiFi sensing + pose estimation.
Meanwhile, the Rust port at `rust-port/wifi-densepose-rs/` already has:
- **12 workspace crates** covering core, signal, nn, api, db, config, hardware, wasm, cli, mat, train
- **5 RuVector crates** (v2.0.4, published on crates.io) integrated into signal, mat, and train crates
- **3 NN backends**: ONNX Runtime (default), tch (PyTorch C++), Candle (pure Rust)
- **Axum web framework** with WebSocket support in the MAT crate
- **Signal processing pipeline**: CSI processor, BVP, Fresnel geometry, spectrogram, subcarrier selection, motion detection, Hampel filter, phase sanitizer
## Decision
Adopt the Rust workspace as the **primary backend** for AI/model inference and signal processing, replacing the Python FastAPI stack for production deployments.
### Phase 1: ONNX Runtime Default (No libtorch)
Use the `wifi-densepose-nn` crate with only its default `onnx` feature enabled. This avoids the libtorch C++ dependency entirely.
| Component | Rust Crate | Replaces Python |
|-----------|-----------|-----------------|
| CSI processing | `wifi-densepose-signal::csi_processor` | `v1/src/sensing/feature_extractor.py` |
| Motion detection | `wifi-densepose-signal::motion` | `v1/src/sensing/classifier.py` |
| BVP extraction | `wifi-densepose-signal::bvp` | N/A (new capability) |
| Fresnel geometry | `wifi-densepose-signal::fresnel` | N/A (new capability) |
| Subcarrier selection | `wifi-densepose-signal::subcarrier_selection` | N/A (new capability) |
| Spectrogram | `wifi-densepose-signal::spectrogram` | N/A (new capability) |
| Pose inference | `wifi-densepose-nn::onnx` | PyTorch + torchvision |
| DensePose mapping | `wifi-densepose-nn::densepose` | Python DensePose |
| REST API | `wifi-densepose-mat::api` (Axum) | FastAPI |
| WebSocket stream | `wifi-densepose-mat::api::websocket` | `ws_server.py` |
| Survivor detection | `wifi-densepose-mat::detection` | N/A (new capability) |
| Vital signs | `wifi-densepose-mat::ml` | N/A (new capability) |
### Phase 2: RuVector Signal Intelligence
The 5 RuVector crates provide subpolynomial algorithms already wired into the Rust signal pipeline:
| Crate | Algorithm | Use in Pipeline |
|-------|-----------|-----------------|
| `ruvector-mincut` | Subpolynomial min-cut | Dynamic subcarrier partitioning (sensitive vs insensitive) |
| `ruvector-attn-mincut` | Attention-gated min-cut | Noise-suppressed spectrogram generation |
| `ruvector-attention` | Sensitivity-weighted attention | Body velocity profile extraction |
| `ruvector-solver` | Sparse Fresnel solver | TX-body-RX distance estimation |
| `ruvector-temporal-tensor` | Compressed temporal buffers | Breathing + heartbeat spectrogram storage |
These replace the Python `RssiFeatureExtractor` with hardware-aware, subcarrier-level feature extraction.
### Phase 3: Unified Axum Server
Replace both the Python FastAPI backend (port 8000) and the Python sensing WebSocket (port 8765) with a single Rust Axum server:
```
ESP32 (UDP :5005) ──▶ Rust Axum server (:8000) ──▶ UI (browser)
├── /health/* (health checks)
├── /api/v1/pose/* (pose estimation)
├── /api/v1/stream/* (WebSocket pose stream)
├── /ws/sensing (sensing WebSocket — replaces :8765)
└── /ws/mat/stream (MAT domain events)
```
### Build Configuration
```bash
# Lightweight build — no libtorch, no OpenBLAS
cargo build --release -p wifi-densepose-mat --no-default-features --features "std,api,onnx"
# Full build with all backends
cargo build --release --features "all-backends"
```
### Dependency Comparison
| | Python Backend | Rust Backend (ONNX only) |
|---|---|---|
| Install size | ~2.7 GB | ~50 MB binary |
| Runtime memory | ~500 MB | ~20 MB |
| Startup time | 3-5s | <100ms |
| Dependencies | 30+ pip packages | Single static binary |
| GPU support | CUDA via PyTorch | CUDA via ONNX Runtime |
| Model format | .pt/.pth (PyTorch) | .onnx (portable) |
| Cross-compile | Difficult | `cargo build --target` |
| WASM target | No | Yes (`wifi-densepose-wasm`) |
### Model Conversion
Export existing PyTorch models to ONNX for the Rust backend:
```python
# One-time conversion (Python)
import torch
model = torch.load("model.pth")
model.eval()
# The dummy input must match the model's expected input shape (example shown)
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)
```
The `wifi-densepose-nn::onnx` module loads `.onnx` files directly.
## Consequences
### Positive
- Single ~50MB static binary replaces ~2.7GB Python environment
- ~20MB runtime memory vs ~500MB
- Sub-100ms startup vs 3-5 seconds
- Single port serves all endpoints (API, WebSocket sensing, WebSocket pose)
- RuVector subpolynomial algorithms run natively (no FFI overhead)
- WASM build target enables browser-side inference
- Cross-compilation for ARM (Raspberry Pi), ESP32-S3, etc.
### Negative
- ONNX model conversion required (one-time step per model)
- Developers need Rust toolchain for backend changes
- The Python sensing pipeline (`ws_server.py`) must still be maintained for rapid prototyping, leaving two stacks in play
- `ndarray-linalg` requires OpenBLAS or system LAPACK for some signal crates
## Migration Path
1. Keep Python `ws_server.py` as fallback for development/prototyping
2. Build Rust binary with `cargo build --release -p wifi-densepose-mat`
3. UI detects which backend is running and adapts (existing `sensingOnlyMode` logic)
4. Deprecate Python backend once Rust API reaches feature parity
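Step 3's backend detection can be sketched as a TCP probe against the two ports this ADR discusses; the function name, timeout, and mode labels are illustrative, not the actual `sensingOnlyMode` implementation.

```python
import socket

def backend_mode(host: str = "127.0.0.1", api_port: int = 8000,
                 sensing_port: int = 8765, timeout: float = 0.2) -> str:
    """Classify which backend is reachable: 'full', 'sensing-only', or 'none'."""
    def is_open(port: int) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    if is_open(api_port):
        return "full"          # Rust Axum or Python FastAPI serving :8000
    if is_open(sensing_port):
        return "sensing-only"  # only ws_server.py (:8765) is up
    return "none"              # UI falls back to client-side simulation

mode = backend_mode()
```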
## Verification
```bash
# Build the Rust workspace (ONNX-only, no libtorch)
cd rust-port/wifi-densepose-rs
cargo check --workspace
# Build release binary
cargo build --release -p wifi-densepose-mat --no-default-features --features "std,api,onnx"
# Run tests
cargo test --workspace
# Binary size
ls -lh target/release/wifi-densepose-mat
```