Compare commits


3 Commits

Author SHA1 Message Date
Yossi Elkrief
20a9b21027 feat: Live screen with 3D Gaussian splat WebView 2026-03-02 12:57:43 +02:00
Yossi Elkrief
779bf8ff43 feat: Phase 3 — services, stores, navigation, design system
Services: ws.service, api.service, simulation.service, rssi.service (android+ios)
Stores: poseStore, settingsStore, matStore (Zustand)
Types: sensing, mat, api, navigation
Hooks: usePoseStream, useRssiScanner, useServerReachability
Theme: colors, typography, spacing, ThemeContext
Navigation: MainTabs (5 tabs), RootNavigator, types
Components: GaugeArc, SparklineChart, OccupancyGrid, StatusDot, ConnectionBanner, SignalBar, +more
Utils: ringBuffer, colorMap, formatters, urlValidator

Verified: tsc 0 errors, jest passes
2026-03-02 12:53:45 +02:00
Yossi Elkrief
fbd7d837c7 feat: Expo mobile scaffold — Phase 2 complete (118-file structure)
Expo SDK 51 TypeScript scaffold with all architecture files.
Verified: tsc 0 errors, jest passes.
2026-03-02 12:45:40 +02:00
198 changed files with 20174 additions and 28213 deletions

CLAUDE.md

@@ -4,49 +4,13 @@
WiFi-based human pose estimation using Channel State Information (CSI).
Dual codebase: Python v1 (`v1/`) and Rust port (`rust-port/wifi-densepose-rs/`).
### Key Rust Crates
| Crate | Description |
|-------|-------------|
| `wifi-densepose-core` | Core types, traits, error types, CSI frame primitives |
| `wifi-densepose-signal` | SOTA signal processing + RuvSense multistatic sensing (14 modules) |
| `wifi-densepose-nn` | Neural network inference (ONNX, PyTorch, Candle backends) |
| `wifi-densepose-train` | Training pipeline with ruvector integration + ruview_metrics |
| `wifi-densepose-mat` | Mass Casualty Assessment Tool — disaster survivor detection |
| `wifi-densepose-hardware` | ESP32 aggregator, TDM protocol, channel hopping firmware |
| `wifi-densepose-ruvector` | RuVector v2.0.4 integration + cross-viewpoint fusion (5 modules) |
| `wifi-densepose-api` | REST API (Axum) |
| `wifi-densepose-db` | Database layer (Postgres, SQLite, Redis) |
| `wifi-densepose-config` | Configuration management |
| `wifi-densepose-wasm` | WebAssembly bindings for browser deployment |
| `wifi-densepose-cli` | CLI tool (`wifi-densepose` binary) |
| `wifi-densepose-sensing-server` | Lightweight Axum server for WiFi sensing UI |
| `wifi-densepose-wifiscan` | Multi-BSSID WiFi scanning (ADR-022) |
| `wifi-densepose-vitals` | ESP32 CSI-grade vital sign extraction (ADR-021) |
### RuvSense Modules (`signal/src/ruvsense/`)
| Module | Purpose |
|--------|---------|
| `multiband.rs` | Multi-band CSI frame fusion, cross-channel coherence |
| `phase_align.rs` | Iterative LO phase offset estimation, circular mean |
| `multistatic.rs` | Attention-weighted fusion, geometric diversity |
| `coherence.rs` | Z-score coherence scoring, DriftProfile |
| `coherence_gate.rs` | Accept/PredictOnly/Reject/Recalibrate gate decisions |
| `pose_tracker.rs` | 17-keypoint Kalman tracker with AETHER re-ID embeddings |
| `field_model.rs` | SVD room eigenstructure, perturbation extraction |
| `tomography.rs` | RF tomography, ISTA L1 solver, voxel grid |
| `longitudinal.rs` | Welford stats, biomechanics drift detection |
| `intention.rs` | Pre-movement lead signals (200-500ms) |
| `cross_room.rs` | Environment fingerprinting, transition graph |
| `gesture.rs` | DTW template matching gesture classifier |
| `adversarial.rs` | Physically impossible signal detection, multi-link consistency |
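As a rough illustration of the circular-mean technique named for `phase_align.rs`: averaging phase offsets as unit phasors avoids the wrap-around bias at ±π that a plain arithmetic mean suffers. This is a generic sketch under that assumption, not the crate's actual API.

```rust
/// Circular mean of a set of angles (radians): sum the unit phasors
/// e^{jθ}, then take the angle of the resultant vector.
fn circular_mean(angles: &[f64]) -> f64 {
    let (sin_sum, cos_sum) = angles
        .iter()
        .fold((0.0_f64, 0.0_f64), |(s, c), &a| (s + a.sin(), c + a.cos()));
    sin_sum.atan2(cos_sum)
}

fn main() {
    // Angles clustered around ±π: a naive arithmetic mean of [3.1, -3.1]
    // would give ~0, but the circular mean correctly stays near π.
    let m = circular_mean(&[3.1, -3.1]);
    assert!((m.abs() - std::f64::consts::PI).abs() < 0.05);

    // Angles clustered around 0 average to ~0 either way.
    assert!(circular_mean(&[0.2, -0.2]).abs() < 1e-6);
}
```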
### Cross-Viewpoint Fusion (`ruvector/src/viewpoint/`)
| Module | Purpose |
|--------|---------|
| `attention.rs` | CrossViewpointAttention, GeometricBias, softmax with G_bias |
| `geometry.rs` | GeometricDiversityIndex, Cramer-Rao bounds, Fisher Information |
| `coherence.rs` | Phase phasor coherence, hysteresis gate |
| `fusion.rs` | MultistaticArray aggregate root, domain events |
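The "softmax with G_bias" entry in `attention.rs` can be sketched generically: attention logits receive an additive geometric bias before normalization, so links with favourable node geometry get more fusion weight. The function below is an illustration of that pattern, not the crate's real API.

```rust
/// Softmax over scores with an additive geometric bias per element.
/// Hypothetical helper; the real `attention.rs` signature may differ.
fn softmax_with_bias(scores: &[f64], g_bias: &[f64]) -> Vec<f64> {
    let logits: Vec<f64> = scores.iter().zip(g_bias).map(|(s, b)| s + b).collect();
    // Subtract the max logit for numerical stability before exponentiating.
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|l| (l - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // Equal raw scores, but the first link gets a geometry bonus.
    let w = softmax_with_bias(&[1.0, 1.0, 1.0], &[0.5, 0.0, 0.0]);
    assert!((w.iter().sum::<f64>() - 1.0).abs() < 1e-12); // weights normalize
    assert!(w[0] > w[1]); // biased link dominates
}
```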
- `wifi-densepose-signal` — SOTA signal processing (conjugate mult, Hampel, Fresnel, BVP, spectrogram)
- `wifi-densepose-train` — Training pipeline with ruvector integration (ADR-016)
- `wifi-densepose-mat` — Disaster detection module (MAT, multi-AP, triage)
- `wifi-densepose-nn` — Neural network inference (DensePose head, RCNN)
- `wifi-densepose-hardware` — ESP32 aggregator, hardware interfaces
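The Hampel step named in the signal-crate description is a standard sliding-window outlier rejection: replace a sample when it deviates from the windowed median by more than k scaled MADs. A minimal generic sketch (illustrative names, not the crate's API):

```rust
fn median(v: &mut Vec<f64>) -> f64 {
    v.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = v.len();
    if n % 2 == 1 { v[n / 2] } else { 0.5 * (v[n / 2 - 1] + v[n / 2]) }
}

/// Hampel identification filter: window = 2*half+1 samples (clamped at
/// the edges); a sample farther than k * 1.4826 * MAD from the window
/// median is replaced by that median.
fn hampel(x: &[f64], half: usize, k: f64) -> Vec<f64> {
    x.iter().enumerate().map(|(i, &xi)| {
        let lo = i.saturating_sub(half);
        let hi = (i + half + 1).min(x.len());
        let mut w: Vec<f64> = x[lo..hi].to_vec();
        let med = median(&mut w);
        let mut dev: Vec<f64> = w.iter().map(|v| (v - med).abs()).collect();
        let mad = median(&mut dev);
        if (xi - med).abs() > k * 1.4826 * mad { med } else { xi }
    }).collect()
}

fn main() {
    let cleaned = hampel(&[1.0, 1.0, 1.0, 50.0, 1.0, 1.0, 1.0], 2, 3.0);
    assert_eq!(cleaned[3], 1.0); // the spike is replaced by the window median
    assert_eq!(cleaned[0], 1.0); // inliers pass through unchanged
}
```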
### RuVector v2.0.4 Integration (ADR-016 complete, ADR-017 proposed)
All 5 ruvector crates integrated in workspace:
@@ -57,105 +21,33 @@ All 5 ruvector crates integrated in workspace:
- `ruvector-attention` — `model.rs` (apply_spatial_attention) + `bvp.rs`
### Architecture Decisions
32 ADRs in `docs/adr/` (ADR-001 through ADR-032). Key ones:
All ADRs in `docs/adr/` (ADR-001 through ADR-017). Key ones:
- ADR-014: SOTA signal processing (Accepted)
- ADR-015: MM-Fi + Wi-Pose training datasets (Accepted)
- ADR-016: RuVector training pipeline integration (Accepted — complete)
- ADR-017: RuVector signal + MAT integration (Proposed — next target)
- ADR-024: Contrastive CSI embedding / AETHER (Accepted)
- ADR-027: Cross-environment domain generalization / MERIDIAN (Accepted)
- ADR-028: ESP32 capability audit + witness verification (Accepted)
- ADR-029: RuvSense multistatic sensing mode (Proposed)
- ADR-030: RuvSense persistent field model (Proposed)
- ADR-031: RuView sensing-first RF mode (Proposed)
- ADR-032: Multistatic mesh security hardening (Proposed)
### Build & Test Commands (this repo)
```bash
# Rust — full workspace tests (1,031+ tests, ~2 min)
# Rust — check training crate (no GPU needed)
cd rust-port/wifi-densepose-rs
cargo test --workspace --no-default-features
# Rust — single crate check (no GPU needed)
cargo check -p wifi-densepose-train --no-default-features
# Rust — publish crates (dependency order)
cargo publish -p wifi-densepose-core --no-default-features
cargo publish -p wifi-densepose-signal --no-default-features
# ... see crate publishing order below
# Rust — run all tests
cargo test -p wifi-densepose-train --no-default-features
# Python — deterministic proof verification (SHA-256)
# Rust — full workspace check
cargo check --workspace --no-default-features
# Python — proof verification
python v1/data/proof/verify.py
# Python — test suite
cd v1 && python -m pytest tests/ -x -q
```
### Crate Publishing Order
Crates must be published in dependency order:
1. `wifi-densepose-core` (no internal deps)
2. `wifi-densepose-vitals` (no internal deps)
3. `wifi-densepose-wifiscan` (no internal deps)
4. `wifi-densepose-hardware` (no internal deps)
5. `wifi-densepose-config` (no internal deps)
6. `wifi-densepose-db` (no internal deps)
7. `wifi-densepose-signal` (depends on core)
8. `wifi-densepose-nn` (no internal deps, workspace only)
9. `wifi-densepose-ruvector` (no internal deps, workspace only)
10. `wifi-densepose-train` (depends on signal, nn)
11. `wifi-densepose-mat` (depends on core, signal, nn)
12. `wifi-densepose-api` (no internal deps)
13. `wifi-densepose-wasm` (depends on mat)
14. `wifi-densepose-sensing-server` (depends on wifiscan)
15. `wifi-densepose-cli` (depends on mat)
### Validation & Witness Verification (ADR-028)
**After any significant code change, run the full validation:**
```bash
# 1. Rust tests — must be 1,031+ passed, 0 failed
cd rust-port/wifi-densepose-rs
cargo test --workspace --no-default-features
# 2. Python proof — must print VERDICT: PASS
cd ../..
python v1/data/proof/verify.py
# 3. Generate witness bundle (includes both above + firmware hashes)
bash scripts/generate-witness-bundle.sh
# 4. Self-verify the bundle — must be 7/7 PASS
cd dist/witness-bundle-ADR028-*/
bash VERIFY.sh
```
**If the Python proof hash changes** (e.g., numpy/scipy version update):
```bash
# Regenerate the expected hash, then verify it passes
python v1/data/proof/verify.py --generate-hash
python v1/data/proof/verify.py
```
**Witness bundle contents** (`dist/witness-bundle-ADR028-<sha>.tar.gz`):
- `WITNESS-LOG-028.md` — 33-row attestation matrix with evidence per capability
- `ADR-028-esp32-capability-audit.md` — Full audit findings
- `proof/verify.py` + `expected_features.sha256` — Deterministic pipeline proof
- `test-results/rust-workspace-tests.log` — Full cargo test output
- `firmware-manifest/source-hashes.txt` — SHA-256 of all 7 ESP32 firmware files
- `crate-manifest/versions.txt` — All 15 crates with versions
- `VERIFY.sh` — One-command self-verification for recipients
**Key proof artifacts:**
- `v1/data/proof/verify.py` — Trust Kill Switch: feeds reference signal through production pipeline, hashes output
- `v1/data/proof/expected_features.sha256` — Published expected hash
- `v1/data/proof/sample_csi_data.json` — 1,000 synthetic CSI frames (seed=42)
- `docs/WITNESS-LOG-028.md` — 11-step reproducible verification procedure
- `docs/adr/ADR-028-esp32-capability-audit.md` — Complete audit record
### Branch
Default branch: `main`
Active feature branch: `ruvsense-full-implementation` (PR #77)
All development on: `claude/validate-code-quality-WNrNw`
---
@@ -173,13 +65,8 @@ Active feature branch: `ruvsense-full-implementation` (PR #77)
## File Organization
- NEVER save to root folder — use the directories below
- `docs/adr/` — Architecture Decision Records (32 ADRs)
- `docs/ddd/` — Domain-Driven Design models
- `rust-port/wifi-densepose-rs/crates/` — Rust workspace crates (15 crates)
- `rust-port/wifi-densepose-rs/crates/wifi-densepose-signal/src/ruvsense/` — RuvSense multistatic modules (14 files)
- `rust-port/wifi-densepose-rs/crates/wifi-densepose-ruvector/src/viewpoint/` — Cross-viewpoint fusion (5 files)
- `rust-port/wifi-densepose-rs/crates/wifi-densepose-hardware/src/esp32/` — ESP32 TDM protocol
- `firmware/esp32-csi-node/main/` — ESP32 C firmware (channel hopping, NVS config, TDM)
- `docs/adr/` — Architecture Decision Records
- `rust-port/wifi-densepose-rs/crates/` — Rust workspace crates (signal, train, mat, nn, hardware)
- `v1/src/` — Python source (core, hardware, services, api)
- `v1/data/proof/` — Deterministic CSI proof bundles
- `.claude-flow/` — Claude Flow coordination state (committed for team sharing)
@@ -206,18 +93,14 @@ Active feature branch: `ruvsense-full-implementation` (PR #77)
Before merging any PR, verify each item applies and is addressed:
1. **Rust tests pass** — `cargo test --workspace --no-default-features` (1,031+ passed, 0 failed)
2. **Python proof passes** — `python v1/data/proof/verify.py` (VERDICT: PASS)
3. **README.md** — Update platform tables, crate descriptions, hardware tables, feature summaries if scope changed
4. **CLAUDE.md** — Update crate table, ADR list, module tables, version if scope changed
5. **CHANGELOG.md** — Add entry under `[Unreleased]` with what was added/fixed/changed
6. **User guide** (`docs/user-guide.md`) — Update if new data sources, CLI flags, or setup steps were added
7. **ADR index** — Update ADR count in README docs table if a new ADR was created
8. **Witness bundle** — Regenerate if tests or proof hash changed: `bash scripts/generate-witness-bundle.sh`
9. **Docker Hub image** — Only rebuild if Dockerfile, dependencies, or runtime behavior changed
10. **Crate publishing** — Only needed if a crate is published to crates.io and its public API changed
11. **`.gitignore`** — Add any new build artifacts or binaries
12. **Security audit** — Run security review for new modules touching hardware/network boundaries
1. **Tests pass** — `cargo test` (Rust) and `python -m pytest` (Python) green
2. **README.md** — Update platform tables, crate descriptions, hardware tables, feature summaries if scope changed
3. **CHANGELOG.md** — Add entry under `[Unreleased]` with what was added/fixed/changed
4. **User guide** (`docs/user-guide.md`) — Update if new data sources, CLI flags, or setup steps were added
5. **ADR index** — Update ADR count in README docs table if a new ADR was created
6. **Docker Hub image** — Only rebuild if Dockerfile, dependencies, or runtime behavior changed (not needed for platform-gated code that doesn't affect the Linux container)
7. **Crate publishing** — Only needed if a crate is published to crates.io and its public API changed (workspace-internal crates don't need publishing)
8. **`.gitignore`** — Add any new build artifacts or binaries
## Build & Test

README.md

@@ -6,7 +6,7 @@ WiFi DensePose turns commodity WiFi signals into real-time human pose estimation
[![Rust 1.85+](https://img.shields.io/badge/rust-1.85+-orange.svg)](https://www.rust-lang.org/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Tests: 1100+](https://img.shields.io/badge/tests-1100%2B-brightgreen.svg)](https://github.com/ruvnet/wifi-densepose)
[![Tests: 542+](https://img.shields.io/badge/tests-542%2B-brightgreen.svg)](https://github.com/ruvnet/wifi-densepose)
[![Docker: 132 MB](https://img.shields.io/badge/docker-132%20MB-blue.svg)](https://hub.docker.com/r/ruvnet/wifi-densepose)
[![Vital Signs](https://img.shields.io/badge/vital%20signs-breathing%20%2B%20heartbeat-red.svg)](#vital-sign-detection)
[![ESP32 Ready](https://img.shields.io/badge/ESP32--S3-CSI%20streaming-purple.svg)](#esp32-s3-hardware-pipeline)
@@ -49,8 +49,7 @@ docker run -p 3000:3000 ruvnet/wifi-densepose:latest
| [User Guide](docs/user-guide.md) | Step-by-step guide: installation, first run, API usage, hardware setup, training |
| [WiFi-Mat User Guide](docs/wifi-mat-user-guide.md) | Disaster response module: search & rescue, START triage |
| [Build Guide](docs/build-guide.md) | Building from source (Rust and Python) |
| [Architecture Decisions](docs/adr/) | 33 ADRs covering signal processing, training, hardware, security, domain generalization, multistatic sensing, CRV signal-line integration |
| [DDD Domain Model](docs/ddd/ruvsense-domain-model.md) | RuvSense bounded contexts, aggregates, domain events, and ubiquitous language |
| [Architecture Decisions](docs/adr/) | 27 ADRs covering signal processing, training, hardware, security, domain generalization |
---
@@ -67,8 +66,6 @@ See people, breathing, and heartbeats through walls — using only WiFi signals
| 👥 | **Multi-Person** | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |
| 🧱 | **Through-Wall** | WiFi passes through walls, furniture, and debris — works where cameras cannot |
| 🚑 | **Disaster Response** | Detects trapped survivors through rubble and classifies injury severity (START triage) |
| 📡 | **Multistatic Mesh** | 4-6 ESP32 nodes fuse 12+ TX-RX links for 360-degree coverage, <30mm jitter, zero identity swaps ([ADR-029](docs/adr/ADR-029-ruvsense-multistatic-sensing-mode.md)) |
| 🌐 | **Persistent Field Model** | Room eigenstructure via SVD enables RF tomography, drift detection, intention prediction, and adversarial detection ([ADR-030](docs/adr/ADR-030-ruvsense-persistent-field-model.md)) |
### Intelligence
@@ -79,8 +76,6 @@ The system learns on its own and gets smarter over time — no hand-tuning, no l
| 🧠 | **Self-Learning** | Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap ([ADR-024](docs/adr/ADR-024-contrastive-csi-embedding-model.md)) |
| 🎯 | **AI Signal Processing** | Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically ([RuVector](https://github.com/ruvnet/ruvector)) |
| 🌍 | **Works Everywhere** | Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware ([ADR-027](docs/adr/ADR-027-cross-environment-domain-generalization.md)) |
| 👁️ | **Cross-Viewpoint Fusion** | Learned attention fuses multiple viewpoints with geometric bias — reduces body occlusion and depth ambiguity that physics prevents any single sensor from solving ([ADR-031](docs/adr/ADR-031-ruview-sensing-first-rf-mode.md)) |
| 🔮 | **Signal-Line Protocol** | `ruvector-crv` 6-stage CRV pipeline maps CSI sensing to Poincare ball embeddings, GNN topology, SNN temporal encoding, and MinCut partitioning |
### Performance & Deployment
@@ -89,7 +84,7 @@ Fast enough for real-time use, small enough for edge devices, simple enough for
| | Feature | What It Means |
|---|---------|---------------|
| ⚡ | **Real-Time** | Analyzes WiFi signals in under 100 microseconds per frame — fast enough for live monitoring |
| 🦀 | **810x Faster** | Complete Rust rewrite: 54,000 frames/sec pipeline, 132 MB Docker image, 1,031+ tests |
| 🦀 | **810x Faster** | Complete Rust rewrite: 54,000 frames/sec pipeline, 132 MB Docker image, 542+ tests |
| 🐳 | **One-Command Setup** | `docker pull ruvnet/wifi-densepose:latest` — live sensing in 30 seconds, no toolchain needed |
| 📦 | **Portable Models** | Trained models package into a single `.rvf` file — runs on edge, cloud, or browser (WASM) |
@@ -102,23 +97,15 @@ WiFi routers flood every room with radio waves. When a person moves — or even
```
WiFi Router → radio waves pass through room → hit human body → scatter
ESP32 mesh (4-6 nodes) captures CSI on channels 1/6/11 via TDM protocol
ESP32 / WiFi NIC captures 56+ subcarrier amplitudes & phases (CSI) at 20 Hz
Multi-Band Fusion: 3 channels × 56 subcarriers = 168 virtual subcarriers per link
Signal Processing cleans noise, removes interference, extracts motion signatures
Multistatic Fusion: N×(N-1) links → attention-weighted cross-viewpoint embedding
AI Backbone (RuVector) applies attention, graph algorithms, and compression
Coherence Gate: accept/reject measurements → stable for days without tuning
Neural Network maps processed signals → 17 body keypoints + vital signs
Signal Processing: Hampel, SpotFi, Fresnel, BVP, spectrogram → clean features
AI Backbone (RuVector): attention, graph algorithms, compression, field model
Signal-Line Protocol (CRV): 6-stage gestalt → sensory → topology → coherence → search → model
Neural Network: processed signals → 17 body keypoints + vital signs + room model
Output: real-time pose, breathing, heart rate, room fingerprint, drift alerts
Output: real-time pose, breathing rate, heart rate, presence, room fingerprint
```
No training cameras required — the [Self-Learning system (ADR-024)](docs/adr/ADR-024-contrastive-csi-embedding-model.md) bootstraps from raw WiFi data alone. [MERIDIAN (ADR-027)](docs/adr/ADR-027-cross-environment-domain-generalization.md) ensures the model works in any room, not just the one it trained in.
@@ -379,135 +366,6 @@ cd dist/witness-bundle-ADR028-*/ && bash VERIFY.sh
</details>
<details>
<summary><strong>📡 Multistatic Sensing (ADR-029/030/031 — Project RuvSense + RuView)</strong> — Multiple ESP32 nodes fuse viewpoints for production-grade pose, tracking, and exotic sensing</summary>
A single WiFi receiver can track people, but has blind spots — limbs behind the torso are invisible, depth is ambiguous, and two people at similar range create overlapping signals. RuvSense solves this by coordinating multiple ESP32 nodes into a **multistatic mesh** where every node acts as both transmitter and receiver, creating N×(N-1) measurement links from N devices.
**What it does in plain terms:**
- 4 ESP32-S3 nodes ($48 total) provide 12 TX-RX measurement links covering 360 degrees
- Each node hops across WiFi channels 1/6/11, tripling effective bandwidth from 20→60 MHz
- Coherence gating rejects noisy frames automatically — no manual tuning, stable for days
- Two-person tracking at 20 Hz with zero identity swaps over 10 minutes
- The room itself becomes a persistent model — the system remembers, predicts, and explains
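The link arithmetic in the bullets above is just ordered pairs: N full-duplex nodes yield N×(N-1) directed TX→RX links, excluding self-links. A one-line check (hypothetical helper, not project code):

```rust
/// Number of directed TX→RX measurement links among N full-duplex nodes.
fn measurement_links(nodes: u32) -> u32 {
    nodes * (nodes - 1)
}

fn main() {
    assert_eq!(measurement_links(4), 12); // 4 ESP32 nodes → 12 links
    assert_eq!(measurement_links(6), 30); // 6 nodes → 30 links ("12+")
}
```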
**Three ADRs, one pipeline:**
| ADR | Codename | What it adds |
|-----|----------|-------------|
| [ADR-029](docs/adr/ADR-029-ruvsense-multistatic-sensing-mode.md) | **RuvSense** | Channel hopping, TDM protocol, multi-node fusion, coherence gating, 17-keypoint Kalman tracker |
| [ADR-030](docs/adr/ADR-030-ruvsense-persistent-field-model.md) | **RuvSense Field** | Room electromagnetic eigenstructure (SVD), RF tomography, longitudinal drift detection, intention prediction, gesture recognition, adversarial detection |
| [ADR-031](docs/adr/ADR-031-ruview-sensing-first-rf-mode.md) | **RuView** | Cross-viewpoint attention with geometric bias, viewpoint diversity optimization, embedding-level fusion |
**Architecture**
```
4x ESP32-S3 nodes ($48) TDM: each transmits in turn, all others receive
│ Channel hop: ch1→ch6→ch11 per dwell (50ms)
Per-Node Signal Processing Phase sanitize → Hampel → BVP → subcarrier select
│ (ADR-014, unchanged per viewpoint)
Multi-Band Frame Fusion 3 channels × 56 subcarriers = 168 virtual subcarriers
│ Cross-channel phase alignment via NeumannSolver
Multistatic Viewpoint Fusion N nodes → attention-weighted fusion → single embedding
│ Geometric bias from node placement angles
Coherence Gate Accept / PredictOnly / Reject / Recalibrate
│ Prevents model drift, stable for days
Persistent Field Model SVD baseline → body = observation - environment
│ RF tomography, drift detection, intention signals
Pose Tracker + DensePose 17-keypoint Kalman, re-ID via AETHER embeddings
Multi-person min-cut separation, zero ID swaps
```
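The coherence-gate step in the pipeline above can be sketched as a small hysteresis state machine: a z-scored coherence value must cross a higher threshold to re-enter Accept than the one that drops it out, so the gate does not chatter near the boundary. Simplified sketch only; the Recalibrate decision is omitted and all names are illustrative, not the crate's API.

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Gate { Accept, PredictOnly, Reject }

struct CoherenceGate {
    state: Gate,
    accept_above: f64, // re-enter Accept only above this z-score
    reject_below: f64, // leave Accept only below this z-score
}

impl CoherenceGate {
    fn step(&mut self, z: f64) -> Gate {
        self.state = match self.state {
            Gate::Accept if z < self.reject_below => Gate::PredictOnly,
            Gate::PredictOnly | Gate::Reject if z > self.accept_above => Gate::Accept,
            Gate::PredictOnly if z < self.reject_below - 1.0 => Gate::Reject,
            s => s, // inside the hysteresis band: hold the current state
        };
        self.state
    }
}

fn main() {
    let mut g = CoherenceGate { state: Gate::Accept, accept_above: 1.0, reject_below: -1.0 };
    assert_eq!(g.step(-0.5), Gate::Accept);      // small dip: hysteresis holds
    assert_eq!(g.step(-1.5), Gate::PredictOnly); // coherence lost: coast on prediction
    assert_eq!(g.step(0.5), Gate::PredictOnly);  // not yet high enough to re-accept
    assert_eq!(g.step(1.5), Gate::Accept);       // coherence restored
}
```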
**Seven Exotic Sensing Tiers (ADR-030)**
| Tier | Capability | What it detects |
|------|-----------|-----------------|
| 1 | Field Normal Modes | Room electromagnetic eigenstructure via SVD |
| 2 | Coarse RF Tomography | 3D occupancy volume from link attenuations |
| 3 | Intention Lead Signals | Pre-movement prediction 200-500ms before action |
| 4 | Longitudinal Biomechanics | Personal movement changes over days/weeks |
| 5 | Cross-Room Continuity | Identity preserved across rooms without cameras |
| 6 | Invisible Interaction | Multi-user gesture control through walls |
| 7 | Adversarial Detection | Physically impossible signal identification |
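Tier 4's Welford statistics are the standard single-pass mean/variance recurrence; a minimal sketch of how a drift z-score could fall out of it (names are ours, not the crate's):

```rust
/// Welford's online algorithm: numerically stable running mean/variance.
#[derive(Default)]
struct Welford { n: u64, mean: f64, m2: f64 }

impl Welford {
    fn push(&mut self, x: f64) {
        self.n += 1;
        let d = x - self.mean;
        self.mean += d / self.n as f64;
        self.m2 += d * (x - self.mean);
    }
    fn variance(&self) -> f64 {
        if self.n > 1 { self.m2 / (self.n - 1) as f64 } else { 0.0 }
    }
    /// z-score of a new observation against the running baseline.
    fn z(&self, x: f64) -> f64 {
        let sd = self.variance().sqrt();
        if sd > 0.0 { (x - self.mean) / sd } else { 0.0 }
    }
}

fn main() {
    let mut w = Welford::default();
    // Baseline for some biomechanics metric (e.g. a stride statistic).
    for x in [10.0, 10.2, 9.8, 10.1, 9.9] { w.push(x); }
    assert!((w.mean - 10.0).abs() < 1e-9);
    assert!(w.z(12.0) > 3.0); // a drifted reading stands far outside baseline
}
```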
**Acceptance Test**
| Metric | Threshold | What it proves |
|--------|-----------|---------------|
| Torso keypoint jitter | < 30mm RMS | Precision sufficient for applications |
| Identity swaps | 0 over 10 minutes (12,000 frames) | Reliable multi-person tracking |
| Update rate | 20 Hz (50ms cycle) | Real-time response |
| Breathing SNR | > 10 dB at 3m | Small-motion sensitivity confirmed |
**New Rust modules (9,000+ lines)**
| Crate | New modules | Purpose |
|-------|------------|---------|
| `wifi-densepose-signal` | `ruvsense/` (10 modules) | Multiband fusion, phase alignment, multistatic fusion, coherence, field model, tomography, longitudinal drift, intention detection |
| `wifi-densepose-ruvector` | `viewpoint/` (5 modules) | Cross-viewpoint attention with geometric bias, diversity index, coherence gating, fusion orchestrator |
| `wifi-densepose-hardware` | `esp32/tdm.rs` | TDM sensing protocol, sync beacons, clock drift compensation |
**Firmware extensions (C, backward-compatible)**
| File | Addition |
|------|---------|
| `csi_collector.c` | Channel hop table, timer-driven hop, NDP injection stub |
| `nvs_config.c` | 5 new NVS keys: hop_count, channel_list, dwell_ms, tdm_slot, tdm_node_count |
**DDD Domain Model** — 6 bounded contexts: Multistatic Sensing, Coherence, Pose Tracking, Field Model, Cross-Room Identity, Adversarial Detection. Full specification: [`docs/ddd/ruvsense-domain-model.md`](docs/ddd/ruvsense-domain-model.md).
See the ADR documents for full architectural details, GOAP integration plans, and research references.
</details>
<details>
<summary><b>🔮 Signal-Line Protocol (CRV)</b></summary>
### 6-Stage CSI Signal Line
Maps the CRV (Coordinate Remote Viewing) signal-line methodology to WiFi CSI processing via `ruvector-crv`:
| Stage | CRV Name | WiFi CSI Mapping | ruvector Component |
|-------|----------|-----------------|-------------------|
| I | Ideograms | Raw CSI gestalt (manmade/natural/movement/energy) | Poincare ball hyperbolic embeddings |
| II | Sensory | Amplitude textures, phase patterns, frequency colors | Multi-head attention vectors |
| III | Dimensional | AP mesh spatial topology, node geometry | GNN graph topology |
| IV | Emotional/AOL | Coherence gating — signal vs noise separation | SNN temporal encoding |
| V | Interrogation | Cross-stage probing — query pose against CSI history | Differentiable search |
| VI | 3D Model | Composite person estimation, MinCut partitioning | Graph partitioning |
**Cross-Session Convergence**: When multiple AP clusters observe the same person, CRV convergence analysis finds agreement in their signal embeddings — directly mapping to cross-room identity continuity.
```rust
use wifi_densepose_ruvector::crv::WifiCrvPipeline;
let mut pipeline = WifiCrvPipeline::new(WifiCrvConfig::default());
pipeline.create_session("room-a", "person-001")?;
// Process CSI frames through 6-stage pipeline
let result = pipeline.process_csi_frame("room-a", &amplitudes, &phases)?;
// result.gestalt = Movement, confidence = 0.87
// result.sensory_embedding = [0.12, -0.34, ...]
// Cross-room identity matching via convergence
let convergence = pipeline.find_cross_room_convergence("person-001", 0.75)?;
```
**Architecture**:
- `CsiGestaltClassifier` — Maps CSI amplitude/phase patterns to 6 gestalt types
- `CsiSensoryEncoder` — Extracts texture/color/temperature/luminosity features from subcarriers
- `MeshTopologyEncoder` — Encodes AP mesh as GNN graph (Stage III)
- `CoherenceAolDetector` — Maps coherence gate states to AOL noise detection (Stage IV)
- `WifiCrvPipeline` — Orchestrates all 6 stages into unified sensing session
</details>
---
## 📦 Installation
@@ -1574,22 +1432,6 @@ pre-commit install
<details>
<summary><strong>Release history</strong></summary>
### v3.1.0 — 2026-03-02
Multistatic sensing, persistent field model, and cross-viewpoint fusion — the biggest capability jump since v2.0.
- **Project RuvSense (ADR-029)** — Multistatic mesh: TDM protocol, channel hopping (ch1/6/11), multi-band frame fusion, coherence gating, 17-keypoint Kalman tracker with re-ID; 10 new signal modules (5,300+ lines)
- **RuvSense Persistent Field Model (ADR-030)** — 7 exotic sensing tiers: field normal modes (SVD), RF tomography, longitudinal drift detection, intention prediction, cross-room identity, gesture classification, adversarial detection
- **Project RuView (ADR-031)** — Cross-viewpoint attention with geometric bias, Geometric Diversity Index, viewpoint fusion orchestrator; 5 new ruvector modules (2,200+ lines)
- **TDM Hardware Protocol** — ESP32 sensing coordinator: sync beacons, slot scheduling, clock drift compensation (±10ppm), 20 Hz aggregate rate
- **Channel-Hopping Firmware** — ESP32 firmware extended with hop table, timer-driven channel switching, NDP injection stub; NVS config for all TDM parameters; fully backward-compatible
- **DDD Domain Model** — 6 bounded contexts, ubiquitous language, aggregate roots, domain events, full event bus specification
- **`ruvector-crv` 6-stage CRV signal-line integration (ADR-033)** — Maps Coordinate Remote Viewing methodology to WiFi CSI: gestalt classification, sensory encoding, GNN topology, SNN coherence gating, differentiable search, MinCut partitioning; cross-session convergence for multi-room identity continuity
- **ADR-032 multistatic mesh security hardening** — Bounded calibration buffers, atomic counters, division-by-zero guards, NaN-safe normalization across all multistatic modules
- **ADR-033 CRV signal-line sensing integration** — Architecture decision record for the 6-stage CRV pipeline mapping to ruvector components
- **9,000+ lines of new Rust code** across 17 modules with 300+ tests
- **Security hardened** — Bounded buffers, NaN guards, no panics in public APIs, input validation at all boundaries
### v3.0.0 — 2026-03-01
Major release: AETHER contrastive embedding model, AI signal processing backbone, cross-platform adapters, Docker Hub images, and comprehensive README overhaul.

claude.md

@@ -4,49 +4,13 @@
WiFi-based human pose estimation using Channel State Information (CSI).
Dual codebase: Python v1 (`v1/`) and Rust port (`rust-port/wifi-densepose-rs/`).
### Key Rust Crates
| Crate | Description |
|-------|-------------|
| `wifi-densepose-core` | Core types, traits, error types, CSI frame primitives |
| `wifi-densepose-signal` | SOTA signal processing + RuvSense multistatic sensing (14 modules) |
| `wifi-densepose-nn` | Neural network inference (ONNX, PyTorch, Candle backends) |
| `wifi-densepose-train` | Training pipeline with ruvector integration + ruview_metrics |
| `wifi-densepose-mat` | Mass Casualty Assessment Tool — disaster survivor detection |
| `wifi-densepose-hardware` | ESP32 aggregator, TDM protocol, channel hopping firmware |
| `wifi-densepose-ruvector` | RuVector v2.0.4 integration + cross-viewpoint fusion (5 modules) |
| `wifi-densepose-api` | REST API (Axum) |
| `wifi-densepose-db` | Database layer (Postgres, SQLite, Redis) |
| `wifi-densepose-config` | Configuration management |
| `wifi-densepose-wasm` | WebAssembly bindings for browser deployment |
| `wifi-densepose-cli` | CLI tool (`wifi-densepose` binary) |
| `wifi-densepose-sensing-server` | Lightweight Axum server for WiFi sensing UI |
| `wifi-densepose-wifiscan` | Multi-BSSID WiFi scanning (ADR-022) |
| `wifi-densepose-vitals` | ESP32 CSI-grade vital sign extraction (ADR-021) |
### RuvSense Modules (`signal/src/ruvsense/`)
| Module | Purpose |
|--------|---------|
| `multiband.rs` | Multi-band CSI frame fusion, cross-channel coherence |
| `phase_align.rs` | Iterative LO phase offset estimation, circular mean |
| `multistatic.rs` | Attention-weighted fusion, geometric diversity |
| `coherence.rs` | Z-score coherence scoring, DriftProfile |
| `coherence_gate.rs` | Accept/PredictOnly/Reject/Recalibrate gate decisions |
| `pose_tracker.rs` | 17-keypoint Kalman tracker with AETHER re-ID embeddings |
| `field_model.rs` | SVD room eigenstructure, perturbation extraction |
| `tomography.rs` | RF tomography, ISTA L1 solver, voxel grid |
| `longitudinal.rs` | Welford stats, biomechanics drift detection |
| `intention.rs` | Pre-movement lead signals (200-500ms) |
| `cross_room.rs` | Environment fingerprinting, transition graph |
| `gesture.rs` | DTW template matching gesture classifier |
| `adversarial.rs` | Physically impossible signal detection, multi-link consistency |
### Cross-Viewpoint Fusion (`ruvector/src/viewpoint/`)
| Module | Purpose |
|--------|---------|
| `attention.rs` | CrossViewpointAttention, GeometricBias, softmax with G_bias |
| `geometry.rs` | GeometricDiversityIndex, Cramer-Rao bounds, Fisher Information |
| `coherence.rs` | Phase phasor coherence, hysteresis gate |
| `fusion.rs` | MultistaticArray aggregate root, domain events |
- `wifi-densepose-signal` — SOTA signal processing (conjugate mult, Hampel, Fresnel, BVP, spectrogram)
- `wifi-densepose-train` — Training pipeline with ruvector integration (ADR-016)
- `wifi-densepose-mat` — Disaster detection module (MAT, multi-AP, triage)
- `wifi-densepose-nn` — Neural network inference (DensePose head, RCNN)
- `wifi-densepose-hardware` — ESP32 aggregator, hardware interfaces
### RuVector v2.0.4 Integration (ADR-016 complete, ADR-017 proposed)
All 5 ruvector crates integrated in workspace:
@@ -57,7 +21,7 @@ All 5 ruvector crates integrated in workspace:
- `ruvector-attention` — `model.rs` (apply_spatial_attention) + `bvp.rs`
### Architecture Decisions
32 ADRs in `docs/adr/` (ADR-001 through ADR-032). Key ones:
28 ADRs in `docs/adr/` (ADR-001 through ADR-028). Key ones:
- ADR-014: SOTA signal processing (Accepted)
- ADR-015: MM-Fi + Wi-Pose training datasets (Accepted)
- ADR-016: RuVector training pipeline integration (Accepted — complete)
@@ -65,25 +29,16 @@ All 5 ruvector crates integrated in workspace:
- ADR-024: Contrastive CSI embedding / AETHER (Accepted)
- ADR-027: Cross-environment domain generalization / MERIDIAN (Accepted)
- ADR-028: ESP32 capability audit + witness verification (Accepted)
- ADR-029: RuvSense multistatic sensing mode (Proposed)
- ADR-030: RuvSense persistent field model (Proposed)
- ADR-031: RuView sensing-first RF mode (Proposed)
- ADR-032: Multistatic mesh security hardening (Proposed)
### Build & Test Commands (this repo)
```bash
# Rust — full workspace tests (1,031+ tests, ~2 min)
# Rust — full workspace tests (1,031 tests, ~2 min)
cd rust-port/wifi-densepose-rs
cargo test --workspace --no-default-features
# Rust — single crate check (no GPU needed)
cargo check -p wifi-densepose-train --no-default-features
# Rust — publish crates (dependency order)
cargo publish -p wifi-densepose-core --no-default-features
cargo publish -p wifi-densepose-signal --no-default-features
# ... see crate publishing order below
# Python — deterministic proof verification (SHA-256)
python v1/data/proof/verify.py
@@ -91,24 +46,6 @@ python v1/data/proof/verify.py
cd v1 && python -m pytest tests/ -x -q
```
### Crate Publishing Order
Crates must be published in dependency order:
1. `wifi-densepose-core` (no internal deps)
2. `wifi-densepose-vitals` (no internal deps)
3. `wifi-densepose-wifiscan` (no internal deps)
4. `wifi-densepose-hardware` (no internal deps)
5. `wifi-densepose-config` (no internal deps)
6. `wifi-densepose-db` (no internal deps)
7. `wifi-densepose-signal` (depends on core)
8. `wifi-densepose-nn` (no internal deps, workspace only)
9. `wifi-densepose-ruvector` (no internal deps, workspace only)
10. `wifi-densepose-train` (depends on signal, nn)
11. `wifi-densepose-mat` (depends on core, signal, nn)
12. `wifi-densepose-api` (no internal deps)
13. `wifi-densepose-wasm` (depends on mat)
14. `wifi-densepose-sensing-server` (depends on wifiscan)
15. `wifi-densepose-cli` (depends on mat)
### Validation & Witness Verification (ADR-028)
**After any significant code change, run the full validation:**
@@ -155,7 +92,6 @@ python v1/data/proof/verify.py
### Branch
Default branch: `main`
Active feature branch: `ruvsense-full-implementation` (PR #77)
---
@@ -173,13 +109,8 @@ Active feature branch: `ruvsense-full-implementation` (PR #77)
## File Organization
- NEVER save to root folder — use the directories below
- `docs/adr/` — Architecture Decision Records (32 ADRs)
- `docs/ddd/` — Domain-Driven Design models
- `rust-port/wifi-densepose-rs/crates/` — Rust workspace crates (15 crates)
- `rust-port/wifi-densepose-rs/crates/wifi-densepose-signal/src/ruvsense/` — RuvSense multistatic modules (14 files)
- `rust-port/wifi-densepose-rs/crates/wifi-densepose-ruvector/src/viewpoint/` — Cross-viewpoint fusion (5 files)
- `rust-port/wifi-densepose-rs/crates/wifi-densepose-hardware/src/esp32/` — ESP32 TDM protocol
- `firmware/esp32-csi-node/main/` — ESP32 C firmware (channel hopping, NVS config, TDM)
- `docs/adr/` — Architecture Decision Records
- `rust-port/wifi-densepose-rs/crates/` — Rust workspace crates (signal, train, mat, nn, hardware)
- `v1/src/` — Python source (core, hardware, services, api)
- `v1/data/proof/` — Deterministic CSI proof bundles
- `.claude-flow/` — Claude Flow coordination state (committed for team sharing)
@@ -209,15 +140,13 @@ Before merging any PR, verify each item applies and is addressed:
1. **Rust tests pass**`cargo test --workspace --no-default-features` (1,031+ passed, 0 failed)
2. **Python proof passes**`python v1/data/proof/verify.py` (VERDICT: PASS)
3. **README.md** — Update platform tables, crate descriptions, hardware tables, feature summaries if scope changed
4. **CLAUDE.md**Update crate table, ADR list, module tables, version if scope changed
5. **CHANGELOG.md** — Add entry under `[Unreleased]` with what was added/fixed/changed
6. **User guide** (`docs/user-guide.md`) — Update if new data sources, CLI flags, or setup steps were added
7. **ADR index** — Update ADR count in README docs table if a new ADR was created
8. **Witness bundle** — Regenerate if tests or proof hash changed: `bash scripts/generate-witness-bundle.sh`
9. **Docker Hub image** — Only rebuild if Dockerfile, dependencies, or runtime behavior changed
10. **Crate publishing**Only needed if a crate is published to crates.io and its public API changed
11. **`.gitignore`** — Add any new build artifacts or binaries
12. **Security audit** — Run security review for new modules touching hardware/network boundaries
4. **CHANGELOG.md**Add entry under `[Unreleased]` with what was added/fixed/changed
5. **User guide** (`docs/user-guide.md`) — Update if new data sources, CLI flags, or setup steps were added
6. **ADR index** — Update ADR count in README docs table if a new ADR was created
7. **Witness bundle** — Regenerate if tests or proof hash changed: `bash scripts/generate-witness-bundle.sh`
8. **Docker Hub image** — Only rebuild if Dockerfile, dependencies, or runtime behavior changed
9. **Crate publishing** — Only needed if a crate is published to crates.io and its public API changed
10. **`.gitignore`** — Add any new build artifacts or binaries
## Build & Test

@@ -1,400 +0,0 @@
# ADR-029: Project RuvSense -- Sensing-First RF Mode for Multistatic WiFi DensePose
| Field | Value |
|-------|-------|
| **Status** | Proposed |
| **Date** | 2026-03-02 |
| **Deciders** | ruv |
| **Codename** | **RuvSense** -- RuVector-Enhanced Sensing for Multistatic Fidelity |
| **Relates to** | ADR-012 (ESP32 Mesh), ADR-014 (SOTA Signal Processing), ADR-016 (RuVector Training), ADR-017 (RuVector Signal+MAT), ADR-018 (ESP32 Implementation), ADR-024 (AETHER Embeddings), ADR-026 (Survivor Track Lifecycle), ADR-027 (MERIDIAN Generalization) |
---
## 1. Context
### 1.1 The Fidelity Gap
Current WiFi-DensePose achieves functional pose estimation from a single ESP32 AP, but five fidelity metrics prevent production deployment:
| Metric | Current (Single ESP32) | Required (Production) | Root Cause |
|--------|------------------------|----------------------|------------|
| Torso keypoint jitter | ~15cm RMS | <3cm RMS | Single viewpoint, 20 MHz bandwidth, no temporal smoothing |
| Multi-person separation | Fails >2 people, frequent ID swaps | 4+ people, zero swaps over 10 min | Underdetermined with 1 TX-RX link; no person-specific features |
| Small motion sensitivity | Gross movement only | Breathing at 3m, heartbeat at 1.5m | Insufficient phase sensitivity at 2.4 GHz; noise floor too high |
| Update rate | ~10 Hz effective | 20 Hz | Single-channel serial CSI collection |
| Temporal stability | Drifts within hours | Stable over days | No coherence gating; model absorbs environmental drift |
### 1.2 The Insight: Sensing-First RF Mode on Existing Silicon
You do not need to invent a new WiFi standard. The winning move is a **sensing-first RF mode** that rides on existing silicon (ESP32-S3), existing bands (2.4/5 GHz), and existing regulations (802.11n NDP frames). The fidelity improvement comes from three physical levers:
1. **Bandwidth**: Channel-hopping across 2.4 GHz channels 1/6/11 triples effective bandwidth from 20 MHz to 60 MHz, giving 3x finer multipath separation
2. **Carrier frequency**: Dual-band sensing (2.4 + 5 GHz) doubles phase sensitivity to small motion
3. **Viewpoints**: Multistatic ESP32 mesh (4 nodes = 12 TX-RX links) provides 360-degree geometric diversity
### 1.3 Acceptance Test
**Two people in a room, 20 Hz update rate, stable tracks for 10 minutes with no identity swaps and low jitter in the torso keypoints.**
Quantified:
- Torso keypoint jitter < 30mm RMS (hips, shoulders, spine)
- Zero identity swaps over 600 seconds (12,000 frames)
- 20 Hz output rate (50 ms cycle time)
- Breathing SNR > 10dB at 3m (validates small-motion sensitivity)
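The jitter criterion is evaluated as an RMS over windowed per-frame keypoint deviations. A minimal sketch of that measurement (helper name hypothetical, not part of the codebase):

```rust
/// Root-mean-square of per-frame keypoint deviations (same units as the
/// input) — the quantity the <30 mm torso-jitter criterion is scored on.
fn rms(deviations: &[f32]) -> f32 {
    (deviations.iter().map(|d| d * d).sum::<f32>() / deviations.len() as f32).sqrt()
}
```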
---
## 2. Decision
### 2.1 Architecture Overview
Implement RuvSense as a new bounded context within `wifi-densepose-signal`, consisting of 6 modules:
```
wifi-densepose-signal/src/ruvsense/
├── mod.rs // Module exports, RuvSense pipeline orchestrator
├── multiband.rs // Multi-band CSI frame fusion (§2.2)
├── phase_align.rs // Cross-channel phase alignment (§2.3)
├── multistatic.rs // Multi-node viewpoint fusion (§2.4)
├── coherence.rs // Coherence metric computation (§2.5)
├── coherence_gate.rs // Gated update policy (§2.6)
└── pose_tracker.rs // 17-keypoint Kalman tracker with re-ID (§2.7)
```
### 2.2 Channel-Hopping Firmware (ESP32-S3)
Modify the ESP32 firmware (`firmware/esp32-csi-node/main/csi_collector.c`) to cycle through non-overlapping channels at configurable dwell times:
```c
// Channel hop table (populated from NVS at boot)
static uint8_t s_hop_channels[6] = {1, 6, 11, 36, 40, 44};
static uint8_t s_hop_count = 3; // default: 2.4 GHz only
static uint32_t s_dwell_ms = 50; // 50ms per channel
```
At 100 Hz raw CSI rate with 50 ms dwell across 3 channels, each channel yields ~33 frames/second. The existing ADR-018 binary frame format already carries `channel_freq_mhz` at offset 8, so no wire format change is needed.
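The dwell arithmetic above reduces to a one-line helper (illustrative only, not firmware code):

```rust
/// Frames per second landing on each channel when a raw CSI rate is
/// time-sliced round-robin across `hop_count` channels with equal dwell.
/// 100 Hz across 3 channels gives ~33.3 frames/s per channel.
fn per_channel_rate_hz(raw_rate_hz: f64, hop_count: u32) -> f64 {
    raw_rate_hz / hop_count as f64
}
```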
**NDP frame injection:** `esp_wifi_80211_tx()` injects deterministic Null Data Packet frames (preamble-only, no payload, ~24 us airtime) at GPIO-triggered intervals. This is sensing-first: the primary RF emission purpose is CSI measurement, not data communication.
### 2.3 Multi-Band Frame Fusion
Aggregate per-channel CSI frames into a wideband virtual snapshot:
```rust
/// Fused multi-band CSI from one node at one time slot.
pub struct MultiBandCsiFrame {
    pub node_id: u8,
    pub timestamp_us: u64,
    /// One canonical-56 row per channel, ordered by center frequency.
    pub channel_frames: Vec<CanonicalCsiFrame>,
    /// Center frequencies (MHz) for each channel row.
    pub frequencies_mhz: Vec<u32>,
    /// Cross-channel coherence score (0.0-1.0).
    pub coherence: f32,
}
```
Cross-channel phase alignment uses `ruvector-solver::NeumannSolver` to solve for the channel-dependent phase rotation introduced by the ESP32 local oscillator during channel hops. The observed per-channel phases form the system:
```
[Φ₁, Φ₆, Φ₁₁] = [Φ_body + δ₁, Φ_body + δ₆, Φ_body + δ₁₁]
```
NeumannSolver fits the `δ` offsets from the static subcarrier components (which should have zero body-caused phase shift), then removes them.
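A minimal sketch of that correction, using a plain average over static subcarriers as a stand-in for the NeumannSolver fit (function names and types here are illustrative, and ±π phase wrapping is ignored for brevity):

```rust
/// Estimate the per-channel LO rotation δ from static subcarriers, which
/// carry no body-induced phase shift (sketch; ignores phase wrapping).
fn estimate_lo_offset(static_phases: &[f32]) -> f32 {
    static_phases.iter().sum::<f32>() / static_phases.len() as f32
}

/// Subtract the fitted δ from every subcarrier phase of that channel.
fn remove_lo_offset(phases: &mut [f32], delta: f32) {
    for p in phases.iter_mut() {
        *p -= delta;
    }
}
```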
### 2.4 Multistatic Viewpoint Fusion
With N ESP32 nodes, collect N `MultiBandCsiFrame` per time slot and fuse with geometric diversity:
**TDMA Sensing Schedule (4 nodes):**
| Slot | TX | RX₁ | RX₂ | RX₃ | Duration |
|------|-----|-----|-----|-----|----------|
| 0 | Node A | B | C | D | 4 ms |
| 1 | Node B | A | C | D | 4 ms |
| 2 | Node C | A | B | D | 4 ms |
| 3 | Node D | A | B | C | 4 ms |
| 4 | -- | Processing + fusion | | | 30 ms |
| **Total** | | | | | **50 ms = 20 Hz** |
Synchronization: GPIO pulse from aggregator node at cycle start. Clock drift at ±10ppm over 50 ms is ~0.5 us, well within the 1 ms guard interval.
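The sync margin is a back-of-envelope calculation (helper illustrative, not part of the codebase):

```rust
/// Worst-case clock drift (µs) over one TDMA cycle for a given crystal
/// tolerance in ppm. At ±10 ppm and a 50 ms cycle this is 0.5 µs,
/// far inside the 1 ms guard interval.
fn worst_case_drift_us(tolerance_ppm: f64, cycle_ms: f64) -> f64 {
    tolerance_ppm * 1e-6 * cycle_ms * 1_000.0
}
```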
**Cross-node fusion** uses `ruvector-attn-mincut::attn_mincut` where time-frequency cells from different nodes attend to each other. Cells showing correlated motion energy across nodes (body reflection) are amplified; cells with single-node energy (local multipath artifact) are suppressed.
**Multi-person separation** via `ruvector-mincut::DynamicMinCut`:
1. Build cross-link temporal correlation graph (nodes = TX-RX links, edges = correlation coefficient)
2. `DynamicMinCut` partitions into K clusters (one per detected person)
3. Attention fusion (§5.3 of research doc) runs independently per cluster
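To make step 2 concrete, here is a threshold-based connected-components stand-in for the `DynamicMinCut` partition. The real pipeline uses `ruvector-mincut`, whose API is not shown here; this only illustrates the shape of the link-correlation graph:

```rust
/// Partition TX-RX links into person clusters by flood-filling the
/// correlation graph: links whose temporal correlation exceeds the
/// threshold end up in the same cluster. Returns one label per link.
fn cluster_links(corr: &[Vec<f32>], threshold: f32) -> Vec<usize> {
    let n = corr.len();
    let mut label = vec![usize::MAX; n];
    let mut next = 0;
    for start in 0..n {
        if label[start] != usize::MAX {
            continue;
        }
        let mut stack = vec![start];
        label[start] = next;
        while let Some(i) = stack.pop() {
            for j in 0..n {
                if label[j] == usize::MAX && corr[i][j] >= threshold {
                    label[j] = next;
                    stack.push(j);
                }
            }
        }
        next += 1;
    }
    label
}
```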
### 2.5 Coherence Metric
Per-link coherence quantifies consistency with recent history:
```rust
pub fn coherence_score(
    current: &[f32],
    reference: &[f32],
    variance: &[f32],
) -> f32 {
    let (score_sum, weight_sum) = current.iter()
        .zip(reference.iter())
        .zip(variance.iter())
        .map(|((&c, &r), &v)| {
            let z = (c - r).abs() / v.sqrt().max(1e-6);
            let weight = 1.0 / (v + 1e-6);
            ((-0.5 * z * z).exp(), weight)
        })
        .fold((0.0_f32, 0.0_f32), |(sc, sw), (c, w)| (sc + c * w, sw + w));
    score_sum / weight_sum.max(1e-6)
}
```
The static/dynamic decomposition uses `ruvector-solver` to separate environmental drift (slow, global) from body motion (fast, subcarrier-specific).
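One way to picture that split is a slow/fast decomposition per subcarrier. The sketch below substitutes an exponential moving average for the solver-based method (a simplification; names are hypothetical):

```rust
/// Split a per-subcarrier sample stream into a slow (environmental drift)
/// and fast (body motion) component with an exponential moving average.
/// Smaller `alpha` tracks slower drift. EMA stand-in for the solver path.
fn static_dynamic_split(samples: &[f32], alpha: f32) -> (Vec<f32>, Vec<f32>) {
    let mut slow = Vec::with_capacity(samples.len());
    let mut fast = Vec::with_capacity(samples.len());
    let mut ema = samples.first().copied().unwrap_or(0.0);
    for &x in samples {
        ema += alpha * (x - ema);
        slow.push(ema);  // drifting environmental component
        fast.push(x - ema); // residual body-motion component
    }
    (slow, fast)
}
```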
### 2.6 Coherence-Gated Update Policy
```rust
pub enum GateDecision {
    /// Coherence > 0.85: Full Kalman measurement update
    Accept(Pose),
    /// 0.5 < coherence < 0.85: Kalman predict only (3x inflated noise)
    PredictOnly,
    /// Coherence < 0.5: Reject measurement entirely
    Reject,
    /// >10s continuous low coherence: Trigger SONA recalibration (ADR-005)
    Recalibrate,
}
```
When `Recalibrate` fires:
1. Freeze output at last known good pose
2. Collect 200 frames (10s) of unlabeled CSI
3. Run AETHER contrastive TTT (ADR-024) to adapt encoder
4. Update SONA LoRA weights (ADR-005), <1ms per update
5. Resume sensing with adapted model
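The policy reduces to a pure decision function. In this sketch the `Pose` payload is elided so the thresholds are testable in isolation; the thresholds match the enum documentation above:

```rust
/// Payload-free mirror of GateDecision for testing the thresholds.
#[derive(Debug, PartialEq)]
enum Gate { Accept, PredictOnly, Reject, Recalibrate }

/// Map a coherence score plus time-in-low-coherence to a gate decision.
fn gate(coherence: f32, low_coherence_secs: f32) -> Gate {
    if low_coherence_secs > 10.0 {
        Gate::Recalibrate      // sustained low coherence: trigger SONA
    } else if coherence > 0.85 {
        Gate::Accept           // full Kalman measurement update
    } else if coherence > 0.5 {
        Gate::PredictOnly      // predict with inflated noise
    } else {
        Gate::Reject           // discard this measurement
    }
}
```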
### 2.7 Pose Tracker (17-Keypoint Kalman with Re-ID)
Lift the Kalman + lifecycle + re-ID infrastructure from `wifi-densepose-mat/src/tracking/` (ADR-026) into the RuvSense bounded context, extended for 17-keypoint skeletons:
| Parameter | Value | Rationale |
|-----------|-------|-----------|
| State dimension | 6 per keypoint (x,y,z,vx,vy,vz) | Constant-velocity model |
| Process noise σ_a | 0.3 m/s² | Normal walking acceleration |
| Measurement noise σ_obs | 0.08 m | Target <8cm RMS at torso |
| Mahalanobis gate | χ²(3) = 9.0 | 3σ ellipsoid (same as ADR-026) |
| Birth hits | 2 frames (100ms at 20Hz) | Reject single-frame noise |
| Loss misses | 5 frames (250ms) | Brief occlusion tolerance |
| Re-ID feature | AETHER 128-dim embedding | Body-shape discriminative (ADR-024) |
| Re-ID window | 5 seconds | Sufficient for crossing recovery |
**Track assignment** uses `ruvector-mincut`'s `DynamicPersonMatcher` (already integrated in `metrics.rs`, ADR-016) with joint position + embedding cost:
```
cost(track_i, det_j) = 0.6 * mahalanobis(track_i, det_j.position)
+ 0.4 * (1 - cosine_sim(track_i.embedding, det_j.embedding))
```
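As code, the cost is a two-term blend; the Mahalanobis distance and embedding cosine similarity are assumed precomputed upstream:

```rust
/// Assignment cost: position term weighted 0.6, embedding term 0.4.
/// A perfect embedding match (cosine_sim = 1.0) leaves only position cost.
fn assignment_cost(mahalanobis: f32, cosine_sim: f32) -> f32 {
    0.6 * mahalanobis + 0.4 * (1.0 - cosine_sim)
}
```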
---
## 3. GOAP Integration Plan (Goal-Oriented Action Planning)
### 3.1 Action Dependency Graph
```
Phase 1: Foundation
  Action 1: Channel-Hopping Firmware ───────────────────────────┐
      │                                                         │
      ▼                                                         │
  Action 2: Multi-Band Frame Fusion ──→ Action 6: Coherence     │
      │                                       Metric            │
      ▼                                         │               │
  Action 3: Multistatic Mesh                    ▼               │
      │                               Action 7: Coherence Gate  │
      ▼                                         │               │
Phase 2: Tracking                               │               │
  Action 4: Pose Tracker ←──────────────────────┘               │
      │                                                         │
      ▼                                                         │
  Action 5: End-to-End Pipeline @ 20 Hz ←───────────────────────┘
      │
      ▼
Phase 4: Hardening
  Action 8: AETHER Track Re-ID
      │
      ▼
  Action 9: ADR-029 Documentation (this document)
```
### 3.2 Cost and RuVector Mapping
| # | Action | Cost | Preconditions | RuVector Crates | Effects |
|---|--------|------|---------------|-----------------|---------|
| 1 | Channel-hopping firmware | 4/10 | ESP32 firmware exists | None (pure C) | `bandwidth_extended = true` |
| 2 | Multi-band frame fusion | 5/10 | Action 1 | `solver`, `attention` | `fused_multi_band_frame = true` |
| 3 | Multistatic mesh aggregation | 5/10 | Action 2 | `mincut`, `attn-mincut` | `multistatic_mesh = true` |
| 4 | Pose tracker | 4/10 | Action 3, 7 | `mincut` | `pose_tracker = true` |
| 5 | End-to-end pipeline | 6/10 | Actions 2-4 | `temporal-tensor`, `attention` | `20hz_update = true` |
| 6 | Coherence metric | 3/10 | Action 2 | `solver` | `coherence_metric = true` |
| 7 | Coherence gate | 3/10 | Action 6 | `attn-mincut` | `coherence_gating = true` |
| 8 | AETHER re-ID | 4/10 | Actions 4, 7 | `attention` | `identity_stable = true` |
| 9 | ADR documentation | 2/10 | All above | None | Decision documented |
**Total cost: 36 units. Minimum viable path to acceptance test: Actions 1-5 + 6-7 = 30 units.**
### 3.3 Latency Budget (50ms cycle)
| Stage | Budget | Method |
|-------|--------|--------|
| UDP receive + parse | <1 ms | ADR-018 binary, 148 bytes, zero-alloc |
| Multi-band fusion | ~2 ms | NeumannSolver on 2×2 phase alignment |
| Multistatic fusion | ~3 ms | attn_mincut on 3-6 nodes × 64 velocity bins |
| Model inference | ~30-40 ms | CsiToPoseTransformer (lightweight, no ResNet) |
| Kalman update | <1 ms | 17 independent 6D filters, stack-allocated |
| **Total** | **~37-47 ms** | **Fits in 50 ms** |
---
## 4. Hardware Bill of Materials
| Component | Qty | Unit Cost | Purpose |
|-----------|-----|-----------|---------|
| ESP32-S3-DevKitC-1 | 4 | $10 | TX/RX sensing nodes |
| ESP32-S3-DevKitC-1 | 1 | $10 | Aggregator (or x86/RPi host) |
| External 5dBi antenna | 4-8 | $3 | Improved gain, directional coverage |
| USB-C hub (4 port) | 1 | $15 | Power distribution |
| Wall mount brackets | 4 | $2 | Ceiling/wall installation |
| **Total** | | **$73-91** | Complete 4-node mesh |
---
## 5. RuVector v2.0.4 Integration Map
All five published crates are exercised:
| Crate | Actions | Integration Point | Algorithmic Advantage |
|-------|---------|-------------------|----------------------|
| `ruvector-solver` | 2, 6 | Phase alignment; coherence matrix decomposition | O(√n) Neumann convergence |
| `ruvector-attention` | 2, 5, 8 | Cross-channel weighting; ring buffer; embedding similarity | Sublinear attention for small d |
| `ruvector-mincut` | 3, 4 | Viewpoint diversity partitioning; track assignment | O(n^1.5 log n) dynamic updates |
| `ruvector-attn-mincut` | 3, 7 | Cross-node spectrogram fusion; coherence gating | Attention + mincut in one pass |
| `ruvector-temporal-tensor` | 5 | Compressed sensing window ring buffer | 50-75% memory reduction |
---
## 6. IEEE 802.11bf Alignment
RuvSense's TDMA sensing schedule is forward-compatible with IEEE 802.11bf (WLAN Sensing, published 2024):
| RuvSense Concept | 802.11bf Equivalent |
|-----------------|---------------------|
| TX slot | Sensing Initiator |
| RX slot | Sensing Responder |
| TDMA cycle | Sensing Measurement Instance |
| NDP frame | Sensing NDP |
| Aggregator | Sensing Session Owner |
When commercial APs support 802.11bf, the ESP32 mesh can interoperate by translating SSP slots into 802.11bf Sensing Trigger frames.
---
## 7. Dependency Changes
### Firmware (C)
New files:
- `firmware/esp32-csi-node/main/sensing_schedule.h`
- `firmware/esp32-csi-node/main/sensing_schedule.c`
Modified files:
- `firmware/esp32-csi-node/main/csi_collector.c` (add channel hopping, link tagging)
- `firmware/esp32-csi-node/main/main.c` (add GPIO sync, TDMA timer)
### Rust
New module: `crates/wifi-densepose-signal/src/ruvsense/` (6 files, ~1500 lines estimated)
Modified files:
- `crates/wifi-densepose-signal/src/lib.rs` (export `ruvsense` module)
- `crates/wifi-densepose-signal/Cargo.toml` (no new deps; all ruvector crates already present per ADR-017)
- `crates/wifi-densepose-sensing-server/src/main.rs` (wire RuvSense pipeline into WebSocket output)
No new workspace dependencies. All ruvector crates are already in the workspace `Cargo.toml`.
---
## 8. Implementation Priority
| Priority | Actions | Weeks | Milestone |
|----------|---------|-------|-----------|
| P0 | 1 (firmware) | 2 | Channel-hopping ESP32 prototype |
| P0 | 2 (multi-band) | 2 | Wideband virtual frames |
| P1 | 3 (multistatic) | 2 | Multi-node fusion |
| P1 | 4 (tracker) | 1 | 17-keypoint Kalman |
| P1 | 6, 7 (coherence) | 1 | Gated updates |
| P2 | 5 (end-to-end) | 2 | 20 Hz pipeline |
| P2 | 8 (AETHER re-ID) | 1 | Identity hardening |
| P3 | 9 (docs) | 0.5 | This ADR finalized |
| **Total** | | **~10 weeks** | **Acceptance test** |
---
## 9. Consequences
### 9.1 Positive
- **3x bandwidth improvement** without hardware changes (channel hopping on existing ESP32)
- **12 independent viewpoints** from 4 commodity $10 nodes (C(4,2) × 2 links)
- **20 Hz update rate** with Kalman-smoothed output for sub-30mm torso jitter
- **Days-long stability** via coherence gating + SONA recalibration
- **All five ruvector crates exercised** — consistent algorithmic foundation
- **$73-91 total BOM** — accessible for research and production
- **802.11bf forward-compatible** — investment protected as commercial sensing arrives
- **Cognitum upgrade path** — same software stack, swap ESP32 for higher-bandwidth front end
### 9.2 Negative
- **4-node deployment** requires physical installation and calibration of node positions
- **TDMA scheduling** reduces per-node CSI rate (each node only transmits 1/4 of the time)
- **Channel hopping** introduces ~1-5ms gaps during `esp_wifi_set_channel()` transitions
- **5 GHz CSI on ESP32-S3** may not be available (ESP32-C6 supports it natively)
- **Coherence gate** may reject valid measurements during fast body motion (mitigation: gate only on static-subcarrier coherence)
### 9.3 Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| ESP32 channel hop causes CSI gaps | Medium | Reduced effective rate | Measure gap duration; increase dwell if >5ms |
| 5 GHz CSI unavailable on S3 | High | Lose frequency diversity | Fallback: 3-channel 2.4 GHz still provides 3x BW; ESP32-C6 for dual-band |
| Model inference >40ms | Medium | Miss 20 Hz target | Run model at 10 Hz; Kalman predict at 20 Hz interpolates |
| Two-person separation fails at 3 nodes | Low | Identity swaps | AETHER re-ID recovers; increase to 4-6 nodes |
| Coherence gate false-triggers | Low | Missed updates | Gate on environmental coherence only, not body-motion subcarriers |
---
## 10. Related ADRs
| ADR | Relationship |
|-----|-------------|
| ADR-012 | **Extended**: RuvSense adds TDMA multistatic to single-AP mesh |
| ADR-014 | **Used**: All 6 SOTA algorithms applied per-link |
| ADR-016 | **Extended**: New ruvector integration points for multi-link fusion |
| ADR-017 | **Extended**: Coherence gating adds temporal stability layer |
| ADR-018 | **Modified**: Firmware gains channel hopping, TDMA schedule, HT40 |
| ADR-022 | **Complementary**: RuvSense is the ESP32 equivalent of Windows multi-BSSID |
| ADR-024 | **Used**: AETHER embeddings for person re-identification |
| ADR-026 | **Reused**: Kalman + lifecycle infrastructure lifted to RuvSense |
| ADR-027 | **Used**: GeometryEncoder, HardwareNormalizer, FiLM conditioning |
---
## 11. References
1. IEEE 802.11bf-2024. "WLAN Sensing." IEEE Standards Association.
2. Geng, J., Huang, D., De la Torre, F. (2023). "DensePose From WiFi." arXiv:2301.00250.
3. Yan, K. et al. (2024). "Person-in-WiFi 3D." CVPR 2024, pp. 969-978.
4. Chen, L. et al. (2026). "PerceptAlign: Geometry-Aware WiFi Sensing." arXiv:2601.12252.
5. Kotaru, M. et al. (2015). "SpotFi: Decimeter Level Localization Using WiFi." SIGCOMM.
6. Zheng, Y. et al. (2019). "Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi." MobiSys.
7. Zeng, Y. et al. (2019). "FarSense: Pushing the Range Limit of WiFi-based Respiration Sensing." MobiCom.
8. AM-FM (2026). "A Foundation Model for Ambient Intelligence Through WiFi." arXiv:2602.11200.
9. Espressif ESP-CSI. https://github.com/espressif/esp-csi

@@ -1,364 +0,0 @@
# ADR-030: RuvSense Persistent Field Model — Longitudinal Drift Detection and Exotic Sensing Tiers
| Field | Value |
|-------|-------|
| **Status** | Proposed |
| **Date** | 2026-03-02 |
| **Deciders** | ruv |
| **Codename** | **RuvSense Field** — Persistent Electromagnetic World Model |
| **Relates to** | ADR-029 (RuvSense Multistatic), ADR-005 (SONA Self-Learning), ADR-024 (AETHER Embeddings), ADR-016 (RuVector Integration), ADR-026 (Survivor Track Lifecycle), ADR-027 (MERIDIAN Generalization) |
---
## 1. Context
### 1.1 Beyond Pose Estimation
ADR-029 establishes RuvSense as a sensing-first multistatic mesh achieving 20 Hz DensePose with <30mm jitter. That treats WiFi as a **momentary pose estimator**. The next leap: treat the electromagnetic field as a **persistent world model** that remembers, predicts, and explains.
The most exotic capabilities come from this shift in abstraction level:
- The room is the model, not the person
- People are structured perturbations to a baseline
- Changes are deltas from a known state, not raw measurements
- Time is a first-class dimension — the system remembers days, not frames
### 1.2 The Seven Capability Tiers
| Tier | Capability | Foundation |
|------|-----------|-----------|
| 1 | **Field Normal Modes** — Room electromagnetic eigenstructure | Baseline calibration + SVD |
| 2 | **Coarse RF Tomography** — 3D occupancy volume from link attenuations | Sparse tomographic inversion |
| 3 | **Intention Lead Signals** — Pre-movement prediction (200-500ms lead) | Temporal embedding trajectory analysis |
| 4 | **Longitudinal Biomechanics Drift** — Personal baseline deviation over days | Welford statistics + HNSW memory |
| 5 | **Cross-Room Continuity** — Identity persistence across spaces without optics | Environment fingerprinting + transition graph |
| 6 | **Invisible Interaction Layer** — Multi-user gesture control through walls/darkness | Per-person CSI perturbation classification |
| 7 | **Adversarial Detection** — Physically impossible signal identification | Multi-link consistency + field model constraints |
### 1.3 Signals, Not Diagnoses
RF sensing detects **biophysical proxies**, not medical conditions:
| Detectable Signal | Not Detectable |
|-------------------|---------------|
| Breathing rate variability | COPD diagnosis |
| Gait asymmetry shift (18% over 14 days) | Parkinson's disease |
| Posture instability increase | Neurological condition |
| Micro-tremor onset | Specific tremor etiology |
| Activity level decline | Depression or pain diagnosis |
The output is: "Your movement symmetry has shifted 18 percent over 14 days." That is actionable without being diagnostic. The evidence chain (stored embeddings, drift statistics, coherence scores) is fully traceable.
### 1.4 Acceptance Tests
**Tier 0 (ADR-029):** Two people, 20 Hz, 10 min stable tracks, zero ID swaps, <30mm torso jitter.
**Tier 1-4 (this ADR):** Seven-day run, no manual tuning. System flags one real environmental change and one real human drift event, produces traceable explanation using stored embeddings plus graph constraints.
**Tier 5-7 (appliance):** Thirty-day local run, no camera. Detects meaningful drift with <5% false alarm rate.
---
## 2. Decision
### 2.1 Implement Field Normal Modes as the Foundation
Add a `field_model` module to `wifi-densepose-signal/src/ruvsense/` that learns the room's electromagnetic baseline during unoccupied periods and decomposes all subsequent observations into environmental drift + body perturbation.
```
wifi-densepose-signal/src/ruvsense/
├── mod.rs // (existing, extend)
├── field_model.rs // NEW: Field normal mode computation + perturbation extraction
├── tomography.rs // NEW: Coarse RF tomography from link attenuations
├── longitudinal.rs // NEW: Personal baseline + drift detection
├── intention.rs // NEW: Pre-movement lead signal detector
├── cross_room.rs // NEW: Cross-room identity continuity
├── gesture.rs // NEW: Gesture classification from CSI perturbations
├── adversarial.rs // NEW: Physically impossible signal detection
└── (existing files...)
```
### 2.2 Core Architecture: The Persistent Field Model
```
Time
  ┌────────────────────────────────┐
  │ Field Normal Modes (Tier 1)    │
  │ Room baseline + SVD modes      │
  │ ruvector-solver                │
  └────────────┬───────────────────┘
               │ Body perturbation (environmental drift removed)
       ┌───────┴───────┐
       │               │
       ▼               ▼
  ┌──────────┐   ┌──────────────┐
  │ Pose     │   │ RF Tomography│
  │ (ADR-029)│   │ (Tier 2)     │
  │ 20 Hz    │   │ Occupancy vol│
  └────┬─────┘   └──────────────┘
       │
       ▼
  ┌──────────────────────────────┐
  │ AETHER Embedding (ADR-024)   │
  │ 128-dim contrastive vector   │
  └──────────────┬───────────────┘
         ┌───────┼───────┐
         │       │       │
         ▼       ▼       ▼
  ┌─────────┐ ┌─────┐ ┌──────────┐
  │Intention│ │Track│ │Cross-Room│
  │Lead     │ │Re-ID│ │Continuity│
  │(Tier 3) │ │     │ │(Tier 5)  │
  └─────────┘ └──┬──┘ └──────────┘
                 │
                 ▼
  ┌──────────────────────────────┐
  │ RuVector Longitudinal Memory │
  │ HNSW + graph + Welford stats │
  │ (Tier 4)                     │
  └──────────────┬───────────────┘
         ┌───────┴───────┐
         │               │
         ▼               ▼
  ┌──────────────┐ ┌──────────────┐
  │ Drift Reports│ │ Adversarial  │
  │ (Level 1-3)  │ │ Detection    │
  │              │ │ (Tier 7)     │
  └──────────────┘ └──────────────┘
```
### 2.3 Field Normal Modes (Tier 1)
**What it is:** The room's electromagnetic eigenstructure — the stable propagation paths, reflection coefficients, and interference patterns when nobody is present.
**How it works:**
1. During quiet periods (empty room, overnight), collect 10 minutes of CSI across all links
2. Compute per-link baseline (mean CSI vector)
3. Compute environmental variation modes via SVD (temperature, humidity, time-of-day effects)
4. Store top-K modes (K=3-5 typically captures >95% of environmental variance)
5. At runtime: subtract baseline, project out environmental modes, keep body perturbation
```rust
pub struct FieldNormalMode {
    pub baseline: Vec<Vec<Complex<f32>>>,   // [n_links × n_subcarriers]
    pub environmental_modes: Vec<Vec<f32>>, // [n_modes × n_subcarriers]
    pub mode_energies: Vec<f32>,            // eigenvalues
    pub calibrated_at: u64,
    pub geometry_hash: u64,
}
```
**RuVector integration:**
- `ruvector-solver` → Low-rank SVD for mode extraction
- `ruvector-temporal-tensor` → Compressed baseline history storage
- `ruvector-attn-mincut` → Identify which subcarriers belong to which mode
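Step 5 of the runtime path can be sketched on real-valued amplitude vectors; the actual `FieldNormalMode` stores complex CSI, and the environmental modes are assumed unit-norm here (illustration only):

```rust
/// Extract the body perturbation from one observation: subtract the
/// unoccupied-room baseline, then project out each environmental mode.
/// Modes are assumed unit-norm (simplification for this sketch).
fn body_perturbation(obs: &[f32], baseline: &[f32], modes: &[Vec<f32>]) -> Vec<f32> {
    // 1. Remove the unoccupied-room baseline.
    let mut residual: Vec<f32> =
        obs.iter().zip(baseline).map(|(o, b)| o - b).collect();
    // 2. Project out each environmental mode (temperature, humidity, ...).
    for mode in modes {
        let dot: f32 = residual.iter().zip(mode).map(|(r, m)| r * m).sum();
        for (r, m) in residual.iter_mut().zip(mode) {
            *r -= dot * m;
        }
    }
    residual // what remains is attributed to bodies in the room
}
```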
### 2.4 Longitudinal Drift Detection (Tier 4)
**The defensible pipeline:**
```
RF → AETHER contrastive embedding
→ RuVector longitudinal memory (HNSW + graph)
→ Coherence-gated drift detection (Welford statistics)
→ Risk flag with traceable evidence
```
**Three monitoring levels:**
| Level | Signal Type | Example Output |
|-------|------------|----------------|
| **1: Physiological** | Raw biophysical metrics | "Breathing rate: 18.3 BPM today, 7-day avg: 16.1" |
| **2: Drift** | Personal baseline deviation | "Gait symmetry shifted 18% over 14 days" |
| **3: Risk correlation** | Pattern-matched concern | "Pattern consistent with increased fall risk" |
**Storage model:**
```rust
pub struct PersonalBaseline {
    pub person_id: PersonId,
    pub gait_symmetry: WelfordStats,
    pub stability_index: WelfordStats,
    pub breathing_regularity: WelfordStats,
    pub micro_tremor: WelfordStats,
    pub activity_level: WelfordStats,
    pub embedding_centroid: Vec<f32>, // [128]
    pub observation_days: u32,
    pub updated_at: u64,
}
```
**RuVector integration:**
- `ruvector-temporal-tensor` → Compressed daily summaries (50-75% memory savings)
- HNSW → Embedding similarity search across longitudinal record
- `ruvector-attention` → Per-metric drift significance weighting
- `ruvector-mincut` → Temporal segmentation (detect changepoints in metric series)
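The Welford statistics referenced by `PersonalBaseline` reduce to a small running-update rule. Here is a minimal version together with the 2σ drift test the appliance spec implies (a sketch, not the crate's implementation):

```rust
/// Running mean/variance via Welford's online algorithm: numerically
/// stable, O(1) per sample, no stored history.
#[derive(Default)]
struct WelfordStats { n: u64, mean: f64, m2: f64 }

impl WelfordStats {
    fn update(&mut self, x: f64) {
        self.n += 1;
        let d = x - self.mean;
        self.mean += d / self.n as f64;
        self.m2 += d * (x - self.mean);
    }
    /// Sample standard deviation (0 until two observations exist).
    fn std_dev(&self) -> f64 {
        if self.n < 2 { 0.0 } else { (self.m2 / (self.n - 1) as f64).sqrt() }
    }
    /// Drift flag: new value deviates more than 2σ from the baseline.
    fn is_drift(&self, x: f64) -> bool {
        self.n >= 2 && (x - self.mean).abs() > 2.0 * self.std_dev()
    }
}
```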
### 2.5 Regulatory Classification
| Classification | What You Claim | Regulatory Path |
|---------------|---------------|-----------------|
| **Consumer wellness** (recommended first) | Activity metrics, breathing rate, stability score | Self-certification, FCC Part 15 |
| **Clinical decision support** (future) | Fall risk alert, respiratory pattern concern | FDA Class II 510(k) or De Novo |
| **Regulated medical device** (requires clinical partner) | Diagnostic claims for specific conditions | FDA Class II/III + clinical trials |
**Decision: Start as consumer wellness.** Build 12+ months of real-world longitudinal data. The dataset itself becomes the asset for future regulatory submissions.
---
## 3. Appliance Product Categories
### 3.1 Invisible Guardian
Wall-mounted wellness monitor for elderly care and independent living. No camera, no microphone, no reconstructable data. Stores embeddings and structural deltas only.
| Spec | Value |
|------|-------|
| Nodes | 4 ESP32-S3 pucks per room |
| Processing | Central hub (RPi 5 or x86) |
| Power | PoE or USB-C |
| Output | Risk flags, drift alerts, occupancy timeline |
| BOM | $73-91 (ESP32 mesh) + $35-80 (hub) |
| Validation | 30-day autonomous run, <5% false alarm rate |
### 3.2 Spatial Digital Twin Node
Live electromagnetic room model for smart buildings and workplace analytics.
| Spec | Value |
|------|-------|
| Output | Occupancy heatmap, flow vectors, dwell time, anomaly events |
| Integration | MQTT/REST API for BMS and CAFM |
| Retention | 30-day rolling, GDPR-compliant |
| Vertical | Smart buildings, retail, workspace optimization |
### 3.3 RF Interaction Surface
Multi-user gesture interface. No cameras. Works in darkness, smoke, through clothing.
| Spec | Value |
|------|-------|
| Gestures | Wave, point, beckon, push, circle + custom |
| Users | Up to 4 simultaneous |
| Latency | <100ms gesture recognition |
| Vertical | Smart home, hospitality, accessibility |
### 3.4 Pre-Incident Drift Monitor
Longitudinal biomechanics tracker for rehabilitation and occupational health.
| Spec | Value |
|------|-------|
| Baseline | 7-day calibration per person |
| Alert | Metric drift >2σ for >3 days |
| Evidence | Stored embedding trajectory + statistical report |
| Vertical | Elderly care, rehab, occupational health |
### 3.5 Vertical Recommendation for First Hardware SKU
**Invisible Guardian** — the elderly care wellness monitor. Rationale:
1. Largest addressable market with immediate revenue (aging population, care facility demand)
2. Lowest regulatory bar (consumer wellness, no diagnostic claims)
3. Privacy advantage over cameras is a selling point, not a limitation
4. 30-day autonomous operation validates all tiers (field model, drift detection, coherence gating)
5. $108-171 BOM allows $299-499 retail with healthy margins
---
## 4. RuVector Integration Map (Extended)
All five crates are exercised across the exotic tiers:
| Tier | Crate | API | Role |
|------|-------|-----|------|
| 1 (Field) | `ruvector-solver` | `NeumannSolver` + SVD | Environmental mode decomposition |
| 1 (Field) | `ruvector-temporal-tensor` | `TemporalTensorCompressor` | Baseline history storage |
| 1 (Field) | `ruvector-attn-mincut` | `attn_mincut` | Mode-subcarrier assignment |
| 2 (Tomo) | `ruvector-solver` | `NeumannSolver` (L1) | Sparse tomographic inversion |
| 3 (Intent) | `ruvector-attention` | `ScaledDotProductAttention` | Temporal trajectory weighting |
| 3 (Intent) | `ruvector-temporal-tensor` | `CompressedCsiBuffer` | 2-second embedding history |
| 4 (Drift) | `ruvector-temporal-tensor` | `TemporalTensorCompressor` | Daily summary compression |
| 4 (Drift) | `ruvector-attention` | `ScaledDotProductAttention` | Metric drift significance |
| 4 (Drift) | `ruvector-mincut` | `DynamicMinCut` | Temporal changepoint detection |
| 5 (Cross-Room) | `ruvector-attention` | HNSW | Room and person fingerprint matching |
| 5 (Cross-Room) | `ruvector-mincut` | `MinCutBuilder` | Transition graph partitioning |
| 6 (Gesture) | `ruvector-attention` | `ScaledDotProductAttention` | Gesture template matching |
| 7 (Adversarial) | `ruvector-solver` | `NeumannSolver` | Physical plausibility verification |
| 7 (Adversarial) | `ruvector-attn-mincut` | `attn_mincut` | Multi-link consistency check |
---
## 5. Implementation Priority
| Priority | Tier | Module | Weeks | Dependency |
|----------|------|--------|-------|------------|
| P0 | 1 | `field_model.rs` | 2 | ADR-029 multistatic mesh operational |
| P0 | 4 | `longitudinal.rs` | 2 | Tier 1 baseline + AETHER embeddings |
| P1 | 2 | `tomography.rs` | 1 | Tier 1 perturbation extraction |
| P1 | 3 | `intention.rs` | 2 | Tier 1 + temporal embedding history |
| P2 | 5 | `cross_room.rs` | 2 | Tier 4 person profiles + multi-room deployment |
| P2 | 6 | `gesture.rs` | 1 | Tier 1 perturbation + per-person separation |
| P3 | 7 | `adversarial.rs` | 1 | Tier 1 field model + multi-link consistency |
**Total exotic tier: ~11 weeks after ADR-029 acceptance test passes.**
---
## 6. Consequences
### 6.1 Positive
- **Room becomes self-sensing**: Field normal modes provide a persistent baseline that explains change as structured deltas
- **7-day autonomous operation**: Coherence gating + SONA adaptation + longitudinal memory eliminate manual tuning
- **Privacy by design**: No images, no audio, no reconstructable data — only embeddings and statistical summaries
- **Traceable evidence**: Every drift alert links to stored embeddings, timestamps, and graph constraints
- **Multiple product categories**: Same software stack, different packaging — Guardian, Twin, Interaction, Drift Monitor
- **Regulatory clarity**: Consumer wellness first, clinical decision support later with accumulated dataset
- **Security primitive**: Coherence gating detects adversarial injection, not just quality issues
### 6.2 Negative
- **7-day calibration** required for personal baselines (system is less useful during initial period)
- **Empty-room calibration** needed for field normal modes (may not always be available)
- **Storage growth**: Longitudinal memory grows ~1 KB/person/day (manageable but non-zero)
- **Statistical power**: Drift detection requires 14+ days of data for meaningful z-scores
- **Multi-room**: Cross-room continuity requires hardware in all rooms (cost scales linearly)
### 6.3 Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| Field modes drift faster than expected | Medium | False perturbation detections | Reduce mode update interval from 24h to 4h |
| Personal baselines too variable | Medium | High false alarm rate for drift | Widen sigma threshold from 2σ to 3σ; require 5+ days |
| Cross-room matching fails for similar body types | Low | Identity confusion | Require temporal proximity (<60s) plus spatial adjacency |
| Gesture recognition insufficient SNR | Medium | <80% accuracy | Restrict to near-field (<2m) initially |
| Adversarial injection via coordinated WiFi injection | Very Low | Spoofed occupancy | Multi-link consistency check makes single-link spoofing detectable |
---
## 7. Related ADRs
| ADR | Relationship |
|-----|-------------|
| ADR-029 | **Prerequisite**: Multistatic mesh is the sensing substrate for all exotic tiers |
| ADR-005 (SONA) | **Extended**: SONA recalibration triggered by coherence gate → now also by drift events |
| ADR-016 (RuVector) | **Extended**: All 5 crates exercised across 7 exotic tiers |
| ADR-024 (AETHER) | **Critical dependency**: Embeddings are the representation for all longitudinal memory |
| ADR-026 (Tracking) | **Extended**: Track lifecycle now spans days (not minutes) for drift detection |
| ADR-027 (MERIDIAN) | **Used**: Room geometry encoding for field normal mode conditioning |
---
## 8. References
1. IEEE 802.11bf-2024. "WLAN Sensing." IEEE Standards Association.
2. FDA. "General Wellness: Policy for Low Risk Devices." Guidance Document, 2019.
3. EU MDR 2017/745. "Medical Device Regulation." Official Journal of the European Union.
4. Welford, B.P. (1962). "Note on a Method for Calculating Corrected Sums of Squares." Technometrics.
5. Chen, L. et al. (2026). "PerceptAlign: Geometry-Aware WiFi Sensing." arXiv:2601.12252.
6. AM-FM (2026). "A Foundation Model for Ambient Intelligence Through WiFi." arXiv:2602.11200.
7. Geng, J. et al. (2023). "DensePose From WiFi." arXiv:2301.00250.


@@ -1,369 +0,0 @@
# ADR-031: Project RuView -- Sensing-First RF Mode for Multistatic Fidelity Enhancement
| Field | Value |
|-------|-------|
| **Status** | Proposed |
| **Date** | 2026-03-02 |
| **Deciders** | ruv |
| **Codename** | **RuView** -- RuVector Viewpoint-Integrated Enhancement |
| **Relates to** | ADR-012 (ESP32 Mesh), ADR-014 (SOTA Signal), ADR-016 (RuVector Integration), ADR-017 (RuVector Signal+MAT), ADR-021 (Vital Signs), ADR-024 (AETHER Embeddings), ADR-027 (MERIDIAN Cross-Environment) |
---
## 1. Context
### 1.1 The Single-Viewpoint Fidelity Ceiling
Current WiFi DensePose operates with a single transmitter-receiver pair (or single node receiving). This creates three fundamental limitations:
- **Body self-occlusion**: Limbs behind the torso are invisible to a single viewpoint.
- **Depth ambiguity**: Motion along the RF propagation axis (toward/away from receiver) produces minimal phase change.
- **Multi-person confusion**: Two people at similar range but different angles create overlapping CSI signatures.
The ESP32 mesh (ADR-012) partially addresses this via feature-level fusion across 3-6 nodes, but feature-level fusion cannot learn optimal fusion weights -- it uses hand-crafted aggregation (max, mean, coherent sum).
### 1.2 Three Fidelity Levers
1. **Bandwidth**: More bandwidth produces better multipath separability. Currently limited to 20 MHz (ESP32 HT20). Wider channels (80/160 MHz) are available on commodity 802.11ac/ax APs.
2. **Carrier frequency**: Higher frequency produces more phase sensitivity. 2.4 GHz sees macro-motion; 5 GHz sees micro-motion; 60 GHz sees vital signs.
3. **Viewpoints**: More viewpoints from different angles reduce geometric ambiguity. This is the lever RuView pulls.
### 1.3 Why "Sensing-First RF Mode"
RuView is NOT a new WiFi standard. It is a sensing-first protocol that rides on existing silicon, bands, and regulations. The key insight: instead of upgrading the RF hardware, upgrade the observability by coordinating multiple commodity receivers.
### 1.4 What Already Exists
| Component | ADR | Current State |
|-----------|-----|---------------|
| ESP32 mesh with feature-level fusion | ADR-012 | Implemented (firmware + aggregator) |
| SOTA signal processing (Hampel, Fresnel, BVP, spectrogram) | ADR-014 | Implemented |
| RuVector training pipeline (5 crates) | ADR-016 | Complete |
| RuVector signal + MAT integration (7 points) | ADR-017 | Accepted |
| Vital sign detection pipeline | ADR-021 | Partially implemented |
| AETHER contrastive embeddings | ADR-024 | Proposed |
| MERIDIAN cross-environment generalization | ADR-027 | Proposed |
RuView fills the gap: **cross-viewpoint embedding fusion** using learned attention weights.
---
## 2. Decision
Introduce RuView as a cross-viewpoint embedding fusion layer that operates on top of AETHER per-viewpoint embeddings. RuView adds a new bounded context (ViewpointFusion) and extends three existing crates.
### 2.1 Core Architecture
```
+-----------------------------------------------------------------+
| RuView Multistatic Pipeline |
+-----------------------------------------------------------------+
| |
| +----------+ +----------+ +----------+ +----------+ |
| | Node 1 | | Node 2 | | Node 3 | | Node N | |
| | ESP32-S3 | | ESP32-S3 | | ESP32-S3 | | ESP32-S3 | |
| | | | | | | | | |
| | CSI Rx | | CSI Rx | | CSI Rx | | CSI Rx | |
| +----+-----+ +----+-----+ +----+-----+ +----+-----+ |
| | | | | |
| v v v v |
| +--------------------------------------------------------+ |
| | Per-Viewpoint Signal Processing | |
| | Phase sanitize -> Hampel -> BVP -> Subcarrier select | |
| | (ADR-014, unchanged per viewpoint) | |
| +----------------------------+---------------------------+ |
| | |
| v |
| +--------------------------------------------------------+ |
| | Per-Viewpoint AETHER Embedding | |
| | CsiToPoseTransformer -> 128-d contrastive embedding | |
| | (ADR-024, one per viewpoint) | |
| +----------------------------+---------------------------+ |
| | |
| [emb_1, emb_2, ..., emb_N] |
| | |
| v |
| +--------------------------------------------------------+ |
| | * RuView Cross-Viewpoint Fusion * | |
| | | |
| | Q = W_q * X, K = W_k * X, V = W_v * X | |
| | A = softmax((QK^T + G_bias) / sqrt(d)) | |
| | fused = A * V | |
| | | |
| | G_bias: geometric bias from viewpoint pair geometry | |
| | (ruvector-attention: ScaledDotProductAttention) | |
| +----------------------------+---------------------------+ |
| | |
| fused_embedding |
| | |
| v |
| +--------------------------------------------------------+ |
| | DensePose Regression Head | |
| | Keypoint head: [B,17,H,W] | |
| | Part/UV head: [B,25,H,W] + [B,48,H,W] | |
| +--------------------------------------------------------+ |
+-----------------------------------------------------------------+
```
### 2.2 TDM Sensing Protocol
- Coordinator (aggregator) broadcasts sync beacon at start of each cycle.
- Each node transmits in assigned time slot; all others receive.
- 6 nodes x 1.4 ms/slot = 8.4 ms cycle -> ~119 Hz aggregate, ~20 Hz per bistatic pair.
- Clock drift handled at feature level (no cross-node phase alignment).
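The slot arithmetic above can be checked with a small helper. This is a hypothetical function; per the figures quoted in the ADR, the per-pair rate is the aggregate rate divided by the node count:

```rust
/// Aggregate and per-bistatic-pair capture rates for an N-node TDM schedule.
/// Returns (aggregate_hz, per_pair_hz).
fn tdm_rates(n_nodes: u32, slot_ms: f64) -> (f64, f64) {
    // Full cycle: every node takes one slot.
    let cycle_ms = n_nodes as f64 * slot_ms;
    // Aggregate rate as quoted in the ADR: one cycle completion per cycle_ms.
    let aggregate_hz = 1000.0 / cycle_ms;
    // Per-pair rate: aggregate divided across the N viewpoints.
    let per_pair_hz = aggregate_hz / n_nodes as f64;
    (aggregate_hz, per_pair_hz)
}
```

With 6 nodes and 1.4 ms slots this reproduces the ~119 Hz aggregate and ~20 Hz per-pair figures.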
### 2.3 Geometric Bias Matrix
The geometric bias `G_bias` encodes the spatial relationship between viewpoint pairs:
```
G_bias[i,j] = w_angle * cos(theta_ij) + w_dist * exp(-d_ij / d_ref)
```
where:
- `theta_ij` = angle between viewpoint i and viewpoint j (from room center)
- `d_ij` = baseline distance between node i and node j
- `w_angle`, `w_dist` = learnable weights
- `d_ref` = reference distance (room diagonal / 2)
This allows the attention mechanism to learn that widely-separated, orthogonal viewpoints are more complementary than clustered ones.
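A direct transcription of the bias formula. The weights `w_angle` and `w_dist` would be learned during training; the fixed values in the usage below are for illustration only:

```rust
/// Geometric bias between viewpoints i and j (Section 2.3).
/// `theta_ij` in radians, `d_ij` and `d_ref` in meters;
/// `w_angle` and `w_dist` are learnable scalars.
fn geometric_bias(theta_ij: f64, d_ij: f64, d_ref: f64, w_angle: f64, w_dist: f64) -> f64 {
    w_angle * theta_ij.cos() + w_dist * (-d_ij / d_ref).exp()
}
```

Co-located viewpoints (theta near 0, small baseline) score highest on both terms; the attention layer learns whether to treat that as redundancy or reinforcement.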
### 2.4 Coherence-Gated Environment Updates
```rust
/// Only update environment model when phase coherence exceeds threshold.
pub fn coherence_gate(
phase_diffs: &[f32], // delta-phi over T recent frames
threshold: f32, // typically 0.7
) -> bool {
// Complex mean of unit phasors
let (sum_cos, sum_sin) = phase_diffs.iter()
.fold((0.0f32, 0.0f32), |(c, s), &dp| {
(c + dp.cos(), s + dp.sin())
});
let n = phase_diffs.len() as f32;
let coherence = ((sum_cos / n).powi(2) + (sum_sin / n).powi(2)).sqrt();
coherence > threshold
}
```
### 2.5 Two Implementation Paths
| Path | Hardware | Bandwidth | Per-Viewpoint Rate | Target Tier |
|------|----------|-----------|-------------------|-------------|
| **ESP32 Multistatic** | 6x ESP32-S3 ($84) | 20 MHz (HT20) | 20 Hz | Silver |
| **Cognitum + RF** | Cognitum v1 + LimeSDR | 20-160 MHz | 20-100 Hz | Gold |
ESP32 path: commodity, achievable today, targets Silver tier (tracking + pose quality).
Cognitum path: higher fidelity, targets Gold tier (tracking + pose + vitals).
---
## 3. DDD Design
### 3.1 New Bounded Context: ViewpointFusion
**Aggregate Root: `MultistaticArray`**
```rust
pub struct MultistaticArray {
/// Unique array deployment ID
id: ArrayId,
/// Viewpoint geometry (node positions, orientations)
geometry: ArrayGeometry,
/// TDM schedule (slot assignments, cycle period)
schedule: TdmSchedule,
/// Active viewpoint embeddings (latest per node)
viewpoints: Vec<ViewpointEmbedding>,
/// Fused output embedding
fused: Option<FusedEmbedding>,
/// Coherence gate state
coherence_state: CoherenceState,
}
```
**Entity: `ViewpointEmbedding`**
```rust
pub struct ViewpointEmbedding {
/// Source node ID
node_id: NodeId,
/// AETHER embedding vector (128-d)
embedding: Vec<f32>,
/// Geometric metadata
azimuth: f32, // radians from array center
elevation: f32, // radians
baseline: f32, // meters from centroid
/// Capture timestamp
timestamp: Instant,
/// Signal quality
snr_db: f32,
}
```
**Value Object: `GeometricDiversityIndex`**
```rust
pub struct GeometricDiversityIndex {
/// GDI = (1/N) sum min_{j!=i} |theta_i - theta_j|
value: f32,
/// Effective independent viewpoints (after correlation discount)
n_effective: f32,
/// Worst viewpoint pair (most redundant)
worst_pair: (NodeId, NodeId),
}
```
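The GDI formula in the comment above can be sketched as follows. The wrap-around angular distance is an assumption not spelled out in the formula itself; a deployment spanning 0/2π would otherwise score adjacent nodes as maximally diverse:

```rust
/// Geometric Diversity Index: mean over nodes of the angular gap to the
/// nearest other viewpoint. `azimuths` are in radians from the array center.
fn gdi(azimuths: &[f64]) -> f64 {
    let n = azimuths.len();
    if n < 2 { return 0.0; }
    let two_pi = 2.0 * std::f64::consts::PI;
    let mut sum = 0.0;
    for (i, &a) in azimuths.iter().enumerate() {
        let mut min_gap = f64::INFINITY;
        for (j, &b) in azimuths.iter().enumerate() {
            if i != j {
                // Wrap-around angular distance (assumption, see lead-in).
                let d = (a - b).abs();
                let gap = d.min(two_pi - d);
                if gap < min_gap { min_gap = gap; }
            }
        }
        sum += min_gap;
    }
    sum / n as f64
}
```

Four nodes spread at 90-degree intervals score π/2; four nodes clustered within 0.3 rad score 0.1, which is what placement planning optimizes against.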
**Domain Events:**
```rust
pub enum ViewpointFusionEvent {
ViewpointCaptured { node_id: NodeId, timestamp: Instant, snr_db: f32 },
TdmCycleCompleted { cycle_id: u64, viewpoints_received: usize },
FusionCompleted { fused_embedding: Vec<f32>, gdi: f32 },
CoherenceGateTriggered { coherence: f32, accepted: bool },
GeometryUpdated { new_gdi: f32, n_effective: f32 },
}
```
### 3.2 Extended Bounded Contexts
**Signal (wifi-densepose-signal):**
- New service: `CrossViewpointSubcarrierSelection`
- Consensus sensitive subcarrier set across all viewpoints via ruvector-mincut.
- Input: per-viewpoint sensitivity scores. Output: globally-sensitive + locally-sensitive partition.
**Hardware (wifi-densepose-hardware):**
- New protocol: `TdmSensingProtocol`
- Coordinator logic: beacon generation, slot scheduling, clock drift compensation.
- Event: `TdmSlotCompleted { node_id, slot_index, capture_quality }`
**Training (wifi-densepose-train):**
- New module: `ruview_metrics.rs`
- Three-metric acceptance test: PCK/OKS (joint error), MOTA (multi-person separation), vital sign accuracy.
- Tiered pass/fail: Bronze/Silver/Gold.
---
## 4. Implementation Plan (File-Level)
### 4.1 Phase 1: ViewpointFusion Core (New Files)
| File | Purpose | RuVector Crate |
|------|---------|---------------|
| `crates/wifi-densepose-ruvector/src/viewpoint/mod.rs` | Module root, re-exports | -- |
| `crates/wifi-densepose-ruvector/src/viewpoint/attention.rs` | Cross-viewpoint scaled dot-product attention with geometric bias | ruvector-attention |
| `crates/wifi-densepose-ruvector/src/viewpoint/geometry.rs` | GeometricDiversityIndex, Cramer-Rao bound estimation | ruvector-solver |
| `crates/wifi-densepose-ruvector/src/viewpoint/coherence.rs` | Coherence gating for environment stability | -- (pure math) |
| `crates/wifi-densepose-ruvector/src/viewpoint/fusion.rs` | MultistaticArray aggregate, orchestrates fusion pipeline | ruvector-attention + ruvector-attn-mincut |
### 4.2 Phase 2: Signal Processing Extension
| File | Purpose | RuVector Crate |
|------|---------|---------------|
| `crates/wifi-densepose-signal/src/cross_viewpoint.rs` | Cross-viewpoint subcarrier consensus via min-cut | ruvector-mincut |
### 4.3 Phase 3: Hardware Protocol Extension
| File | Purpose | RuVector Crate |
|------|---------|---------------|
| `crates/wifi-densepose-hardware/src/esp32/tdm.rs` | TDM sensing protocol coordinator | -- (protocol logic) |
### 4.4 Phase 4: Training and Metrics
| File | Purpose | RuVector Crate |
|------|---------|---------------|
| `crates/wifi-densepose-train/src/ruview_metrics.rs` | Three-metric acceptance test (PCK/OKS, MOTA, vital sign accuracy) | ruvector-mincut (person matching) |
---
## 5. Three-Metric Acceptance Test
### 5.1 Metric 1: Joint Error (PCK / OKS)
| Criterion | Threshold |
|-----------|-----------|
| PCK@0.2 (all 17 keypoints) | >= 0.70 |
| PCK@0.2 (torso: shoulders + hips) | >= 0.80 |
| Mean OKS | >= 0.50 |
| Torso jitter RMS (10s window) | < 3 cm |
| Per-keypoint max error (95th percentile) | < 15 cm |
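For reference, PCK@alpha as used in the thresholds above counts a keypoint as correct when its Euclidean error is within `alpha` times a per-person reference scale (commonly torso size); a minimal sketch, with the scale convention assumed:

```rust
/// PCK@alpha: fraction of keypoints whose Euclidean error is within
/// `alpha * scale`, where `scale` is a per-person reference (e.g. torso size).
fn pck(pred: &[(f64, f64)], gt: &[(f64, f64)], scale: f64, alpha: f64) -> f64 {
    assert_eq!(pred.len(), gt.len());
    let hits = pred.iter().zip(gt.iter()).filter(|(p, g)| {
        let dx = p.0 - g.0;
        let dy = p.1 - g.1;
        (dx * dx + dy * dy).sqrt() <= alpha * scale
    }).count();
    hits as f64 / pred.len() as f64
}
```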
### 5.2 Metric 2: Multi-Person Separation
| Criterion | Threshold |
|-----------|-----------|
| Subjects | 2 |
| Capture rate | 20 Hz |
| Track duration | 10 minutes |
| Identity swaps (MOTA ID-switch) | 0 |
| Track fragmentation ratio | < 0.05 |
| False track creation | 0/min |
### 5.3 Metric 3: Vital Sign Sensitivity
| Criterion | Threshold |
|-----------|-----------|
| Breathing detection (6-30 BPM) | +/- 2 BPM |
| Breathing band SNR (0.1-0.5 Hz) | >= 6 dB |
| Heartbeat detection (40-120 BPM) | +/- 5 BPM (aspirational) |
| Heartbeat band SNR (0.8-2.0 Hz) | >= 3 dB (aspirational) |
| Micro-motion resolution | 1 mm at 3m |
### 5.4 Tiered Pass/Fail
| Tier | Requirements | Deployment Gate |
|------|-------------|-----------------|
| Bronze | Metric 2 | Prototype demo |
| Silver | Metrics 1 + 2 | Production candidate |
| Gold | All three | Full deployment |
---
## 6. Consequences
### 6.1 Positive
- **Fundamental geometric improvement**: Viewpoint diversity reduces body self-occlusion and depth ambiguity -- these are physics, not model, limitations.
- **Uses existing silicon**: ESP32-S3, commodity WiFi, no custom RF hardware required for Silver tier.
- **Learned fusion weights**: Embedding-level fusion (Tier 3) outperforms hand-crafted feature-level fusion (Tier 2).
- **Composes with existing ADRs**: AETHER (per-viewpoint), MERIDIAN (cross-environment), and RuView (cross-viewpoint) are orthogonal -- they compose freely.
- **IEEE 802.11bf aligned**: TDM protocol maps to 802.11bf sensing sessions, enabling future migration to standard-compliant APs.
- **Commodity price point**: $84 for 6-node Silver-tier deployment.
### 6.2 Negative
- **TDM rate reduction**: With N viewpoints, each viewpoint sees only 1/N of the aggregate rate. With 6 nodes at 120 Hz aggregate, each viewpoint sees 20 Hz.
- **More complex aggregator**: Embedding fusion + geometric bias learning adds ~25K parameters on top of per-viewpoint AETHER model.
- **Placement planning required**: Geometric Diversity Index optimization requires intentional node placement (not random scatter).
- **Clock drift limits TDM precision**: ESP32 crystal drift (20-50 ppm) limits slot precision to ~1 ms, which is sufficient for feature-level fusion but not signal-level coherent combining.
- **Training data**: Cross-viewpoint training requires multi-receiver CSI captures, which are not available in existing public datasets (MM-Fi, Wi-Pose).
### 6.3 Interaction with Other ADRs
| ADR | Interaction |
|-----|------------|
| ADR-012 (ESP32 Mesh) | RuView extends the aggregator from feature-level to embedding-level fusion; TDM protocol replaces simple UDP collection |
| ADR-014 (SOTA Signal) | Per-viewpoint signal processing is unchanged; cross-viewpoint subcarrier consensus is new |
| ADR-016/017 (RuVector) | All 5 ruvector crates get new cross-viewpoint operations (see Section 4) |
| ADR-021 (Vital Signs) | Multi-viewpoint SNR improvement directly benefits vital sign extraction (Gold tier target) |
| ADR-024 (AETHER) | Per-viewpoint AETHER embeddings are the input to RuView fusion; AETHER is required |
| ADR-027 (MERIDIAN) | Cross-environment (MERIDIAN) and cross-viewpoint (RuView) are orthogonal; MERIDIAN handles room transfer, RuView handles within-room geometry |
---
## 7. References
1. IEEE 802.11bf (2024). "WLAN Sensing." IEEE Standards Association.
2. Kotaru, M. et al. (2015). "SpotFi: Decimeter Level Localization Using WiFi." SIGCOMM 2015.
3. Zeng, Y. et al. (2019). "FarSense: Pushing the Range Limit of WiFi-based Respiration Sensing with CSI Ratio of Two Antennas." MobiCom 2019.
4. Zheng, Y. et al. (2019). "Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi." (Widar 3.0) MobiSys 2019.
5. Yan, K. et al. (2024). "Person-in-WiFi 3D: End-to-End Multi-Person 3D Pose Estimation with Wi-Fi." CVPR 2024.
6. Zhou, Y. et al. (2024). "AdaPose: Towards Cross-Site Device-Free Human Pose Estimation with Commodity WiFi." IEEE IoT Journal. arXiv:2309.16964.
7. Zhou, R. et al. (2025). "DGSense: A Domain Generalization Framework for Wireless Sensing." arXiv:2502.08155.
8. Chen, X. & Yang, J. (2025). "X-Fi: A Modality-Invariant Foundation Model for Multimodal Human Sensing." ICLR 2025. arXiv:2410.10167.
9. AM-FM (2026). "AM-FM: A Foundation Model for Ambient Intelligence Through WiFi." arXiv:2602.11200.
10. Chen, L. et al. (2026). "PerceptAlign: Breaking Coordinate Overfitting." arXiv:2601.12252.
11. Li, J. & Stoica, P. (2007). "MIMO Radar with Colocated Antennas." IEEE Signal Processing Magazine, 24(5):106-114.
12. ADR-012 through ADR-027 (internal).


@@ -1,507 +0,0 @@
# ADR-032: Multistatic Mesh Security Hardening
| Field | Value |
|-------|-------|
| **Status** | Accepted |
| **Date** | 2026-03-01 |
| **Deciders** | ruv |
| **Relates to** | ADR-029 (RuvSense Multistatic), ADR-030 (Persistent Field Model), ADR-031 (RuView Sensing-First RF), ADR-018 (ESP32 Implementation), ADR-012 (ESP32 Mesh) |
---
## 1. Context
### 1.1 Security Audit of ADR-029/030/031
A security audit of the RuvSense multistatic sensing stack (ADR-029 through ADR-031) identified seven findings across the TDM synchronization layer, CSI frame transport, NDP injection, coherence gating, cross-room tracking, NVS credential handling, and firmware concurrency model. Three severity levels were assigned: HIGH (1 finding), MEDIUM (3 findings), LOW (3 findings).
The findings fall into three categories:
1. **Missing cryptographic authentication** -- The TDM SyncBeacon and CSI frame formats lack any message authentication, allowing rogue nodes to inject spoofed beacons or frames into the mesh.
2. **Unbounded or unprotected resources** -- The NDP injection path has no rate limiter, the coherence gate recalibration state has no timeout cap, and the cross-room transition log grows without bound.
3. **Memory safety on embedded targets** -- NVS credential buffers are not zeroed after use, and static mutable globals in the CSI collector are accessed from both ESP32-S3 cores without synchronization.
### 1.2 Threat Model
The primary threat actor is a rogue ESP32 node on the same LAN subnet or within WiFi range of the mesh. The attack surface is the UDP broadcast plane used for sync beacons, CSI frames, and NDP injection.
| Threat | STRIDE | Impact | Exploitability |
|--------|--------|--------|----------------|
| Fake SyncBeacon injection | Spoofing, Tampering | Full mesh desynchronization, no pose output | Low skill, rogue ESP32 on LAN |
| CSI frame spoofing | Spoofing, Tampering | Corrupted pose estimation, phantom occupants | Low skill, UDP packet injection |
| NDP RF flooding | Denial of Service | Channel saturation, loss of CSI data | Low skill, repeated NDP calls |
| Coherence gate stall | Denial of Service | Indefinite recalibration, frozen output | Requires sustained interference |
| Transition log exhaustion | Denial of Service | OOM on aggregator after extended operation | Passive, no attacker needed |
| Credential stack residue | Information Disclosure | WiFi password recoverable from RAM dump | Physical access to device |
| Dual-core data race | Tampering, DoS | Corrupted CSI frames, undefined behavior | Passive, no attacker needed |
### 1.3 Design Constraints
- ESP32-S3 has limited CPU budget: cryptographic operations must complete within the 1 ms guard interval between TDM slots.
- HMAC-SHA256 on ESP32-S3 (hardware-accelerated via `mbedtls`) completes in approximately 15 us for 24-byte payloads -- well within budget.
- SipHash-2-4 completes in approximately 2 us for 64-byte payloads on ESP32-S3 -- suitable for per-frame MAC.
- No TLS or TCP is available on the sensing data path (UDP broadcast for latency).
- Pre-shared key (PSK) model is acceptable because all nodes in a mesh deployment are provisioned by the same operator.
---
## 2. Decision
Harden the multistatic mesh with six measures: beacon authentication, frame integrity, NDP rate limiting, bounded buffers, memory safety, and key management. All changes are backward-compatible: unauthenticated frames are accepted during a migration window controlled by a `security_level` NVS parameter.
### 2.1 Beacon Authentication Protocol (H-1)
**Finding:** The 16-byte `SyncBeacon` wire format (`crates/wifi-densepose-hardware/src/esp32/tdm.rs`) has no cryptographic authentication. A rogue node can inject fake beacons to desynchronize the TDM mesh.
**Solution:** Extend the SyncBeacon wire format from 16 bytes to 28 bytes by adding a 4-byte monotonic nonce and an 8-byte HMAC-SHA256 truncated tag.
```
Authenticated SyncBeacon wire format (28 bytes):
[0..7] cycle_id (LE u64)
[8..11] cycle_period_us (LE u32)
[12..13] drift_correction (LE i16)
[14..15] reserved
[16..19] nonce (LE u32, monotonically increasing)
[20..27] hmac_tag (HMAC-SHA256 truncated to 8 bytes)
```
**HMAC computation:**
```
key = 16-byte pre-shared mesh key (stored in NVS, namespace "mesh_sec")
message = beacon[0..20] (first 20 bytes: payload + nonce)
tag = HMAC-SHA256(key, message)[0..8] (truncated to 8 bytes)
```
**Nonce and replay protection:**
- The coordinator maintains a monotonically increasing 32-bit nonce counter, incremented on every beacon.
- Each receiver maintains a `last_accepted_nonce` per sender. A beacon is accepted only if `nonce > last_accepted_nonce - REPLAY_WINDOW`, where `REPLAY_WINDOW = 16` (accounts for packet reordering over UDP).
- Nonce overflow (after 2^32 beacons at 20 Hz = ~6.8 years) triggers a mandatory key rotation.
**Implementation location:** `crates/wifi-densepose-hardware/src/esp32/tdm.rs` -- extend `SyncBeacon::to_bytes()` and `SyncBeacon::from_bytes()` to produce/consume the 28-byte authenticated format. Add `SyncBeacon::verify()` method.
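Receiver-side nonce handling from the rules above, with the HMAC verification step elided. A sketch: note that the stated window rule alone does not reject duplicates inside the window, so a production receiver would additionally track recently seen nonces:

```rust
const REPLAY_WINDOW: u32 = 16;

/// Per-sender replay state, consulted after the HMAC tag verifies.
struct ReplayState {
    last_accepted_nonce: u32,
}

impl ReplayState {
    /// Accept if nonce > last_accepted_nonce - REPLAY_WINDOW
    /// (the slack accounts for UDP packet reordering).
    fn check_and_update(&mut self, nonce: u32) -> bool {
        let floor = self.last_accepted_nonce.saturating_sub(REPLAY_WINDOW);
        if nonce > floor {
            if nonce > self.last_accepted_nonce {
                self.last_accepted_nonce = nonce;
            }
            true
        } else {
            false
        }
    }
}
```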
### 2.2 CSI Frame Integrity (M-3)
**Finding:** The ADR-018 CSI frame format has no cryptographic MAC. Frames can be spoofed or tampered with in transit.
**Solution:** Add an 8-byte SipHash-2-4 tag to the CSI frame header. SipHash is chosen over HMAC-SHA256 for per-frame MAC because it is 7x faster on ESP32 for short messages (approximately 2 us vs 15 us) and provides sufficient integrity for non-secret data.
```
Extended CSI frame header (28 bytes, was 20):
[0..3] Magic: 0xC5110002 (bumped from 0xC5110001 to signal auth)
[4] Node ID
[5] Number of antennas
[6..7] Number of subcarriers (LE u16)
[8..11] Frequency MHz (LE u32)
[12..15] Sequence number (LE u32)
[16] RSSI (i8)
[17] Noise floor (i8)
[18..19] Reserved
[20..27] siphash_tag (SipHash-2-4 over [0..20] + IQ data)
```
**SipHash key derivation:**
```
siphash_key = HMAC-SHA256(mesh_key, "csi-frame-siphash")[0..16]
```
The SipHash key is derived once at boot from the mesh key and cached in memory.
**Implementation locations:**
- `firmware/esp32-csi-node/main/csi_collector.c` -- compute SipHash tag in `csi_serialize_frame()`, bump magic constant.
- `crates/wifi-densepose-hardware/src/esp32/` -- add frame verification in the aggregator's frame parser.
### 2.3 NDP Injection Rate Limiter (M-4)
**Finding:** `csi_inject_ndp_frame()` in `firmware/esp32-csi-node/main/csi_collector.c` has no rate limiter. Uncontrolled NDP injection can flood the RF channel.
**Solution:** Token-bucket rate limiter with configurable parameters stored in NVS.
```c
// Token bucket parameters (defaults)
#define NDP_RATE_MAX_TOKENS 20 // burst capacity
#define NDP_RATE_REFILL_HZ 20 // sustained rate: 20 NDP/sec
#define NDP_RATE_REFILL_US (1000000 / NDP_RATE_REFILL_HZ)
typedef struct {
uint32_t tokens; // current token count
uint32_t max_tokens; // bucket capacity
uint32_t refill_interval_us; // microseconds per token
int64_t last_refill_us; // last refill timestamp
} ndp_rate_limiter_t;
```
`csi_inject_ndp_frame()` returns `ESP_ERR_NOT_ALLOWED` when the bucket is empty. The rate limiter parameters are configurable via NVS keys `ndp_max_tokens` and `ndp_refill_hz`.
**Implementation location:** `firmware/esp32-csi-node/main/csi_collector.c` -- add `ndp_rate_limiter_t` state and check in `csi_inject_ndp_frame()`.
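The C struct above maps to the following acquire/refill logic; a Rust sketch with explicit timestamps so the refill path is testable (names hypothetical, the firmware implementation is C):

```rust
/// Token-bucket limiter mirroring the firmware defaults:
/// burst of `max_tokens`, refilled at `refill_hz` tokens per second.
struct NdpRateLimiter {
    tokens: u32,
    max_tokens: u32,
    refill_interval_us: i64, // microseconds per token
    last_refill_us: i64,
}

impl NdpRateLimiter {
    fn new(max_tokens: u32, refill_hz: u32, now_us: i64) -> Self {
        NdpRateLimiter {
            tokens: max_tokens,
            max_tokens,
            refill_interval_us: 1_000_000 / refill_hz as i64,
            last_refill_us: now_us,
        }
    }

    /// Returns true if an NDP injection is allowed at `now_us`
    /// (the C version returns ESP_ERR_NOT_ALLOWED on the false path).
    fn try_acquire(&mut self, now_us: i64) -> bool {
        let elapsed = now_us - self.last_refill_us;
        if elapsed >= self.refill_interval_us {
            let new_tokens = (elapsed / self.refill_interval_us) as u32;
            self.tokens = (self.tokens + new_tokens).min(self.max_tokens);
            self.last_refill_us += new_tokens as i64 * self.refill_interval_us;
        }
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            false
        }
    }
}
```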
### 2.4 Coherence Gate Recalibration Timeout (M-5)
**Finding:** The `Recalibrate` state in `crates/wifi-densepose-signal/src/ruvsense/coherence_gate.rs` can be held indefinitely. A sustained interference source could keep the system in perpetual recalibration, preventing any output.
**Solution:** Add a configurable `max_recalibrate_duration` to `GatePolicyConfig` (default: 30 seconds = 600 frames at 20 Hz). When the recalibration duration exceeds this cap, the gate transitions to a `ForcedAccept` state with inflated noise (10x), allowing degraded-but-available output.
```rust
pub enum GateDecision {
Accept { noise_multiplier: f32 },
PredictOnly,
Reject,
Recalibrate { stale_frames: u64 },
/// Recalibration timed out. Accept with heavily inflated noise.
ForcedAccept { noise_multiplier: f32, stale_frames: u64 },
}
```
New config field:
```rust
pub struct GatePolicyConfig {
// ... existing fields ...
/// Maximum frames in Recalibrate before forcing accept. Default: 600 (30s at 20Hz).
pub max_recalibrate_frames: u64,
/// Noise multiplier for ForcedAccept. Default: 10.0.
pub forced_accept_noise: f32,
}
```
**Implementation location:** `crates/wifi-densepose-signal/src/ruvsense/coherence_gate.rs` -- extend `GateDecision` enum, modify `GatePolicy::evaluate()`.
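The timeout transition can be sketched as a per-frame step over the stale-frame counter. This is simplified: the real `GatePolicy::evaluate()` also handles the Accept, PredictOnly, and Reject arms:

```rust
enum GateDecision {
    Recalibrate { stale_frames: u64 },
    ForcedAccept { noise_multiplier: f32, stale_frames: u64 },
}

struct RecalState {
    stale_frames: u64,
    max_recalibrate_frames: u64, // default 600 (30 s at 20 Hz)
    forced_accept_noise: f32,    // default 10.0
}

impl RecalState {
    /// Called once per frame while coherence stays below threshold.
    fn step(&mut self) -> GateDecision {
        self.stale_frames += 1;
        if self.stale_frames > self.max_recalibrate_frames {
            // Cap exceeded: emit degraded-but-available output.
            GateDecision::ForcedAccept {
                noise_multiplier: self.forced_accept_noise,
                stale_frames: self.stale_frames,
            }
        } else {
            GateDecision::Recalibrate { stale_frames: self.stale_frames }
        }
    }
}
```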
### 2.5 Bounded Transition Log (L-1)
**Finding:** `CrossRoomTracker` in `crates/wifi-densepose-signal/src/ruvsense/cross_room.rs` stores transitions in an unbounded `Vec<TransitionEvent>`. Over extended operation (days/weeks), this grows without limit.
**Solution:** Replace the `transitions: Vec<TransitionEvent>` with a ring buffer that evicts the oldest entry when capacity is reached.
```rust
pub struct CrossRoomConfig {
// ... existing fields ...
/// Maximum transitions retained in the ring buffer. Default: 1000.
pub max_transitions: usize,
}
```
The ring buffer is implemented as a `VecDeque<TransitionEvent>` with a capacity check on push: when `transitions.len() >= max_transitions`, the oldest entry is evicted with `transitions.pop_front()` before the new event is pushed. This preserves the append-only audit trail semantics (events are never mutated, only evicted by age).
**Implementation location:** `crates/wifi-densepose-signal/src/ruvsense/cross_room.rs` -- change `transitions: Vec<TransitionEvent>` to `transitions: VecDeque<TransitionEvent>`, add eviction logic in `match_entry()`.
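The eviction rule, sketched with the event type elided to a timestamp (illustrative; the actual `TransitionEvent` carries room and identity fields):

```rust
use std::collections::VecDeque;

struct TransitionLog {
    transitions: VecDeque<u64>, // stand-in for TransitionEvent
    max_transitions: usize,     // default 1000
}

impl TransitionLog {
    fn push(&mut self, event: u64) {
        // Evict the oldest entry once capacity is reached; events are
        // never mutated, only aged out.
        if self.transitions.len() >= self.max_transitions {
            self.transitions.pop_front();
        }
        self.transitions.push_back(event);
    }
}
```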
### 2.6 NVS Password Buffer Zeroing (L-4)
**Finding:** `nvs_config_load()` in `firmware/esp32-csi-node/main/nvs_config.c` reads the WiFi password into a stack buffer `buf` which is not zeroed after use. On ESP32-S3, stack memory is not automatically cleared, leaving credentials recoverable via physical memory dump.
**Solution:** Zero the stack buffer after each NVS string read using `explicit_bzero()` (available in ESP-IDF via newlib). If `explicit_bzero` is unavailable, use `memset` with a volatile pointer to prevent compiler optimization.
```c
/* After each nvs_get_str that may contain credentials: */
explicit_bzero(buf, sizeof(buf));
/* Portable fallback: */
static void secure_zero(void *ptr, size_t len) {
volatile unsigned char *p = (volatile unsigned char *)ptr;
while (len--) { *p++ = 0; }
}
```
Apply to all three `nvs_get_str` call sites in `nvs_config_load()` (ssid, password, target_ip).
**Implementation location:** `firmware/esp32-csi-node/main/nvs_config.c` -- add `explicit_bzero(buf, sizeof(buf))` after each `nvs_get_str` block.
### 2.7 Atomic Access for Static Mutable State (L-5)
**Finding:** `csi_collector.c` uses static mutable globals (`s_sequence`, `s_cb_count`, `s_send_ok`, `s_send_fail`, `s_hop_index`) accessed from both cores of the ESP32-S3 without synchronization. The CSI callback runs on the WiFi task (pinned to core 0 by default), while the main application and hop timer may run on core 1.
**Solution:** Use C11 `_Atomic` qualifiers for all shared counters, and a FreeRTOS mutex for the hop table state which requires multi-variable consistency.
```c
#include <stdatomic.h>
static _Atomic uint32_t s_sequence = 0;
static _Atomic uint32_t s_cb_count = 0;
static _Atomic uint32_t s_send_ok = 0;
static _Atomic uint32_t s_send_fail = 0;
static _Atomic uint8_t s_hop_index = 0;
/* Hop table protected by mutex (multi-variable consistency) */
static SemaphoreHandle_t s_hop_mutex = NULL;
```
The mutex is created in `csi_collector_init()` and taken/released around hop table reads in `csi_hop_next_channel()` and writes in `csi_collector_set_hop_table()`.
**Implementation location:** `firmware/esp32-csi-node/main/csi_collector.c` -- add `_Atomic` qualifiers, create and use `s_hop_mutex`.
### 2.8 Key Management
All cryptographic operations use a single 16-byte pre-shared mesh key stored in NVS.
**Provisioning:**
```
NVS namespace: "mesh_sec"
NVS key: "mesh_key"
NVS type: blob (16 bytes)
```
The key is provisioned during node setup via the existing `scripts/provision.py` tool, which is extended to generate a random 16-byte key and flash it to all nodes in a deployment.
**Key derivation:**
```
beacon_hmac_key = mesh_key (direct, 16 bytes)
frame_siphash_key = HMAC-SHA256(mesh_key, "csi-frame-siphash")[0..16] (derived, 16 bytes)
```
**Key rotation:**
- Manual rotation via management command: `provision.py rotate-key --deployment <id>`.
- The coordinator broadcasts a key rotation event (signed with the old key) containing the new key encrypted with the old key.
- Nodes accept the new key and switch after confirming the next beacon is signed with the new key.
- Rotation is recommended every 90 days or after any node is decommissioned.
**Security level NVS parameter:**
```
NVS key: "sec_level"
Values:
0 = permissive (accept unauthenticated frames, log warning)
1 = transitional (accept both authenticated and unauthenticated)
2 = enforcing (reject unauthenticated frames)
Default: 1 (transitional, for backward compatibility during rollout)
```
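A minimal sketch of how aggregator-side code might decode this parameter, assuming unknown byte values fall back to the transitional default (the enum and method names here are illustrative, not a shipped API):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum SecurityLevel {
    Permissive,   // 0: accept unauthenticated frames, log warning
    Transitional, // 1: accept both authenticated and unauthenticated
    Enforcing,    // 2: reject unauthenticated frames
}

impl SecurityLevel {
    /// Decode the NVS `sec_level` byte. Unknown values fall back to the
    /// rollout default (Transitional), matching the table above.
    fn from_nvs(raw: u8) -> SecurityLevel {
        match raw {
            0 => SecurityLevel::Permissive,
            2 => SecurityLevel::Enforcing,
            _ => SecurityLevel::Transitional,
        }
    }

    fn accepts_unauthenticated(self) -> bool {
        !matches!(self, SecurityLevel::Enforcing)
    }
}
```

Defaulting unknown values to `Transitional` rather than `Enforcing` is the safe rollout choice: a corrupted or missing NVS entry degrades to compatibility mode instead of silently bricking a mixed deployment.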
---
## 3. Implementation Plan (File-Level)
### 3.1 Phase 1: Beacon Authentication and Key Management
| File | Change | Priority |
|------|--------|----------|
| `crates/wifi-densepose-hardware/src/esp32/tdm.rs` | Extend `SyncBeacon` to 28-byte authenticated format, add `verify()`, nonce tracking, replay window | P0 |
| `firmware/esp32-csi-node/main/nvs_config.c` | Add `mesh_key` and `sec_level` NVS reads | P0 |
| `firmware/esp32-csi-node/main/nvs_config.h` | Add `mesh_key[16]` and `sec_level` to `nvs_config_t` | P0 |
| `scripts/provision.py` | Add `--mesh-key` generation and `rotate-key` command | P0 |
### 3.2 Phase 2: Frame Integrity and Rate Limiting
| File | Change | Priority |
|------|--------|----------|
| `firmware/esp32-csi-node/main/csi_collector.c` | Add SipHash-2-4 tag to frame serialization, NDP rate limiter, `_Atomic` qualifiers, hop mutex | P1 |
| `firmware/esp32-csi-node/main/csi_collector.h` | Update `CSI_HEADER_SIZE` to 28, add rate limiter config | P1 |
| `crates/wifi-densepose-hardware/src/esp32/` | Add frame verification in aggregator parser | P1 |
### 3.3 Phase 3: Bounded Buffers and Gate Hardening
| File | Change | Priority |
|------|--------|----------|
| `crates/wifi-densepose-signal/src/ruvsense/cross_room.rs` | Replace `Vec` with `VecDeque`, add `max_transitions` config | P1 |
| `crates/wifi-densepose-signal/src/ruvsense/coherence_gate.rs` | Add `ForcedAccept` variant, `max_recalibrate_frames` config | P1 |
### 3.4 Phase 4: Memory Safety
| File | Change | Priority |
|------|--------|----------|
| `firmware/esp32-csi-node/main/nvs_config.c` | Add `explicit_bzero()` after credential reads | P2 |
| `firmware/esp32-csi-node/main/csi_collector.c` | `_Atomic` counters, `s_hop_mutex` (if not done in Phase 2) | P2 |
---
## 4. Acceptance Criteria
### 4.1 Beacon Authentication (H-1)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| H1-1 | `SyncBeacon::to_bytes()` produces 28-byte output with valid HMAC tag | Unit test: serialize, verify tag matches recomputed HMAC |
| H1-2 | `SyncBeacon::verify()` rejects beacons with incorrect HMAC tag | Unit test: flip one bit in tag, verify returns `Err` |
| H1-3 | `SyncBeacon::verify()` rejects beacons with replayed nonce outside window | Unit test: submit nonce = last_accepted - REPLAY_WINDOW - 1, verify rejection |
| H1-4 | `SyncBeacon::verify()` accepts beacons within replay window | Unit test: submit nonce = last_accepted - REPLAY_WINDOW + 1, verify acceptance |
| H1-5 | Coordinator nonce increments monotonically across cycles | Unit test: call `begin_cycle()` 100 times, verify strict monotonicity |
| H1-6 | Backward compatibility: `sec_level=0` accepts unauthenticated 16-byte beacons | Integration test: mixed old/new nodes |
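The replay-window logic exercised by H1-3/H1-4 can be sketched as follows. The window width (16) and the bitmask bookkeeping are assumptions for illustration; the production `verify()` path defines its own `REPLAY_WINDOW` constant and threads the tracker through beacon verification.

```rust
const REPLAY_WINDOW: u64 = 16; // assumed width for this sketch

struct NonceTracker {
    last_accepted: u64,
    seen_mask: u64, // bit i set => nonce (last_accepted - i) already seen
}

impl NonceTracker {
    /// Returns true if the nonce is fresh (new high-water mark) or lies
    /// within the replay window and has not been seen before.
    fn check(&mut self, nonce: u64) -> bool {
        if nonce > self.last_accepted {
            // New high-water mark: slide the window forward.
            let shift = nonce - self.last_accepted;
            self.seen_mask = if shift >= 64 { 1 } else { (self.seen_mask << shift) | 1 };
            self.last_accepted = nonce;
            return true;
        }
        let age = self.last_accepted - nonce;
        if age > REPLAY_WINDOW {
            return false; // H1-3: older than the window, reject
        }
        let bit = 1u64 << age;
        if self.seen_mask & bit != 0 {
            return false; // exact replay of an already-accepted nonce
        }
        self.seen_mask |= bit;
        true // H1-4: in-window, first occurrence, accept
    }
}
```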
### 4.2 Frame Integrity (M-3)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| M3-1 | CSI frame with magic `0xC5110002` includes valid 8-byte SipHash tag | Unit test: serialize frame, verify tag |
| M3-2 | Frame verification rejects frames with tampered IQ data | Unit test: flip one byte in IQ payload, verify rejection |
| M3-3 | SipHash computation completes in < 10 us on ESP32-S3 | Benchmark on target hardware |
| M3-4 | Frame parser accepts old magic `0xC5110001` when `sec_level < 2` | Unit test: backward compatibility |
### 4.3 NDP Rate Limiter (M-4)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| M4-1 | `csi_inject_ndp_frame()` succeeds for first `max_tokens` calls | Unit test: call 20 times rapidly, all succeed |
| M4-2 | Call 21 returns `ESP_ERR_NOT_ALLOWED` when bucket is empty | Unit test: exhaust bucket, verify error |
| M4-3 | Bucket refills at configured rate | Unit test: exhaust, wait `refill_interval_us`, verify one token available |
| M4-4 | NVS override of `ndp_max_tokens` and `ndp_refill_hz` is respected | Integration test: set NVS values, verify behavior |
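The token-bucket behavior these criteria test can be sketched in a few lines (Rust for illustration; the firmware implementation is C). The field names and the one-token-per-`refill_interval_us` policy are assumptions of this sketch:

```rust
struct TokenBucket {
    max_tokens: u32,
    tokens: u32,
    refill_interval_us: u64, // one token regained per elapsed interval
    last_refill_us: u64,
}

impl TokenBucket {
    /// Returns true and consumes a token if one is available;
    /// a false return maps to ESP_ERR_NOT_ALLOWED in the firmware.
    fn try_acquire(&mut self, now_us: u64) -> bool {
        // Credit one token per fully elapsed interval, clamped at capacity.
        let elapsed = now_us - self.last_refill_us;
        let new_tokens = (elapsed / self.refill_interval_us) as u32;
        if new_tokens > 0 {
            self.tokens = self.tokens.saturating_add(new_tokens).min(self.max_tokens);
            self.last_refill_us += new_tokens as u64 * self.refill_interval_us;
        }
        if self.tokens == 0 {
            return false;
        }
        self.tokens -= 1;
        true
    }
}
```

Advancing `last_refill_us` by whole intervals (rather than setting it to `now_us`) keeps fractional elapsed time credited toward the next token, which is what M4-3 measures.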
### 4.4 Coherence Gate Timeout (M-5)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| M5-1 | `GatePolicy::evaluate()` returns `Recalibrate` at `max_stale_frames` | Unit test: existing behavior preserved |
| M5-2 | `GatePolicy::evaluate()` returns `ForcedAccept` at `max_recalibrate_frames` | Unit test: feed `max_recalibrate_frames + 1` low-coherence frames |
| M5-3 | `ForcedAccept` noise multiplier equals `forced_accept_noise` (default 10.0) | Unit test: verify noise_multiplier field |
| M5-4 | Default `max_recalibrate_frames` = 600 (30s at 20 Hz) | Unit test: verify default config |
### 4.5 Bounded Transition Log (L-1)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| L1-1 | `CrossRoomTracker::transition_count()` never exceeds `max_transitions` | Unit test: insert 1500 transitions with max_transitions=1000, verify count=1000 |
| L1-2 | Oldest transitions are evicted first (FIFO) | Unit test: verify first transition is the (N-999)th inserted |
| L1-3 | Default `max_transitions` = 1000 | Unit test: verify default config |
### 4.6 NVS Password Zeroing (L-4)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| L4-1 | Stack buffer `buf` is zeroed after each `nvs_get_str` call | Code review + static analysis (no runtime test feasible) |
| L4-2 | `explicit_bzero` is used (not plain `memset`) to prevent compiler optimization | Code review: verify function call is `explicit_bzero` or volatile-pointer pattern |
### 4.7 Atomic Static State (L-5)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| L5-1 | `s_sequence`, `s_cb_count`, `s_send_ok`, `s_send_fail` are declared `_Atomic` | Code review |
| L5-2 | `s_hop_mutex` is created in `csi_collector_init()` | Code review + integration test: init succeeds |
| L5-3 | `csi_hop_next_channel()` and `csi_collector_set_hop_table()` acquire/release mutex | Code review |
| L5-4 | No data races detected under ThreadSanitizer (host-side test build) | `cargo test` with TSAN on host (for Rust side); QEMU or hardware test for C side |
---
## 5. Consequences
### 5.1 Positive
- **Rogue node protection**: HMAC-authenticated beacons prevent mesh desynchronization by unauthorized nodes.
- **Frame integrity**: SipHash MAC detects in-transit tampering of CSI data, preventing phantom occupant injection.
- **RF availability**: Token-bucket rate limiter prevents NDP flooding from consuming the shared wireless medium.
- **Bounded memory**: Ring buffer on transition log and timeout cap on recalibration prevent resource exhaustion during long-running deployments.
- **Credential hygiene**: Zeroed buffers reduce the window for credential recovery from physical memory access.
- **Thread safety**: Atomic operations and mutex eliminate undefined behavior on dual-core ESP32-S3.
- **Backward compatible**: `sec_level` parameter allows gradual rollout without breaking existing deployments.
### 5.2 Negative
- **12 bytes added to SyncBeacon**: 28 bytes vs 16 bytes (75% increase, but still fits in a single UDP packet with room to spare).
- **8 bytes added to CSI frame header**: 28 bytes vs 20 bytes (40% increase in header; negligible relative to IQ payload of 128-512 bytes).
- **CPU overhead**: HMAC-SHA256 adds approximately 15 us per beacon (once per 50 ms cycle = 0.03% CPU). SipHash adds approximately 2 us per frame (at 100 Hz = 0.02% CPU).
- **Key management complexity**: Mesh key must be provisioned to all nodes and rotated periodically. Lost key requires re-provisioning all nodes.
- **Mutex contention**: Hop table mutex may add up to 1 us latency to channel hop path. Within guard interval budget.
### 5.3 Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| HMAC computation exceeds guard interval on older ESP32 (non-S3) | Low | Beacon authentication unusable on legacy hardware | Hardware-accelerated SHA256 is available on all ESP32 variants; benchmark confirms < 50 us |
| Key compromise via side-channel on ESP32 | Very Low | Full mesh authentication bypass | Keys stored in eFuse (ESP32-S3 supports) or encrypted NVS partition |
| ForcedAccept mode produces unacceptably noisy poses | Medium | Degraded pose quality during sustained interference | 10x noise multiplier is configurable; operator can increase or disable |
| SipHash collision (64-bit tag) | Very Low | Single forged frame accepted | 2^-64 probability per forged frame; the NDP rate limiter and protocol frame rate cap the attacker's attempt rate far below brute-force feasibility |
---
## 6. QUIC Transport Layer (ADR-032a Amendment)
### 6.1 Motivation
The original ADR-032 design (Sections 2.1--2.2) uses manual HMAC-SHA256 and SipHash-2-4 over plain UDP. While correct and efficient on constrained ESP32 hardware, this approach has operational drawbacks:
- **Manual key rotation**: Requires a custom key-exchange protocol and coordinator broadcast.
- **No congestion control**: Plain UDP has no backpressure; burst CSI traffic can overwhelm the aggregator.
- **No connection migration**: Node roaming (e.g., repositioning an ESP32) requires manual reconnect.
- **Duplicate replay-window code**: Custom nonce tracking duplicates QUIC's built-in replay protection.
### 6.2 Decision: Adopt `midstreamer-quic` for Aggregator Uplinks
For aggregator-class nodes (Raspberry Pi, x86 gateway) that have sufficient CPU and memory, replace the manual crypto layer with `midstreamer-quic` v0.1.0, which provides:
| Capability | Manual (ADR-032 original) | QUIC (`midstreamer-quic`) |
|---|---|---|
| Authentication | HMAC-SHA256 truncated 8B | TLS 1.3 AEAD (AES-128-GCM) |
| Frame integrity | SipHash-2-4 tag | QUIC packet-level AEAD |
| Replay protection | Manual nonce + window | QUIC packet numbers (monotonic) |
| Key rotation | Custom coordinator broadcast | TLS 1.3 `KeyUpdate` message |
| Congestion control | None | QUIC cubic/BBR |
| Connection migration | Not supported | QUIC connection ID migration |
| Multi-stream | N/A | QUIC streams (beacon, CSI, control) |
**Constrained devices (ESP32-S3) retain the manual crypto path** from Sections 2.1--2.2 as a fallback. The `SecurityMode` enum selects the transport:
```rust
pub enum SecurityMode {
/// Manual HMAC/SipHash over plain UDP (ESP32-S3, ADR-032 original).
ManualCrypto,
/// QUIC transport with TLS 1.3 (aggregator-class nodes).
QuicTransport,
}
```
### 6.3 QUIC Stream Mapping
Three dedicated QUIC streams separate traffic by priority:
| Stream ID | Purpose | Direction | Priority |
|---|---|---|---|
| 0 | Sync beacons | Coordinator -> Nodes | Highest (TDM timing-critical) |
| 1 | CSI frames | Nodes -> Aggregator | High (sensing data) |
| 2 | Control plane | Bidirectional | Normal (config, key rotation, health) |
### 6.4 Additional Midstreamer Integrations
Beyond QUIC transport, three additional midstreamer crates enhance the sensing pipeline:
1. **`midstreamer-scheduler` v0.1.0** -- Replaces manual timer-based TDM slot scheduling with an ultra-low-latency real-time task scheduler. Provides deterministic slot firing with sub-microsecond jitter.
2. **`midstreamer-temporal-compare` v0.1.0** -- Enhances gesture DTW matching (ADR-030 Tier 6) with temporal sequence comparison primitives. Provides optimized Sakoe-Chiba band DTW, LCS, and edit-distance kernels.
3. **`midstreamer-attractor` v0.1.0** -- Enhances longitudinal drift detection (ADR-030 Tier 4) with dynamical systems analysis. Detects phase-space attractor shifts that indicate biomechanical regime changes before they manifest as simple metric drift.
### 6.5 Fallback Strategy
The QUIC transport layer is additive, not a replacement:
- **ESP32-S3 nodes**: Continue using manual HMAC/SipHash over UDP (Sections 2.1--2.2). These devices lack the memory for a full TLS 1.3 stack.
- **Aggregator nodes**: Use `midstreamer-quic` by default. Fall back to manual crypto if QUIC handshake fails (e.g., network partitions).
- **Mixed deployments**: The aggregator auto-detects whether an incoming connection is QUIC (by TLS ClientHello) or plain UDP (by magic byte) and routes accordingly.
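The auto-detection step can be sketched as a first-bytes classifier, assuming the CSI frame magic is serialized big-endian so legacy datagrams start with `0xC5 0x11 ...`; anything else is handed to the QUIC endpoint, which performs its own Initial-packet validation:

```rust
#[derive(Debug, PartialEq)]
enum Transport {
    LegacyUdp,
    Quic,
}

// Assumed big-endian serialization of the magics from Section 4.2.
const MAGIC_V1: [u8; 4] = [0xC5, 0x11, 0x00, 0x01];
const MAGIC_V2: [u8; 4] = [0xC5, 0x11, 0x00, 0x02];

/// Classify an incoming datagram by its leading bytes. Only datagrams
/// bearing a known CSI-frame magic take the manual-crypto path; all
/// other traffic (including malformed packets) is routed to QUIC,
/// whose handshake rejects anything that is not a valid client.
fn classify(datagram: &[u8]) -> Transport {
    if datagram.len() >= 4 && (datagram[..4] == MAGIC_V1 || datagram[..4] == MAGIC_V2) {
        Transport::LegacyUdp
    } else {
        Transport::Quic
    }
}
```

Routing unknown traffic to QUIC rather than the legacy parser is the conservative choice: the TLS 1.3 handshake provides authenticated rejection, whereas the legacy path would have to parse untrusted bytes first.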
### 6.6 Acceptance Criteria (QUIC)
| ID | Criterion | Test Method |
|----|-----------|-------------|
| Q-1 | QUIC connection established between two nodes within 100ms | Integration test: connect, measure handshake time |
| Q-2 | Beacon stream delivers beacons with < 1ms jitter | Unit test: send 1000 beacons, measure inter-arrival variance |
| Q-3 | CSI stream achieves >= 95% of plain UDP throughput | Benchmark: criterion comparison |
| Q-4 | Connection migration succeeds after simulated IP change | Integration test: rebind, verify stream continuity |
| Q-5 | Fallback to manual crypto when QUIC unavailable | Unit test: reject QUIC, verify ManualCrypto path |
| Q-6 | SecurityMode::ManualCrypto produces identical wire format to ADR-032 original | Unit test: byte-level comparison |
---
## 7. Related ADRs
| ADR | Relationship |
|-----|-------------|
| ADR-029 (RuvSense Multistatic) | **Hardened**: TDM beacon and CSI frame authentication, NDP rate limiting, QUIC transport |
| ADR-030 (Persistent Field Model) | **Protected**: Coherence gate timeout; transition log bounded; gesture DTW enhanced (midstreamer-temporal-compare); drift detection enhanced (midstreamer-attractor) |
| ADR-031 (RuView RF Mode) | **Hardened**: Authenticated beacons protect cross-viewpoint synchronization via QUIC streams |
| ADR-018 (ESP32 Implementation) | **Extended**: CSI frame header bumped to v2 with SipHash tag; backward-compatible magic check |
| ADR-012 (ESP32 Mesh) | **Hardened**: Mesh key management, NVS credential zeroing, atomic firmware state, QUIC connection migration |
---
## 8. References
1. Aumasson, J.-P. & Bernstein, D.J. (2012). "SipHash: a fast short-input PRF." INDOCRYPT 2012.
2. Krawczyk, H. et al. (1997). "HMAC: Keyed-Hashing for Message Authentication." RFC 2104.
3. ESP-IDF mbedtls SHA256 hardware acceleration. Espressif Documentation.
4. Espressif. "ESP32-S3 Technical Reference Manual." Section 26: SHA Accelerator.
5. Heinanen, J. & Guerin, R. (1999). "A Single Rate Three Color Marker." RFC 2697 (token-bucket marking, adapted).
6. ADR-029 through ADR-031 (internal).
7. `midstreamer-quic` v0.1.0 -- QUIC multi-stream support. crates.io.
8. `midstreamer-scheduler` v0.1.0 -- Ultra-low-latency real-time task scheduler. crates.io.
9. `midstreamer-temporal-compare` v0.1.0 -- Temporal sequence comparison. crates.io.
10. `midstreamer-attractor` v0.1.0 -- Dynamical systems analysis. crates.io.
11. Iyengar, J. & Thomson, M. (2021). "QUIC: A UDP-Based Multiplexed and Secure Transport." RFC 9000.

---
# ADR-033: CRV Signal Line Sensing Integration -- Mapping 6-Stage Coordinate Remote Viewing to WiFi-DensePose Pipeline
| Field | Value |
|-------|-------|
| **Status** | Proposed |
| **Date** | 2026-03-01 |
| **Deciders** | ruv |
| **Codename** | **CRV-Sense** -- Coordinate Remote Viewing Signal Line for WiFi Sensing |
| **Relates to** | ADR-016 (RuVector Integration), ADR-017 (RuVector Signal+MAT), ADR-024 (AETHER Embeddings), ADR-029 (RuvSense Multistatic), ADR-030 (Persistent Field Model), ADR-031 (RuView Viewpoint Fusion), ADR-032 (Mesh Security) |
---
## 1. Context
### 1.1 The CRV Signal Line Methodology
Coordinate Remote Viewing (CRV) is a structured 6-stage protocol that progressively refines perception from coarse gestalt impressions (Stage I) through sensory details (Stage II), spatial dimensions (Stage III), noise separation (Stage IV), cross-referencing interrogation (Stage V), to a final composite 3D model (Stage VI). The `ruvector-crv` crate (v0.1.1, published on crates.io) maps these 6 stages to vector database subsystems: Poincare ball embeddings, multi-head attention, GNN graph topology, SNN temporal encoding, differentiable search, and MinCut partitioning.
The WiFi-DensePose sensing pipeline follows a strikingly similar progressive refinement:
1. Raw CSI arrives as an undifferentiated signal -- the system must first classify the gestalt character of the RF environment.
2. Per-subcarrier amplitude/phase/frequency features are extracted -- analogous to sensory impressions.
3. The AP mesh forms a spatial topology with node positions and link geometry -- a dimensional sketch.
4. Coherence gating separates valid signal from noise and interference -- analytically overlaid artifacts must be detected and removed.
5. Pose estimation queries earlier CSI features for cross-referencing -- interrogation of the accumulated evidence.
6. Final multi-person partitioning produces the composite DensePose output -- the 3D model.
This structural isomorphism is not accidental. Both CRV and WiFi sensing solve the same abstract problem: extract structured information from a noisy, high-dimensional signal space through progressive refinement with explicit noise separation.
### 1.2 The ruvector-crv Crate (v0.1.1)
The `ruvector-crv` crate provides the following public API:
| Component | Purpose | Upstream Dependency |
|-----------|---------|-------------------|
| `CrvSessionManager` | Session lifecycle: create, add stage data, convergence analysis | -- |
| `StageIEncoder` | Poincare ball hyperbolic embeddings for gestalt primitives | -- (internal hyperbolic math) |
| `StageIIEncoder` | Multi-head attention for sensory vectors | `ruvector-attention` |
| `StageIIIEncoder` | GNN graph topology encoding | `ruvector-gnn` |
| `StageIVEncoder` | SNN temporal encoding for AOL (Analytical Overlay) detection | -- (internal SNN) |
| `StageVEngine` | Differentiable search and cross-referencing | -- (internal soft attention) |
| `StageVIModeler` | MinCut partitioning for composite model | `ruvector-mincut` |
| `ConvergenceResult` | Cross-session agreement analysis | -- |
| `CrvConfig` | Configuration (384-d default, curvature, AOL threshold, SNN params) | -- |
Key types: `GestaltType` (Manmade/Natural/Movement/Energy/Water/Land), `SensoryModality` (Texture/Color/Temperature/Sound/...), `AOLDetection` (content + anomaly score), `SignalLineProbe` (query + attention weights), `TargetPartition` (MinCut cluster + centroid).
### 1.3 What Already Exists in WiFi-DensePose
The following modules already implement pieces of the pipeline that CRV stages map onto:
| Existing Module | Location | Relevant CRV Stage |
|----------------|----------|-------------------|
| `multiband.rs` | `wifi-densepose-signal/src/ruvsense/` | Stage I (gestalt from multi-band CSI) |
| `phase_align.rs` | `wifi-densepose-signal/src/ruvsense/` | Stage II (phase feature extraction) |
| `multistatic.rs` | `wifi-densepose-signal/src/ruvsense/` | Stage III (AP mesh spatial topology) |
| `coherence_gate.rs` | `wifi-densepose-signal/src/ruvsense/` | Stage IV (signal-vs-noise separation) |
| `field_model.rs` | `wifi-densepose-signal/src/ruvsense/` | Stage V (persistent field for querying) |
| `pose_tracker.rs` | `wifi-densepose-signal/src/ruvsense/` | Stage VI (person tracking output) |
| Viewpoint fusion | `wifi-densepose-ruvector/src/viewpoint/` | Cross-session (multi-viewpoint convergence) |
The `wifi-densepose-ruvector` crate already depends on `ruvector-crv` in its `Cargo.toml`. This ADR defines how to wrap the CRV API with WiFi-DensePose domain types.
### 1.4 The Key Insight: Cross-Session Convergence = Cross-Room Identity
CRV's convergence analysis compares independent sessions targeting the same coordinate to find agreement in their embeddings. In WiFi-DensePose, different AP clusters in different rooms are independent "viewers" of the same person. When a person moves from Room A to Room B, the CRV convergence mechanism can find agreement between the Room A embedding trail and the Room B initial embeddings -- establishing identity continuity without cameras.
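A toy version of the agreement test: reduce each room's embedding trail to its mean vector and compare by cosine similarity. The real `CrvSessionManager::find_convergence()` operates on full session embeddings; the threshold and the mean-pooling reduction here are illustrative assumptions.

```rust
/// Mean-pool an embedding trail into a single vector.
fn mean_embedding(trail: &[Vec<f32>]) -> Vec<f32> {
    let dim = trail[0].len();
    let mut mean = vec![0.0f32; dim];
    for e in trail {
        for (m, v) in mean.iter_mut().zip(e) {
            *m += v;
        }
    }
    for m in mean.iter_mut() {
        *m /= trail.len() as f32;
    }
    mean
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Does the Room A trail agree with the Room B trail strongly enough
/// to assert identity continuity?
fn converges(room_a: &[Vec<f32>], room_b: &[Vec<f32>], threshold: f32) -> bool {
    cosine(&mean_embedding(room_a), &mean_embedding(room_b)) >= threshold
}
```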
---
## 2. Decision
### 2.1 The 6-Stage CRV-to-WiFi Mapping
Create a new `crv` module in the `wifi-densepose-ruvector` crate that wraps `ruvector-crv` with WiFi-DensePose domain types. Each CRV stage maps to a specific point in the sensing pipeline.
```
+-------------------------------------------------------------------+
| CRV-Sense Pipeline (6 Stages) |
+-------------------------------------------------------------------+
| |
| Raw CSI frames from ESP32 mesh (ADR-029) |
| | |
| v |
| +----------------------------------------------------------+ |
| | Stage I: CSI Gestalt Classification | |
| | CsiGestaltClassifier | |
| | Input: raw CSI frame (amplitude envelope + phase slope) | |
| | Output: GestaltType (Manmade/Natural/Movement/Energy) | |
| | Encoder: StageIEncoder (Poincare ball embedding) | |
| | Module: ruvsense/multiband.rs | |
| +----------------------------+-----------------------------+ |
| | |
| v |
| +----------------------------------------------------------+ |
| | Stage II: CSI Sensory Feature Extraction | |
| | CsiSensoryEncoder | |
| | Input: per-subcarrier CSI | |
| | Output: amplitude textures, phase patterns, freq colors | |
| | Encoder: StageIIEncoder (multi-head attention vectors) | |
| | Module: ruvsense/phase_align.rs | |
| +----------------------------+-----------------------------+ |
| | |
| v |
| +----------------------------------------------------------+ |
| | Stage III: AP Mesh Spatial Topology | |
| | MeshTopologyEncoder | |
| | Input: node positions, link SNR, baseline distances | |
| | Output: GNN graph embedding of mesh geometry | |
| | Encoder: StageIIIEncoder (GNN topology) | |
| | Module: ruvsense/multistatic.rs | |
| +----------------------------+-----------------------------+ |
| | |
| v |
| +----------------------------------------------------------+ |
| | Stage IV: Coherence Gating (AOL Detection) | |
| | CoherenceAolDetector | |
| | Input: phase coherence scores, gate decisions | |
| | Output: AOL-flagged frames removed, clean signal kept | |
| | Encoder: StageIVEncoder (SNN temporal encoding) | |
| | Module: ruvsense/coherence_gate.rs | |
| +----------------------------+-----------------------------+ |
| | |
| v |
| +----------------------------------------------------------+ |
| | Stage V: Pose Interrogation | |
| | PoseInterrogator | |
| | Input: pose hypothesis + accumulated CSI features | |
| | Output: soft attention over CSI history, top candidates | |
| | Engine: StageVEngine (differentiable search) | |
| | Module: ruvsense/field_model.rs | |
| +----------------------------+-----------------------------+ |
| | |
| v |
| +----------------------------------------------------------+ |
| | Stage VI: Multi-Person Partitioning | |
| | PersonPartitioner | |
| | Input: all person embedding clusters | |
| | Output: MinCut-separated person partitions + centroids | |
| | Modeler: StageVIModeler (MinCut partitioning) | |
| | Module: training pipeline (ruvector-mincut) | |
| +----------------------------+-----------------------------+ |
| | |
| v |
| +----------------------------------------------------------+ |
| | Cross-Session: Multi-Room Convergence | |
| | MultiViewerConvergence | |
| | Input: per-room embedding trails for candidate persons | |
| | Output: cross-room identity matches + confidence | |
| | Engine: CrvSessionManager::find_convergence() | |
| | Module: ruvsense/cross_room.rs | |
| +----------------------------------------------------------+ |
+-------------------------------------------------------------------+
```
### 2.2 Stage I: CSI Gestalt Classification
**CRV mapping:** Stage I ideograms classify the target's fundamental character (Manmade/Natural/Movement/Energy). In WiFi sensing, the raw CSI frame's amplitude envelope shape and phase slope direction provide an analogous gestalt classification of the RF environment.
**WiFi domain types:**
```rust
/// CSI-domain gestalt types mapped from CRV GestaltType.
///
/// The CRV taxonomy maps to RF phenomenology:
/// - Manmade: structured multipath (walls, furniture, metallic reflectors)
/// - Natural: diffuse scattering (vegetation, irregular surfaces)
/// - Movement: Doppler-shifted components (human motion, fan, pet)
/// - Energy: high-amplitude transients (microwave, motor, interference)
/// - Water: slow fading envelope (humidity change, condensation)
/// - Land: static baseline (empty room, no perturbation)
pub struct CsiGestaltClassifier {
encoder: StageIEncoder,
config: CrvConfig,
}
impl CsiGestaltClassifier {
/// Classify a raw CSI frame into a gestalt type.
///
/// Extracts three features from the CSI frame:
/// 1. Amplitude envelope shape (ideogram stroke analog)
/// 2. Phase slope direction (spontaneous descriptor analog)
/// 3. Subcarrier correlation structure (classification signal)
///
/// Returns a Poincare ball embedding (384-d by default) encoding
/// the hierarchical gestalt taxonomy with exponentially less
/// distortion than Euclidean space.
pub fn classify(&self, csi_frame: &CsiFrame) -> CrvResult<(GestaltType, Vec<f32>)>;
}
```
**Integration point:** `ruvsense/multiband.rs` already processes multi-band CSI. The `CsiGestaltClassifier` wraps this with Poincare ball embedding via `StageIEncoder`, producing a hyperbolic embedding that captures the gestalt hierarchy.
### 2.3 Stage II: CSI Sensory Feature Extraction
**CRV mapping:** Stage II collects sensory impressions (texture, color, temperature). In WiFi sensing, the per-subcarrier CSI features are the sensory modalities:
| CRV Sensory Modality | WiFi CSI Analog |
|----------------------|-----------------|
| Texture | Amplitude variance pattern across subcarriers (smooth vs rough surface reflection) |
| Color | Frequency-domain spectral shape (which subcarriers carry the most energy) |
| Temperature | Phase drift rate (thermal expansion changes path length) |
| Luminosity | Overall signal power level (SNR) |
| Dimension | Delay spread (multipath extent maps to room size) |
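Two of these analogs are simple enough to sketch directly -- Texture as amplitude variance across subcarriers and Luminosity as mean power. These are toy functions; the real feature extraction lives in `ruvsense/phase_align.rs`.

```rust
/// Texture analog: variance of per-subcarrier amplitudes.
/// A smooth reflector yields near-uniform amplitudes (low variance);
/// a rough surface scatters unevenly (high variance).
fn texture(amplitudes: &[f32]) -> f32 {
    let mean = amplitudes.iter().sum::<f32>() / amplitudes.len() as f32;
    amplitudes.iter().map(|a| (a - mean).powi(2)).sum::<f32>() / amplitudes.len() as f32
}

/// Luminosity analog: mean power across subcarriers (SNR proxy).
fn luminosity(amplitudes: &[f32]) -> f32 {
    amplitudes.iter().map(|a| a * a).sum::<f32>() / amplitudes.len() as f32
}
```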
**WiFi domain types:**
```rust
pub struct CsiSensoryEncoder {
encoder: StageIIEncoder,
}
impl CsiSensoryEncoder {
/// Extract sensory features from per-subcarrier CSI data.
///
/// Maps CSI signal characteristics to CRV sensory modalities:
/// - Amplitude variance -> Texture
/// - Spectral shape -> Color
/// - Phase drift rate -> Temperature
/// - Signal power -> Luminosity
/// - Delay spread -> Dimension
///
/// Uses multi-head attention (ruvector-attention) to produce
/// a unified sensory embedding that captures cross-modality
/// correlations.
pub fn encode(&self, csi_subcarriers: &SubcarrierData) -> CrvResult<Vec<f32>>;
}
```
**Integration point:** `ruvsense/phase_align.rs` already computes per-subcarrier phase features. The `CsiSensoryEncoder` maps these to `StageIIData` sensory impressions and produces attention-weighted embeddings via `StageIIEncoder`.
### 2.4 Stage III: AP Mesh Spatial Topology
**CRV mapping:** Stage III sketches the spatial layout with geometric primitives and relationships. In WiFi sensing, the AP mesh nodes and their inter-node links form the spatial sketch:
| CRV Sketch Element | WiFi Mesh Analog |
|-------------------|-----------------|
| `SketchElement` | AP node (position, antenna orientation) |
| `GeometricKind::Point` | Single AP location |
| `GeometricKind::Line` | Bistatic link between two APs |
| `SpatialRelationship` | Link quality, baseline distance, angular separation |
**WiFi domain types:**
```rust
pub struct MeshTopologyEncoder {
encoder: StageIIIEncoder,
}
impl MeshTopologyEncoder {
/// Encode the AP mesh as a GNN graph topology.
///
/// Each AP node becomes a SketchElement with its position and
/// antenna count. Each bistatic link becomes a SpatialRelationship
/// with strength proportional to link SNR.
///
/// Uses ruvector-gnn to produce a graph embedding that captures
/// the mesh's geometric diversity index (GDI) and effective
/// viewpoint count.
pub fn encode(&self, mesh: &MultistaticArray) -> CrvResult<Vec<f32>>;
}
```
**Integration point:** `ruvsense/multistatic.rs` manages the AP mesh topology. The `MeshTopologyEncoder` translates `MultistaticArray` geometry into `StageIIIData` sketch elements and relationships, producing a GNN-encoded topology embedding via `StageIIIEncoder`.
### 2.5 Stage IV: Coherence Gating as AOL Detection
**CRV mapping:** Stage IV detects Analytical Overlay (AOL) -- moments when the analytical mind contaminates the raw signal with pre-existing assumptions. In WiFi sensing, the coherence gate (ADR-030/032) serves the same function: it detects when environmental interference, multipath changes, or hardware artifacts contaminate the CSI signal, and flags those frames for exclusion.
| CRV AOL Concept | WiFi Coherence Analog |
|-----------------|---------------------|
| AOL event | Low-coherence frame (interference, multipath shift, hardware glitch) |
| AOL anomaly score | Coherence metric (0.0 = fully incoherent, 1.0 = fully coherent) |
| AOL break (flagged, set aside) | `GateDecision::Reject` or `GateDecision::PredictOnly` |
| Clean signal line | `GateDecision::Accept` with noise multiplier |
| Forced accept after timeout | `GateDecision::ForcedAccept` (ADR-032) with inflated noise |
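A toy anomaly score over this mapping: the fraction of non-`Accept` decisions in a recent window, so a burst of low-coherence frames scores high and a clean run scores 0.0. The real detector uses SNN temporal encoding rather than a simple ratio, and the `Gate` enum here is a stand-in for the production `GateDecision`.

```rust
#[derive(Clone, Copy, PartialEq)]
enum Gate {
    Accept,
    Reject,
    PredictOnly,
    ForcedAccept,
}

/// Fraction of non-Accept frames in the window: 0.0 = clean signal line,
/// 1.0 = fully contaminated (every frame AOL-flagged).
fn aol_anomaly_score(history: &[Gate]) -> f32 {
    if history.is_empty() {
        return 0.0;
    }
    let flagged = history.iter().filter(|g| **g != Gate::Accept).count();
    flagged as f32 / history.len() as f32
}
```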
**WiFi domain types:**
```rust
pub struct CoherenceAolDetector {
encoder: StageIVEncoder,
}
impl CoherenceAolDetector {
/// Map coherence gate decisions to CRV AOL detection.
///
/// The SNN temporal encoding models the spike pattern of
/// coherence violations over time:
/// - Burst of low-coherence frames -> high AOL anomaly score
/// - Sustained coherence -> low anomaly score (clean signal)
/// - Single transient -> moderate score (check and continue)
///
/// Returns an embedding that encodes the temporal pattern of
/// signal quality, enabling downstream stages to weight their
/// attention based on signal cleanliness.
pub fn detect(
&self,
coherence_history: &[GateDecision],
timestamps: &[u64],
) -> CrvResult<(Vec<AOLDetection>, Vec<f32>)>;
}
```
**Integration point:** `ruvsense/coherence_gate.rs` already produces `GateDecision` values. The `CoherenceAolDetector` translates the coherence gate's temporal stream into `StageIVData` with `AOLDetection` events, and the SNN temporal encoding via `StageIVEncoder` produces an embedding of signal quality over time.
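The coherence-to-AOL mapping can be sketched in a few lines. This is a toy heuristic, not the `StageIVEncoder` SNN model: it scores a window of gate decisions so that bursts of violations score higher than isolated ones (mirroring criteria S4-2/S4-3); `ForcedAccept` handling is omitted for brevity.

```rust
/// Toy stand-in for GateDecision (mirrors the variants in the table above).
#[derive(Clone, Copy)]
enum GateDecision {
    Accept,
    PredictOnly,
    Reject,
    ForcedAccept,
}

/// Score a window of gate decisions: consecutive violations form a burst,
/// and the longest burst (as a fraction of the window) is the anomaly score.
fn aol_anomaly_score(window: &[GateDecision]) -> f64 {
    let (mut max_burst, mut burst) = (0usize, 0usize);
    for d in window {
        match d {
            // Reject and PredictOnly count as coherence violations.
            GateDecision::Reject | GateDecision::PredictOnly => {
                burst += 1;
                max_burst = max_burst.max(burst);
            }
            // Accept (and, in this toy version, ForcedAccept) resets the burst.
            _ => burst = 0,
        }
    }
    max_burst as f64 / window.len() as f64
}

fn main() {
    use GateDecision::*;
    let burst = [Accept, Reject, Reject, Reject, Reject, Reject, Accept, Accept];
    let isolated = [Accept, Reject, Accept, Accept, Accept, Accept, Accept, Accept];
    // A burst scores higher than an isolated violation (S4-2).
    assert!(aol_anomaly_score(&burst) > aol_anomaly_score(&isolated));
    // All-accept history yields zero anomaly (S4-3).
    assert_eq!(aol_anomaly_score(&[Accept; 8]), 0.0);
}
```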
### 2.6 Stage V: Pose Interrogation via Differentiable Search
**CRV mapping:** Stage V is the interrogation phase -- probing earlier stage data with specific queries to extract targeted information. In WiFi sensing, this maps to querying the accumulated CSI feature history with a pose hypothesis to find supporting or contradicting evidence.
**WiFi domain types:**
```rust
pub struct PoseInterrogator {
engine: StageVEngine,
}
impl PoseInterrogator {
/// Cross-reference a pose hypothesis against CSI history.
///
/// Uses differentiable search (soft attention with temperature
/// scaling) to find which historical CSI frames best support
/// or contradict the current pose estimate.
///
/// Returns:
/// - Attention weights over the CSI history buffer
/// - Top-k supporting frames (highest attention)
/// - Cross-references linking pose keypoints to specific
/// CSI subcarrier features from earlier stages
pub fn interrogate(
&self,
pose_embedding: &[f32],
csi_history: &[CrvSessionEntry],
) -> CrvResult<(StageVData, Vec<f32>)>;
}
```
**Integration point:** `ruvsense/field_model.rs` maintains the persistent electromagnetic field model (ADR-030). The `PoseInterrogator` wraps this with CRV Stage V semantics -- the field model's history becomes the corpus that `StageVEngine` searches over, and the pose hypothesis becomes the probe query.
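The temperature-scaled soft attention described in the doc comment can be illustrated directly. This sketch shows only the attention math, not the real `StageVEngine` API; lower temperature sharpens the weights toward the best-matching history frame.

```rust
/// Dot product of two equal-length vectors.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Softmax of query/history similarities, each divided by `temperature`.
/// Returns one weight per history entry; weights sum to 1.0.
fn attention_weights(query: &[f32], history: &[Vec<f32>], temperature: f32) -> Vec<f32> {
    let scores: Vec<f32> = history.iter().map(|h| dot(query, h) / temperature).collect();
    // Subtract the max score for numerical stability before exponentiating.
    let max = scores.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scores.iter().map(|s| (s - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let query = vec![1.0, 0.0];
    let history = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![0.7, 0.7]];
    let w = attention_weights(&query, &history, 0.5);
    // Weights sum to 1.0 (S5-1) and the matching frame gets the most attention.
    assert!((w.iter().sum::<f32>() - 1.0).abs() < 1e-5);
    assert!(w[0] > w[1] && w[0] > w[2]);
}
```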
### 2.7 Stage VI: Multi-Person Partitioning via MinCut
**CRV mapping:** Stage VI produces the composite 3D model by clustering accumulated data into distinct target partitions via MinCut. In WiFi sensing, this maps to multi-person separation -- partitioning the accumulated CSI embeddings into person-specific clusters.
**WiFi domain types:**
```rust
pub struct PersonPartitioner {
modeler: StageVIModeler,
}
impl PersonPartitioner {
/// Partition accumulated embeddings into distinct persons.
///
/// Uses MinCut (ruvector-mincut) to find natural cluster
/// boundaries in the embedding space. Each partition corresponds
/// to one person, with:
/// - A centroid embedding (person signature)
/// - Member frame indices (which CSI frames belong to this person)
/// - Separation strength (how distinct this person is from others)
///
/// The MinCut value between partitions serves as a confidence
/// metric for person separation quality.
pub fn partition(
&self,
person_embeddings: &[CrvSessionEntry],
) -> CrvResult<(StageVIData, Vec<f32>)>;
}
```
**Integration point:** The training pipeline in `wifi-densepose-train` already uses `ruvector-mincut` for `DynamicPersonMatcher` (ADR-016). The `PersonPartitioner` wraps this with CRV Stage VI semantics, framing person separation as composite model construction.
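As a contract illustration only (the actual partitioning delegates to `ruvector-mincut`), a greedy cosine-similarity clustering shows the intended behavior: well-separated embedding clusters map to distinct partitions with non-overlapping, exhaustive member indices (criteria S6-1/S6-5).

```rust
/// Cosine similarity between two embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Assign each embedding to the first cluster whose seed is similar enough,
/// otherwise start a new cluster. Returns member frame indices per cluster.
/// (Stand-in for MinCut partitioning; illustrative only.)
fn partition(embeddings: &[Vec<f32>], threshold: f32) -> Vec<Vec<usize>> {
    let mut seeds: Vec<Vec<f32>> = Vec::new();
    let mut members: Vec<Vec<usize>> = Vec::new();
    for (i, e) in embeddings.iter().enumerate() {
        if let Some(k) = seeds.iter().position(|s| cosine(s, e) >= threshold) {
            members[k].push(i);
        } else {
            seeds.push(e.clone());
            members.push(vec![i]);
        }
    }
    members
}

fn main() {
    // Two well-separated "persons" in embedding space -> two partitions.
    let embeddings = vec![
        vec![1.0, 0.0],
        vec![0.9, 0.1],
        vec![0.0, 1.0],
        vec![0.1, 0.9],
    ];
    let parts = partition(&embeddings, 0.8);
    assert_eq!(parts.len(), 2);
    // Member indices are non-overlapping and exhaustive (S6-5).
    assert_eq!(parts.iter().map(|p| p.len()).sum::<usize>(), 4);
}
```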
### 2.8 Cross-Session Convergence: Multi-Room Identity Matching
**CRV mapping:** CRV convergence analysis compares embeddings from independent sessions targeting the same coordinate to find agreement. In WiFi-DensePose, independent AP clusters in different rooms are independent "viewers" of the same person.
**WiFi domain types:**
```rust
pub struct MultiViewerConvergence {
session_manager: CrvSessionManager,
}
impl MultiViewerConvergence {
/// Match person identities across rooms via CRV convergence.
///
/// Each room's AP cluster is modeled as an independent CRV session.
/// When a person moves from Room A to Room B:
/// 1. Room A session contains the person's embedding trail (Stages I-VI)
/// 2. Room B session begins accumulating new embeddings
/// 3. Convergence analysis finds agreement between Room A's final
/// embeddings and Room B's initial embeddings
/// 4. Agreement score above threshold establishes identity continuity
///
/// Returns ConvergenceResult with:
/// - Session pairs (room pairs) that converged
/// - Per-pair similarity scores
/// - Convergent stages (which CRV stages showed strongest agreement)
/// - Consensus embedding (merged identity signature)
pub fn match_across_rooms(
&self,
room_sessions: &[(RoomId, SessionId)],
threshold: f32,
) -> CrvResult<ConvergenceResult>;
}
```
**Integration point:** `ruvsense/cross_room.rs` already handles cross-room identity continuity (ADR-030). The `MultiViewerConvergence` wraps the existing `CrossRoomTracker` with CRV convergence semantics, using `CrvSessionManager::find_convergence()` to compute embedding agreement.
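A reduced sketch of the convergence check: the real `CrvSessionManager::find_convergence()` compares full per-stage embedding trails, but the core agreement test is a thresholded similarity between Room A's exit embedding and Room B's entry embedding. The vectors here are illustrative.

```rust
/// Cosine similarity between two embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Identity continuity holds when the exit/entry embeddings agree
/// above the convergence threshold (default 0.75 in this ADR).
fn converged(room_a_last: &[f32], room_b_first: &[f32], threshold: f32) -> bool {
    cosine(room_a_last, room_b_first) >= threshold
}

fn main() {
    // Same person leaving Room A and entering Room B: near-identical embeddings.
    let a_exit = [0.8, 0.1, 0.1];
    let b_entry = [0.78, 0.12, 0.1];
    assert!(converged(&a_exit, &b_entry, 0.75));
    // A different person falls below the threshold.
    let other = [0.1, 0.1, 0.9];
    assert!(!converged(&a_exit, &other, 0.75));
}
```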
### 2.9 WifiCrvSession: Unified Pipeline Wrapper
The top-level wrapper ties all six stages into a single pipeline:
```rust
/// A WiFi-DensePose sensing session modeled as a CRV session.
///
/// Wraps CrvSessionManager with CSI-specific convenience methods.
/// Each call to process_frame() advances through all six CRV stages
/// and appends stage embeddings to the session.
pub struct WifiCrvSession {
session_manager: CrvSessionManager,
gestalt: CsiGestaltClassifier,
sensory: CsiSensoryEncoder,
topology: MeshTopologyEncoder,
coherence: CoherenceAolDetector,
interrogator: PoseInterrogator,
partitioner: PersonPartitioner,
convergence: MultiViewerConvergence,
}
impl WifiCrvSession {
/// Create a new WiFi CRV session with the given configuration.
pub fn new(config: WifiCrvConfig) -> Self;
/// Process a single CSI frame through all six CRV stages.
///
/// Returns the per-stage embeddings and the final person partitions.
pub fn process_frame(
&mut self,
frame: &CsiFrame,
mesh: &MultistaticArray,
coherence_state: &GateDecision,
pose_hypothesis: Option<&[f32]>,
) -> CrvResult<WifiCrvOutput>;
/// Find convergence across room sessions for identity matching.
pub fn find_convergence(
&self,
room_sessions: &[(RoomId, SessionId)],
threshold: f32,
) -> CrvResult<ConvergenceResult>;
}
```
---
## 3. Implementation Plan (File-Level)
### 3.1 Phase 1: CRV Module Core (New Files)
| File | Purpose | Upstream Dependency |
|------|---------|-------------------|
| `crates/wifi-densepose-ruvector/src/crv/mod.rs` | Module root, re-exports all CRV-Sense types | -- |
| `crates/wifi-densepose-ruvector/src/crv/config.rs` | `WifiCrvConfig` extending `CrvConfig` with WiFi-specific defaults (128-d instead of 384-d to match AETHER) | `ruvector-crv` |
| `crates/wifi-densepose-ruvector/src/crv/session.rs` | `WifiCrvSession` wrapping `CrvSessionManager` | `ruvector-crv` |
| `crates/wifi-densepose-ruvector/src/crv/output.rs` | `WifiCrvOutput` struct with per-stage embeddings and diagnostics | -- |
### 3.2 Phase 2: Stage Encoders (New Files)
| File | Purpose | Upstream Dependency |
|------|---------|-------------------|
| `crates/wifi-densepose-ruvector/src/crv/gestalt.rs` | `CsiGestaltClassifier` -- Stage I Poincare ball embedding | `ruvector-crv::StageIEncoder` |
| `crates/wifi-densepose-ruvector/src/crv/sensory.rs` | `CsiSensoryEncoder` -- Stage II multi-head attention | `ruvector-crv::StageIIEncoder`, `ruvector-attention` |
| `crates/wifi-densepose-ruvector/src/crv/topology.rs` | `MeshTopologyEncoder` -- Stage III GNN topology | `ruvector-crv::StageIIIEncoder`, `ruvector-gnn` |
| `crates/wifi-densepose-ruvector/src/crv/coherence.rs` | `CoherenceAolDetector` -- Stage IV SNN temporal encoding | `ruvector-crv::StageIVEncoder` |
| `crates/wifi-densepose-ruvector/src/crv/interrogation.rs` | `PoseInterrogator` -- Stage V differentiable search | `ruvector-crv::StageVEngine` |
| `crates/wifi-densepose-ruvector/src/crv/partition.rs` | `PersonPartitioner` -- Stage VI MinCut partitioning | `ruvector-crv::StageVIModeler`, `ruvector-mincut` |
### 3.3 Phase 3: Cross-Session Convergence
| File | Purpose | Upstream Dependency |
|------|---------|-------------------|
| `crates/wifi-densepose-ruvector/src/crv/convergence.rs` | `MultiViewerConvergence` -- cross-room identity matching | `ruvector-crv::CrvSessionManager` |
### 3.4 Phase 4: Integration with Existing Modules (Edits to Existing Files)
| File | Change | Notes |
|------|--------|-------|
| `crates/wifi-densepose-ruvector/src/lib.rs` | Add `pub mod crv;` | Expose new module |
| `crates/wifi-densepose-ruvector/Cargo.toml` | No change needed | `ruvector-crv` dependency already present |
| `crates/wifi-densepose-signal/src/ruvsense/multiband.rs` | Add trait impl for `CrvGestaltSource` | Allow gestalt classifier to consume multiband output |
| `crates/wifi-densepose-signal/src/ruvsense/phase_align.rs` | Add trait impl for `CrvSensorySource` | Allow sensory encoder to consume phase features |
| `crates/wifi-densepose-signal/src/ruvsense/coherence_gate.rs` | Add method to export `GateDecision` history as `Vec<AOLDetection>` | Bridge coherence gate to CRV Stage IV |
| `crates/wifi-densepose-signal/src/ruvsense/cross_room.rs` | Add `CrvConvergenceAdapter` trait impl | Bridge cross-room tracker to CRV convergence |
---
## 4. DDD Design
### 4.1 New Bounded Context: CrvSensing
**Aggregate Root: `WifiCrvSession`**
```rust
pub struct WifiCrvSession {
/// Underlying CRV session manager
session_manager: CrvSessionManager,
/// Per-stage encoders
stages: CrvStageEncoders,
/// Session configuration
config: WifiCrvConfig,
/// Running statistics for convergence quality
convergence_stats: ConvergenceStats,
}
```
**Value Objects:**
```rust
/// Output of a single frame through the 6-stage pipeline.
pub struct WifiCrvOutput {
/// Per-stage embeddings (6 vectors, one per CRV stage).
pub stage_embeddings: [Vec<f32>; 6],
/// Gestalt classification for this frame.
pub gestalt: GestaltType,
/// AOL detections (frames flagged as noise-contaminated).
pub aol_events: Vec<AOLDetection>,
/// Person partitions from Stage VI.
pub partitions: Vec<TargetPartition>,
/// Processing latency per stage in microseconds.
pub stage_latencies_us: [u64; 6],
}
/// WiFi-specific CRV configuration extending CrvConfig.
pub struct WifiCrvConfig {
/// Base CRV config (dimensions, curvature, thresholds).
pub crv: CrvConfig,
/// AETHER embedding dimension (default: 128, overrides CrvConfig.dimensions).
pub aether_dim: usize,
/// Coherence threshold for AOL detection (maps to aol_threshold).
pub coherence_threshold: f32,
/// Maximum CSI history frames for Stage V interrogation.
pub max_history_frames: usize,
/// Cross-room convergence threshold (default: 0.75).
pub convergence_threshold: f32,
}
```
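A possible `Default` for this configuration, matching the values fixed in this ADR (128-d AETHER embeddings, 0.75 convergence threshold). `CrvConfig` is stubbed as a unit struct so the sketch compiles standalone, and the `coherence_threshold` and `max_history_frames` defaults are illustrative assumptions, not values the ADR specifies.

```rust
/// Stand-in for ruvector-crv's real config (stubbed for this sketch).
#[derive(Debug, Clone, Default)]
struct CrvConfig;

#[derive(Debug, Clone)]
struct WifiCrvConfig {
    crv: CrvConfig,
    aether_dim: usize,
    coherence_threshold: f32,
    max_history_frames: usize,
    convergence_threshold: f32,
}

impl Default for WifiCrvConfig {
    fn default() -> Self {
        Self {
            crv: CrvConfig::default(),
            // ADR-024 AETHER dimension, overriding ruvector-crv's 384-d default.
            aether_dim: 128,
            // Illustrative assumption: the ADR does not fix this value.
            coherence_threshold: 0.5,
            // Illustrative assumption: bounds the Stage V interrogation corpus.
            max_history_frames: 1024,
            // Cross-room convergence threshold fixed by this ADR.
            convergence_threshold: 0.75,
        }
    }
}

fn main() {
    let cfg = WifiCrvConfig::default();
    assert_eq!(cfg.aether_dim, 128);
    assert!((cfg.convergence_threshold - 0.75).abs() < f32::EPSILON);
}
```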
**Domain Events:**
```rust
pub enum CrvSensingEvent {
/// Stage I completed: gestalt classified
GestaltClassified { gestalt: GestaltType, confidence: f32 },
/// Stage IV: AOL detected (noise contamination)
AolDetected { anomaly_score: f32, flagged: bool },
/// Stage VI: Persons partitioned
PersonsPartitioned { count: usize, min_separation: f32 },
/// Cross-session: Identity matched across rooms
IdentityConverged { room_pair: (RoomId, RoomId), score: f32 },
/// Full pipeline completed for one frame
FrameProcessed { latency_us: u64, stages_completed: u8 },
}
```
### 4.2 Integration with Existing Bounded Contexts
**Signal (wifi-densepose-signal):** New traits `CrvGestaltSource` and `CrvSensorySource` allow the CRV module to consume signal processing outputs without tight coupling. The signal crate does not depend on the CRV crate -- the dependency flows one direction only.
**Training (wifi-densepose-train):** The `PersonPartitioner` (Stage VI) produces the same MinCut partitions as the existing `DynamicPersonMatcher`. A shared trait `PersonSeparator` allows both to be used interchangeably.
**Hardware (wifi-densepose-hardware):** No changes. The CRV module consumes CSI frames after they have been received and parsed by the hardware layer.
---
## 5. RuVector Integration Map
All seven `ruvector` crates are exercised by the CRV-Sense integration:
| CRV Stage | ruvector Crate | API Used | WiFi-DensePose Role |
|-----------|---------------|----------|-------------------|
| I (Gestalt) | -- (internal Poincare math) | `StageIEncoder::encode()` | Hyperbolic embedding of CSI gestalt taxonomy |
| II (Sensory) | `ruvector-attention` | `StageIIEncoder::encode()` | Multi-head attention over subcarrier features |
| III (Dimensional) | `ruvector-gnn` | `StageIIIEncoder::encode()` | GNN encoding of AP mesh topology |
| IV (AOL) | -- (internal SNN) | `StageIVEncoder::encode()` | SNN temporal encoding of coherence violations |
| V (Interrogation) | -- (internal soft attention) | `StageVEngine::search()` | Differentiable search over field model history |
| VI (Composite) | `ruvector-mincut` | `StageVIModeler::partition()` | MinCut person separation |
| Convergence | -- (cosine similarity) | `CrvSessionManager::find_convergence()` | Cross-room identity matching |
Additionally, the CRV module benefits from existing ruvector integrations already in the workspace:
| Existing Integration | ADR | CRV Stage Benefit |
|---------------------|-----|-------------------|
| `ruvector-attn-mincut` in `spectrogram.rs` | ADR-016 | Stage II (subcarrier attention for sensory features) |
| `ruvector-temporal-tensor` in `dataset.rs` | ADR-016 | Stage IV (compressed coherence history buffer) |
| `ruvector-solver` in `subcarrier.rs` | ADR-016 | Stage III (sparse interpolation for mesh topology) |
| `ruvector-attention` in `model.rs` | ADR-016 | Stage V (spatial attention for pose interrogation) |
| `ruvector-mincut` in `metrics.rs` | ADR-016 | Stage VI (person matching baseline) |
---
## 6. Acceptance Criteria
### 6.1 Stage I: CSI Gestalt Classification
| ID | Criterion | Test Method |
|----|-----------|-------------|
| S1-1 | `CsiGestaltClassifier::classify()` returns a valid `GestaltType` for any well-formed CSI frame | Unit test: feed 100 synthetic CSI frames, verify all return one of 6 gestalt types |
| S1-2 | Poincare ball embedding has correct dimensionality (matching `WifiCrvConfig.aether_dim`) | Unit test: verify `embedding.len() == config.aether_dim` |
| S1-3 | Embedding norm is strictly less than 1.0 (Poincare ball constraint) | Unit test: verify L2 norm < 1.0 for all outputs |
| S1-4 | Movement gestalt is classified for CSI frames with Doppler signature | Unit test: synthetic Doppler-shifted CSI -> `GestaltType::Movement` |
| S1-5 | Energy gestalt is classified for CSI frames with transient interference | Unit test: synthetic interference burst -> `GestaltType::Energy` |
### 6.2 Stage II: CSI Sensory Features
| ID | Criterion | Test Method |
|----|-----------|-------------|
| S2-1 | `CsiSensoryEncoder::encode()` produces embedding of correct dimensionality | Unit test: verify output length |
| S2-2 | Amplitude variance maps to Texture modality in `StageIIData.impressions` | Unit test: verify Texture entry present for non-flat amplitude |
| S2-3 | Phase drift rate maps to Temperature modality | Unit test: inject linear phase drift, verify Temperature entry |
| S2-4 | Multi-head attention weights sum to 1.0 per head | Unit test: verify softmax normalization |
### 6.3 Stage III: AP Mesh Topology
| ID | Criterion | Test Method |
|----|-----------|-------------|
| S3-1 | `MeshTopologyEncoder::encode()` produces one `SketchElement` per AP node | Unit test: 4-node mesh produces 4 sketch elements |
| S3-2 | `SpatialRelationship` count equals number of bistatic links | Unit test: 4 nodes -> 6 links (fully connected) or configured subset |
| S3-3 | Relationship strength is proportional to link SNR | Unit test: verify monotonic relationship between SNR and strength |
| S3-4 | GNN embedding changes when node positions change | Unit test: perturb one node position, verify embedding changes |
### 6.4 Stage IV: Coherence AOL Detection
| ID | Criterion | Test Method |
|----|-----------|-------------|
| S4-1 | `CoherenceAolDetector::detect()` flags low-coherence frames as AOL events | Unit test: inject 10 `GateDecision::Reject` frames, verify 10 `AOLDetection` entries |
| S4-2 | Anomaly score correlates with coherence violation burst length | Unit test: burst of 5 violations scores higher than isolated violation |
| S4-3 | `GateDecision::Accept` frames produce no AOL detections | Unit test: all-accept history produces empty AOL list |
| S4-4 | SNN temporal encoding respects refractory period | Unit test: two violations within `refractory_period_ms` produce single spike |
| S4-5 | `GateDecision::ForcedAccept` (ADR-032) maps to AOL with moderate score | Unit test: forced accept frames flagged but not at max anomaly score |
### 6.5 Stage V: Pose Interrogation
| ID | Criterion | Test Method |
|----|-----------|-------------|
| S5-1 | `PoseInterrogator::interrogate()` returns attention weights over CSI history | Unit test: history of 50 frames produces 50 attention weights summing to 1.0 |
| S5-2 | Top-k candidates are the highest-attention frames | Unit test: verify `top_candidates` indices correspond to highest `attention_weights` |
| S5-3 | Cross-references link correct stage numbers | Unit test: verify `from_stage` and `to_stage` are in [1..6] |
| S5-4 | Empty history returns empty probe results | Unit test: empty `csi_history` produces zero candidates |
### 6.6 Stage VI: Person Partitioning
| ID | Criterion | Test Method |
|----|-----------|-------------|
| S6-1 | `PersonPartitioner::partition()` separates two well-separated embedding clusters into two partitions | Unit test: two Gaussian clusters with distance > 5 sigma -> two partitions |
| S6-2 | Each partition has a centroid embedding of correct dimensionality | Unit test: verify centroid length matches config |
| S6-3 | `separation_strength` (MinCut value) is positive for distinct persons | Unit test: verify separation_strength > 0.0 |
| S6-4 | Single-person scenario produces exactly one partition | Unit test: single cluster -> one partition |
| S6-5 | Partition `member_entries` indices are non-overlapping and exhaustive | Unit test: union of all member entries covers all input frames |
### 6.7 Cross-Session Convergence
| ID | Criterion | Test Method |
|----|-----------|-------------|
| C-1 | `MultiViewerConvergence::match_across_rooms()` returns positive score for same person in two rooms | Unit test: inject same embedding trail into two room sessions, verify score > threshold |
| C-2 | Different persons in different rooms produce score below threshold | Unit test: inject distinct embedding trails, verify score < threshold |
| C-3 | `convergent_stages` identifies the stage with highest cross-room agreement | Unit test: make Stage I embeddings identical, others random, verify Stage I in convergent_stages |
| C-4 | `consensus_embedding` has correct dimensionality when convergence succeeds | Unit test: verify consensus embedding length on successful match |
| C-5 | Threshold parameter is respected (no matches below threshold) | Unit test: set threshold to 0.99, verify only near-identical sessions match |
### 6.8 End-to-End Pipeline
| ID | Criterion | Test Method |
|----|-----------|-------------|
| E-1 | `WifiCrvSession::process_frame()` returns `WifiCrvOutput` with all 6 stage embeddings populated | Integration test: process 10 synthetic frames, verify 6 non-empty embeddings per frame |
| E-2 | Total pipeline latency < 5 ms per frame on x86 host | Benchmark: process 1000 frames, verify p95 latency < 5 ms |
| E-3 | Pipeline handles missing pose hypothesis gracefully (Stage V skipped or uses default) | Unit test: pass `None` for pose_hypothesis, verify no panic and output is valid |
| E-4 | Pipeline handles empty mesh (single AP) without panic | Unit test: single-node mesh produces valid output with degenerate Stage III |
| E-5 | Session state accumulates across frames (Stage V history grows) | Unit test: process 50 frames, verify Stage V candidate count increases |
---
## 7. Consequences
### 7.1 Positive
- **Structured pipeline formalization**: The 6-stage CRV mapping provides a principled progressive refinement structure for the WiFi sensing pipeline, making the data flow explicit and each stage independently testable.
- **Cross-room identity without cameras**: CRV convergence analysis provides a mathematically grounded mechanism for matching person identities across AP clusters in different rooms, using only RF embeddings.
- **Noise separation as first-class concept**: Mapping coherence gating to CRV Stage IV (AOL detection) elevates noise separation from an implementation detail to a core architectural stage with its own embedding and temporal model.
- **Hyperbolic embeddings for gestalt hierarchy**: The Poincare ball embedding for Stage I captures the hierarchical RF environment taxonomy (Manmade > structural multipath, Natural > diffuse scattering, etc.) with exponentially less distortion than Euclidean space.
- **Reuse of ruvector ecosystem**: All seven ruvector crates are exercised through a single unified abstraction, maximizing the return on the existing ruvector integration (ADR-016).
- **No new external dependencies**: `ruvector-crv` is already a workspace dependency in `wifi-densepose-ruvector/Cargo.toml`. This ADR adds only new Rust source files.
### 7.2 Negative
- **Abstraction overhead**: The CRV stage mapping adds a layer of indirection over the existing signal processing pipeline. Each stage wrapper must translate between WiFi domain types and CRV types, adding code that could be a maintenance burden if the mapping proves ill-fitted.
- **Dimensional mismatch**: `ruvector-crv` defaults to 384 dimensions; AETHER embeddings (ADR-024) use 128 dimensions. The `WifiCrvConfig` overrides this, but encoder behavior at non-default dimensionality must be validated.
- **SNN overhead**: The Stage IV SNN temporal encoder adds per-frame computation for spike train simulation. On embedded targets (ESP32), this may exceed the 50 ms frame budget. Initial deployment is host-side only (aggregator, not firmware).
- **Convergence false positives**: Cross-room identity matching via embedding similarity may produce false matches for persons with similar body types and movement patterns in similar room geometries. Temporal proximity constraints (from ADR-030) are required to bound the false positive rate.
- **Testing complexity**: Six stages with independent encoders and a cross-session convergence layer require a comprehensive test matrix. The acceptance criteria in Section 6 define 30+ individual test cases.
### 7.3 Risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| Poincare ball embedding unstable at boundary (norm approaching 1.0) | Medium | NaN propagation through pipeline | Clamp norm to 0.95 in `CsiGestaltClassifier`; add norm assertion in test suite |
| GNN encoder too slow for real-time mesh topology updates | Low | Stage III becomes bottleneck | Cache topology embedding; only recompute on node geometry change (rare) |
| SNN refractory period too short for 20 Hz coherence gate | Medium | False AOL detections at frame boundaries | Tune `refractory_period_ms` to match frame interval (50 ms) in `WifiCrvConfig` defaults |
| Cross-room convergence threshold too permissive | Medium | False identity matches across rooms | Default threshold 0.75 is conservative; ADR-030 temporal proximity constraint (<60s) adds second guard |
| MinCut partitioning produces too many or too few person clusters | Medium | Person count mismatch | Use expected person count hint (from occupancy detector) as MinCut constraint |
| CRV abstraction becomes tech debt if mapping proves poor fit | Low | Code removed in future ADR | All CRV code in isolated `crv` module; can be removed without affecting existing pipeline |
---
## 8. Related ADRs
| ADR | Relationship |
|-----|-------------|
| ADR-016 (RuVector Integration) | **Extended**: All 5 original ruvector crates plus `ruvector-crv` and `ruvector-gnn` now exercised through CRV pipeline |
| ADR-017 (RuVector Signal+MAT) | **Extended**: Signal processing outputs from ADR-017 feed into CRV Stages I-II |
| ADR-024 (AETHER Embeddings) | **Consumed**: Per-viewpoint AETHER 128-d embeddings are the representation fed into CRV stages |
| ADR-029 (RuvSense Multistatic) | **Extended**: Multistatic mesh topology encoded as CRV Stage III; TDM frames are the input to Stage I |
| ADR-030 (Persistent Field Model) | **Extended**: Field model history serves as the Stage V interrogation corpus; cross-room tracker bridges to CRV convergence |
| ADR-031 (RuView Viewpoint Fusion) | **Complementary**: RuView fuses viewpoints within a room; CRV convergence matches identities across rooms |
| ADR-032 (Mesh Security) | **Consumed**: Authenticated beacons and frame integrity (ADR-032) ensure CRV Stage IV AOL detection reflects genuine signal quality, not spoofed frames |
---
## 9. References
1. Swann, I. (1996). "Remote Viewing: The Real Story." Self-published manuscript. (Original CRV protocol documentation.)
2. Smith, P. H. (2005). "Reading the Enemy's Mind: Inside Star Gate, America's Psychic Espionage Program." Tom Doherty Associates.
3. Nickel, M. & Kiela, D. (2017). "Poincare Embeddings for Learning Hierarchical Representations." NeurIPS 2017.
4. Kipf, T. N. & Welling, M. (2017). "Semi-Supervised Classification with Graph Convolutional Networks." ICLR 2017.
5. Maass, W. (1997). "Networks of Spiking Neurons: The Third Generation of Neural Network Models." Neural Networks, 10(9):1659-1671.
6. Stoer, M. & Wagner, F. (1997). "A Simple Min-Cut Algorithm." Journal of the ACM, 44(4):585-591.
7. `ruvector-crv` v0.1.1. https://crates.io/crates/ruvector-crv
8. `ruvector-attention` v2.0. https://crates.io/crates/ruvector-attention
9. `ruvector-gnn` v2.0.1. https://crates.io/crates/ruvector-gnn
10. `ruvector-mincut` v2.0.1. https://crates.io/crates/ruvector-mincut
11. Geng, J. et al. (2023). "DensePose From WiFi." arXiv:2301.00250.
12. ADR-016 through ADR-032 (internal).



@@ -1,389 +0,0 @@
# RuView: Viewpoint-Integrated Enhancement for WiFi DensePose Fidelity
**Date:** 2026-03-02
**Scope:** Sensing-first RF mode design, multistatic geometry, ESP32 mesh architecture, Cognitum v1 integration, IEEE 802.11bf alignment, RuVector pipeline mapping, and three-metric acceptance suite.
---
## 1. Abstract and Motivation
WiFi-based dense human pose estimation faces three persistent fidelity bottlenecks that limit practical deployment:
1. **Pose jitter.** Single-viewpoint systems exhibit 3-8 cm RMS joint error, driven by body self-occlusion and depth ambiguity along the RF propagation axis. Limb positions that are equidistant from the single receiver produce identical CSI perturbations, collapsing a 3D pose into a degenerate 2D projection.
2. **Multi-person ambiguity.** With one receiver, overlapping Fresnel zones from two subjects produce superimposed CSI signals. State-of-the-art trackers report 0.3-2 identity swaps per minute in single-receiver configurations, rendering continuous tracking unreliable beyond 30-second windows.
3. **Vital sign noise floor.** Breathing detection requires resolving chest displacements of 1-5 mm at 3+ meter range. A single bistatic link captures respiratory motion only when the subject falls within its Fresnel zone and moves along its sensitivity axis. Off-axis breathing is invisible.
The core insight behind RuView is that **upgrading observability beats inventing new WiFi standards**. Rather than waiting for wider bandwidth hardware or higher carrier frequencies, RuView exploits the one fidelity lever that scales with commodity equipment deployed today: geometric viewpoint diversity.
RuView -- RuVector Viewpoint-Integrated Enhancement -- is a sensing-first RF mode that rides on existing silicon (ESP32-S3), existing bands (2.4/5 GHz), and existing regulations (Part 15 unlicensed). Its principal contribution is **cross-viewpoint embedding fusion via ruvector-attention**, where per-viewpoint AETHER embeddings (ADR-024) are fused through a geometric-bias attention mechanism that learns which viewpoint combinations are informative for each body region.
Three fidelity levers govern WiFi sensing resolution: bandwidth, carrier frequency, and viewpoints. RuView focuses on the third -- the only lever that improves all three bottlenecks simultaneously without hardware upgrades.
---
## 2. Three Fidelity Levers: SOTA Analysis
### 2.1 Bandwidth
Channel impulse response (CIR) features separate multipath components by time-of-arrival. Multipath separability is governed by the minimum resolvable delay:
delta_tau_min = 1 / BW
| Standard | Bandwidth | Min Delay | Path Separation |
|----------|-----------|-----------|-----------------|
| 802.11n HT20 | 20 MHz | 50 ns | 15.0 m |
| 802.11ac VHT80 | 80 MHz | 12.5 ns | 3.75 m |
| 802.11ac VHT160 | 160 MHz | 6.25 ns | 1.87 m |
| 802.11be EHT320 | 320 MHz | 3.13 ns | 0.94 m |
Wider channels push the optimal feature domain from frequency (raw subcarrier CSI) toward time (CIR peaks), because multipath components become individually resolvable. At 20 MHz the entire room collapses into a single CIR cluster; at 160 MHz, distinct reflectors emerge as separate peaks.
ESP32-S3 operates at 20 MHz (HT20). This constrains RuView to frequency-domain CSI features, motivating the use of multiple viewpoints to recover spatial information that bandwidth alone cannot provide.
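The table values follow directly from delta_tau_min = 1/BW and the corresponding path separation c * delta_tau_min; a quick numeric check:

```rust
/// Minimum resolvable multipath delay (seconds) for a given bandwidth (Hz).
fn min_delay_s(bandwidth_hz: f64) -> f64 {
    1.0 / bandwidth_hz
}

/// Corresponding path-length separation in meters (c * delta_tau_min).
fn path_separation_m(bandwidth_hz: f64) -> f64 {
    const C: f64 = 299_792_458.0; // speed of light, m/s
    C * min_delay_s(bandwidth_hz)
}

fn main() {
    // HT20: 20 MHz -> 50 ns -> ~15 m, matching the table.
    println!("HT20:   {:.2} m", path_separation_m(20e6));
    // EHT320: 320 MHz -> ~3.13 ns -> ~0.94 m.
    println!("EHT320: {:.2} m", path_separation_m(320e6));
}
```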
**References:** SpotFi (Kotaru et al., SIGCOMM 2015); IEEE 802.11bf sensing mode (2024).
### 2.2 Carrier Frequency
Phase sensitivity to displacement follows:
delta_phi = (4 * pi / lambda) * delta_d
| Band | Wavelength | Phase Shift per 1 mm | Wall Penetration |
|------|-----------|---------------------|-----------------|
| 2.4 GHz | 12.5 cm | 0.10 rad | Excellent (3+ walls) |
| 5 GHz | 6.0 cm | 0.21 rad | Moderate (1-2 walls) |
| 60 GHz | 5.0 mm | 2.51 rad | Line-of-sight only |
Higher carrier frequencies provide sharper motion sensitivity but sacrifice penetration. At 60 GHz (802.11ad), micro-Doppler signatures resolve individual heartbeats, but the signal cannot traverse a single drywall partition.
Fresnel zone radius at each band governs the sensing-sensitive region:
r_n = sqrt(n * lambda * d1 * d2 / (d1 + d2))
At 2.4 GHz with a 3 m link distance, the first Fresnel zone radius at the link midpoint is 0.31 m (a sensitivity region 0.61 m across) -- broad enough for macro-motion detection but poor for localizing specific body parts. At 5 GHz the radius shrinks to 0.21 m (0.42 m across), improving localization at the cost of coverage.
RuView currently targets 2.4 GHz (ESP32-S3) and 5 GHz (Cognitum path), compensating for coarse per-link localization with viewpoint diversity.
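Both formulas are easy to check numerically; this sketch evaluates the per-millimeter phase shift at 2.4 GHz and the first Fresnel zone at the midpoint of a 3 m link (d1 = d2 = 1.5 m):

```rust
use std::f64::consts::PI;

/// Phase shift (radians) for a reflector displacement delta_d (m)
/// at wavelength lambda (m): delta_phi = (4 * pi / lambda) * delta_d.
fn phase_shift_rad(lambda_m: f64, delta_d_m: f64) -> f64 {
    4.0 * PI / lambda_m * delta_d_m
}

/// n-th Fresnel zone radius (m) at distances d1, d2 from the endpoints:
/// r_n = sqrt(n * lambda * d1 * d2 / (d1 + d2)).
fn fresnel_radius_m(n: f64, lambda_m: f64, d1_m: f64, d2_m: f64) -> f64 {
    (n * lambda_m * d1_m * d2_m / (d1_m + d2_m)).sqrt()
}

fn main() {
    // 2.4 GHz (lambda ~= 12.5 cm): ~0.10 rad per 1 mm displacement.
    println!("{:.2} rad per mm", phase_shift_rad(0.125, 0.001));
    // First Fresnel zone at the midpoint of a 3 m link, 2.4 GHz:
    // radius ~0.31 m, so the sensitive region is ~0.61 m across.
    let r1 = fresnel_radius_m(1.0, 0.125, 1.5, 1.5);
    println!("r1 = {:.2} m ({:.2} m across)", r1, 2.0 * r1);
}
```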
**References:** FarSense (Zeng et al., MobiCom 2019); WiGest (Abdelnasser et al., 2015).
### 2.3 Viewpoints (RuView Core Contribution)
A single-viewpoint system suffers from a fundamental geometric limitation: body self-occlusion removes information that no amount of signal processing can recover. A left arm behind the torso is invisible to a receiver directly in front of the subject.
Multistatic geometry addresses this by creating an N_tx x N_rx virtual antenna array with spatial diversity gain. With N nodes in a mesh, each transmitting while all others receive, the system captures N x (N-1) bistatic CSI observations per TDM cycle.
**Geometric Diversity Index (GDI).** Viewpoint quality is quantified as:
GDI = (1/N) * sum_i min_{j != i} |theta_i - theta_j|
where theta_i is the azimuth of the i-th bistatic pair relative to the room center. Optimal placement distributes receivers uniformly (GDI approaches pi/N for N receivers). Degenerate placement clusters all receivers in one corner (GDI approaches 0).
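A literal implementation of the GDI formula (ignoring angular wrap-around for simplicity) shows the uniform-vs-clustered contrast:

```rust
/// Geometric Diversity Index: mean nearest-neighbor angular spacing over
/// bistatic-pair azimuths (radians). Literal sketch of the formula above;
/// circular wrap-around of angles is ignored for simplicity.
fn gdi(thetas: &[f64]) -> f64 {
    let n = thetas.len();
    let sum: f64 = thetas
        .iter()
        .enumerate()
        .map(|(i, &ti)| {
            thetas
                .iter()
                .enumerate()
                .filter(|&(j, _)| j != i)
                .map(|(_, &tj)| (ti - tj).abs())
                .fold(f64::INFINITY, f64::min)
        })
        .sum();
    sum / n as f64
}

fn main() {
    use std::f64::consts::PI;
    // Uniform placement of 4 viewpoints over a half circle: GDI = pi/4.
    let uniform: Vec<f64> = (0..4).map(|i| i as f64 * PI / 4.0).collect();
    // Degenerate placement (all clustered in one corner): GDI near 0.
    let clustered = [0.00, 0.01, 0.02, 0.03];
    println!("uniform: {:.3}, clustered: {:.3}", gdi(&uniform), gdi(&clustered));
    assert!(gdi(&uniform) > gdi(&clustered));
}
```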
**Cramer-Rao Lower Bound for pose estimation.** With N independent viewpoints, CRLB decreases as O(1/N). With correlated viewpoints:
CRLB ~ O(1/N_eff), where N_eff = N * (1 - rho_bar)
and rho_bar is the mean pairwise correlation between viewpoint CSI streams. Maximizing GDI minimizes rho_bar.
**Multipath separability x viewpoints.** Joint improvement follows a product law:
Effective_resolution ~ BW * N_viewpoints * sin(angular_spread)
This means even at 20 MHz bandwidth, six well-placed viewpoints with 60-degree angular spread provide effective resolution comparable to a single 120 MHz viewpoint -- at a fraction of the hardware cost.
**References:** Person-in-WiFi 3D (Yan et al., CVPR 2024); bistatic MIMO radar theory (Li and Stoica, 2007); DGSense (Zhou et al., 2025).
---
## 3. Multistatic Array Theory
### 3.1 Virtual Aperture
N transmitters and M receivers create N x M virtual antenna elements. For an ESP32 mesh where each of 6 nodes transmits in turn while 5 others receive:
Virtual elements = 6 * 5 = 30 bistatic pairs
The virtual aperture diameter equals the maximum baseline between any two nodes. In a 5m x 5m room with nodes at the perimeter, D_aperture ~ 7m (diagonal), yielding angular resolution:
delta_theta ~ lambda / D_aperture = 0.125 / 7 ~ 0.018 rad ~ 1.0 degree at 2.4 GHz
This exceeds the angular resolution of any single-antenna receiver by an order of magnitude.
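The aperture arithmetic above, as a one-liner check (converting the lambda/D estimate from radians to degrees):

```python
import math

C = 3e8  # speed of light, m/s

def angular_resolution_deg(freq_hz, aperture_m):
    """delta_theta ~ lambda / D, converted from radians to degrees."""
    return math.degrees((C / freq_hz) / aperture_m)

# 7 m diagonal aperture at 2.4 GHz (lambda = 0.125 m).
print(round(angular_resolution_deg(2.4e9, 7.0), 1))  # -> 1.0
```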
### 3.2 Time-Division Sensing Protocol
TDM assigns each node an exclusive transmit slot while all other nodes receive. With N nodes, each gets 1/N duty cycle:
Per-viewpoint rate = f_aggregate / N
At 120 Hz aggregate TDM cycle rate with 6 nodes: 20 Hz per bistatic pair.
**Synchronization.** NTP provides only millisecond precision, insufficient for phase-coherent fusion. RuView uses beacon-based synchronization:
- Coordinator node broadcasts a sync beacon at the start of each TDM cycle
- Peripheral nodes align their slot timing to the beacon with crystal precision (~20-50 ppm)
- At 120 Hz cycle rate (8.33 ms period), 50 ppm drift produces 0.42 microsecond error
- This is well within the 802.11n symbol duration (3.2 microseconds), acceptable for feature-level and embedding-level fusion
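The synchronization budget in the bullets above reduces to a short calculation:

```python
def drift_error_us(cycle_hz, ppm):
    """Worst-case timing error over one TDM cycle from crystal drift."""
    period_us = 1e6 / cycle_hz
    return period_us * ppm * 1e-6

SYMBOL_US = 3.2  # 802.11n symbol duration

err = drift_error_us(120, 50)   # 120 Hz cycle, 50 ppm crystal
print(round(err, 2))            # -> 0.42
print(err < SYMBOL_US)          # well within one symbol -> True
```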
### 3.3 Cross-Viewpoint Fusion Strategies
| Tier | Fusion Level | Requires | Benefit | ESP32 Feasible |
|------|-------------|----------|---------|----------------|
| 1 | Decision-level | Labels only | Majority vote on pose predictions | Yes |
| 2 | Feature-level | Aligned features | Better than any single viewpoint | Yes (ADR-012) |
| 3 | **Embedding-level** | AETHER embeddings | **Learns what to fuse per body region** | **Yes (RuView)** |
Decision-level fusion (Tier 1) discards information by reducing each viewpoint to a final prediction before combination. Feature-level fusion (Tier 2, current ADR-012) concatenates or pools intermediate features but applies uniform weighting. RuView operates at Tier 3: each viewpoint produces an AETHER embedding (ADR-024), and learned cross-viewpoint attention determines which viewpoint contributes most to each body part.
---
## 4. ESP32 Multistatic Array Path
### 4.1 Architecture Extension from ADR-012
ADR-012 defines feature-level fusion: amplitude, phase, and spectral features per node are aggregated via max/mean pooling across nodes. RuView extends this to embedding-level fusion:
Per Node: CSI --> Signal Processing (ADR-014) --> AETHER Embedding (ADR-024)
Aggregator: [emb_1, emb_2, ..., emb_N] --> RuView Attention --> Fused Embedding
Output: Fused Embedding --> DensePose Head --> 17 Keypoints + UV Maps
Each node runs the signal processing pipeline locally (conjugate multiplication, Hampel filtering, spectrogram extraction) and transmits a 128-dimensional AETHER embedding to the aggregator, rather than raw CSI. This reduces per-node bandwidth from ~14 KB/frame (56 subcarriers x 2 antennas x 2 components x 64 bytes) to 512 bytes/frame (128 floats x 4 bytes).
### 4.2 Time-Scheduled Captures
The TDM coordinator runs on the aggregator (laptop or Raspberry Pi). Protocol per cycle:
Beacon --> Slot_1 (node 1 TX, all others RX) --> Slot_2 --> ... --> Slot_N --> Repeat
Each slot requires approximately 1.4 ms (one 802.11n LLTF frame plus guard interval). With 6 nodes: 8.4 ms cycle duration, yielding 119 Hz aggregate rate and 19.8 Hz per bistatic pair.
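The slot arithmetic, spelled out:

```python
# TDM cycle budget for the 6-node mesh (1.4 ms per 802.11n LLTF slot
# including guard interval -- figures from the protocol description above).
slot_ms, nodes = 1.4, 6

cycle_ms = slot_ms * nodes          # full TDM cycle
aggregate_hz = 1000 / cycle_ms      # cycles per second
per_pair_hz = aggregate_hz / nodes  # rate seen by each bistatic pair

print(round(cycle_ms, 1), round(aggregate_hz), round(per_pair_hz, 1))
# -> 8.4 119 19.8
```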
### 4.3 Central Aggregator Embedding Fusion
The aggregator receives per-viewpoint AETHER embeddings (d=128 each) and applies RuView cross-viewpoint attention:
Q = W_q * [emb_1; ...; emb_N] (N x d)
K = W_k * [emb_1; ...; emb_N] (N x d)
V = W_v * [emb_1; ...; emb_N] (N x d)
A = softmax((Q * K^T + G_bias) / sqrt(d))
RuView_out = A * V
G_bias is a learnable geometric bias matrix encoding bistatic pair geometry. Entry G[i,j] = f(theta_ij, d_ij) encodes the angular separation and distance between viewpoint pair (i,j). This bias ensures geometrically complementary viewpoints (large angular separation) receive higher attention weights than redundant ones.
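The fusion step can be sketched in a few lines of NumPy. This is a shape-level illustration only: the projection weights and geometric bias are random stand-ins for learned parameters, and `ruview_attention` is an illustrative name, not the crate API.

```python
import numpy as np

def ruview_attention(emb, g_bias, w_q, w_k, w_v):
    """Cross-viewpoint scaled dot-product attention with geometric bias.

    emb:    (N, d) per-viewpoint AETHER embeddings
    g_bias: (N, N) geometric bias matrix G
    """
    q, k, v = emb @ w_q, emb @ w_k, emb @ w_v
    scores = (q @ k.T + g_bias) / np.sqrt(emb.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)   # row-wise softmax
    return a @ v                        # (N, d) attended output

rng = np.random.default_rng(0)
n, d = 6, 128
emb = rng.standard_normal((n, d))
g_bias = rng.standard_normal((n, n))
w = [rng.standard_normal((d, d)) * 0.01 for _ in range(3)]
out = ruview_attention(emb, g_bias, *w)
print(out.shape)  # -> (6, 128)
```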
### 4.4 Bill of Materials
| Item | Qty | Unit Cost | Total | Notes |
|------|-----|-----------|-------|-------|
| ESP32-S3-DevKitC-1 | 6 | $10 | $60 | Full multistatic mesh |
| USB hub + cables | 1+6 | $24 | $24 | Power and serial debug |
| WiFi router (any) | 1 | $0 | $0 | Existing infrastructure |
| Aggregator (laptop/RPi) | 1 | $0 | $0 | Existing hardware |
| **Total** | | | **$84** | **~$14 per viewpoint** |
---
## 5. Cognitum v1 Path
### 5.1 Cognitum as Baseband and Embedding Engine
Cognitum v1 provides a gating kernel for intelligent signal routing, pairable with wider-bandwidth RF front ends (e.g., LimeSDR Mini at ~$200). The architecture:
RF Front End (20-160 MHz BW) --> Cognitum Baseband --> AETHER Embedding --> RuView Fusion
This path overcomes the ESP32's 20 MHz bandwidth limitation, enabling CIR-domain features alongside frequency-domain CSI. At 160 MHz bandwidth, individual multipath reflectors become resolvable, allowing Cognitum to separate direct-path and reflected-path contributions before embedding.
### 5.2 AETHER Contrastive Embedding (ADR-024)
Per-viewpoint AETHER embeddings are produced by the CsiToPoseTransformer backbone:
- Input: sanitized CSI frame (56 subcarriers x 2 antennas x 2 components)
- Backbone: cross-attention transformer producing [17 x d_model] body part features
- Projection: linear head maps pooled features to 128-d normalized embedding
- Training: VICReg-style contrastive loss with three terms -- invariance (same pose from different viewpoints maps nearby), variance (embeddings use full capacity), covariance (embedding dimensions are decorrelated)
- Augmentation: subcarrier dropout (p=0.1), phase noise injection (sigma=0.05 rad), temporal jitter (+-2 frames)
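The three VICReg-style loss terms can be sketched as follows (an unweighted NumPy illustration under the definitions above; the actual training code and term weights live in the training pipeline):

```python
import numpy as np

def vicreg_terms(z_a, z_b, eps=1e-4):
    """Invariance, variance, and covariance terms, VICReg-style.

    z_a, z_b: (batch, d) embeddings of the same poses seen from
    two different viewpoints (positive pairs).
    """
    # Invariance: same pose from different viewpoints maps nearby.
    invariance = np.mean((z_a - z_b) ** 2)
    # Variance: hinge keeps per-dimension std from collapsing below 1.
    std = np.sqrt(z_a.var(axis=0) + eps)
    variance = np.mean(np.maximum(0.0, 1.0 - std))
    # Covariance: penalize off-diagonal covariance (decorrelate dims).
    zc = z_a - z_a.mean(axis=0)
    cov = (zc.T @ zc) / (len(z_a) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    covariance = np.sum(off_diag ** 2) / z_a.shape[1]
    return invariance, variance, covariance

rng = np.random.default_rng(1)
za = rng.standard_normal((32, 128))
zb = za + 0.05 * rng.standard_normal((32, 128))  # near-identical views
inv, var, cov = vicreg_terms(za, zb)
print(inv < 0.01)  # matching views -> small invariance term -> True
```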
### 5.3 RuVector Graph Memory
The HNSW index (ADR-004) stores environment fingerprints as AETHER embeddings. Graph edges encode temporal adjacency (consecutive frames from the same track) and spatial adjacency (observations from the same room region). Query protocol: given a new CSI frame, compute its AETHER embedding, retrieve k nearest HNSW neighbors, and return associated pose, identity, and room region. Updates are incremental -- new observations insert into the graph without full reindexing.
### 5.4 Coherence-Gated Updates
Environment changes (furniture moved, doors opened) corrupt stored fingerprints. RuView applies coherence gating:
coherence = |E[exp(j * delta_phi_t)]| over T frames
if coherence > tau_coh (typically 0.7):
update_environment_model(current_embedding)
else:
mark_as_transient()
The complex mean of inter-frame phase differences measures environmental stability. Transient events (someone walking past, door opening) produce low coherence and are excluded from the environment model. This ensures multi-day stability: furniture rearrangement triggers a brief transient period, then the model reconverges.
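The coherence gate above is a few lines of NumPy (tau_coh = 0.7 as in the pseudocode):

```python
import numpy as np

TAU_COH = 0.7  # typical threshold from the gating rule above

def coherence(delta_phi):
    """|E[exp(j * delta_phi_t)]| over a window of inter-frame phase diffs."""
    return np.abs(np.mean(np.exp(1j * np.asarray(delta_phi))))

rng = np.random.default_rng(2)
stable = 0.05 * rng.standard_normal(200)       # small phase jitter
transient = rng.uniform(-np.pi, np.pi, 200)    # chaotic phase (motion)
print(coherence(stable) > TAU_COH)     # -> True  (update environment model)
print(coherence(transient) > TAU_COH)  # -> False (mark as transient)
```

Stable phase phasors add coherently (magnitude near 1); random phases average toward zero, so transient events fall below the gate.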
---
## 6. IEEE 802.11bf Integration Points
IEEE 802.11bf (WLAN Sensing, published 2024) defines sensing procedures using existing WiFi frames. Key mechanisms:
- **Sensing Measurement Setup**: Negotiation between sensing initiator and responder for measurement parameters
- **Sensing Measurement Report**: Structured CSI feedback with standardized format
- **Trigger-Based Ranging (TBR)**: Time-of-flight measurement for distance estimation between stations
RuView maps directly onto 802.11bf constructs:
| RuView Component | 802.11bf Equivalent |
|-----------------|-------------------|
| TDM sensing protocol | Sensing Measurement sessions |
| Per-viewpoint CSI capture | Sensing Measurement Reports |
| Cross-viewpoint triangulation | TBR-based distance matrix |
| Geometric bias matrix | Station geometry from Measurement Setup |
Forward compatibility: the RuView TDM protocol is designed to be expressible within 802.11bf frame structures. When commodity APs implement 802.11bf sensing (expected 2027-2028 with WiFi 7/8 chipsets), the ESP32 mesh can transition to standards-compliant sensing without architectural changes.
Current gap: no commodity APs implement 802.11bf sensing yet. The ESP32 mesh provides equivalent functionality today using application-layer coordination.
---
## 7. RuVector Pipeline for RuView
Each of the five ruvector v2.0.4 crates maps to a new cross-viewpoint operation.
### 7.1 ruvector-mincut: Cross-Viewpoint Subcarrier Consensus
Current usage (ADR-017): per-viewpoint subcarrier selection via motion sensitivity scoring. RuView extension: consensus-sensitive subcarrier set across viewpoints.
- Build graph: nodes = subcarriers, edges weighted by cross-viewpoint sensitivity correlation
- Min-cut partitions into three classes: globally sensitive (correlated across all viewpoints), locally sensitive (informative for specific viewpoints), and insensitive (noise-dominated)
- Use globally sensitive set for cross-viewpoint features; locally sensitive set for per-viewpoint refinement
### 7.2 ruvector-attn-mincut: Viewpoint Attention Gating
Current usage: gate spectrogram frames by attention weight. RuView extension: gate viewpoints by geometric diversity.
- Suppress viewpoints that are geometrically redundant (similar angle, short baseline)
- Apply attn_mincut with viewpoints as tokens and embedding features as the attention dimension
- Lambda parameter controls suppression strength: 0.1 (mild, keep most viewpoints) to 0.5 (aggressive, suppress redundant viewpoints)
### 7.3 ruvector-temporal-tensor: Multi-Viewpoint Compression
Current usage: tiered compression for single-stream CSI buffers. RuView extension: independent tier policies per viewpoint.
| Tier | Bit Depth | Assignment | Latency |
|------|-----------|------------|---------|
| Hot | 8-bit | Primary viewpoint (highest SNR) | Real-time |
| Warm | 5-7 bit | Secondary viewpoints | Real-time |
| Cold | 3-bit | Historical cross-viewpoint fusions | Archival |
### 7.4 ruvector-solver: Cross-Viewpoint Triangulation
Current usage (ADR-017): TDoA equations for single multi-AP scenarios. RuView extension: full bistatic geometry system solving.
N viewpoints yield N(N-1)/2 bistatic pairs, producing an overdetermined system of range equations. The NeumannSolver iterates with O(sqrt(n)) convergence, solving for 3D body segment positions rather than point targets. The overdetermination provides robustness: individual noisy bistatic pairs are effectively averaged out.
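To make the overdetermination concrete, here is a sketch that fits a single 2D point target to a full set of bistatic range equations via Gauss-Newton least squares (RuView's NeumannSolver solves the analogous system for body segments; the node layout and solver here are illustrative):

```python
import numpy as np

def bistatic_range(p, tx, rx):
    """Sum of TX->target and target->RX path lengths."""
    return np.linalg.norm(p - tx) + np.linalg.norm(p - rx)

def solve_position(pairs, measured, p0, iters=50):
    """Gauss-Newton fit over an overdetermined bistatic range system."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = np.array([bistatic_range(p, tx, rx) for tx, rx in pairs])
        # Jacobian rows: unit(p - tx) + unit(p - rx)
        jac = np.array([
            (p - tx) / np.linalg.norm(p - tx) + (p - rx) / np.linalg.norm(p - rx)
            for tx, rx in pairs
        ])
        step, *_ = np.linalg.lstsq(jac, measured - r, rcond=None)
        p = p + step
    return p

# 6 perimeter nodes -> 15 bistatic pairs, 2 unknowns: heavily overdetermined.
nodes = [np.array(v, dtype=float)
         for v in [(0, 0), (5, 0), (5, 5), (0, 5), (2.5, 0), (0, 2.5)]]
pairs = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
target = np.array([1.8, 3.1])
measured = np.array([bistatic_range(target, tx, rx) for tx, rx in pairs])
est = solve_position(pairs, measured, p0=(2.5, 2.5))
print(np.allclose(est, target, atol=1e-3))  # -> True
```

With noisy measurements the same least-squares step averages errors across the 15 pairs, which is the robustness property claimed above.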
### 7.5 ruvector-attention: RuView Core Fusion
This is the heart of RuView. Cross-viewpoint scaled dot-product attention:
Input: X = [emb_1, ..., emb_N] in R^{N x d}
Q = X * W_q, K = X * W_k, V = X * W_v
A = softmax((Q * K^T + G_bias) / sqrt(d))
output = A * V
G_bias is a learnable geometric bias derived from viewpoint pair geometry (angular separation, baseline distance). This is equivalent to treating each viewpoint as a token in a transformer, with positional encoding replaced by geometric encoding. The output is a single fused embedding that feeds the DensePose regression head.
---
## 8. Three-Metric Acceptance Suite
### 8.1 Metric 1: Joint Error (PCK / OKS)
| Criterion | Threshold | Notes |
|-----------|-----------|-------|
| PCK@0.2 (all 17 keypoints) | >= 0.70 | 20% of torso diameter tolerance |
| PCK@0.2 (torso: shoulders, hips) | >= 0.80 | Core body must be stable |
| Mean OKS | >= 0.50 | COCO-standard evaluation |
| Torso jitter (RMS, 10s windows) | < 3 cm | Temporal stability |
| Per-keypoint max error (95th pctl) | < 15 cm | No catastrophic outliers |
### 8.2 Metric 2: Multi-Person Separation
| Criterion | Threshold | Notes |
|-----------|-----------|-------|
| Number of subjects | 2 | Minimum acceptance scenario |
| Capture rate | 20 Hz | Continuous tracking |
| Track duration | 10 minutes | Without intervention |
| Identity swaps (MOTA ID-switch) | 0 | Zero tolerance over full duration |
| Track fragmentation ratio | < 0.05 | Tracks must not break and reform |
| False track creation rate | 0 per minute | No phantom subjects |
### 8.3 Metric 3: Vital Sign Sensitivity
| Criterion | Threshold | Notes |
|-----------|-----------|-------|
| Breathing rate detection | 6-30 BPM +/- 2 BPM | Stationary subject, 3m range |
| Breathing band SNR | >= 6 dB | In 0.1-0.5 Hz band |
| Heartbeat detection | 40-120 BPM +/- 5 BPM | Aspirational, placement-sensitive |
| Heartbeat band SNR | >= 3 dB | In 0.8-2.0 Hz band (aspirational) |
| Micro-motion resolution | 1 mm chest displacement at 3m | Breathing depth estimation |
### 8.4 Tiered Pass/Fail
| Tier | Requirements | Interpretation |
|------|-------------|---------------|
| **Bronze** | Metric 2 passes | Multi-person tracking works; minimum viable deployment |
| **Silver** | Metrics 1 + 2 pass | Tracking plus pose quality; production candidate |
| **Gold** | All three metrics pass | Tracking, pose, and vitals; full RuView deployment |
---
## 9. RuView vs Alternatives
| Capability | Single ESP32 | Intel 5300 | 6-Node ESP32 + RuView | Cognitum + RF + RuView | Camera DensePose |
|-----------|-------------|------------|----------------------|----------------------|-----------------|
| PCK@0.2 | ~0.20 | ~0.45 | ~0.70 (target) | ~0.80 (target) | ~0.90 |
| Multi-person tracking | None | Poor | Good (target) | Excellent (target) | Excellent |
| Vital sign SNR | 2-4 dB | 6-8 dB | 8-12 dB (target) | 12-18 dB (target) | N/A |
| Hardware cost | $15 | $80 | $84 | ~$300 | $30-200 |
| Privacy | Full | Full | Full | Full | None |
| Through-wall range | 18 m | ~10 m | 18 m per node | Tunable | None |
| Deployment time | 30 min | Hours | 1 hour | Hours | Minutes |
| IEEE 802.11bf ready | No | No | Forward-compatible | Forward-compatible | N/A |
The 6-node ESP32 + RuView configuration achieves 70-80% of camera DensePose accuracy at $84 total cost with complete visual privacy and through-wall capability. The Cognitum path narrows the remaining gap by adding bandwidth diversity.
---
## 10. References
### WiFi Sensing and Pose Estimation
- [DensePose From WiFi](https://arxiv.org/abs/2301.00250) -- Geng, Huang, De la Torre (CMU, 2023)
- [Person-in-WiFi 3D](https://openaccess.thecvf.com/content/CVPR2024/papers/Yan_Person-in-WiFi_3D_End-to-End_Multi-Person_3D_Pose_Estimation_with_Wi-Fi_CVPR_2024_paper.pdf) -- Yan et al. (CVPR 2024)
- [AdaPose: Cross-Site WiFi Pose Estimation](https://ieeexplore.ieee.org/document/10584280) -- Zhou et al. (IEEE IoT Journal, 2024)
- [HPE-Li: Lightweight WiFi Pose Estimation](https://link.springer.com/chapter/10.1007/978-3-031-72904-1_6) -- ECCV 2024
- [DGSense: Domain-Generalized Sensing](https://arxiv.org/abs/2501.12345) -- Zhou et al. (2025)
- [X-Fi: Modality-Invariant Foundation Model](https://openreview.net/forum?id=xfi2025) -- Chen and Yang (ICLR 2025)
- [AM-FM: First WiFi Foundation Model](https://arxiv.org/abs/2602.00001) -- (2026)
- [PerceptAlign: Cross-Layout Pose Estimation](https://arxiv.org/abs/2603.00001) -- Chen et al. (2026)
- [CAPC: Context-Aware Predictive Coding](https://ieeexplore.ieee.org/document/10600001) -- IEEE OJCOMS, 2024
### Signal Processing and Localization
- [SpotFi: Decimeter-Level Localization](https://dl.acm.org/doi/10.1145/2785956.2787487) -- Kotaru et al. (SIGCOMM 2015)
- [FarSense: Pushing WiFi Sensing Range](https://dl.acm.org/doi/10.1145/3300061.3345433) -- Zeng et al. (MobiCom 2019)
- [Widar 3.0: Cross-Domain Gesture Recognition](https://dl.acm.org/doi/10.1145/3300061.3345436) -- Zheng et al. (MobiCom 2019)
- [WiGest: WiFi-Based Gesture Recognition](https://ieeexplore.ieee.org/document/7127672) -- Abdelnasser et al. (2015)
- [CSI-Channel Spatial Decomposition](https://www.mdpi.com/2079-9292/14/4/756) -- Electronics, Feb 2025
### MIMO Radar and Array Theory
- [MIMO Radar with Widely Separated Antennas](https://ieeexplore.ieee.org/document/4350230) -- Li and Stoica (IEEE SPM, 2007)
### Standards and Hardware
- [IEEE 802.11bf: WLAN Sensing](https://www.ieee802.org/11/Reports/tgbf_update.htm) -- Published 2024
- [Espressif ESP-CSI](https://github.com/espressif/esp-csi) -- Official CSI collection tools
- [ESP32-S3 Technical Reference](https://www.espressif.com/sites/default/files/documentation/esp32-s3_technical_reference_manual_en.pdf)
### Project ADRs
- ADR-004: HNSW Vector Search for CSI Fingerprinting
- ADR-012: ESP32 CSI Sensor Mesh for Distributed Sensing
- ADR-014: SOTA Signal Processing Algorithms for WiFi Sensing
- ADR-016: RuVector Training Pipeline Integration
- ADR-017: RuVector Signal and MAT Integration
- ADR-024: Project AETHER -- Contrastive CSI Embedding Model

File diff suppressed because it is too large


@@ -10,7 +10,6 @@ WiFi DensePose turns commodity WiFi signals into real-time human pose estimation
2. [Installation](#installation)
- [Docker (Recommended)](#docker-recommended)
- [From Source (Rust)](#from-source-rust)
- [From crates.io](#from-cratesio-individual-crates)
- [From Source (Python)](#from-source-python)
- [Guided Installer](#guided-installer)
3. [Quick Start](#quick-start)
@@ -20,14 +19,12 @@ WiFi DensePose turns commodity WiFi signals into real-time human pose estimation
- [Simulated Mode (No Hardware)](#simulated-mode-no-hardware)
- [Windows WiFi (RSSI Only)](#windows-wifi-rssi-only)
- [ESP32-S3 (Full CSI)](#esp32-s3-full-csi)
- [ESP32 Multistatic Mesh (Advanced)](#esp32-multistatic-mesh-advanced)
5. [REST API Reference](#rest-api-reference)
6. [WebSocket Streaming](#websocket-streaming)
7. [Web UI](#web-ui)
8. [Vital Sign Detection](#vital-sign-detection)
9. [CLI Reference](#cli-reference)
10. [Training a Model](#training-a-model)
- [CRV Signal-Line Protocol](#crv-signal-line-protocol)
11. [RVF Model Containers](#rvf-model-containers)
12. [Hardware Setup](#hardware-setup)
- [ESP32-S3 Mesh](#esp32-s3-mesh)
@@ -82,41 +79,12 @@ cd wifi-densepose/rust-port/wifi-densepose-rs
# Build
cargo build --release
# Verify (runs 1,100+ tests)
# Verify (runs 700+ tests)
cargo test --workspace
```
The compiled binary is at `target/release/sensing-server`.
### From crates.io (Individual Crates)
All 15 crates are published to crates.io at v0.3.0. Add individual crates to your own Rust project:
```bash
# Core types and traits
cargo add wifi-densepose-core
# Signal processing (includes RuvSense multistatic sensing)
cargo add wifi-densepose-signal
# Neural network inference
cargo add wifi-densepose-nn
# Mass Casualty Assessment Tool
cargo add wifi-densepose-mat
# ESP32 hardware + TDM protocol + QUIC transport
cargo add wifi-densepose-hardware
# RuVector integration (add --features crv for CRV signal-line protocol)
cargo add wifi-densepose-ruvector --features crv
# WebAssembly bindings
cargo add wifi-densepose-wasm
```
See the full crate list and dependency order in [CLAUDE.md](../CLAUDE.md#crate-publishing-order).
### From Source (Python)
```bash
@@ -263,27 +231,6 @@ docker run -p 3000:3000 -p 3001:3001 -p 5005:5005/udp ruvnet/wifi-densepose:late
The ESP32 nodes stream binary CSI frames over UDP to port 5005. See [Hardware Setup](#esp32-s3-mesh) for flashing instructions.
### ESP32 Multistatic Mesh (Advanced)
For higher accuracy with through-wall tracking, deploy 3-6 ESP32-S3 nodes in a **multistatic mesh** configuration. Each node acts as both transmitter and receiver, creating multiple sensing paths through the environment.
```bash
# Start the aggregator with multistatic mode
./target/release/sensing-server --source esp32 --udp-port 5005 --http-port 3000 --ws-port 3001
```
The mesh uses a **Time-Division Multiplexing (TDM)** protocol so nodes take turns transmitting, avoiding self-interference. Key features:
| Feature | Description |
|---------|-------------|
| TDM coordination | Nodes cycle through TX/RX slots (configurable guard intervals) |
| Channel hopping | Automatic 2.4/5 GHz band cycling for multiband fusion |
| QUIC transport | TLS 1.3-encrypted streams on aggregator nodes (ADR-032a) |
| Manual crypto fallback | HMAC-SHA256 beacon auth on constrained ESP32-S3 nodes |
| Attention-weighted fusion | Cross-viewpoint attention with geometric diversity bias |
See [ADR-029](adr/ADR-029-ruvsense-multistatic-sensing-mode.md) and [ADR-032](adr/ADR-032-multistatic-mesh-security-hardening.md) for the full design.
---
## REST API Reference
@@ -422,7 +369,7 @@ The system extracts breathing rate and heart rate from CSI signal fluctuations u
**Requirements:**
- CSI-capable hardware (ESP32-S3 or research NIC) for accurate readings
- Subject within ~3-5 meters of an access point (up to ~8 m with multistatic mesh)
- Subject within ~3-5 meters of an access point
- Relatively stationary subject (large movements mask vital sign oscillations)
**Simulated mode** produces synthetic vital sign data for testing.
@@ -546,26 +493,6 @@ MERIDIAN components (all pure Rust, +12K parameters):
See [ADR-027](adr/ADR-027-cross-environment-domain-generalization.md) for the full design.
### CRV Signal-Line Protocol
The CRV (Coordinate Remote Viewing) signal-line protocol (ADR-033) maps a 6-stage cognitive sensing methodology onto WiFi CSI processing. This enables structured anomaly classification and multi-person disambiguation.
| Stage | CRV Term | WiFi Mapping |
|-------|----------|-------------|
| I | Gestalt | Detrended autocorrelation → periodicity / chaos / transient classification |
| II | Sensory | 6-modality CSI feature encoding (texture, temperature, luminosity, etc.) |
| III | Topology | AP mesh topology graph with link quality weights |
| IV | Coherence | Phase phasor coherence gate (Accept/PredictOnly/Reject/Recalibrate) |
| V | Interrogation | Person-specific signal extraction with targeted subcarrier selection |
| VI | Partition | Multi-person partition with cross-room convergence scoring |
```bash
# Enable CRV in your Cargo.toml
cargo add wifi-densepose-ruvector --features crv
```
See [ADR-033](adr/ADR-033-crv-signal-line-sensing-integration.md) for the full design.
---
## RVF Model Containers
@@ -608,7 +535,7 @@ A 3-6 node ESP32-S3 mesh provides full CSI at 20 Hz. Total cost: ~$54 for a 3-no
**What you need:**
- 3-6x ESP32-S3 development boards (~$8 each)
- A WiFi router (the CSI source)
- A computer running the sensing server (aggregator)
- A computer running the sensing server
**Flashing firmware:**
@@ -630,33 +557,6 @@ python scripts/provision.py --port COM7 \
Replace `192.168.1.20` with the IP of the machine running the sensing server.
**Mesh key provisioning (secure mode):**
For multistatic mesh deployments with authenticated beacons (ADR-032), provision a shared mesh key:
```bash
python scripts/provision.py --port COM7 \
--ssid "YourWiFi" --password "YourPassword" --target-ip 192.168.1.20 \
--mesh-key "$(openssl rand -hex 32)"
```
All nodes in a mesh must share the same 256-bit mesh key for HMAC-SHA256 beacon authentication. The key is stored in ESP32 NVS flash and zeroed on firmware erase.
**TDM slot assignment:**
Each node in a multistatic mesh needs a unique TDM slot ID (0-based):
```bash
# Node 0 (slot 0) — first transmitter
python scripts/provision.py --port COM7 --tdm-slot 0 --tdm-total 3
# Node 1 (slot 1)
python scripts/provision.py --port COM8 --tdm-slot 1 --tdm-total 3
# Node 2 (slot 2)
python scripts/provision.py --port COM9 --tdm-slot 2 --tdm-total 3
```
**Start the aggregator:**
```bash
@@ -667,7 +567,7 @@ python scripts/provision.py --port COM9 --tdm-slot 2 --tdm-total 3
docker run -p 3000:3000 -p 3001:3001 -p 5005:5005/udp ruvnet/wifi-densepose:latest --source esp32
```
See [ADR-018](../docs/adr/ADR-018-esp32-dev-implementation.md), [ADR-029](../docs/adr/ADR-029-ruvsense-multistatic-sensing-mode.md), and [Tutorial #34](https://github.com/ruvnet/wifi-densepose/issues/34).
See [ADR-018](../docs/adr/ADR-018-esp32-dev-implementation.md) and [Tutorial #34](https://github.com/ruvnet/wifi-densepose/issues/34).
### Intel 5300 / Atheros NIC
@@ -726,7 +626,7 @@ docker run -p 3000:3000 -p 3001:3001 ruvnet/wifi-densepose:latest
### Build: Rust compilation errors
Ensure Rust 1.75+ is installed (1.85+ recommended):
Ensure Rust 1.70+ is installed:
```bash
rustup update stable
rustc --version
@@ -756,7 +656,7 @@ No. Consumer WiFi exposes only RSSI (one number per access point), not CSI (56+
Accuracy depends on hardware and environment. With a 3-node ESP32 mesh in a single room, the system tracks 17 COCO keypoints. The core algorithm follows the CMU "DensePose From WiFi" paper ([arXiv:2301.00250](https://arxiv.org/abs/2301.00250)). The MERIDIAN domain generalization system (ADR-027) reduces cross-environment accuracy loss from 40-70% to under 15% via 10-second automatic calibration.
**Q: Does it work through walls?**
Yes. WiFi signals penetrate non-metallic materials (drywall, wood, concrete up to ~30cm). Metal walls/doors significantly attenuate the signal. With a single AP the effective through-wall range is approximately 5 meters. With a 3-6 node multistatic mesh (ADR-029), attention-weighted cross-viewpoint fusion extends the effective range to ~8 meters through standard residential walls.
Yes. WiFi signals penetrate non-metallic materials (drywall, wood, concrete up to ~30cm). Metal walls/doors significantly attenuate the signal. The effective through-wall range is approximately 5 meters.
**Q: How many people can it track?**
Each access point can distinguish ~3-5 people with 56 subcarriers. Multi-AP deployments multiply linearly (e.g., 4 APs cover ~15-20 people). There is no hard software limit; the practical ceiling is signal physics.
@@ -771,7 +671,7 @@ The Rust implementation (v2) is 810x faster than Python (v1) for the full CSI pi
## Further Reading
- [Architecture Decision Records](../docs/adr/) - 33 ADRs covering all design decisions
- [Architecture Decision Records](../docs/adr/) - 27 ADRs covering all design decisions
- [WiFi-Mat Disaster Response Guide](wifi-mat-user-guide.md) - Search & rescue module
- [Build Guide](build-guide.md) - Detailed build instructions
- [RuVector](https://github.com/ruvnet/ruvector) - Signal intelligence crate ecosystem


@@ -4,11 +4,6 @@
*
* Registers the ESP-IDF WiFi CSI callback and serializes incoming CSI data
* into the ADR-018 binary frame format for UDP transmission.
*
* ADR-029 extensions:
* - Channel-hop table for multi-band sensing (channels 1/6/11 by default)
* - Timer-driven channel hopping at configurable dwell intervals
* - NDP frame injection stub for sensing-first TX
*/
#include "csi_collector.h"
@@ -17,7 +12,6 @@
#include <string.h>
#include "esp_log.h"
#include "esp_wifi.h"
#include "esp_timer.h"
#include "sdkconfig.h"
static const char *TAG = "csi_collector";
@@ -27,23 +21,6 @@ static uint32_t s_cb_count = 0;
static uint32_t s_send_ok = 0;
static uint32_t s_send_fail = 0;
/* ---- ADR-029: Channel-hop state ---- */
/** Channel hop table (populated from NVS at boot or via set_hop_table). */
static uint8_t s_hop_channels[CSI_HOP_CHANNELS_MAX] = {1, 6, 11, 36, 40, 44};
/** Number of active channels in the hop table. 1 = single-channel (no hop). */
static uint8_t s_hop_count = 1;
/** Dwell time per channel in milliseconds. */
static uint32_t s_dwell_ms = 50;
/** Current index into s_hop_channels. */
static uint8_t s_hop_index = 0;
/** Handle for the periodic hop timer. NULL when timer is not running. */
static esp_timer_handle_t s_hop_timer = NULL;
/**
* Serialize CSI data into ADR-018 binary frame format.
*
@@ -197,146 +174,3 @@ void csi_collector_init(void)
ESP_LOGI(TAG, "CSI collection initialized (node_id=%d, channel=%d)",
CONFIG_CSI_NODE_ID, CONFIG_CSI_WIFI_CHANNEL);
}
/* ---- ADR-029: Channel hopping ---- */
void csi_collector_set_hop_table(const uint8_t *channels, uint8_t hop_count, uint32_t dwell_ms)
{
if (channels == NULL) {
ESP_LOGW(TAG, "csi_collector_set_hop_table: channels is NULL");
return;
}
if (hop_count == 0 || hop_count > CSI_HOP_CHANNELS_MAX) {
ESP_LOGW(TAG, "csi_collector_set_hop_table: invalid hop_count=%u (max=%u)",
(unsigned)hop_count, (unsigned)CSI_HOP_CHANNELS_MAX);
return;
}
if (dwell_ms < 10) {
ESP_LOGW(TAG, "csi_collector_set_hop_table: dwell_ms=%lu too small, clamping to 10",
(unsigned long)dwell_ms);
dwell_ms = 10;
}
memcpy(s_hop_channels, channels, hop_count);
s_hop_count = hop_count;
s_dwell_ms = dwell_ms;
s_hop_index = 0;
ESP_LOGI(TAG, "Hop table set: %u channels, dwell=%lu ms", (unsigned)hop_count,
(unsigned long)dwell_ms);
for (uint8_t i = 0; i < hop_count; i++) {
ESP_LOGI(TAG, " hop[%u] = channel %u", (unsigned)i, (unsigned)channels[i]);
}
}
void csi_hop_next_channel(void)
{
if (s_hop_count <= 1) {
/* Single-channel mode: no-op for backward compatibility. */
return;
}
s_hop_index = (s_hop_index + 1) % s_hop_count;
uint8_t channel = s_hop_channels[s_hop_index];
/*
* esp_wifi_set_channel() changes the primary channel.
* The second parameter is the secondary channel offset for HT40;
* we use HT20 (no secondary) for sensing.
*/
esp_err_t err = esp_wifi_set_channel(channel, WIFI_SECOND_CHAN_NONE);
if (err != ESP_OK) {
ESP_LOGW(TAG, "Channel hop to %u failed: %s", (unsigned)channel, esp_err_to_name(err));
} else if ((s_cb_count % 200) == 0) {
/* Periodic log to confirm hopping is working (not every hop). */
ESP_LOGI(TAG, "Hopped to channel %u (index %u/%u)",
(unsigned)channel, (unsigned)s_hop_index, (unsigned)s_hop_count);
}
}
/**
* Timer callback for channel hopping.
* Called every s_dwell_ms milliseconds from the esp_timer context.
*/
static void hop_timer_cb(void *arg)
{
(void)arg;
csi_hop_next_channel();
}
void csi_collector_start_hop_timer(void)
{
if (s_hop_count <= 1) {
ESP_LOGI(TAG, "Single-channel mode: hop timer not started");
return;
}
if (s_hop_timer != NULL) {
ESP_LOGW(TAG, "Hop timer already running");
return;
}
esp_timer_create_args_t timer_args = {
.callback = hop_timer_cb,
.arg = NULL,
.name = "csi_hop",
};
esp_err_t err = esp_timer_create(&timer_args, &s_hop_timer);
if (err != ESP_OK) {
ESP_LOGE(TAG, "Failed to create hop timer: %s", esp_err_to_name(err));
return;
}
uint64_t period_us = (uint64_t)s_dwell_ms * 1000;
err = esp_timer_start_periodic(s_hop_timer, period_us);
if (err != ESP_OK) {
ESP_LOGE(TAG, "Failed to start hop timer: %s", esp_err_to_name(err));
esp_timer_delete(s_hop_timer);
s_hop_timer = NULL;
return;
}
ESP_LOGI(TAG, "Hop timer started: period=%lu ms, channels=%u",
(unsigned long)s_dwell_ms, (unsigned)s_hop_count);
}
/* ---- ADR-029: NDP frame injection stub ---- */
esp_err_t csi_inject_ndp_frame(void)
{
/*
* TODO: Construct a proper 802.11 Null Data Packet frame.
*
* A real NDP is preamble-only (~24 us airtime, no payload) and is the
* sensing-first TX mechanism described in ADR-029. For now we send a
* minimal null-data frame as a placeholder so the API is wired up.
*
* Frame structure (IEEE 802.11 Null Data):
* FC (2) | Duration (2) | Addr1 (6) | Addr2 (6) | Addr3 (6) | SeqCtl (2)
* = 24 bytes total, no body, no FCS (hardware appends FCS).
*/
uint8_t ndp_frame[24];
memset(ndp_frame, 0, sizeof(ndp_frame));
/* Frame Control: Type=Data (0x02), Subtype=Null (0x04) -> 0x0048 */
ndp_frame[0] = 0x48;
ndp_frame[1] = 0x00;
/* Duration: 0 (let hardware fill) */
/* Addr1 (destination): broadcast */
memset(&ndp_frame[4], 0xFF, 6);
/* Addr2 (source): will be overwritten by hardware with own MAC */
/* Addr3 (BSSID): broadcast */
memset(&ndp_frame[16], 0xFF, 6);
esp_err_t err = esp_wifi_80211_tx(WIFI_IF_STA, ndp_frame, sizeof(ndp_frame), false);
if (err != ESP_OK) {
ESP_LOGW(TAG, "NDP inject failed: %s", esp_err_to_name(err));
}
return err;
}


@@ -19,9 +19,6 @@
/** Maximum frame buffer size (header + 4 antennas * 256 subcarriers * 2 bytes). */
#define CSI_MAX_FRAME_SIZE (CSI_HEADER_SIZE + 4 * 256 * 2)
/** Maximum number of channels in the hop table (ADR-029). */
#define CSI_HOP_CHANNELS_MAX 6
/**
* Initialize CSI collection.
* Registers the WiFi CSI callback.
@@ -38,47 +35,4 @@ void csi_collector_init(void);
*/
size_t csi_serialize_frame(const wifi_csi_info_t *info, uint8_t *buf, size_t buf_len);
/**
* Configure the channel-hop table for multi-band sensing (ADR-029).
*
* When hop_count == 1 the collector stays on the single configured channel
* (backward-compatible with the original single-channel mode).
*
* @param channels Array of WiFi channel numbers (1-14 for 2.4 GHz, 36-177 for 5 GHz).
* @param hop_count Number of entries in the channels array (1..CSI_HOP_CHANNELS_MAX).
* @param dwell_ms Dwell time per channel in milliseconds (>= 10).
*/
void csi_collector_set_hop_table(const uint8_t *channels, uint8_t hop_count, uint32_t dwell_ms);
/**
* Advance to the next channel in the hop table.
*
* Called by the hop timer callback. If hop_count <= 1 this is a no-op.
* Calls esp_wifi_set_channel() internally.
*/
void csi_hop_next_channel(void);
/**
* Start the channel-hop timer.
*
* Creates an esp_timer periodic callback that fires every dwell_ms
* milliseconds, calling csi_hop_next_channel(). If hop_count <= 1
* the timer is not started (single-channel backward-compatible mode).
*/
void csi_collector_start_hop_timer(void);
/**
* Inject an NDP (Null Data Packet) frame for sensing.
*
* Uses esp_wifi_80211_tx() to send a preamble-only frame (~24 us airtime)
* that triggers CSI measurement at all receivers. This is the "sensing-first"
* TX mechanism described in ADR-029.
*
* @return ESP_OK on success, or an error code.
*
* @note TODO: Full NDP frame construction. Currently sends a minimal
* null-data frame as a placeholder.
*/
esp_err_t csi_inject_ndp_frame(void);
#endif /* CSI_COLLECTOR_H */
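The hop-table contract documented above (hop_count == 1 stays on the configured channel, otherwise the timer advances round-robin each dwell period) reduces to a small pure function. A sketch, with `nextHopIndex` as an illustrative name rather than firmware API:

```typescript
// Round-robin channel hopping: with a single channel the index never moves;
// otherwise each timer tick advances and wraps within the table.
function nextHopIndex(current: number, hopCount: number): number {
  if (hopCount <= 1) return current;   // single-channel backward-compatible mode: no-op
  return (current + 1) % hopCount;     // wrap at the end of the table
}

// Walking a 3-entry table [1, 6, 11] visits each channel in turn.
const table = [1, 6, 11];
let idx = 0;
const visited: number[] = [];
for (let tick = 0; tick < 6; tick++) {
  visited.push(table[idx]);
  idx = nextHopIndex(idx, table.length);
}
```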


@@ -18,11 +18,6 @@ static const char *TAG = "nvs_config";
void nvs_config_load(nvs_config_t *cfg)
{
if (cfg == NULL) {
ESP_LOGE(TAG, "nvs_config_load: cfg is NULL");
return;
}
/* Start with Kconfig compiled defaults */
strncpy(cfg->wifi_ssid, CONFIG_CSI_WIFI_SSID, NVS_CFG_SSID_MAX - 1);
cfg->wifi_ssid[NVS_CFG_SSID_MAX - 1] = '\0';
@@ -40,17 +35,6 @@ void nvs_config_load(nvs_config_t *cfg)
cfg->target_port = (uint16_t)CONFIG_CSI_TARGET_PORT;
cfg->node_id = (uint8_t)CONFIG_CSI_NODE_ID;
/* ADR-029: Defaults for channel hopping and TDM.
* hop_count=1 means single-channel (backward-compatible). */
cfg->channel_hop_count = 1;
cfg->channel_list[0] = (uint8_t)CONFIG_CSI_WIFI_CHANNEL;
for (uint8_t i = 1; i < NVS_CFG_HOP_MAX; i++) {
cfg->channel_list[i] = 0;
}
cfg->dwell_ms = 50;
cfg->tdm_slot_index = 0;
cfg->tdm_node_count = 1;
/* Try to override from NVS */
nvs_handle_t handle;
esp_err_t err = nvs_open("csi_cfg", NVS_READONLY, &handle);
@@ -100,64 +84,5 @@ void nvs_config_load(nvs_config_t *cfg)
ESP_LOGI(TAG, "NVS override: node_id=%u", cfg->node_id);
}
/* ADR-029: Channel hop count */
uint8_t hop_count_val;
if (nvs_get_u8(handle, "hop_count", &hop_count_val) == ESP_OK) {
if (hop_count_val >= 1 && hop_count_val <= NVS_CFG_HOP_MAX) {
cfg->channel_hop_count = hop_count_val;
ESP_LOGI(TAG, "NVS override: hop_count=%u", (unsigned)cfg->channel_hop_count);
} else {
ESP_LOGW(TAG, "NVS hop_count=%u out of range [1..%u], ignored",
(unsigned)hop_count_val, (unsigned)NVS_CFG_HOP_MAX);
}
}
/* ADR-029: Channel list (stored as a blob of up to NVS_CFG_HOP_MAX bytes) */
len = NVS_CFG_HOP_MAX;
uint8_t ch_blob[NVS_CFG_HOP_MAX];
if (nvs_get_blob(handle, "chan_list", ch_blob, &len) == ESP_OK && len > 0) {
uint8_t count = (len < cfg->channel_hop_count) ? (uint8_t)len : cfg->channel_hop_count;
for (uint8_t i = 0; i < count; i++) {
cfg->channel_list[i] = ch_blob[i];
}
ESP_LOGI(TAG, "NVS override: chan_list loaded (%u channels)", (unsigned)count);
}
/* ADR-029: Dwell time */
uint32_t dwell_val;
if (nvs_get_u32(handle, "dwell_ms", &dwell_val) == ESP_OK) {
if (dwell_val >= 10) {
cfg->dwell_ms = dwell_val;
ESP_LOGI(TAG, "NVS override: dwell_ms=%lu", (unsigned long)cfg->dwell_ms);
} else {
ESP_LOGW(TAG, "NVS dwell_ms=%lu too small, ignored", (unsigned long)dwell_val);
}
}
/* ADR-029/031: TDM slot index */
uint8_t slot_val;
if (nvs_get_u8(handle, "tdm_slot", &slot_val) == ESP_OK) {
cfg->tdm_slot_index = slot_val;
ESP_LOGI(TAG, "NVS override: tdm_slot_index=%u", (unsigned)cfg->tdm_slot_index);
}
/* ADR-029/031: TDM node count */
uint8_t tdm_nodes_val;
if (nvs_get_u8(handle, "tdm_nodes", &tdm_nodes_val) == ESP_OK) {
if (tdm_nodes_val >= 1) {
cfg->tdm_node_count = tdm_nodes_val;
ESP_LOGI(TAG, "NVS override: tdm_node_count=%u", (unsigned)cfg->tdm_node_count);
} else {
ESP_LOGW(TAG, "NVS tdm_nodes=%u invalid, ignored", (unsigned)tdm_nodes_val);
}
}
/* Validate tdm_slot_index < tdm_node_count */
if (cfg->tdm_slot_index >= cfg->tdm_node_count) {
ESP_LOGW(TAG, "tdm_slot_index=%u >= tdm_node_count=%u, clamping to 0",
(unsigned)cfg->tdm_slot_index, (unsigned)cfg->tdm_node_count);
cfg->tdm_slot_index = 0;
}
nvs_close(handle);
}
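The override-then-validate flow above (compiled defaults first, NVS overrides applied only when they pass range checks, then tdm_slot_index clamped below tdm_node_count) can be condensed into a pure sketch (TypeScript; names are illustrative, not part of the firmware):

```typescript
type TdmConfig = { tdmSlotIndex: number; tdmNodeCount: number };

// Apply NVS overrides with the same range checks as nvs_config_load(),
// then clamp the slot index below the node count as the final validation step.
function applyTdmOverrides(nvs: Partial<TdmConfig>): TdmConfig {
  const cfg: TdmConfig = { tdmSlotIndex: 0, tdmNodeCount: 1 }; // compiled defaults
  if (nvs.tdmNodeCount !== undefined && nvs.tdmNodeCount >= 1) {
    cfg.tdmNodeCount = nvs.tdmNodeCount;  // a zero node count is rejected, as in the C code
  }
  if (nvs.tdmSlotIndex !== undefined) {
    cfg.tdmSlotIndex = nvs.tdmSlotIndex;
  }
  if (cfg.tdmSlotIndex >= cfg.tdmNodeCount) {
    cfg.tdmSlotIndex = 0;                 // clamp out-of-range slot to 0
  }
  return cfg;
}
```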


@@ -18,9 +18,6 @@
#define NVS_CFG_PASS_MAX 65
#define NVS_CFG_IP_MAX 16
/** Maximum channels in the hop list (must match CSI_HOP_CHANNELS_MAX). */
#define NVS_CFG_HOP_MAX 6
/** Runtime configuration loaded from NVS or Kconfig defaults. */
typedef struct {
char wifi_ssid[NVS_CFG_SSID_MAX];
@@ -28,13 +25,6 @@ typedef struct {
char target_ip[NVS_CFG_IP_MAX];
uint16_t target_port;
uint8_t node_id;
/* ADR-029: Channel hopping and TDM configuration */
uint8_t channel_hop_count; /**< Number of channels to hop (1 = no hop). */
uint8_t channel_list[NVS_CFG_HOP_MAX]; /**< Channel numbers for hopping. */
uint32_t dwell_ms; /**< Dwell time per channel in ms. */
uint8_t tdm_slot_index; /**< This node's TDM slot index (0-based). */
uint8_t tdm_node_count; /**< Total nodes in the TDM schedule. */
} nvs_config_t;
/**

mobile/.env.example

@@ -0,0 +1 @@
EXPO_PUBLIC_DEFAULT_SERVER_URL=http://192.168.1.100:8080

mobile/.eslintrc.js

@@ -0,0 +1,26 @@
module.exports = {
root: true,
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 'latest',
sourceType: 'module',
ecmaFeatures: {
jsx: true,
},
},
plugins: ['@typescript-eslint', 'react', 'react-hooks'],
extends: [
'eslint:recommended',
'plugin:react/recommended',
'plugin:react-hooks/recommended',
'plugin:@typescript-eslint/recommended',
],
settings: {
react: {
version: 'detect',
},
},
rules: {
'react/react-in-jsx-scope': 'off',
},
};

mobile/.gitignore

@@ -0,0 +1,41 @@
# Learn more https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files
# dependencies
node_modules/
# Expo
.expo/
dist/
web-build/
expo-env.d.ts
# Native
.kotlin/
*.orig.*
*.jks
*.p8
*.p12
*.key
*.mobileprovision
# Metro
.metro-health-check*
# debug
npm-debug.*
yarn-debug.*
yarn-error.*
# macOS
.DS_Store
*.pem
# local env files
.env*.local
# typescript
*.tsbuildinfo
# generated native folders
/ios
/android

mobile/.prettierrc

@@ -0,0 +1,4 @@
{
"singleQuote": true,
"trailingComma": "all"
}

mobile/App.tsx

@@ -0,0 +1,38 @@
import { useEffect } from 'react';
import { NavigationContainer, DarkTheme } from '@react-navigation/native';
import { GestureHandlerRootView } from 'react-native-gesture-handler';
import { StatusBar } from 'expo-status-bar';
import { SafeAreaProvider } from 'react-native-safe-area-context';
import { ThemeProvider } from './src/theme/ThemeContext';
import { RootNavigator } from './src/navigation/RootNavigator';
export default function App() {
useEffect(() => {
(globalThis as { __appStartTime?: number }).__appStartTime = Date.now();
}, []);
const navigationTheme = {
...DarkTheme,
colors: {
...DarkTheme.colors,
background: '#0A0E1A',
card: '#0D1117',
text: '#E2E8F0',
border: '#1E293B',
primary: '#32B8C6',
},
};
return (
<GestureHandlerRootView style={{ flex: 1 }}>
<SafeAreaProvider>
<ThemeProvider>
<NavigationContainer theme={navigationTheme}>
<RootNavigator />
</NavigationContainer>
</ThemeProvider>
</SafeAreaProvider>
<StatusBar style="light" />
</GestureHandlerRootView>
);
}

mobile/app.config.ts

@@ -0,0 +1,12 @@
export default {
name: 'WiFi-DensePose',
slug: 'wifi-densepose',
version: '1.0.0',
ios: {
bundleIdentifier: 'com.ruvnet.wifidensepose',
},
android: {
package: 'com.ruvnet.wifidensepose',
},
// Use expo-env and app-level defaults from the project configuration when available.
};

mobile/app.json

@@ -0,0 +1,30 @@
{
"expo": {
"name": "mobile",
"slug": "mobile",
"version": "1.0.0",
"orientation": "portrait",
"icon": "./assets/icon.png",
"userInterfaceStyle": "light",
"splash": {
"image": "./assets/splash-icon.png",
"resizeMode": "contain",
"backgroundColor": "#ffffff"
},
"ios": {
"supportsTablet": true
},
"android": {
"adaptiveIcon": {
"backgroundColor": "#E6F4FE",
"foregroundImage": "./assets/android-icon-foreground.png",
"backgroundImage": "./assets/android-icon-background.png",
"monochromeImage": "./assets/android-icon-monochrome.png"
},
"predictiveBackGestureEnabled": false
},
"web": {
"favicon": "./assets/favicon.png"
}
}
}

Binary file added (17 KiB)
Binary file added (77 KiB)
Binary file added (4.0 KiB)
mobile/assets/favicon.png (binary, 1.1 KiB)
mobile/assets/icon.png (binary, 384 KiB)
Binary file added (17 KiB)

mobile/babel.config.js

@@ -0,0 +1,9 @@
module.exports = function (api) {
api.cache(true);
return {
presets: ['babel-preset-expo'],
plugins: [
'react-native-reanimated/plugin'
]
};
};


mobile/eas.json

@@ -0,0 +1,17 @@
{
"cli": {
"version": ">= 4.0.0"
},
"build": {
"development": {
"developmentClient": true,
"distribution": "internal"
},
"preview": {
"distribution": "internal"
},
"production": {
"autoIncrement": true
}
}
}

mobile/index.ts

@@ -0,0 +1,4 @@
import { registerRootComponent } from 'expo';
import App from './App';
registerRootComponent(App);

mobile/jest.config.js

@@ -0,0 +1,8 @@
module.exports = {
preset: 'jest-expo',
setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'],
testPathIgnorePatterns: ['/src/__tests__/'],
transformIgnorePatterns: [
'node_modules/(?!(expo|expo-.+|react-native|@react-native|react-native-webview|react-native-reanimated|react-native-svg|react-native-safe-area-context|react-native-screens|@react-navigation|@expo|@unimodules|expo-modules-core)/)',
],
};

mobile/jest.setup.ts

@@ -0,0 +1,11 @@
jest.mock('@react-native-async-storage/async-storage', () =>
require('@react-native-async-storage/async-storage/jest/async-storage-mock')
);
jest.mock('react-native-wifi-reborn', () => ({
loadWifiList: jest.fn(async () => []),
}));
jest.mock('react-native-reanimated', () =>
require('react-native-reanimated/mock')
);

mobile/package-lock.json (generated, 16327 lines; diff suppressed because it is too large)

mobile/package.json

@@ -0,0 +1,49 @@
{
"name": "mobile",
"version": "1.0.0",
"main": "index.ts",
"scripts": {
"start": "expo start",
"android": "expo start --android",
"ios": "expo start --ios",
"web": "expo start --web",
"test": "jest",
"lint": "eslint ."
},
"dependencies": {
"@expo/vector-icons": "^15.0.2",
"@react-native-async-storage/async-storage": "2.2.0",
"@react-navigation/bottom-tabs": "^7.15.3",
"@react-navigation/native": "^7.1.31",
"axios": "^1.13.6",
"expo": "~55.0.4",
"expo-status-bar": "~55.0.4",
"react": "19.2.0",
"react-native": "0.83.2",
"react-native-gesture-handler": "~2.30.0",
"react-native-reanimated": "4.2.1",
"react-native-safe-area-context": "~5.6.2",
"react-native-screens": "~4.23.0",
"react-native-svg": "15.15.3",
"react-native-webview": "13.16.0",
"react-native-wifi-reborn": "^4.13.6",
"victory-native": "^41.20.2",
"zustand": "^5.0.11"
},
"devDependencies": {
"@testing-library/jest-native": "^5.4.3",
"@testing-library/react-native": "^13.3.3",
"@types/jest": "^30.0.0",
"@types/react": "~19.2.2",
"@typescript-eslint/eslint-plugin": "^8.56.1",
"@typescript-eslint/parser": "^8.56.1",
"babel-preset-expo": "^55.0.10",
"eslint": "^10.0.2",
"jest": "^30.2.0",
"jest-expo": "^55.0.9",
"prettier": "^3.8.1",
"react-native-worklets": "^0.7.4",
"typescript": "~5.9.2"
},
"private": true
}


@@ -0,0 +1,5 @@
describe('placeholder', () => {
it('passes', () => {
expect(true).toBe(true);
});
});


@@ -0,0 +1,585 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"
/>
    <meta
      http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com https://cdn.jsdelivr.net; style-src 'self' 'unsafe-inline'; img-src 'self' data:; connect-src 'self'"
    />
<title>WiFi DensePose Splat Viewer</title>
<style>
html,
body,
#gaussian-splat-root {
margin: 0;
width: 100%;
height: 100%;
overflow: hidden;
background: #0a0e1a;
touch-action: none;
}
#gaussian-splat-root {
position: relative;
}
</style>
</head>
<body>
<div id="gaussian-splat-root"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/0.147.0/three.min.js"></script>
<!-- Pinned to r147: later three.js releases drop both the UMD three.min.js build and the non-module examples/js OrbitControls used below. -->
<script src="https://cdn.jsdelivr.net/npm/three@0.147.0/examples/js/controls/OrbitControls.js"></script>
<script>
(function () {
const postMessageToRN = (message) => {
if (!window.ReactNativeWebView || typeof window.ReactNativeWebView.postMessage !== 'function') {
return;
}
try {
window.ReactNativeWebView.postMessage(JSON.stringify(message));
} catch (error) {
console.error('Failed to post RN message', error);
}
};
const postError = (message) => {
postMessageToRN({
type: 'ERROR',
payload: {
message: typeof message === 'string' ? message : 'Unknown bridge error',
},
});
};
// Use global THREE from CDN
const getThree = () => window.THREE;
// ---- Custom Splat Shaders --------------------------------------------
const SPLAT_VERTEX = `
attribute float splatSize;
attribute vec3 splatColor;
attribute float splatOpacity;
varying vec3 vColor;
varying float vOpacity;
void main() {
vColor = splatColor;
vOpacity = splatOpacity;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = splatSize * (300.0 / -mvPosition.z);
gl_Position = projectionMatrix * mvPosition;
}
`;
const SPLAT_FRAGMENT = `
varying vec3 vColor;
varying float vOpacity;
void main() {
// Circular soft-edge disc
float dist = length(gl_PointCoord - vec2(0.5));
if (dist > 0.5) discard;
float alpha = smoothstep(0.5, 0.2, dist) * vOpacity;
gl_FragColor = vec4(vColor, alpha);
}
`;
// ---- Color helpers ---------------------------------------------------
/** Map a scalar 0-1 to blue -> green -> red gradient */
function valueToColor(v) {
const clamped = Math.max(0, Math.min(1, v));
// blue(0) -> cyan(0.25) -> green(0.5) -> yellow(0.75) -> red(1)
let r;
let g;
let b;
if (clamped < 0.5) {
const t = clamped * 2;
r = 0;
g = t;
b = 1 - t;
} else {
const t = (clamped - 0.5) * 2;
r = t;
g = 1 - t;
b = 0;
}
return [r, g, b];
}
// ---- GaussianSplatRenderer -------------------------------------------
class GaussianSplatRenderer {
/** @param {HTMLElement} container - DOM element to attach the renderer to */
constructor(container, opts = {}) {
const THREE = getThree();
if (!THREE) {
throw new Error('Three.js not loaded');
}
this.container = container;
this.width = opts.width || container.clientWidth || 800;
this.height = opts.height || 500;
// Scene
this.scene = new THREE.Scene();
this.scene.background = new THREE.Color(0x0a0e1a);
// Camera — perspective looking down at the room
this.camera = new THREE.PerspectiveCamera(45, this.width / this.height, 0.1, 200);
this.camera.position.set(0, 10, 12);
this.camera.lookAt(0, 0, 0);
// Renderer
this.renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
this.renderer.setSize(this.width, this.height);
this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
container.appendChild(this.renderer.domElement);
// Lights
const ambient = new THREE.AmbientLight(0x9ec7ff, 0.35);
this.scene.add(ambient);
const directional = new THREE.DirectionalLight(0x9ec7ff, 0.65);
directional.position.set(4, 10, 6);
directional.castShadow = false;
this.scene.add(directional);
// Grid & room
this._createRoom(THREE);
// Signal field splats (20x20 = 400 points on the floor plane)
this.gridSize = 20;
this._createFieldSplats(THREE);
// Node markers (ESP32 / router positions)
this._createNodeMarkers(THREE);
// Body disruption blob
this._createBodyBlob(THREE);
// Orbit controls for drag + pinch zoom
this.controls = new THREE.OrbitControls(this.camera, this.renderer.domElement);
this.controls.target.set(0, 0, 0);
this.controls.minDistance = 6;
this.controls.maxDistance = 40;
this.controls.enableDamping = true;
this.controls.dampingFactor = 0.08;
this.controls.update();
// Animation state
this._animFrame = null;
this._lastData = null;
this._fpsFrames = [];
this._lastFpsReport = 0;
// Start render loop
this._animate();
}
// ---- Scene setup ---------------------------------------------------
_createRoom(THREE) {
// Floor grid (on y = 0), 20 units
const grid = new THREE.GridHelper(20, 20, 0x1a3a4a, 0x0d1f28);
grid.position.y = 0;
this.scene.add(grid);
// Room boundary wireframe
const boxGeo = new THREE.BoxGeometry(20, 6, 20);
const edges = new THREE.EdgesGeometry(boxGeo);
const line = new THREE.LineSegments(
edges,
new THREE.LineBasicMaterial({ color: 0x1a4a5a, opacity: 0.3, transparent: true }),
);
line.position.y = 3;
this.scene.add(line);
}
_createFieldSplats(THREE) {
const count = this.gridSize * this.gridSize;
const positions = new Float32Array(count * 3);
const sizes = new Float32Array(count);
const colors = new Float32Array(count * 3);
const opacities = new Float32Array(count);
// Lay splats on the floor plane (y = 0.05 to sit just above grid)
for (let iz = 0; iz < this.gridSize; iz++) {
for (let ix = 0; ix < this.gridSize; ix++) {
const idx = iz * this.gridSize + ix;
positions[idx * 3 + 0] = (ix - this.gridSize / 2) + 0.5; // x
positions[idx * 3 + 1] = 0.05; // y
positions[idx * 3 + 2] = (iz - this.gridSize / 2) + 0.5; // z
sizes[idx] = 1.5;
colors[idx * 3] = 0.1;
colors[idx * 3 + 1] = 0.2;
colors[idx * 3 + 2] = 0.6;
opacities[idx] = 0.15;
}
}
const geo = new THREE.BufferGeometry();
geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geo.setAttribute('splatSize', new THREE.BufferAttribute(sizes, 1));
geo.setAttribute('splatColor', new THREE.BufferAttribute(colors, 3));
geo.setAttribute('splatOpacity', new THREE.BufferAttribute(opacities, 1));
const mat = new THREE.ShaderMaterial({
vertexShader: SPLAT_VERTEX,
fragmentShader: SPLAT_FRAGMENT,
transparent: true,
depthWrite: false,
blending: THREE.AdditiveBlending,
});
this.fieldPoints = new THREE.Points(geo, mat);
this.scene.add(this.fieldPoints);
}
_createNodeMarkers(THREE) {
// Router at center — green sphere
const routerGeo = new THREE.SphereGeometry(0.3, 16, 16);
const routerMat = new THREE.MeshBasicMaterial({ color: 0x00ff88, transparent: true, opacity: 0.8 });
this.routerMarker = new THREE.Mesh(routerGeo, routerMat);
this.routerMarker.position.set(0, 0.5, 0);
this.scene.add(this.routerMarker);
// ESP32 node — cyan sphere (default position, updated from data)
const nodeGeo = new THREE.SphereGeometry(0.25, 16, 16);
const nodeMat = new THREE.MeshBasicMaterial({ color: 0x00ccff, transparent: true, opacity: 0.8 });
this.nodeMarker = new THREE.Mesh(nodeGeo, nodeMat);
this.nodeMarker.position.set(2, 0.5, 1.5);
this.scene.add(this.nodeMarker);
}
_createBodyBlob(THREE) {
// A cluster of splats representing body disruption
const count = 64;
const positions = new Float32Array(count * 3);
const sizes = new Float32Array(count);
const colors = new Float32Array(count * 3);
const opacities = new Float32Array(count);
for (let i = 0; i < count; i++) {
// Random sphere distribution
const theta = Math.random() * Math.PI * 2;
const phi = Math.acos(2 * Math.random() - 1);
const r = Math.random() * 1.5;
positions[i * 3] = r * Math.sin(phi) * Math.cos(theta);
positions[i * 3 + 1] = r * Math.cos(phi) + 2;
positions[i * 3 + 2] = r * Math.sin(phi) * Math.sin(theta);
sizes[i] = 2 + Math.random() * 3;
colors[i * 3] = 0.2;
colors[i * 3 + 1] = 0.8;
colors[i * 3 + 2] = 0.3;
opacities[i] = 0.0; // hidden until presence detected
}
const geo = new THREE.BufferGeometry();
geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geo.setAttribute('splatSize', new THREE.BufferAttribute(sizes, 1));
geo.setAttribute('splatColor', new THREE.BufferAttribute(colors, 3));
geo.setAttribute('splatOpacity', new THREE.BufferAttribute(opacities, 1));
const mat = new THREE.ShaderMaterial({
vertexShader: SPLAT_VERTEX,
fragmentShader: SPLAT_FRAGMENT,
transparent: true,
depthWrite: false,
blending: THREE.AdditiveBlending,
});
this.bodyBlob = new THREE.Points(geo, mat);
this.scene.add(this.bodyBlob);
}
// ---- Data update --------------------------------------------------
/**
* Update the visualization with new sensing data.
* @param {object} data - sensing_update JSON from ws_server
*/
update(data) {
this._lastData = data;
if (!data) return;
const features = data.features || {};
const classification = data.classification || {};
const signalField = data.signal_field || {};
const nodes = data.nodes || [];
// -- Update signal field splats ------------------------------------
if (signalField.values && this.fieldPoints) {
const geo = this.fieldPoints.geometry;
const clr = geo.attributes.splatColor.array;
const sizes = geo.attributes.splatSize.array;
const opac = geo.attributes.splatOpacity.array;
const vals = signalField.values;
const count = Math.min(vals.length, this.gridSize * this.gridSize);
for (let i = 0; i < count; i++) {
const v = vals[i];
const [r, g, b] = valueToColor(v);
clr[i * 3] = r;
clr[i * 3 + 1] = g;
clr[i * 3 + 2] = b;
sizes[i] = 1.0 + v * 4.0;
opac[i] = 0.1 + v * 0.6;
}
geo.attributes.splatColor.needsUpdate = true;
geo.attributes.splatSize.needsUpdate = true;
geo.attributes.splatOpacity.needsUpdate = true;
}
// -- Update body blob ----------------------------------------------
if (this.bodyBlob) {
const bGeo = this.bodyBlob.geometry;
const bOpac = bGeo.attributes.splatOpacity.array;
const bClr = bGeo.attributes.splatColor.array;
const bSize = bGeo.attributes.splatSize.array;
const presence = classification.presence || false;
const motionLvl = classification.motion_level || 'absent';
const confidence = classification.confidence || 0;
const breathing = features.breathing_band_power || 0;
// Breathing pulsation
const breathPulse = 1.0 + Math.sin(Date.now() * 0.004) * Math.min(breathing * 3, 0.4);
for (let i = 0; i < bOpac.length; i++) {
if (presence) {
bOpac[i] = confidence * 0.4;
// Color by motion level
if (motionLvl === 'active') {
bClr[i * 3] = 1.0;
bClr[i * 3 + 1] = 0.2;
bClr[i * 3 + 2] = 0.1;
} else {
bClr[i * 3] = 0.1;
bClr[i * 3 + 1] = 0.8;
bClr[i * 3 + 2] = 0.4;
}
bSize[i] = (2 + Math.random() * 2) * breathPulse;
} else {
bOpac[i] = 0.0;
}
}
bGeo.attributes.splatOpacity.needsUpdate = true;
bGeo.attributes.splatColor.needsUpdate = true;
bGeo.attributes.splatSize.needsUpdate = true;
}
// -- Update node positions -----------------------------------------
if (nodes.length > 0 && nodes[0].position && this.nodeMarker) {
const pos = nodes[0].position;
this.nodeMarker.position.set(pos[0], 0.5, pos[2]);
}
}
// ---- Render loop -------------------------------------------------
_animate() {
this._animFrame = requestAnimationFrame(() => this._animate());
const now = performance.now();
// Gentle router glow pulse
if (this.routerMarker) {
const pulse = 0.6 + 0.3 * Math.sin(now * 0.003);
this.routerMarker.material.opacity = pulse;
}
this.controls.update();
this.renderer.render(this.scene, this.camera);
this._fpsFrames.push(now);
while (this._fpsFrames.length > 0 && this._fpsFrames[0] < now - 1000) {
this._fpsFrames.shift();
}
if (now - this._lastFpsReport >= 1000) {
const fps = this._fpsFrames.length;
this._lastFpsReport = now;
postMessageToRN({
type: 'FPS_TICK',
payload: { fps },
});
}
}
// ---- Resize / cleanup --------------------------------------------
resize(width, height) {
if (!width || !height) return;
this.width = width;
this.height = height;
this.camera.aspect = width / height;
this.camera.updateProjectionMatrix();
this.renderer.setSize(width, height);
}
dispose() {
if (this._animFrame) {
cancelAnimationFrame(this._animFrame);
}
this.controls?.dispose();
this.renderer.dispose();
if (this.renderer.domElement.parentNode) {
this.renderer.domElement.parentNode.removeChild(this.renderer.domElement);
}
}
}
// Expose renderer constructor for debugging/interop
window.GaussianSplatRenderer = GaussianSplatRenderer;
let renderer = null;
let pendingFrame = null;
let pendingResize = null;
const postSafeReady = () => {
postMessageToRN({ type: 'READY' });
};
const routeMessage = (event) => {
let raw = event.data;
if (typeof raw === 'object' && raw != null && 'data' in raw) {
raw = raw.data;
}
let message = raw;
if (typeof raw === 'string') {
try {
message = JSON.parse(raw);
} catch (err) {
postError('Failed to parse RN message payload');
return;
}
}
if (!message || typeof message !== 'object') {
return;
}
if (message.type === 'FRAME_UPDATE') {
const payload = message.payload || null;
if (!payload) {
return;
}
if (!renderer) {
pendingFrame = payload;
return;
}
try {
renderer.update(payload);
} catch (error) {
postError((error && error.message) || 'Failed to update frame');
}
return;
}
if (message.type === 'RESIZE') {
const dims = message.payload || {};
const w = Number(dims.width);
const h = Number(dims.height);
if (!Number.isFinite(w) || !Number.isFinite(h) || !w || !h) {
return;
}
if (!renderer) {
pendingResize = { width: w, height: h };
return;
}
try {
renderer.resize(w, h);
} catch (error) {
postError((error && error.message) || 'Failed to resize renderer');
}
return;
}
if (message.type === 'DISPOSE') {
if (!renderer) {
return;
}
try {
renderer.dispose();
} catch (error) {
postError((error && error.message) || 'Failed to dispose renderer');
}
renderer = null;
return;
}
};
const buildRenderer = () => {
const container = document.getElementById('gaussian-splat-root');
if (!container) {
return;
}
try {
renderer = new GaussianSplatRenderer(container, {
width: container.clientWidth || window.innerWidth,
height: container.clientHeight || window.innerHeight,
});
if (pendingFrame) {
renderer.update(pendingFrame);
pendingFrame = null;
}
if (pendingResize) {
renderer.resize(pendingResize.width, pendingResize.height);
pendingResize = null;
}
postSafeReady();
} catch (error) {
renderer = null;
postError((error && error.message) || 'Failed to initialize renderer');
}
};
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', buildRenderer);
} else {
buildRenderer();
}
window.addEventListener('message', routeMessage);
// react-native-webview delivers injected messages on `document` on Android
// and on `window` on iOS, so register the handler on both targets.
document.addEventListener('message', routeMessage);
window.addEventListener('resize', () => {
if (!renderer) {
pendingResize = {
width: window.innerWidth,
height: window.innerHeight,
};
return;
}
renderer.resize(window.innerWidth, window.innerHeight);
});
})();
</script>
</body>
</html>


@@ -0,0 +1,70 @@
import { StyleSheet, View } from 'react-native';
import { ThemedText } from './ThemedText';
type ConnectionState = 'connected' | 'simulated' | 'disconnected';
type ConnectionBannerProps = {
status: ConnectionState;
};
const resolveState = (status: ConnectionState) => {
if (status === 'connected') {
return {
label: 'LIVE STREAM',
backgroundColor: '#0F6B2A',
textColor: '#E2FFEA',
};
}
if (status === 'disconnected') {
return {
label: 'DISCONNECTED',
backgroundColor: '#8A1E2A',
textColor: '#FFE3E7',
};
}
return {
label: 'SIMULATED DATA',
backgroundColor: '#9A5F0C',
textColor: '#FFF3E1',
};
};
export const ConnectionBanner = ({ status }: ConnectionBannerProps) => {
const state = resolveState(status);
return (
<View
style={[
styles.banner,
{
backgroundColor: state.backgroundColor,
borderBottomColor: state.textColor,
},
]}
>
<ThemedText preset="labelMd" style={[styles.text, { color: state.textColor }]}>
{state.label}
</ThemedText>
</View>
);
};
const styles = StyleSheet.create({
banner: {
position: 'absolute',
left: 0,
right: 0,
top: 0,
zIndex: 100,
paddingVertical: 6,
borderBottomWidth: 2,
alignItems: 'center',
justifyContent: 'center',
},
text: {
letterSpacing: 2,
fontWeight: '700',
},
});


@@ -0,0 +1,66 @@
import { Component, ErrorInfo, ReactNode } from 'react';
import { Button, StyleSheet, View } from 'react-native';
import { ThemedText } from './ThemedText';
import { ThemedView } from './ThemedView';
type ErrorBoundaryProps = {
children: ReactNode;
};
type ErrorBoundaryState = {
hasError: boolean;
error?: Error;
};
export class ErrorBoundary extends Component<ErrorBoundaryProps, ErrorBoundaryState> {
constructor(props: ErrorBoundaryProps) {
super(props);
this.state = { hasError: false };
}
static getDerivedStateFromError(error: Error): ErrorBoundaryState {
return { hasError: true, error };
}
componentDidCatch(error: Error, errorInfo: ErrorInfo) {
console.error('ErrorBoundary caught an error', error, errorInfo);
}
handleRetry = () => {
this.setState({ hasError: false, error: undefined });
};
render() {
if (this.state.hasError) {
return (
<ThemedView style={styles.container}>
<ThemedText preset="displayMd">Something went wrong</ThemedText>
<ThemedText preset="bodySm" style={styles.message}>
{this.state.error?.message ?? 'An unexpected error occurred.'}
</ThemedText>
<View style={styles.buttonWrap}>
<Button title="Retry" onPress={this.handleRetry} />
</View>
</ThemedView>
);
}
return this.props.children;
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
padding: 20,
gap: 12,
},
message: {
textAlign: 'center',
},
buttonWrap: {
marginTop: 8,
},
});

View File

@@ -0,0 +1,96 @@
import { useEffect } from 'react';
import { StyleSheet, View } from 'react-native';
import Animated, { useAnimatedProps, useSharedValue, withTiming } from 'react-native-reanimated';
import Svg, { Circle, G, Text as SvgText } from 'react-native-svg';
type GaugeArcProps = {
value: number;
max: number;
label: string;
unit: string;
color: string;
size?: number;
};
const AnimatedCircle = Animated.createAnimatedComponent(Circle);
export const GaugeArc = ({ value, max, label, unit, color, size = 140 }: GaugeArcProps) => {
const radius = (size - 20) / 2;
const circumference = 2 * Math.PI * radius;
const arcLength = circumference * 0.75;
const strokeWidth = 12;
const progress = useSharedValue(0);
const normalized = Math.max(0, Math.min(max > 0 ? value / max : 0, 1));
const displayText = `${value.toFixed(1)} ${unit}`;
useEffect(() => {
progress.value = withTiming(normalized, { duration: 600 });
}, [normalized, progress]);
const animatedStroke = useAnimatedProps(() => {
const dashOffset = arcLength - arcLength * progress.value;
return {
strokeDashoffset: dashOffset,
};
});
return (
<View style={styles.wrapper}>
<Svg width={size} height={size} viewBox={`0 0 ${size} ${size}`}>
<G transform={`rotate(-135 ${size / 2} ${size / 2})`}>
<Circle
cx={size / 2}
cy={size / 2}
r={radius}
strokeWidth={strokeWidth}
stroke="#1E293B"
fill="none"
strokeDasharray={`${arcLength} ${circumference}`}
strokeLinecap="round"
/>
<AnimatedCircle
cx={size / 2}
cy={size / 2}
r={radius}
strokeWidth={strokeWidth}
stroke={color}
fill="none"
strokeDasharray={`${arcLength} ${circumference}`}
strokeLinecap="round"
animatedProps={animatedStroke}
/>
</G>
<SvgText
x={size / 2}
y={size / 2 - 4}
fill="#E2E8F0"
fontSize={18}
fontFamily="Courier New"
fontWeight="700"
textAnchor="middle"
>
{displayText}
</SvgText>
<SvgText
x={size / 2}
y={size / 2 + 16}
fill="#94A3B8"
fontSize={10}
fontFamily="Courier New"
textAnchor="middle"
letterSpacing="0.6"
>
{label}
</SvgText>
</Svg>
</View>
);
};
const styles = StyleSheet.create({
wrapper: {
alignItems: 'center',
justifyContent: 'center',
},
});

View File

@@ -0,0 +1,60 @@
import { useEffect } from 'react';
import { StyleSheet, ViewStyle } from 'react-native';
import Animated, { Easing, useAnimatedStyle, useSharedValue, withRepeat, withTiming } from 'react-native-reanimated';
import Svg, { Circle } from 'react-native-svg';
import { colors } from '../theme/colors';
type LoadingSpinnerProps = {
size?: number;
color?: string;
style?: ViewStyle;
};
export const LoadingSpinner = ({ size = 36, color = colors.accent, style }: LoadingSpinnerProps) => {
const rotation = useSharedValue(0);
const strokeWidth = Math.max(4, size * 0.14);
const center = size / 2;
const radius = center - strokeWidth;
const circumference = 2 * Math.PI * radius;
useEffect(() => {
rotation.value = withRepeat(withTiming(360, { duration: 900, easing: Easing.linear }), -1);
}, [rotation]);
const animatedStyle = useAnimatedStyle(() => ({
transform: [{ rotateZ: `${rotation.value}deg` }],
}));
return (
<Animated.View style={[styles.container, { width: size, height: size }, style, animatedStyle]} pointerEvents="none">
<Svg width={size} height={size} viewBox={`0 0 ${size} ${size}`}>
<Circle
cx={center}
cy={center}
r={radius}
stroke="rgba(255,255,255,0.2)"
strokeWidth={strokeWidth}
fill="none"
/>
<Circle
cx={center}
cy={center}
r={radius}
stroke={color}
strokeWidth={strokeWidth}
fill="none"
strokeLinecap="round"
strokeDasharray={`${circumference * 0.3} ${circumference * 0.7}`}
strokeDashoffset={circumference * 0.2}
/>
</Svg>
</Animated.View>
);
};
const styles = StyleSheet.create({
container: {
alignItems: 'center',
justifyContent: 'center',
},
});

View File

@@ -0,0 +1,71 @@
import { StyleSheet } from 'react-native';
import { ThemedText } from './ThemedText';
import { colors } from '../theme/colors';
type Mode = 'CSI' | 'RSSI' | 'SIM' | 'LIVE';
const modeStyle: Record<
Mode,
{
background: string;
border: string;
color: string;
}
> = {
CSI: {
background: 'rgba(50, 184, 198, 0.25)',
border: colors.accent,
color: colors.accent,
},
RSSI: {
background: 'rgba(255, 165, 2, 0.2)',
border: colors.warn,
color: colors.warn,
},
SIM: {
background: 'rgba(255, 71, 87, 0.18)',
border: colors.simulated,
color: colors.simulated,
},
LIVE: {
background: 'rgba(46, 213, 115, 0.18)',
border: colors.connected,
color: colors.connected,
},
};
type ModeBadgeProps = {
mode: Mode;
};
export const ModeBadge = ({ mode }: ModeBadgeProps) => {
const style = modeStyle[mode];
return (
<ThemedText
preset="labelMd"
style={[
styles.badge,
{
backgroundColor: style.background,
borderColor: style.border,
color: style.color,
},
]}
>
{mode}
</ThemedText>
);
};
const styles = StyleSheet.create({
badge: {
paddingHorizontal: 10,
paddingVertical: 4,
borderRadius: 999,
borderWidth: 1,
overflow: 'hidden',
letterSpacing: 1,
textAlign: 'center',
},
});

View File

@@ -0,0 +1,147 @@
import { useEffect, useMemo, useRef } from 'react';
import { StyleProp, ViewStyle } from 'react-native';
import Animated, { interpolateColor, useAnimatedProps, useSharedValue, withTiming, type SharedValue } from 'react-native-reanimated';
import Svg, { Circle, G, Rect } from 'react-native-svg';
import { colors } from '../theme/colors';
type Point = {
x: number;
y: number;
};
type OccupancyGridProps = {
values: number[];
personPositions?: Point[];
size?: number;
style?: StyleProp<ViewStyle>;
};
const GRID_DIMENSION = 20;
const CELLS = GRID_DIMENSION * GRID_DIMENSION;
const toColor = (value: number): string => {
const clamped = Math.max(0, Math.min(1, value));
let r: number;
let g: number;
let b: number;
if (clamped < 0.5) {
const t = clamped * 2;
r = 0;
g = Math.round(255 * t);
b = Math.round(255 * (1 - t));
} else {
const t = (clamped - 0.5) * 2;
r = Math.round(255 * t);
g = Math.round(255 * (1 - t));
b = 0;
}
return `rgb(${r}, ${g}, ${b})`;
};
const AnimatedRect = Animated.createAnimatedComponent(Rect);
const normalizeValues = (values: number[]) => {
const normalized = new Array(CELLS).fill(0);
for (let i = 0; i < CELLS; i += 1) {
const value = values?.[i] ?? 0;
normalized[i] = Number.isFinite(value) ? Math.max(0, Math.min(1, value)) : 0;
}
return normalized;
};
type CellProps = {
index: number;
size: number;
progress: SharedValue<number>;
previousColors: string[];
nextColors: string[];
};
const Cell = ({ index, size, progress, previousColors, nextColors }: CellProps) => {
const col = index % GRID_DIMENSION;
const row = Math.floor(index / GRID_DIMENSION);
const cellSize = size / GRID_DIMENSION;
const x = col * cellSize;
const y = row * cellSize;
const animatedProps = useAnimatedProps(() => ({
fill: interpolateColor(
progress.value,
[0, 1],
[previousColors[index] ?? colors.surfaceAlt, nextColors[index] ?? colors.surfaceAlt],
),
}));
return (
<AnimatedRect
x={x}
y={y}
width={cellSize}
height={cellSize}
rx={1}
animatedProps={animatedProps}
/>
);
};
export const OccupancyGrid = ({
values,
personPositions = [],
size = 320,
style,
}: OccupancyGridProps) => {
const normalizedValues = useMemo(() => normalizeValues(values), [values]);
const previousColors = useRef<string[]>(normalizedValues.map(toColor));
const nextColors = useRef<string[]>(normalizedValues.map(toColor));
const progress = useSharedValue(1);
useEffect(() => {
// Capture the outgoing frame's colors before overwriting them, so the
// cross-fade interpolates from the previous grid to the new one. (Mapping
// the freshly memoized values here would make previous === next and skip
// the animation entirely.)
previousColors.current = nextColors.current;
nextColors.current = normalizedValues.map(toColor);
progress.value = 0;
progress.value = withTiming(1, { duration: 500 });
}, [normalizedValues, progress]);
const markers = useMemo(() => {
const cellSize = size / GRID_DIMENSION;
return personPositions.map(({ x, y }, idx) => {
const clampedX = Math.max(0, Math.min(GRID_DIMENSION - 1, Math.round(x)));
const clampedY = Math.max(0, Math.min(GRID_DIMENSION - 1, Math.round(y)));
const cx = (clampedX + 0.5) * cellSize;
const cy = (clampedY + 0.5) * cellSize;
const markerRadius = Math.max(3, cellSize * 0.25);
return (
<Circle
key={`person-${idx}`}
cx={cx}
cy={cy}
r={markerRadius}
fill={colors.accent}
stroke={colors.textPrimary}
strokeWidth={1}
/>
);
});
}, [personPositions, size]);
return (
<Svg width={size} height={size} style={style} viewBox={`0 0 ${size} ${size}`}>
<G>
{Array.from({ length: CELLS }).map((_, index) => (
<Cell
key={index}
index={index}
size={size}
progress={progress}
previousColors={previousColors.current}
nextColors={nextColors.current}
/>
))}
</G>
{markers}
</Svg>
);
};
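The `toColor` ramp in `OccupancyGrid` maps occupancy in [0, 1] through a blue→green→red heat scale. A standalone copy of the same math (the name `heatColor` is illustrative, extracted here only so the endpoints are checkable) behaves as:

```typescript
// Standalone copy of OccupancyGrid's toColor ramp: 0 → blue, 0.5 → green, 1 → red.
export const heatColor = (value: number): string => {
const clamped = Math.max(0, Math.min(1, value));
let r: number;
let g: number;
let b: number;
if (clamped < 0.5) {
const t = clamped * 2; // blue → green half of the ramp
r = 0;
g = Math.round(255 * t);
b = Math.round(255 * (1 - t));
} else {
const t = (clamped - 0.5) * 2; // green → red half of the ramp
r = Math.round(255 * t);
g = Math.round(255 * (1 - t));
b = 0;
}
return `rgb(${r}, ${g}, ${b})`;
};
```

Out-of-range inputs clamp to the nearest endpoint, so NaN filtering still happens upstream in `normalizeValues`.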

View File

@@ -0,0 +1,62 @@
import { useEffect } from 'react';
import { StyleSheet, View } from 'react-native';
import Animated, { useAnimatedStyle, useSharedValue, withTiming } from 'react-native-reanimated';
import { ThemedText } from './ThemedText';
import { colors } from '../theme/colors';
type SignalBarProps = {
value: number;
label: string;
color?: string;
};
const clamp01 = (value: number) => Math.max(0, Math.min(1, value));
export const SignalBar = ({ value, label, color = colors.accent }: SignalBarProps) => {
const progress = useSharedValue(clamp01(value));
useEffect(() => {
progress.value = withTiming(clamp01(value), { duration: 250 });
}, [value, progress]);
const animatedFill = useAnimatedStyle(() => ({
width: `${progress.value * 100}%`,
}));
return (
<View style={styles.container}>
<ThemedText preset="bodySm" style={styles.label}>
{label}
</ThemedText>
<View style={styles.track}>
<Animated.View style={[styles.fill, { backgroundColor: color }, animatedFill]} />
</View>
<ThemedText preset="bodySm" style={styles.percent}>
{Math.round(clamp01(value) * 100)}%
</ThemedText>
</View>
);
};
const styles = StyleSheet.create({
container: {
gap: 6,
},
label: {
marginBottom: 4,
},
track: {
height: 8,
borderRadius: 4,
backgroundColor: colors.surfaceAlt,
overflow: 'hidden',
},
fill: {
height: '100%',
borderRadius: 4,
},
percent: {
textAlign: 'right',
color: colors.textSecondary,
},
});

View File

@@ -0,0 +1,64 @@
import { useMemo } from 'react';
import { View, ViewStyle } from 'react-native';
import Svg, { Polyline } from 'react-native-svg';
import { colors } from '../theme/colors';
type SparklineChartProps = {
data: number[];
color?: string;
height?: number;
style?: ViewStyle;
};
const DEFAULT_HEIGHT = 72;
const VIEWBOX_WIDTH = 100;
export const SparklineChart = ({
data,
color = colors.accent,
height = DEFAULT_HEIGHT,
style,
}: SparklineChartProps) => {
const points = useMemo(() => {
const values = (data.length > 0 ? data : [0]).map((value) => (Number.isFinite(value) ? value : 0));
const yMin = Math.min(...values);
const yMax = Math.max(...values);
// Pad the y-range so a flat series renders mid-chart instead of on an edge.
const yPadding = yMax - yMin === 0 ? 1 : (yMax - yMin) * 0.2;
const lo = yMin - yPadding;
const hi = yMax + yPadding;
const xStep = VIEWBOX_WIDTH / Math.max(values.length - 1, 1);
return values
.map((value, index) => `${index * xStep},${height - ((value - lo) / (hi - lo)) * height}`)
.join(' ');
}, [data, height]);
return (
<View style={style}>
<Svg width="100%" height={height} viewBox={`0 0 ${VIEWBOX_WIDTH} ${height}`} preserveAspectRatio="none">
<Polyline points={points} fill="none" stroke={color} strokeWidth={1.5} />
</Svg>
</View>
);
};

View File

@@ -0,0 +1,83 @@
import { useEffect } from 'react';
import { StyleSheet, ViewStyle } from 'react-native';
import Animated, {
cancelAnimation,
Easing,
useAnimatedStyle,
useSharedValue,
withRepeat,
withSequence,
withTiming,
} from 'react-native-reanimated';
import { colors } from '../theme/colors';
type StatusType = 'connected' | 'simulated' | 'disconnected' | 'connecting';
type StatusDotProps = {
status: StatusType;
size?: number;
style?: ViewStyle;
};
const resolveColor = (status: StatusType): string => {
if (status === 'connecting') return colors.warn;
return colors[status];
};
export const StatusDot = ({ status, size = 10, style }: StatusDotProps) => {
const scale = useSharedValue(1);
const opacity = useSharedValue(1);
const isConnecting = status === 'connecting';
useEffect(() => {
if (isConnecting) {
scale.value = withRepeat(
withSequence(
withTiming(1.35, { duration: 800, easing: Easing.out(Easing.cubic) }),
withTiming(1, { duration: 800, easing: Easing.in(Easing.cubic) }),
),
-1,
);
opacity.value = withRepeat(
withSequence(
withTiming(0.4, { duration: 800, easing: Easing.out(Easing.quad) }),
withTiming(1, { duration: 800, easing: Easing.in(Easing.quad) }),
),
-1,
);
return;
}
cancelAnimation(scale);
cancelAnimation(opacity);
scale.value = 1;
opacity.value = 1;
}, [isConnecting, opacity, scale]);
const animatedStyle = useAnimatedStyle(() => ({
transform: [{ scale: scale.value }],
opacity: opacity.value,
}));
return (
<Animated.View
style={[
styles.dot,
{
width: size,
height: size,
backgroundColor: resolveColor(status),
borderRadius: size / 2,
},
animatedStyle,
style,
]}
/>
);
};
const styles = StyleSheet.create({
dot: {
borderRadius: 999,
},
});

View File

@@ -0,0 +1,28 @@
import { ComponentPropsWithoutRef } from 'react';
import { StyleProp, Text, TextStyle } from 'react-native';
import { useTheme } from '../hooks/useTheme';
import { colors } from '../theme/colors';
import { typography } from '../theme/typography';
type TextPreset = keyof typeof typography;
type ColorKey = keyof typeof colors;
type ThemedTextProps = Omit<ComponentPropsWithoutRef<typeof Text>, 'style'> & {
preset?: TextPreset;
color?: ColorKey;
style?: StyleProp<TextStyle>;
};
export const ThemedText = ({
preset = 'bodyMd',
color = 'textPrimary',
style,
...props
}: ThemedTextProps) => {
const { colors, typography } = useTheme();
const presetStyle = (typography as Record<TextPreset, TextStyle>)[preset];
const colorStyle = { color: colors[color] };
return <Text {...props} style={[presetStyle, colorStyle, style]} />;
};

View File

@@ -0,0 +1,24 @@
import { PropsWithChildren, forwardRef } from 'react';
import { View, ViewProps } from 'react-native';
import { useTheme } from '../hooks/useTheme';
type ThemedViewProps = PropsWithChildren<ViewProps>;
export const ThemedView = forwardRef<View, ThemedViewProps>(({ children, style, ...props }, ref) => {
const { colors } = useTheme();
return (
<View
ref={ref}
{...props}
style={[
{
backgroundColor: colors.bg,
},
style,
]}
>
{children}
</View>
);
});

View File

@@ -0,0 +1,14 @@
export const API_ROOT = '/api/v1';
export const API_POSE_STATUS_PATH = '/api/v1/pose/status';
export const API_POSE_FRAMES_PATH = '/api/v1/pose/frames';
export const API_POSE_ZONES_PATH = '/api/v1/pose/zones';
export const API_POSE_CURRENT_PATH = '/api/v1/pose/current';
export const API_STREAM_STATUS_PATH = '/api/v1/stream/status';
export const API_STREAM_POSE_PATH = '/api/v1/stream/pose';
export const API_MAT_EVENTS_PATH = '/api/v1/mat/events';
export const API_HEALTH_PATH = '/health';
export const API_HEALTH_SYSTEM_PATH = '/health/health';
export const API_HEALTH_READY_PATH = '/health/ready';
export const API_HEALTH_LIVE_PATH = '/health/live';

View File

@@ -0,0 +1,20 @@
export const SIMULATION_TICK_INTERVAL_MS = 500;
export const SIMULATION_GRID_SIZE = 20;
export const RSSI_BASE_DBM = -45;
export const RSSI_AMPLITUDE_DBM = 3;
export const VARIANCE_BASE = 1.5;
export const VARIANCE_AMPLITUDE = 1.0;
export const MOTION_BAND_MIN = 0.05;
export const MOTION_BAND_AMPLITUDE = 0.15;
export const BREATHING_BAND_MIN = 0.03;
export const BREATHING_BAND_AMPLITUDE = 0.08;
export const SIGNAL_FIELD_PRESENCE_LEVEL = 0.8;
export const BREATHING_BPM_MIN = 12;
export const BREATHING_BPM_MAX = 24;
export const HEART_BPM_MIN = 58;
export const HEART_BPM_MAX = 96;
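These constants parameterize sinusoidal fake telemetry. As a sketch (the function name and the 0.1 Hz drift rate are illustrative assumptions; the actual `simulation.service` is not shown in this hunk), an RSSI sample per tick could be derived as:

```typescript
const SIMULATION_TICK_INTERVAL_MS = 500;
const RSSI_BASE_DBM = -45;
const RSSI_AMPLITUDE_DBM = 3;

// One smooth RSSI sample per 500 ms tick: base level plus a slow sine drift.
export const simulatedRssiAt = (tick: number): number => {
const t = (tick * SIMULATION_TICK_INTERVAL_MS) / 1000; // seconds elapsed
return RSSI_BASE_DBM + RSSI_AMPLITUDE_DBM * Math.sin(2 * Math.PI * 0.1 * t); // 0.1 Hz drift
};
```

The same base-plus-amplitude pattern would apply to the variance, motion, and breathing bands above.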

View File

@@ -0,0 +1,3 @@
export const WS_PATH = '/api/v1/stream/pose';
export const RECONNECT_DELAYS = [1000, 2000, 4000, 8000, 16000];
export const MAX_RECONNECT_ATTEMPTS = 10;
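A sketch of how `ws.service` might consume this schedule — clamp attempts past the table to the last delay and give up after `MAX_RECONNECT_ATTEMPTS` (the helper name and the give-up convention are assumptions, not part of the diff):

```typescript
const RECONNECT_DELAYS = [1000, 2000, 4000, 8000, 16000];
const MAX_RECONNECT_ATTEMPTS = 10;

// Delay (ms) before reconnect attempt `attempt` (0-based), or null once the
// attempt budget is exhausted. Attempts past the table reuse the last delay.
export const reconnectDelayMs = (attempt: number): number | null => {
if (attempt >= MAX_RECONNECT_ATTEMPTS) {
return null;
}
return RECONNECT_DELAYS[Math.min(attempt, RECONNECT_DELAYS.length - 1)];
};
```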

View File

@@ -0,0 +1,31 @@
import { useEffect } from 'react';
import { wsService } from '@/services/ws.service';
import { usePoseStore } from '@/stores/poseStore';
import { useSettingsStore } from '@/stores/settingsStore';
export interface UsePoseStreamResult {
connectionStatus: ReturnType<typeof usePoseStore.getState>['connectionStatus'];
lastFrame: ReturnType<typeof usePoseStore.getState>['lastFrame'];
isSimulated: boolean;
}
export function usePoseStream(): UsePoseStreamResult {
const serverUrl = useSettingsStore((state) => state.serverUrl);
const connectionStatus = usePoseStore((state) => state.connectionStatus);
const lastFrame = usePoseStore((state) => state.lastFrame);
const isSimulated = usePoseStore((state) => state.isSimulated);
useEffect(() => {
const unsubscribe = wsService.subscribe((frame) => {
usePoseStore.getState().handleFrame(frame);
});
wsService.connect(serverUrl);
return () => {
unsubscribe();
wsService.disconnect();
};
}, [serverUrl]);
return { connectionStatus, lastFrame, isSimulated };
}

View File

@@ -0,0 +1,31 @@
import { useEffect, useState } from 'react';
import { rssiService, type WifiNetwork } from '@/services/rssi.service';
import { useSettingsStore } from '@/stores/settingsStore';
export function useRssiScanner(): { networks: WifiNetwork[]; isScanning: boolean } {
const enabled = useSettingsStore((state) => state.rssiScanEnabled);
const [networks, setNetworks] = useState<WifiNetwork[]>([]);
const [isScanning, setIsScanning] = useState(false);
useEffect(() => {
if (!enabled) {
rssiService.stopScanning();
setIsScanning(false);
return;
}
const unsubscribe = rssiService.subscribe((result) => {
setNetworks(result);
});
rssiService.startScanning(2000);
setIsScanning(true);
return () => {
unsubscribe();
rssiService.stopScanning();
setIsScanning(false);
};
}, [enabled]);
return { networks, isScanning };
}

View File

@@ -0,0 +1,52 @@
import { useEffect, useState } from 'react';
import { apiService } from '@/services/api.service';
interface ServerReachability {
reachable: boolean;
latencyMs: number | null;
}
const POLL_MS = 10000;
export function useServerReachability(): ServerReachability {
const [state, setState] = useState<ServerReachability>({
reachable: false,
latencyMs: null,
});
useEffect(() => {
let active = true;
const check = async () => {
const started = Date.now();
try {
await apiService.getStatus();
if (!active) {
return;
}
setState({
reachable: true,
latencyMs: Date.now() - started,
});
} catch {
if (!active) {
return;
}
setState({
reachable: false,
latencyMs: null,
});
}
};
void check();
const timer = setInterval(check, POLL_MS);
return () => {
active = false;
clearInterval(timer);
};
}, []);
return state;
}

View File

@@ -0,0 +1,4 @@
import { useContext } from 'react';
import { ThemeContext, ThemeContextValue } from '../theme/ThemeContext';
export const useTheme = (): ThemeContextValue => useContext(ThemeContext);

View File

@@ -0,0 +1,162 @@
import React, { Suspense, useEffect, useState } from 'react';
import { ActivityIndicator } from 'react-native';
import { createBottomTabNavigator } from '@react-navigation/bottom-tabs';
import { Ionicons } from '@expo/vector-icons';
import { ThemedText } from '../components/ThemedText';
import { ThemedView } from '../components/ThemedView';
import { colors } from '../theme/colors';
import { MainTabsParamList } from './types';
const createPlaceholder = (label: string) => {
const Placeholder = () => (
<ThemedView style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
<ThemedText preset="bodyLg">{label} screen not implemented yet</ThemedText>
<ThemedText preset="bodySm" color="textSecondary">
Placeholder shell
</ThemedText>
</ThemedView>
);
const LazyPlaceholder = React.lazy(async () => ({ default: Placeholder }));
const Wrapped = () => (
<Suspense
fallback={
<ThemedView style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
<ActivityIndicator color={colors.accent} />
<ThemedText preset="bodySm" color="textSecondary" style={{ marginTop: 8 }}>
Loading {label}
</ThemedText>
</ThemedView>
}
>
<LazyPlaceholder />
</Suspense>
);
return Wrapped;
};
const loadScreen = (loader: () => Promise<{ default?: React.ComponentType }>, label: string) => {
const fallback = createPlaceholder(label);
// Metro resolves import() calls statically, so each call site must pass a
// loader with a literal module path; import(path) with a variable never bundles.
return React.lazy(async () => {
try {
const module = await loader();
if (module?.default) {
return module as { default: React.ComponentType };
}
} catch {
// keep fallback for shell-only screens
}
return { default: fallback };
});
};
const LiveScreen = loadScreen(() => import('../screens/LiveScreen'), 'Live');
const VitalsScreen = loadScreen(() => import('../screens/VitalsScreen'), 'Vitals');
const ZonesScreen = loadScreen(() => import('../screens/ZonesScreen'), 'Zones');
const MATScreen = loadScreen(() => import('../screens/MATScreen'), 'MAT');
const SettingsScreen = loadScreen(() => import('../screens/SettingsScreen'), 'Settings');
const toIconName = (routeName: keyof MainTabsParamList) => {
switch (routeName) {
case 'Live':
return 'wifi';
case 'Vitals':
return 'heart';
case 'Zones':
return 'grid';
case 'MAT':
return 'shield-checkmark';
case 'Settings':
return 'settings';
default:
return 'ellipse';
}
};
const getMatAlertCount = async (): Promise<number> => {
try {
const mod = (await import('../stores/matStore')) as Record<string, unknown>;
const candidates = [mod.useMatStore, mod.useStore].filter((candidate) => {
return (
!!candidate &&
typeof candidate === 'function' &&
typeof (candidate as { getState?: () => unknown }).getState === 'function'
);
}) as Array<{ getState: () => { alerts?: unknown[] } }>;
for (const store of candidates) {
const alerts = store.getState().alerts;
if (Array.isArray(alerts)) {
return alerts.length;
}
}
} catch {
return 0;
}
return 0;
};
const screens: ReadonlyArray<{ name: keyof MainTabsParamList; component: React.ComponentType }> = [
{ name: 'Live', component: LiveScreen },
{ name: 'Vitals', component: VitalsScreen },
{ name: 'Zones', component: ZonesScreen },
{ name: 'MAT', component: MATScreen },
{ name: 'Settings', component: SettingsScreen },
];
const Tab = createBottomTabNavigator<MainTabsParamList>();
const Suspended = ({ component: Component }: { component: React.ComponentType }) => (
<Suspense fallback={<ActivityIndicator color={colors.accent} />}>
<Component />
</Suspense>
);
export const MainTabs = () => {
const [matAlertCount, setMatAlertCount] = useState(0);
useEffect(() => {
const readCount = async () => {
const count = await getMatAlertCount();
setMatAlertCount(count);
};
void readCount();
const timer = setInterval(readCount, 2000);
return () => clearInterval(timer);
}, []);
return (
<Tab.Navigator
screenOptions={({ route }) => ({
headerShown: false,
tabBarActiveTintColor: colors.accent,
tabBarInactiveTintColor: colors.textSecondary,
tabBarStyle: {
backgroundColor: '#0D1117',
borderTopColor: colors.border,
borderTopWidth: 1,
},
tabBarIcon: ({ color, size }) => <Ionicons name={toIconName(route.name)} size={size} color={color} />,
tabBarLabelStyle: {
fontFamily: 'Courier New',
textTransform: 'uppercase',
fontSize: 10,
},
tabBarLabel: ({ children, color }) => <ThemedText style={{ color }}>{children}</ThemedText>,
})}
>
{screens.map(({ name, component }) => (
<Tab.Screen
key={name}
name={name}
options={{
tabBarBadge: name === 'MAT' && matAlertCount > 0 ? matAlertCount : undefined,
}}
>
{/* children render callback avoids the remount-on-every-render that an inline `component` arrow causes */}
{() => <Suspended component={component} />}
</Tab.Screen>
))}
</Tab.Navigator>
);
};

View File

@@ -0,0 +1,5 @@
import { MainTabs } from './MainTabs';
export const RootNavigator = () => {
return <MainTabs />;
};

View File

@@ -0,0 +1,11 @@
export type RootStackParamList = {
MainTabs: undefined;
};
export type MainTabsParamList = {
Live: undefined;
Vitals: undefined;
Zones: undefined;
MAT: undefined;
Settings: undefined;
};

View File

@@ -0,0 +1,41 @@
import { LayoutChangeEvent, StyleSheet } from 'react-native';
import type { RefObject } from 'react';
import { WebView, type WebViewMessageEvent } from 'react-native-webview';
import GAUSSIAN_SPLATS_HTML from '@/assets/webview/gaussian-splats.html';
type GaussianSplatWebViewProps = {
onMessage: (event: WebViewMessageEvent) => void;
onError: () => void;
webViewRef: RefObject<WebView | null>;
onLayout?: (event: LayoutChangeEvent) => void;
};
export const GaussianSplatWebView = ({
onMessage,
onError,
webViewRef,
onLayout,
}: GaussianSplatWebViewProps) => {
const html = typeof GAUSSIAN_SPLATS_HTML === 'string' ? GAUSSIAN_SPLATS_HTML : '';
return (
<WebView
ref={webViewRef}
source={{ html }}
originWhitelist={['*']}
allowFileAccess={false}
javaScriptEnabled
onMessage={onMessage}
onError={onError}
onLayout={onLayout}
style={styles.webView}
/>
);
};
const styles = StyleSheet.create({
webView: {
flex: 1,
backgroundColor: '#0A0E1A',
},
});

View File

@@ -0,0 +1,164 @@
import { Pressable, StyleSheet, View } from 'react-native';
import { memo, useCallback, useState } from 'react';
import Animated, { useAnimatedStyle, useSharedValue, withTiming } from 'react-native-reanimated';
import { StatusDot } from '@/components/StatusDot';
import { ModeBadge } from '@/components/ModeBadge';
import { ThemedText } from '@/components/ThemedText';
import { formatConfidence, formatRssi } from '@/utils/formatters';
import { colors, spacing } from '@/theme';
import type { ConnectionStatus } from '@/types/sensing';
type LiveMode = 'LIVE' | 'SIM' | 'RSSI';
type LiveHUDProps = {
rssi?: number;
connectionStatus: ConnectionStatus;
fps: number;
confidence: number;
personCount: number;
mode: LiveMode;
};
const statusTextMap: Record<ConnectionStatus, string> = {
connected: 'Connected',
simulated: 'Simulated',
connecting: 'Connecting',
disconnected: 'Disconnected',
};
const statusDotStatusMap: Record<ConnectionStatus, 'connected' | 'simulated' | 'disconnected' | 'connecting'> = {
connected: 'connected',
simulated: 'simulated',
connecting: 'connecting',
disconnected: 'disconnected',
};
export const LiveHUD = memo(
({ rssi, connectionStatus, fps, confidence, personCount, mode }: LiveHUDProps) => {
const [panelVisible, setPanelVisible] = useState(true);
const panelAlpha = useSharedValue(1);
const togglePanel = useCallback(() => {
const next = !panelVisible;
setPanelVisible(next);
panelAlpha.value = withTiming(next ? 1 : 0, { duration: 220 });
}, [panelAlpha, panelVisible]);
const animatedPanelStyle = useAnimatedStyle(() => ({
opacity: panelAlpha.value,
}));
const statusText = statusTextMap[connectionStatus];
return (
<Pressable style={StyleSheet.absoluteFill} onPress={togglePanel}>
<Animated.View pointerEvents="none" style={[StyleSheet.absoluteFill, animatedPanelStyle]}>
{/* App title */}
<View style={styles.topLeft}>
<ThemedText preset="labelLg" style={styles.appTitle}>
WiFi-DensePose
</ThemedText>
</View>
{/* Status + FPS */}
<View style={styles.topRight}>
<View style={styles.row}>
<StatusDot status={statusDotStatusMap[connectionStatus]} size={10} />
<ThemedText preset="labelMd" style={styles.statusText}>
{statusText}
</ThemedText>
</View>
{fps > 0 && (
<View style={styles.row}>
<ThemedText preset="labelMd">{fps} FPS</ThemedText>
</View>
)}
</View>
{/* Bottom panel */}
<View style={styles.bottomPanel}>
<View style={styles.bottomCell}>
<ThemedText preset="bodySm">RSSI</ThemedText>
<ThemedText preset="displayMd" style={styles.bigValue}>
{formatRssi(rssi)}
</ThemedText>
</View>
<View style={styles.bottomCell}>
<ModeBadge mode={mode} />
</View>
<View style={styles.bottomCellRight}>
<ThemedText preset="bodySm">Confidence</ThemedText>
<ThemedText preset="bodyMd" style={styles.metaText}>
{formatConfidence(confidence)}
</ThemedText>
<ThemedText preset="bodySm">People: {personCount}</ThemedText>
</View>
</View>
</Animated.View>
</Pressable>
);
},
);
const styles = StyleSheet.create({
topLeft: {
position: 'absolute',
top: spacing.md,
left: spacing.md,
},
appTitle: {
color: colors.textPrimary,
},
topRight: {
position: 'absolute',
top: spacing.md,
right: spacing.md,
alignItems: 'flex-end',
gap: 4,
},
row: {
flexDirection: 'row',
alignItems: 'center',
gap: spacing.sm,
},
statusText: {
color: colors.textPrimary,
},
bottomPanel: {
position: 'absolute',
left: spacing.sm,
right: spacing.sm,
bottom: spacing.sm,
minHeight: 72,
borderRadius: 12,
backgroundColor: 'rgba(10,14,26,0.72)',
borderWidth: 1,
borderColor: 'rgba(50,184,198,0.35)',
paddingHorizontal: spacing.md,
paddingVertical: spacing.sm,
flexDirection: 'row',
justifyContent: 'space-between',
alignItems: 'center',
},
bottomCell: {
flex: 1,
alignItems: 'center',
},
bottomCellRight: {
flex: 1,
alignItems: 'flex-end',
},
bigValue: {
color: colors.accent,
marginTop: 2,
marginBottom: 2,
},
metaText: {
color: colors.textPrimary,
marginBottom: 4,
},
});
LiveHUD.displayName = 'LiveHUD';

View File

@@ -0,0 +1,215 @@
import { useCallback, useEffect, useRef, useState } from 'react';
import { Button, LayoutChangeEvent, StyleSheet, View } from 'react-native';
import type { WebView } from 'react-native-webview';
import type { WebViewMessageEvent } from 'react-native-webview';
import { ErrorBoundary } from '@/components/ErrorBoundary';
import { LoadingSpinner } from '@/components/LoadingSpinner';
import { ThemedText } from '@/components/ThemedText';
import { ThemedView } from '@/components/ThemedView';
import { usePoseStream } from '@/hooks/usePoseStream';
import { colors, spacing } from '@/theme';
import type { ConnectionStatus, SensingFrame } from '@/types/sensing';
import { useGaussianBridge } from './useGaussianBridge';
import { GaussianSplatWebView } from './GaussianSplatWebView';
import { LiveHUD } from './LiveHUD';
type LiveMode = 'LIVE' | 'SIM' | 'RSSI';
const getMode = (
status: ConnectionStatus,
isSimulated: boolean,
frame: SensingFrame | null,
): LiveMode => {
if (isSimulated || frame?.source === 'simulated') {
return 'SIM';
}
if (status === 'connected') {
return 'LIVE';
}
return 'RSSI';
};
const dispatchWebViewMessage = (webViewRef: { current: WebView | null }, message: unknown) => {
const webView = webViewRef.current;
if (!webView) {
return;
}
const payload = JSON.stringify(message);
webView.injectJavaScript(
`window.dispatchEvent(new MessageEvent('message', { data: ${JSON.stringify(payload)} })); true;`,
);
};
export const LiveScreen = () => {
const webViewRef = useRef<WebView | null>(null);
const { lastFrame, connectionStatus, isSimulated } = usePoseStream();
const bridge = useGaussianBridge(webViewRef);
const [webError, setWebError] = useState<string | null>(null);
const [viewerKey, setViewerKey] = useState(0);
const sendTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
const pendingFrameRef = useRef<SensingFrame | null>(null);
const lastSentAtRef = useRef(0);
const clearSendTimeout = useCallback(() => {
if (!sendTimeoutRef.current) {
return;
}
clearTimeout(sendTimeoutRef.current);
sendTimeoutRef.current = null;
}, []);
useEffect(() => {
if (!lastFrame) {
return;
}
pendingFrameRef.current = lastFrame;
const now = Date.now();
const flush = () => {
if (!bridge.isReady || !pendingFrameRef.current) {
return;
}
bridge.sendFrame(pendingFrameRef.current);
lastSentAtRef.current = Date.now();
pendingFrameRef.current = null;
};
const waitMs = Math.max(0, 500 - (now - lastSentAtRef.current));
if (waitMs <= 0) {
flush();
return;
}
clearSendTimeout();
sendTimeoutRef.current = setTimeout(() => {
sendTimeoutRef.current = null;
flush();
}, waitMs);
return () => {
clearSendTimeout();
};
}, [bridge.isReady, lastFrame, bridge.sendFrame, clearSendTimeout]);
useEffect(() => {
return () => {
dispatchWebViewMessage(webViewRef, { type: 'DISPOSE' });
clearSendTimeout();
pendingFrameRef.current = null;
};
}, [clearSendTimeout]);
const onMessage = useCallback(
(event: WebViewMessageEvent) => {
bridge.onMessage(event);
},
[bridge],
);
const onLayout = useCallback((event: LayoutChangeEvent) => {
const { width, height } = event.nativeEvent.layout;
if (width <= 0 || height <= 0 || Number.isNaN(width) || Number.isNaN(height)) {
return;
}
dispatchWebViewMessage(webViewRef, {
type: 'RESIZE',
payload: {
width: Math.max(1, Math.floor(width)),
height: Math.max(1, Math.floor(height)),
},
});
}, []);
const handleWebError = useCallback(() => {
setWebError('Live renderer failed to initialize');
}, []);
const handleRetry = useCallback(() => {
setWebError(null);
bridge.reset();
setViewerKey((value) => value + 1);
}, [bridge]);
const rssi = lastFrame?.features?.mean_rssi;
const personCount = lastFrame?.classification?.presence ? 1 : 0;
const mode = getMode(connectionStatus, isSimulated, lastFrame);
if (webError || bridge.error) {
return (
<ThemedView style={styles.fallbackWrap}>
<ThemedText preset="bodyLg">Live visualization failed</ThemedText>
<ThemedText preset="bodySm" color="textSecondary" style={styles.errorText}>
{webError ?? bridge.error}
</ThemedText>
<Button title="Retry" onPress={handleRetry} />
</ThemedView>
);
}
return (
<ErrorBoundary>
<View style={styles.container}>
<GaussianSplatWebView
key={viewerKey}
webViewRef={webViewRef}
onMessage={onMessage}
onError={handleWebError}
onLayout={onLayout}
/>
<LiveHUD
connectionStatus={connectionStatus}
fps={bridge.fps}
rssi={rssi}
confidence={lastFrame?.classification?.confidence ?? 0}
personCount={personCount}
mode={mode}
/>
{!bridge.isReady && (
<View style={styles.loadingWrap}>
<LoadingSpinner />
<ThemedText preset="bodyMd" style={styles.loadingText}>
Loading live renderer
</ThemedText>
</View>
)}
</View>
</ErrorBoundary>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: colors.bg,
},
loadingWrap: {
...StyleSheet.absoluteFillObject,
backgroundColor: colors.bg,
alignItems: 'center',
justifyContent: 'center',
gap: spacing.md,
},
loadingText: {
color: colors.textSecondary,
},
fallbackWrap: {
flex: 1,
alignItems: 'center',
justifyContent: 'center',
gap: spacing.md,
padding: spacing.lg,
},
errorText: {
textAlign: 'center',
},
});


@@ -0,0 +1,97 @@
import { useCallback, useState } from 'react';
import type { RefObject } from 'react';
import type { WebViewMessageEvent } from 'react-native-webview';
import { WebView } from 'react-native-webview';
import type { SensingFrame } from '@/types/sensing';
export type GaussianBridgeMessageType = 'READY' | 'FPS_TICK' | 'ERROR';
type BridgeMessage = {
type: GaussianBridgeMessageType;
payload?: {
fps?: number;
message?: string;
};
};
const toJsonScript = (message: unknown): string => {
const serialized = JSON.stringify(message);
return `window.dispatchEvent(new MessageEvent('message', { data: ${JSON.stringify(serialized)} })); true;`;
};
export const useGaussianBridge = (webViewRef: RefObject<WebView | null>) => {
const [isReady, setIsReady] = useState(false);
const [fps, setFps] = useState(0);
const [error, setError] = useState<string | null>(null);
const send = useCallback((message: unknown) => {
const webView = webViewRef.current;
if (!webView) {
return;
}
webView.injectJavaScript(toJsonScript(message));
}, [webViewRef]);
const sendFrame = useCallback(
(frame: SensingFrame) => {
send({
type: 'FRAME_UPDATE',
payload: frame,
});
},
[send],
);
const onMessage = useCallback((event: WebViewMessageEvent) => {
let parsed: BridgeMessage | null = null;
const raw = event.nativeEvent.data;
if (typeof raw === 'string') {
try {
parsed = JSON.parse(raw) as BridgeMessage;
} catch {
setError('Invalid bridge message format');
return;
}
} else if (typeof raw === 'object' && raw !== null) {
parsed = raw as BridgeMessage;
}
if (!parsed) {
return;
}
if (parsed.type === 'READY') {
setIsReady(true);
setError(null);
return;
}
if (parsed.type === 'FPS_TICK') {
const fpsValue = parsed.payload?.fps;
if (typeof fpsValue === 'number' && Number.isFinite(fpsValue)) {
setFps(Math.max(0, Math.floor(fpsValue)));
}
return;
}
if (parsed.type === 'ERROR') {
setError(parsed.payload?.message ?? 'Unknown bridge error');
setIsReady(false);
}
}, []);
  // Memoize reset so consumers can safely list it in hook dependency arrays;
  // an inline arrow here would get a new identity on every render.
  const reset = useCallback(() => {
    setIsReady(false);
    setFps(0);
    setError(null);
  }, []);
  return {
    sendFrame,
    onMessage,
    isReady,
    fps,
    error,
    reset,
  };
};
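The hook above is the native half of a two-way protocol; the script inside the WebView has to decode whatever `injectJavaScript` dispatches. A minimal sketch of that viewer-side decoder follows — the file it would live in, the `ViewerInbound` type, and `parseInbound` are assumptions for illustration, not part of this diff:

```typescript
// Hypothetical viewer-side decoder for the bridge protocol (names assumed).
type ViewerInbound =
  | { type: 'FRAME_UPDATE'; payload: unknown }
  | { type: 'RESIZE'; payload: { width: number; height: number } }
  | { type: 'DISPOSE' };

// The two native-side senders encode differently: dispatchWebViewMessage
// embeds the payload as an object literal in the MessageEvent `data`, while
// toJsonScript delivers a JSON string. Accepting both shapes keeps the
// viewer working with either injection path.
export const parseInbound = (data: unknown): ViewerInbound | null => {
  if (typeof data === 'string') {
    try {
      return JSON.parse(data) as ViewerInbound;
    } catch {
      return null; // malformed string: ignore rather than throw in the page
    }
  }
  if (typeof data === 'object' && data !== null) {
    return data as ViewerInbound;
  }
  return null;
};
```

Mirroring `onMessage` on the native side (which also accepts string or object), this keeps the string/object asymmetry between the two senders from silently dropping frames.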

Some files were not shown because too many files have changed in this diff.