🧠 wifi-densepose-ruvector: AI Backbone for WiFi Human Sensing #67
wifi-densepose-ruvector is the AI intelligence layer that transforms raw, noisy WiFi radio signals into clean, structured input for neural networks that detect humans through walls. It uses attention mechanisms to learn which signals to trust, graph algorithms that automatically discover which WiFi channels are sensitive to body motion, sparse solvers that locate people using physics, and compressed representations that let the whole AI pipeline run on an $8 microcontroller.
Without RuVector, WiFi DensePose would need hand-tuned thresholds, brute-force matrix math, and 4x more memory — making real-time edge inference impossible.
The AI Pipeline
WiFi DensePose works like this: radio waves bounce off people, creating disturbances in WiFi signals. A neural network (DensePose head) converts those disturbances into body pose, vital signs, and presence data. But raw WiFi signals are incredibly noisy — RuVector is what makes them usable.
Each RuVector component replaces what would otherwise be a hand-tuned heuristic or an expensive brute-force computation with a learned, self-optimizing algorithm.
What Each AI Component Does
1. Self-Optimizing Channel Selection (ruvector-mincut)
Problem: A WiFi access point broadcasts on 56 subcarrier frequencies. Some carry useful information about body movement. Others are just noise. Which ones matter changes with the environment.
AI approach: Model the subcarriers as a graph where edge weights represent motion correlation. Apply min-cut to partition them into "sensitive" (body motion) and "insensitive" (noise) groups. The partition adapts automatically — no thresholds to tune.
Old way: Sort by variance, pick top-K. Breaks when environment changes.
RuVector way: O(n^1.5 log n) graph partition that adapts to any room.
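As a toy illustration of the graph formulation only (this is not the crate's API or algorithm — it enumerates every 2-partition by brute force, whereas ruvector-mincut uses the O(n^1.5 log n) method mentioned above), a minimal Rust sketch:

```rust
/// Weight of the cut induced by `mask`: sum of edge weights between
/// subcarriers on opposite sides of the partition. Bit i of `mask`
/// says which side subcarrier i is on.
fn cut_weight(w: &[Vec<f64>], mask: u32, n: usize) -> f64 {
    let mut total = 0.0;
    for i in 0..n {
        for j in (i + 1)..n {
            if ((mask >> i) & 1) != ((mask >> j) & 1) {
                total += w[i][j];
            }
        }
    }
    total
}

/// Brute-force global min cut over a tiny subcarrier graph.
/// Edge weights represent motion correlation between subcarriers.
fn min_cut_brute(w: &[Vec<f64>], n: usize) -> (u32, f64) {
    let mut best = (1u32, f64::INFINITY);
    // Fix subcarrier n-1 on side 0 to avoid counting each cut twice.
    for mask in 1..(1u32 << (n - 1)) {
        let c = cut_weight(w, mask, n);
        if c < best.1 {
            best = (mask, c);
        }
    }
    best
}
```

With two strongly correlated "motion" subcarriers and two weakly coupled "noise" subcarriers, the cheapest cut separates exactly those two groups, with no threshold to tune.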
2. Attention-Based Signal Cleaning (ruvector-attn-mincut)
Problem: A Doppler spectrogram (time-frequency map of movement) contains frames where someone was moving and frames of pure noise. The neural network performs poorly on noisy frames.
AI approach: Attention-guided gating learns which frames carry signal and which are noise, then suppresses the noise frames before they reach the DensePose head.
Old way: Fixed energy threshold. Misses low-amplitude breathing signals.
RuVector way: Learned attention weights that amplify subtle body signals.
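A minimal sketch of the gating idea, under the assumption that a learned "motion" query vector scores each spectrogram frame and a softmax turns scores into gates (the function names and shapes here are hypothetical, not the crate's API):

```rust
/// Numerically stable softmax over raw scores.
fn softmax(scores: &[f64]) -> Vec<f64> {
    let m = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - m).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Attention-gated frame suppression: each frame is scored by its
/// similarity to a learned motion query, and low-scoring (noise)
/// frames are scaled down before reaching the pose network.
fn gate_frames(frames: &[Vec<f64>], query: &[f64]) -> Vec<Vec<f64>> {
    let scores: Vec<f64> = frames
        .iter()
        .map(|f| f.iter().zip(query).map(|(a, b)| a * b).sum())
        .collect();
    let gates = softmax(&scores);
    // Normalize so the best-matching frame keeps full amplitude.
    let g_max = gates.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    frames
        .iter()
        .zip(&gates)
        .map(|(f, g)| f.iter().map(|x| x * (*g / g_max)).collect())
        .collect()
}
```

Because the gate is a soft weight rather than a hard threshold, low-amplitude but query-aligned frames (like breathing) survive instead of being clipped.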
3. Learned Signal Fusion (ruvector-attention)
Problem: Multiple subcarriers each give a partial view of body motion. Some are reliable, some are corrupted. How do you combine them?
AI approach: Scaled dot-product attention (the same mechanism behind transformers) weights each subcarrier by its reliability, producing a single fused body velocity profile.
Old way: Simple averaging. A single bad channel corrupts everything.
RuVector way: Learned weighting that automatically downweights corrupted channels.
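The mechanism can be sketched in a few lines of Rust. This is a generic scaled dot-product attention over subcarriers, not the crate's actual signature; the key/query vectors standing in for learned reliability features are assumptions:

```rust
/// Numerically stable softmax over raw scores.
fn softmax(scores: &[f64]) -> Vec<f64> {
    let m = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - m).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Scaled dot-product attention fusion: each subcarrier's velocity
/// profile (a value row) is weighted by query-key similarity, then
/// summed into a single fused profile.
fn fuse_subcarriers(query: &[f64], keys: &[Vec<f64>], values: &[Vec<f64>]) -> Vec<f64> {
    let scale = (query.len() as f64).sqrt();
    let scores: Vec<f64> = keys
        .iter()
        .map(|k| k.iter().zip(query).map(|(a, b)| a * b).sum::<f64>() / scale)
        .collect();
    let w = softmax(&scores);
    (0..values[0].len())
        .map(|t| values.iter().zip(&w).map(|(v, wi)| v[t] * wi).sum())
        .collect()
}
```

A subcarrier whose key disagrees with the query gets an exponentially small weight, so one corrupted channel cannot drag the fused profile the way plain averaging would.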
4. Physics-Informed Localization (ruvector-solver)
Problem: You know someone is in the room. But where exactly? With channel state information (CSI) you can solve this using Fresnel zone physics — but the equations are nonlinear.
AI approach: Sparse regularized least-squares solver linearizes the Fresnel zone equations, estimating the TX-body and body-RX distances from multi-subcarrier amplitude data.
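A minimal sketch of the regularized least-squares core, assuming the linearized Fresnel system has been reduced to A x ≈ b with two unknowns x = (TX-body distance, body-RX distance). The ridge (Tikhonov) term and the 2×2 Cramer's-rule solve are illustrative choices, not the crate's documented internals:

```rust
/// Ridge-regularized least squares for a 2-unknown linear system
/// A x ≈ b: solves (AᵀA + λI) x = Aᵀb.
fn ridge_solve_2(a: &[[f64; 2]], b: &[f64], lambda: f64) -> [f64; 2] {
    // Accumulate the normal equations M = AᵀA + λI, v = Aᵀb.
    let mut m = [[lambda, 0.0], [0.0, lambda]];
    let mut v = [0.0, 0.0];
    for (row, &bi) in a.iter().zip(b) {
        for i in 0..2 {
            v[i] += row[i] * bi;
            for j in 0..2 {
                m[i][j] += row[i] * row[j];
            }
        }
    }
    // Solve the 2x2 system by Cramer's rule.
    let det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    [
        (v[0] * m[1][1] - v[1] * m[0][1]) / det,
        (m[0][0] * v[1] - m[1][0] * v[0]) / det,
    ]
}
```

Each row of A would come from one subcarrier's amplitude measurement; the regularizer λ keeps the estimate stable when several subcarriers give nearly redundant constraints.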
5. Survivor Triangulation (ruvector-solver)
Problem: In a disaster scenario, multiple access points detect a breathing signature. Where is the survivor? TDoA (time-difference-of-arrival) equations are hyperbolic and expensive to solve.
AI approach: Neumann series expansion linearizes the hyperbolic equations into a 2x2 system solvable in O(1) — fast enough to update in real-time as new data arrives.
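The Neumann-series trick can be shown on a generic 2×2 system M x = b: when the iteration matrix I − M has spectral radius below 1, x = Σₖ (I − M)ᵏ b, and truncating the series gives a solve with no division or factorization. This sketch is a standalone illustration of that identity, not the crate's implementation:

```rust
/// Truncated Neumann-series solve for a 2x2 system M x = b:
/// x ≈ Σ_{k=0}^{iters} (I − M)^k b, valid when ‖I − M‖ < 1.
fn neumann_solve_2(m: [[f64; 2]; 2], b: [f64; 2], iters: usize) -> [f64; 2] {
    // Iteration matrix R = I − M.
    let r = [
        [1.0 - m[0][0], -m[0][1]],
        [-m[1][0], 1.0 - m[1][1]],
    ];
    let mut x = b;    // k = 0 term of the series
    let mut term = b; // current R^k b
    for _ in 0..iters {
        term = [
            r[0][0] * term[0] + r[0][1] * term[1],
            r[1][0] * term[0] + r[1][1] * term[1],
        ];
        x[0] += term[0];
        x[1] += term[1];
    }
    x
}
```

Because each refinement is just two multiply-adds, the estimate can be updated in place as each new access-point measurement arrives.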
6. Edge-AI Memory Compression (ruvector-temporal-tensor)
Problem: An ESP32 has 520 KB of RAM. Storing 60 seconds of breathing data for 56 subcarriers at 100 Hz requires 13.4 MB. That is 25x more than available.
AI approach: Tiered quantized streaming buffers — recent data stays at 8-bit precision (hot tier), older data compresses to 5-7 bits (warm), oldest to 3 bits (cold). The AI pipeline barely notices the quality loss, but memory drops 75%.
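The tier mechanics reduce to uniform quantization at different bit widths. A minimal sketch (assuming samples normalized to [0, 1]; the tier widths 8/5/3 follow the description above, everything else is illustrative):

```rust
/// Quantize a [0, 1] sample into an n-bit integer code.
fn quantize(x: f64, bits: u32) -> u32 {
    let levels = (1u32 << bits) - 1;
    (x.clamp(0.0, 1.0) * levels as f64).round() as u32
}

/// Map an n-bit code back into [0, 1]. The reconstruction error
/// grows as the bit width shrinks from hot (8) to cold (3) tiers.
fn dequantize(q: u32, bits: u32) -> f64 {
    let levels = (1u32 << bits) - 1;
    q as f64 / levels as f64
}
```

A sample demoted from the hot tier to the warm or cold tier is simply re-quantized at the narrower width; the worst-case error at b bits is 1/(2·(2^b − 1)) of full scale, which stays small enough for slow signals like breathing.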
The 5 RuVector Crates
- ruvector-mincut
- ruvector-attn-mincut
- ruvector-attention
- ruvector-solver
- ruvector-temporal-tensor

All 5 are published on crates.io at v2.0.4. Zero unsafe code. No Python or C dependencies.
What This Enables
Today
Advanced
Future directions
Getting Started
Use in your Rust project
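Assuming the crate names and version (v2.0.4) listed above, a Cargo.toml dependency block would look like:

```toml
[dependencies]
ruvector-mincut = "2.0.4"
ruvector-attn-mincut = "2.0.4"
ruvector-attention = "2.0.4"
ruvector-solver = "2.0.4"
ruvector-temporal-tensor = "2.0.4"
```

Pull in only the crates your pipeline stage needs; they are independent.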
Try the full WiFi DensePose system
Build from source
Verify the math (no hardware needed)
Links