Phase 1: HardwareNormalizer (hardware_norm.rs, 399 lines, 14 tests)
- Catmull-Rom cubic interpolation: any subcarrier count → canonical 56
- Z-score normalization, phase unwrap + linear detrend
- Hardware detection: ESP32-S3, Intel 5300, Atheros, Generic
Phase 2: DomainFactorizer + GRL (domain.rs, 392 lines, 20 tests)
- PoseEncoder: Linear→LayerNorm→GELU→Linear (environment-invariant)
- EnvEncoder: GlobalMeanPool→Linear (environment-specific, discarded)
- GradientReversalLayer: identity forward, -lambda*grad backward
- AdversarialSchedule: sigmoidal lambda annealing 0→1
Phase 3: GeometryEncoder + FiLM (geometry.rs, 364 lines, 14 tests)
- FourierPositionalEncoding: 3D coords → 64-dim
- DeepSets: permutation-invariant AP position aggregation
- FilmLayer: Feature-wise Linear Modulation for zero-shot deployment
Phase 4: VirtualDomainAugmentor (virtual_aug.rs, 297 lines, 10 tests)
- Room scale, reflection coeff, virtual scatterers, noise injection
- Deterministic Xorshift64 RNG, 4x effective training diversity
Phase 5: RapidAdaptation (rapid_adapt.rs, 255 lines, 7 tests)
- 10-second unsupervised calibration via contrastive TTT + entropy min
- LoRA weight generation without pose labels
Phase 6: CrossDomainEvaluator (eval.rs, 151 lines, 7 tests)
- 6 metrics: in-domain / cross-domain / few-shot / cross-hardware MPJPE, domain gap ratio, adaptation speedup
All 72 MERIDIAN tests pass. Full workspace compiles clean.
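The adversarial schedule in Phase 2 (sigmoidal annealing of the gradient-reversal coefficient from 0 to 1) can be sketched as below. This is an illustrative sketch of the standard DANN-style schedule, not the crate's actual API; the function name `grl_lambda` and the steepness constant `k = 10.0` are assumptions.

```rust
/// Sigmoidal annealing of the gradient-reversal coefficient:
/// lambda(p) = 2 / (1 + e^(-k * p)) - 1, where p in [0, 1] is training
/// progress. lambda(0) = 0, and lambda approaches 1 as p -> 1
/// (for k = 10, lambda(1) is roughly 0.9999).
fn grl_lambda(progress: f64, k: f64) -> f64 {
    2.0 / (1.0 + (-k * progress).exp()) - 1.0
}

fn main() {
    // The coefficient starts at exactly zero...
    assert!(grl_lambda(0.0, 10.0).abs() < 1e-12);
    // ...rises monotonically...
    assert!(grl_lambda(0.75, 10.0) > grl_lambda(0.25, 10.0));
    // ...and saturates near 1 by the end of training.
    assert!(grl_lambda(1.0, 10.0) > 0.999);
}
```

Annealing lambda this way lets the pose encoder stabilize before the domain-adversarial gradient is applied at full strength.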
# wifi-densepose-train

Complete training pipeline for WiFi-DensePose, integrated with all five ruvector crates.
## Overview

wifi-densepose-train provides everything needed to train the WiFi-to-DensePose model: dataset loading, subcarrier interpolation, loss functions, evaluation metrics, and the training loop orchestrator. It supports both the MM-Fi dataset (NeurIPS 2023) and deterministic synthetic data for reproducible experiments.

Without the `tch-backend` feature the crate still provides the dataset, configuration, and subcarrier interpolation APIs needed for data preprocessing and proof verification.
## Features

- **MM-Fi dataset loader** -- Reads the MM-Fi multimodal dataset (NeurIPS 2023) from disk with memory-mapped `.npy` files.
- **Synthetic dataset** -- Deterministic, fixed-seed CSI generation for unit tests and proofs.
- **Subcarrier interpolation** -- 114 -> 56 subcarrier compression via `ruvector-solver` sparse interpolation with variance-based selection.
- **Loss functions** (`tch-backend`) -- Pose estimation losses including MSE, OKS, and combined multi-task loss.
- **Metrics** (`tch-backend`) -- PCKh, OKS-AP, and per-keypoint evaluation with `ruvector-mincut`-based person matching.
- **Training orchestrator** (`tch-backend`) -- Full training loop with learning rate scheduling, gradient clipping, checkpointing, and reproducible proofs.
- **All 5 ruvector crates** -- `ruvector-mincut`, `ruvector-attn-mincut`, `ruvector-temporal-tensor`, `ruvector-solver`, and `ruvector-attention` integrated across dataset loading, metrics, and model attention.
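The variance-based selection step can be illustrated as follows. This is a minimal sketch of the idea (keep the 56 highest-variance of 114 subcarriers), not the crate's `ruvector-solver`-backed implementation; the function name `select_subcarriers` is an assumption.

```rust
/// Illustrative variance-based subcarrier selection: given CSI amplitudes
/// shaped [frames][n_subcarriers], return the indices of the `keep`
/// subcarriers with the highest variance across frames, in ascending order.
fn select_subcarriers(amplitude: &[Vec<f64>], keep: usize) -> Vec<usize> {
    let n_sub = amplitude[0].len();
    let n_frames = amplitude.len() as f64;
    // Per-subcarrier variance across frames.
    let mut variance: Vec<(usize, f64)> = (0..n_sub)
        .map(|s| {
            let mean = amplitude.iter().map(|f| f[s]).sum::<f64>() / n_frames;
            let var = amplitude.iter().map(|f| (f[s] - mean).powi(2)).sum::<f64>() / n_frames;
            (s, var)
        })
        .collect();
    // Sort by descending variance, keep the top `keep`, restore index order.
    variance.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    let mut kept: Vec<usize> = variance[..keep].iter().map(|&(s, _)| s).collect();
    kept.sort_unstable();
    kept
}

fn main() {
    // Two frames, 4 subcarriers; subcarriers 1 and 3 vary, 0 and 2 are flat.
    let csi = vec![vec![1.0, 0.0, 5.0, 2.0], vec![1.0, 4.0, 5.0, 8.0]];
    assert_eq!(select_subcarriers(&csi, 2), vec![1, 3]);
}
```

High-variance subcarriers carry most of the motion-induced signal, so discarding flat ones compresses 114 channels toward the canonical 56 with minimal information loss.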
## Feature flags

| Flag | Default | Description |
|---|---|---|
| `tch-backend` | no | Enable PyTorch training via tch-rs |
| `cuda` | no | CUDA GPU acceleration (implies `tch`) |
## Binaries

| Binary | Description |
|---|---|
| `train` | Main training entry point |
| `verify-training` | Proof verification (requires `tch-backend`) |
## Quick Start

```rust
use wifi_densepose_train::config::TrainingConfig;
use wifi_densepose_train::dataset::{SyntheticCsiDataset, SyntheticConfig, CsiDataset};

// Build and validate the training configuration
let config = TrainingConfig::default();
config.validate().expect("config is valid");

// Create a synthetic dataset (deterministic, fixed-seed)
let syn_cfg = SyntheticConfig::default();
let dataset = SyntheticCsiDataset::new(200, syn_cfg);

// Load one sample
let sample = dataset.get(0).unwrap();
println!("amplitude shape: {:?}", sample.amplitude.shape());
```
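Deterministic, fixed-seed CSI generation of the kind the synthetic dataset provides can be sketched with a Xorshift64 PRNG, a common choice for reproducible test data. Everything here (`Xorshift64`, `synth_frame`, the sinusoidal base pattern) is illustrative, not `SyntheticCsiDataset`'s actual generator.

```rust
/// Minimal Xorshift64 PRNG: fast, dependency-free, and fully determined
/// by its seed, which makes generated test data reproducible.
struct Xorshift64 { state: u64 }

impl Xorshift64 {
    fn new(seed: u64) -> Self { Self { state: seed.max(1) } } // state must be nonzero
    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
    /// Uniform f64 in [0, 1).
    fn next_f64(&mut self) -> f64 {
        (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// One synthetic CSI amplitude frame: a sinusoidal base pattern per
/// subcarrier plus small seeded noise.
fn synth_frame(seed: u64, n_sub: usize) -> Vec<f64> {
    let mut rng = Xorshift64::new(seed);
    (0..n_sub)
        .map(|s| (s as f64 * 0.2).sin() + 0.1 * rng.next_f64())
        .collect()
}

fn main() {
    // Same seed -> bit-identical frame; different seed -> different frame.
    assert_eq!(synth_frame(42, 56), synth_frame(42, 56));
    assert_ne!(synth_frame(42, 56), synth_frame(43, 56));
}
```

Seeded generation is what makes unit tests and training proofs reproducible: rerunning with the same seed yields bit-identical inputs.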
## Architecture

```text
wifi-densepose-train/src/
  lib.rs        -- Re-exports, VERSION
  config.rs     -- TrainingConfig, hyperparameters, validation
  dataset.rs    -- CsiDataset trait, MmFiDataset, SyntheticCsiDataset, DataLoader
  error.rs      -- TrainError, ConfigError, DatasetError, SubcarrierError
  subcarrier.rs -- interpolate_subcarriers (114 -> 56), variance-based selection
  losses.rs     -- (tch) MSE, OKS, multi-task loss [feature-gated]
  metrics.rs    -- (tch) PCKh, OKS-AP, person matching [feature-gated]
  model.rs      -- (tch) Model definition with attention [feature-gated]
  proof.rs      -- (tch) Deterministic training proofs [feature-gated]
  trainer.rs    -- (tch) Training loop orchestrator [feature-gated]
```
## Related Crates

| Crate | Role |
|---|---|
| `wifi-densepose-signal` | Signal preprocessing consumed by dataset loaders |
| `wifi-densepose-nn` | Inference engine that loads trained models |
| `ruvector-mincut` | Person matching in metrics |
| `ruvector-attn-mincut` | Attention-weighted graph cuts |
| `ruvector-temporal-tensor` | Compressed CSI buffering in datasets |
| `ruvector-solver` | Sparse subcarrier interpolation |
| `ruvector-attention` | Spatial attention in model |
## License
MIT OR Apache-2.0