feat: ADR-024 Contrastive CSI Embedding Model — all 7 phases (#52)
Full implementation of Project AETHER — Contrastive CSI Embedding Model.

## Phases Delivered
1. ProjectionHead (64→128→128) + L2 normalization
2. CsiAugmenter (5 physically-motivated augmentations)
3. InfoNCE contrastive loss + SimCLR pretraining
4. FingerprintIndex (4 index types: env, activity, temporal, person)
5. RVF SEG_EMBED (0x0C) + CLI integration
6. Cross-modal alignment (PoseEncoder + InfoNCE)
7. Deep RuVector: MicroLoRA, EWC++, drift detection, hard-negative mining, SEG_LORA

## Stats
- 276 tests passing (191 lib + 51 bin + 16 rvf + 18 vitals)
- 3,342 additions across 8 files
- Zero unsafe/unwrap/panic/todo stubs
- ~55KB INT8 model for ESP32 edge deployment

Also fixes deprecated GitHub Actions (v3→v4) and adds feat/* branch CI triggers.

Closes #50
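The InfoNCE loss listed under phase 3 is not shown in the diff below. As a minimal self-contained sketch of that objective: L2-normalize both views, score each anchor against every positive in the batch as in-batch negatives, and take a temperature-scaled cross-entropy. Function names, the temperature value, and the batch layout here are illustrative assumptions, not the crate's actual API.

```rust
/// Project a vector onto the unit hypersphere (L2 normalization).
fn l2_normalize(v: &[f32]) -> Vec<f32> {
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt().max(1e-12);
    v.iter().map(|x| x / norm).collect()
}

fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// InfoNCE over paired embeddings: anchors[i] should match positives[i]
/// and be pushed away from positives[j], j != i (in-batch negatives).
fn info_nce(anchors: &[Vec<f32>], positives: &[Vec<f32>], temperature: f32) -> f32 {
    let n = anchors.len() as f32;
    anchors
        .iter()
        .enumerate()
        .map(|(i, a)| {
            let a = l2_normalize(a);
            // Cosine similarities (dot products of unit vectors), scaled by 1/T.
            let logits: Vec<f32> = positives
                .iter()
                .map(|p| dot(&a, &l2_normalize(p)) / temperature)
                .collect();
            // Numerically stable log-sum-exp; result is -log p(correct pair i).
            let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
            let log_sum = logits.iter().map(|l| (l - max).exp()).sum::<f32>().ln() + max;
            log_sum - logits[i]
        })
        .sum::<f32>()
        / n
}

fn main() {
    // Toy batch: two well-aligned pairs should give a near-zero loss.
    let anchors = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let positives = vec![vec![0.9, 0.1], vec![0.1, 0.9]];
    let loss = info_nce(&anchors, &positives, 0.1);
    println!("{loss:.4}");
    assert!(loss < 0.1);
}
```

With aligned pairs the positive logit dominates the softmax, so the loss approaches zero; mismatched pairs drive it up, which is the gradient signal SimCLR-style pretraining relies on.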
This commit was merged in pull request #52.
```diff
@@ -486,6 +486,16 @@ impl CsiToPoseTransformer {
     }
 
     pub fn config(&self) -> &TransformerConfig { &self.config }
+
+    /// Extract body-part feature embeddings without regression heads.
+    /// Returns 17 vectors of dimension d_model (same as forward() but stops
+    /// before xyz_head/conf_head).
+    pub fn embed(&self, csi_features: &[Vec<f32>]) -> Vec<Vec<f32>> {
+        let embedded: Vec<Vec<f32>> = csi_features.iter()
+            .map(|f| self.csi_embed.forward(f)).collect();
+        let attended = self.cross_attn.forward(&self.keypoint_queries, &embedded, &embedded);
+        self.gnn.forward(&attended)
+    }
 
     /// Collect all trainable parameters into a flat vec.
     ///
     /// Layout: csi_embed | keypoint_queries (flat) | cross_attn | gnn | xyz_head | conf_head
```
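Downstream of `embed()`, the commit's phase 1 feeds features through a "ProjectionHead (64→128→128) + L2 normalization" before the contrastive loss. A hypothetical standalone sketch of such a head — the struct name matches the commit text, but the weight layout, toy weights, and forward pass are assumptions for illustration, not the crate's implementation:

```rust
/// Two-layer MLP projection head: 64 -> 128 (ReLU) -> 128, L2-normalized output.
struct ProjectionHead {
    w1: Vec<Vec<f32>>, // 128 rows x 64 cols
    w2: Vec<Vec<f32>>, // 128 rows x 128 cols
}

impl ProjectionHead {
    fn forward(&self, x: &[f32]) -> Vec<f32> {
        // Linear -> ReLU.
        let h: Vec<f32> = self.w1.iter()
            .map(|row| row.iter().zip(x).map(|(w, v)| w * v).sum::<f32>().max(0.0))
            .collect();
        // Second linear layer.
        let z: Vec<f32> = self.w2.iter()
            .map(|row| row.iter().zip(&h).map(|(w, v)| w * v).sum::<f32>())
            .collect();
        // L2 normalization: contrastive similarity then reduces to a dot product.
        let norm = z.iter().map(|v| v * v).sum::<f32>().sqrt().max(1e-12);
        z.iter().map(|v| v / norm).collect()
    }
}

fn main() {
    // Deterministic toy weights so the example runs without an RNG crate.
    let head = ProjectionHead {
        w1: (0..128).map(|i| (0..64).map(|j| ((i + j) % 7) as f32 * 0.01).collect()).collect(),
        w2: (0..128).map(|i| (0..128).map(|j| ((i * j) % 5) as f32 * 0.01).collect()).collect(),
    };
    let embedding = head.forward(&vec![0.5f32; 64]);
    assert_eq!(embedding.len(), 128);
    // Output lies on the unit hypersphere (unit norm up to float error).
    let norm: f32 = embedding.iter().map(|v| v * v).sum::<f32>().sqrt();
    assert!((norm - 1.0).abs() < 1e-3);
    println!("ok");
}
```

Normalizing the head's output means the InfoNCE logits are plain dot products, which keeps the similarity computation cheap enough for the ~55KB INT8 ESP32 deployment target mentioned in the stats.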