feat: implement ADR-029/030/031 — RuvSense multistatic sensing + field model + RuView fusion

12,126 lines of new Rust code across 22 modules with 285 tests:

ADR-029 RuvSense Core (signal crate, 10 modules):
- multiband.rs: Multi-band CSI frame fusion from channel hopping
- phase_align.rs: Cross-channel LO phase rotation correction
- multistatic.rs: Attention-weighted cross-node viewpoint fusion
- coherence.rs: Z-score per-subcarrier coherence scoring
- coherence_gate.rs: Accept/PredictOnly/Reject/Recalibrate gating
- pose_tracker.rs: 17-keypoint Kalman tracker with re-ID
- mod.rs: Pipeline orchestrator
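
The Accept/PredictOnly/Reject/Recalibrate gating in coherence_gate.rs can be pictured as a small decision function over a coherence score. This is a hedged sketch with illustrative inputs and thresholds (`gate`, `drift`, and the 0.7/0.4/0.5 cutoffs are assumptions, not the module's actual API):

```rust
/// Hypothetical gating decision mirroring the four states named above;
/// the real thresholds and inputs live in coherence_gate.rs.
#[derive(Debug, PartialEq)]
pub enum GateDecision {
    Accept,
    PredictOnly,
    Reject,
    Recalibrate,
}

/// Sketch: gate on a coherence score in [0, 1] and a drift estimate.
/// All cutoff values here are illustrative only.
pub fn gate(coherence: f32, drift: f32) -> GateDecision {
    if drift > 0.5 {
        // Sustained baseline drift: trigger recalibration regardless of coherence.
        GateDecision::Recalibrate
    } else if coherence >= 0.7 {
        GateDecision::Accept
    } else if coherence >= 0.4 {
        // Too noisy to update the model, but tracking can coast on prediction.
        GateDecision::PredictOnly
    } else {
        GateDecision::Reject
    }
}
```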

ADR-030 Persistent Field Model (signal crate, 7 modules):
- field_model.rs: SVD-based room eigenstructure, Welford stats
- tomography.rs: Coarse RF tomography from link attenuations (ISTA)
- longitudinal.rs: Personal baseline drift detection over days
- intention.rs: Pre-movement prediction (200-500ms lead signals)
- cross_room.rs: Cross-room identity continuity
- gesture.rs: Gesture classification via DTW template matching
- adversarial.rs: Physically impossible signal detection
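
The Welford statistics mentioned for field_model.rs refer to the standard online mean/variance update, which avoids a second pass over the data and stays numerically stable. A minimal self-contained sketch (not the crate's actual type):

```rust
/// Welford's online algorithm for running mean and sample variance.
#[derive(Default)]
pub struct Welford {
    n: u64,
    mean: f64,
    m2: f64, // running sum of squared deviations from the mean
}

impl Welford {
    pub fn push(&mut self, x: f64) {
        self.n += 1;
        let delta = x - self.mean;
        self.mean += delta / self.n as f64;
        // delta2 uses the *updated* mean; the delta * delta2 product is
        // what keeps the variance update numerically stable.
        let delta2 = x - self.mean;
        self.m2 += delta * delta2;
    }

    pub fn mean(&self) -> f64 {
        self.mean
    }

    /// Sample variance; `None` until at least two samples are seen.
    pub fn variance(&self) -> Option<f64> {
        if self.n < 2 {
            None
        } else {
            Some(self.m2 / (self.n - 1) as f64)
        }
    }
}
```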

ADR-031 RuView (ruvector crate, 5 modules):
- attention.rs: Scaled dot-product with geometric bias
- geometry.rs: Geometric Diversity Index, Cramer-Rao bounds
- coherence.rs: Phase phasor coherence gating
- fusion.rs: MultistaticArray aggregate, fusion orchestrator
- mod.rs: Module exports
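
Per row, the attention computation in attention.rs, `A = softmax((Q K^T + G_bias) / sqrt(d))`, reduces to a biased softmax over one row of scores. A standalone sketch of that row operation (illustrative helper, not the module's API):

```rust
/// One attention row: softmax over (score + bias) / sqrt(d), using the
/// max-subtraction trick for numerical stability.
pub fn biased_softmax_row(scores: &[f32], bias: &[f32], d: usize) -> Vec<f32> {
    let scale = 1.0 / (d as f32).sqrt();
    let logits: Vec<f32> = scores
        .iter()
        .zip(bias)
        .map(|(s, b)| (s + b) * scale)
        .collect();
    // Subtract the row max before exponentiating to avoid overflow.
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|l| (l - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum.max(f32::EPSILON)).collect()
}
```

With equal raw scores, a positive geometric bias on one viewpoint shifts attention mass toward it, which is exactly how the bias steers fusion toward complementary viewpoints.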

Training & Hardware:
- ruview_metrics.rs: 3-metric acceptance test (PCK/OKS, MOTA, vitals)
- esp32/tdm.rs: TDM sensing protocol, sync beacons, drift compensation
- Firmware: channel hopping, NDP injection, NVS config extensions
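
The PCK acceptance metric in ruview_metrics.rs (percentage of correct keypoints) admits a compact definition: a keypoint counts as correct when its prediction falls within a normalized distance of ground truth. A hedged sketch, assuming a `threshold * scale` cutoff convention (the normalizer, e.g. torso length or bounding-box diagonal, is an assumption here):

```rust
/// PCK: fraction of keypoints whose prediction lies within
/// `threshold * scale` of the ground truth. `scale` is a per-pose
/// normalizer (assumed here; the real metric module defines its own).
pub fn pck(pred: &[(f32, f32)], truth: &[(f32, f32)], scale: f32, threshold: f32) -> f32 {
    assert_eq!(pred.len(), truth.len());
    if pred.is_empty() {
        return 0.0;
    }
    let correct = pred
        .iter()
        .zip(truth.iter())
        .filter(|(p, t)| {
            let dx = p.0 - t.0;
            let dy = p.1 - t.1;
            // Keypoint is correct if within the normalized radius.
            (dx * dx + dy * dy).sqrt() <= threshold * scale
        })
        .count();
    correct as f32 / pred.len() as f32
}
```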

Security fixes:
- field_model.rs: saturating_sub prevents timestamp underflow
- longitudinal.rs: FIFO eviction note for bounded buffer
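
The saturating_sub fix prevents the classic unsigned-timestamp underflow: if a baseline timestamp ever sorts after "now" (clock skew, out-of-order frames), a plain `u64` subtraction would panic in debug builds or wrap to a huge value in release. A sketch of the pattern (function name hypothetical):

```rust
/// Sketch of the field_model.rs fix: elapsed time clamps to zero instead
/// of underflowing when the baseline timestamp is newer than `now_ms`.
pub fn elapsed_ms(now_ms: u64, baseline_ms: u64) -> u64 {
    now_ms.saturating_sub(baseline_ms)
}
```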

README updated with RuvSense section, new feature badges, changelog v3.1.0.

Co-Authored-By: claude-flow <ruv@ruv.net>
This commit is contained in:
ruv
2026-03-01 21:39:02 -05:00
parent 303871275b
commit 37b54d649b
24 changed files with 11417 additions and 8 deletions


@@ -28,3 +28,4 @@
pub mod mat;
pub mod signal;
pub mod viewpoint;


@@ -0,0 +1,667 @@
//! Cross-viewpoint scaled dot-product attention with geometric bias (ADR-031).
//!
//! Implements the core RuView attention mechanism:
//!
//! ```text
//! Q = W_q * X, K = W_k * X, V = W_v * X
//! A = softmax((Q * K^T + G_bias) / sqrt(d))
//! fused = A * V
//! ```
//!
//! The geometric bias `G_bias` encodes angular separation and baseline distance
//! between each viewpoint pair, allowing the attention mechanism to learn that
//! widely-separated, orthogonal viewpoints are more complementary than clustered
//! ones.
//!
// The cross-viewpoint attention is implemented directly rather than wrapping
// `ruvector_attention::ScaledDotProductAttention`, because the geometric bias
// matrix `G_bias` must be injected into the QK^T scores before softmax --
// an operation not exposed by the ruvector API. The ruvector-attention crate
// is still a workspace dependency for the signal/bvp integration point.
// ---------------------------------------------------------------------------
// Error types
// ---------------------------------------------------------------------------
/// Errors produced by the cross-viewpoint attention module.
#[derive(Debug, Clone)]
pub enum AttentionError {
/// The number of viewpoints is zero.
EmptyViewpoints,
/// Embedding dimension mismatch between viewpoints.
DimensionMismatch {
/// Expected embedding dimension.
expected: usize,
/// Actual embedding dimension found.
actual: usize,
},
/// The geometric bias matrix dimensions do not match the viewpoint count.
BiasDimensionMismatch {
/// Number of viewpoints.
n_viewpoints: usize,
/// Rows in bias matrix.
bias_rows: usize,
/// Columns in bias matrix.
bias_cols: usize,
},
/// The projection weight matrix has incorrect dimensions.
WeightDimensionMismatch {
/// Expected dimension.
expected: usize,
/// Actual dimension.
actual: usize,
},
}
impl std::fmt::Display for AttentionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
AttentionError::EmptyViewpoints => write!(f, "no viewpoint embeddings provided"),
AttentionError::DimensionMismatch { expected, actual } => {
write!(f, "embedding dimension mismatch: expected {expected}, got {actual}")
}
AttentionError::BiasDimensionMismatch { n_viewpoints, bias_rows, bias_cols } => {
write!(
f,
"geometric bias matrix is {bias_rows}x{bias_cols} but {n_viewpoints} viewpoints require {n_viewpoints}x{n_viewpoints}"
)
}
AttentionError::WeightDimensionMismatch { expected, actual } => {
write!(f, "weight matrix dimension mismatch: expected {expected}, got {actual}")
}
}
}
}
impl std::error::Error for AttentionError {}
// ---------------------------------------------------------------------------
// GeometricBias
// ---------------------------------------------------------------------------
/// Geometric bias matrix encoding spatial relationships between viewpoint pairs.
///
/// The bias for viewpoint pair `(i, j)` is computed as:
///
/// ```text
/// G_bias[i,j] = w_angle * cos(theta_ij) + w_dist * exp(-d_ij / d_ref)
/// ```
///
/// where `theta_ij` is the angular separation between viewpoints `i` and `j`
/// from the array centroid, `d_ij` is the baseline distance, `w_angle` and
/// `w_dist` are learnable scalar weights, and `d_ref` is a reference distance
/// (typically room diagonal / 2).
#[derive(Debug, Clone)]
pub struct GeometricBias {
/// Learnable weight for the angular component.
pub w_angle: f32,
/// Learnable weight for the distance component.
pub w_dist: f32,
/// Reference distance for the exponential decay (metres).
pub d_ref: f32,
}
impl Default for GeometricBias {
fn default() -> Self {
GeometricBias {
w_angle: 1.0,
w_dist: 1.0,
d_ref: 5.0,
}
}
}
/// A single viewpoint geometry descriptor.
#[derive(Debug, Clone)]
pub struct ViewpointGeometry {
/// Azimuth angle from array centroid (radians).
pub azimuth: f32,
/// 2-D position (x, y) in metres.
pub position: (f32, f32),
}
impl GeometricBias {
/// Create a new geometric bias with the given parameters.
pub fn new(w_angle: f32, w_dist: f32, d_ref: f32) -> Self {
GeometricBias { w_angle, w_dist, d_ref }
}
/// Compute the bias value for a single viewpoint pair.
///
/// # Arguments
///
/// - `theta_ij`: angular separation in radians between viewpoints `i` and `j`.
/// - `d_ij`: baseline distance in metres between viewpoints `i` and `j`.
///
/// # Returns
///
/// The scalar bias value `w_angle * cos(theta_ij) + w_dist * exp(-d_ij / d_ref)`.
pub fn compute_pair(&self, theta_ij: f32, d_ij: f32) -> f32 {
let safe_d_ref = self.d_ref.max(1e-6);
self.w_angle * theta_ij.cos() + self.w_dist * (-d_ij / safe_d_ref).exp()
}
/// Build the full N x N geometric bias matrix from viewpoint geometries.
///
/// # Arguments
///
/// - `viewpoints`: slice of viewpoint geometry descriptors.
///
/// # Returns
///
/// Flat row-major `N x N` bias matrix.
pub fn build_matrix(&self, viewpoints: &[ViewpointGeometry]) -> Vec<f32> {
let n = viewpoints.len();
let mut matrix = vec![0.0_f32; n * n];
for i in 0..n {
for j in 0..n {
if i == j {
// Self-bias: maximum (cos(0) = 1, exp(0) = 1)
matrix[i * n + j] = self.w_angle + self.w_dist;
} else {
let theta_ij = (viewpoints[i].azimuth - viewpoints[j].azimuth).abs();
let dx = viewpoints[i].position.0 - viewpoints[j].position.0;
let dy = viewpoints[i].position.1 - viewpoints[j].position.1;
let d_ij = (dx * dx + dy * dy).sqrt();
matrix[i * n + j] = self.compute_pair(theta_ij, d_ij);
}
}
}
matrix
}
}
// ---------------------------------------------------------------------------
// Projection weights
// ---------------------------------------------------------------------------
/// Linear projection weights for Q, K, V transformations.
///
/// Each weight matrix is `d_out x d_in`, stored row-major. In the default
/// (identity) configuration `d_out == d_in` and the matrices are identity.
#[derive(Debug, Clone)]
pub struct ProjectionWeights {
/// W_q projection matrix, row-major `[d_out, d_in]`.
pub w_q: Vec<f32>,
/// W_k projection matrix, row-major `[d_out, d_in]`.
pub w_k: Vec<f32>,
/// W_v projection matrix, row-major `[d_out, d_in]`.
pub w_v: Vec<f32>,
/// Input dimension.
pub d_in: usize,
/// Output (projected) dimension.
pub d_out: usize,
}
impl ProjectionWeights {
/// Create identity projections (d_out == d_in, W = I).
pub fn identity(dim: usize) -> Self {
let mut eye = vec![0.0_f32; dim * dim];
for i in 0..dim {
eye[i * dim + i] = 1.0;
}
ProjectionWeights {
w_q: eye.clone(),
w_k: eye.clone(),
w_v: eye,
d_in: dim,
d_out: dim,
}
}
/// Create projections with given weight matrices.
///
/// Each matrix must be `d_out * d_in` elements, stored row-major.
pub fn new(
w_q: Vec<f32>,
w_k: Vec<f32>,
w_v: Vec<f32>,
d_in: usize,
d_out: usize,
) -> Result<Self, AttentionError> {
let expected_len = d_out * d_in;
if w_q.len() != expected_len {
return Err(AttentionError::WeightDimensionMismatch {
expected: expected_len,
actual: w_q.len(),
});
}
if w_k.len() != expected_len {
return Err(AttentionError::WeightDimensionMismatch {
expected: expected_len,
actual: w_k.len(),
});
}
if w_v.len() != expected_len {
return Err(AttentionError::WeightDimensionMismatch {
expected: expected_len,
actual: w_v.len(),
});
}
Ok(ProjectionWeights { w_q, w_k, w_v, d_in, d_out })
}
/// Project a single embedding vector through a weight matrix.
///
/// `weight` is `[d_out, d_in]` row-major, `input` is `[d_in]`.
/// Returns `[d_out]`.
fn project(&self, weight: &[f32], input: &[f32]) -> Vec<f32> {
let mut output = vec![0.0_f32; self.d_out];
for row in 0..self.d_out {
let mut sum = 0.0_f32;
for col in 0..self.d_in {
sum += weight[row * self.d_in + col] * input[col];
}
output[row] = sum;
}
output
}
/// Project all viewpoint embeddings through W_q.
pub fn project_queries(&self, embeddings: &[Vec<f32>]) -> Vec<Vec<f32>> {
embeddings.iter().map(|e| self.project(&self.w_q, e)).collect()
}
/// Project all viewpoint embeddings through W_k.
pub fn project_keys(&self, embeddings: &[Vec<f32>]) -> Vec<Vec<f32>> {
embeddings.iter().map(|e| self.project(&self.w_k, e)).collect()
}
/// Project all viewpoint embeddings through W_v.
pub fn project_values(&self, embeddings: &[Vec<f32>]) -> Vec<Vec<f32>> {
embeddings.iter().map(|e| self.project(&self.w_v, e)).collect()
}
}
// ---------------------------------------------------------------------------
// CrossViewpointAttention
// ---------------------------------------------------------------------------
/// Cross-viewpoint attention with geometric bias.
///
/// Computes the full RuView attention pipeline:
///
/// 1. Project embeddings through W_q, W_k, W_v.
/// 2. Compute attention scores: `A = softmax((Q * K^T + G_bias) / sqrt(d))`.
/// 3. Weighted sum: `fused = A * V`.
///
/// The output is one fused embedding per input viewpoint (row of A * V).
/// To obtain a single fused embedding, use [`CrossViewpointAttention::fuse`]
/// which mean-pools the attended outputs.
pub struct CrossViewpointAttention {
/// Projection weights for Q, K, V.
pub weights: ProjectionWeights,
/// Geometric bias parameters.
pub bias: GeometricBias,
}
impl CrossViewpointAttention {
/// Create a new cross-viewpoint attention module with identity projections.
///
/// # Arguments
///
/// - `embed_dim`: embedding dimension (e.g. 128 for AETHER).
pub fn new(embed_dim: usize) -> Self {
CrossViewpointAttention {
weights: ProjectionWeights::identity(embed_dim),
bias: GeometricBias::default(),
}
}
/// Create with custom projection weights and bias.
pub fn with_params(weights: ProjectionWeights, bias: GeometricBias) -> Self {
CrossViewpointAttention { weights, bias }
}
/// Compute the full attention output for all viewpoints.
///
/// # Arguments
///
/// - `embeddings`: per-viewpoint embedding vectors, each of length `d_in`.
/// - `viewpoint_geom`: per-viewpoint geometry descriptors (same length).
///
/// # Returns
///
/// `Ok(attended)` where `attended` is `N` vectors of length `d_out`, one per
/// viewpoint after cross-viewpoint attention. Returns an error if dimensions
/// are inconsistent.
pub fn attend(
&self,
embeddings: &[Vec<f32>],
viewpoint_geom: &[ViewpointGeometry],
) -> Result<Vec<Vec<f32>>, AttentionError> {
let n = embeddings.len();
if n == 0 {
return Err(AttentionError::EmptyViewpoints);
}
// Validate embedding dimensions.
for emb in embeddings {
if emb.len() != self.weights.d_in {
return Err(AttentionError::DimensionMismatch {
expected: self.weights.d_in,
actual: emb.len(),
});
}
}
// The bias matrix is built from `viewpoint_geom`; its length must match
// the embedding count or the score indexing below would go out of bounds.
if viewpoint_geom.len() != n {
return Err(AttentionError::BiasDimensionMismatch {
n_viewpoints: n,
bias_rows: viewpoint_geom.len(),
bias_cols: viewpoint_geom.len(),
});
}
let d = self.weights.d_out;
let scale = 1.0 / (d as f32).sqrt();
// Project through W_q, W_k, W_v.
let queries = self.weights.project_queries(embeddings);
let keys = self.weights.project_keys(embeddings);
let values = self.weights.project_values(embeddings);
// Build geometric bias matrix.
let g_bias = self.bias.build_matrix(viewpoint_geom);
// Compute attention scores: (Q * K^T + G_bias) / sqrt(d), then softmax.
let mut attention_weights = vec![0.0_f32; n * n];
for i in 0..n {
// Compute raw scores for row i.
let mut max_score = f32::NEG_INFINITY;
for j in 0..n {
let dot: f32 = queries[i].iter().zip(&keys[j]).map(|(q, k)| q * k).sum();
let score = (dot + g_bias[i * n + j]) * scale;
attention_weights[i * n + j] = score;
if score > max_score {
max_score = score;
}
}
// Softmax: subtract max for numerical stability, then exponentiate.
let mut sum_exp = 0.0_f32;
for j in 0..n {
let val = (attention_weights[i * n + j] - max_score).exp();
attention_weights[i * n + j] = val;
sum_exp += val;
}
let safe_sum = sum_exp.max(f32::EPSILON);
for j in 0..n {
attention_weights[i * n + j] /= safe_sum;
}
}
// Weighted sum: attended[i] = sum_j (attention_weights[i,j] * values[j]).
let mut attended = Vec::with_capacity(n);
for i in 0..n {
let mut output = vec![0.0_f32; d];
for j in 0..n {
let w = attention_weights[i * n + j];
for k in 0..d {
output[k] += w * values[j][k];
}
}
attended.push(output);
}
Ok(attended)
}
/// Fuse multiple viewpoint embeddings into a single embedding.
///
/// Applies cross-viewpoint attention, then mean-pools the attended outputs
/// to produce a single fused embedding of dimension `d_out`.
///
/// # Arguments
///
/// - `embeddings`: per-viewpoint embedding vectors.
/// - `viewpoint_geom`: per-viewpoint geometry descriptors.
///
/// # Returns
///
/// A single fused embedding of length `d_out`.
pub fn fuse(
&self,
embeddings: &[Vec<f32>],
viewpoint_geom: &[ViewpointGeometry],
) -> Result<Vec<f32>, AttentionError> {
let attended = self.attend(embeddings, viewpoint_geom)?;
let n = attended.len();
let d = self.weights.d_out;
let mut fused = vec![0.0_f32; d];
for row in &attended {
for k in 0..d {
fused[k] += row[k];
}
}
let n_f = n as f32;
for k in 0..d {
fused[k] /= n_f;
}
Ok(fused)
}
/// Extract the raw attention weight matrix (for diagnostics).
///
/// Returns the `N x N` attention weight matrix (row-major, each row sums to 1).
pub fn attention_weights(
&self,
embeddings: &[Vec<f32>],
viewpoint_geom: &[ViewpointGeometry],
) -> Result<Vec<f32>, AttentionError> {
let n = embeddings.len();
if n == 0 {
return Err(AttentionError::EmptyViewpoints);
}
// Guard against a geometry/embedding count mismatch, which would make
// the bias-matrix indexing below go out of bounds.
if viewpoint_geom.len() != n {
return Err(AttentionError::BiasDimensionMismatch {
n_viewpoints: n,
bias_rows: viewpoint_geom.len(),
bias_cols: viewpoint_geom.len(),
});
}
let d = self.weights.d_out;
let scale = 1.0 / (d as f32).sqrt();
let queries = self.weights.project_queries(embeddings);
let keys = self.weights.project_keys(embeddings);
let g_bias = self.bias.build_matrix(viewpoint_geom);
let mut weights = vec![0.0_f32; n * n];
for i in 0..n {
let mut max_score = f32::NEG_INFINITY;
for j in 0..n {
let dot: f32 = queries[i].iter().zip(&keys[j]).map(|(q, k)| q * k).sum();
let score = (dot + g_bias[i * n + j]) * scale;
weights[i * n + j] = score;
if score > max_score {
max_score = score;
}
}
let mut sum_exp = 0.0_f32;
for j in 0..n {
let val = (weights[i * n + j] - max_score).exp();
weights[i * n + j] = val;
sum_exp += val;
}
let safe_sum = sum_exp.max(f32::EPSILON);
for j in 0..n {
weights[i * n + j] /= safe_sum;
}
}
Ok(weights)
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
fn make_test_geom(n: usize) -> Vec<ViewpointGeometry> {
(0..n)
.map(|i| {
let angle = 2.0 * std::f32::consts::PI * i as f32 / n as f32;
let r = 3.0;
ViewpointGeometry {
azimuth: angle,
position: (r * angle.cos(), r * angle.sin()),
}
})
.collect()
}
fn make_test_embeddings(n: usize, dim: usize) -> Vec<Vec<f32>> {
(0..n)
.map(|i| {
(0..dim).map(|d| ((i * dim + d) as f32 * 0.01).sin()).collect()
})
.collect()
}
#[test]
fn fuse_produces_correct_dimension() {
let dim = 16;
let n = 4;
let attn = CrossViewpointAttention::new(dim);
let embeddings = make_test_embeddings(n, dim);
let geom = make_test_geom(n);
let fused = attn.fuse(&embeddings, &geom).unwrap();
assert_eq!(fused.len(), dim, "fused embedding must have length {dim}");
}
#[test]
fn attend_produces_n_outputs() {
let dim = 8;
let n = 3;
let attn = CrossViewpointAttention::new(dim);
let embeddings = make_test_embeddings(n, dim);
let geom = make_test_geom(n);
let attended = attn.attend(&embeddings, &geom).unwrap();
assert_eq!(attended.len(), n, "must produce one output per viewpoint");
for row in &attended {
assert_eq!(row.len(), dim);
}
}
#[test]
fn attention_weights_sum_to_one() {
let dim = 8;
let n = 4;
let attn = CrossViewpointAttention::new(dim);
let embeddings = make_test_embeddings(n, dim);
let geom = make_test_geom(n);
let weights = attn.attention_weights(&embeddings, &geom).unwrap();
assert_eq!(weights.len(), n * n);
for i in 0..n {
let row_sum: f32 = (0..n).map(|j| weights[i * n + j]).sum();
assert!(
(row_sum - 1.0).abs() < 1e-5,
"row {i} sums to {row_sum}, expected 1.0"
);
}
}
#[test]
fn attention_weights_are_non_negative() {
let dim = 8;
let n = 3;
let attn = CrossViewpointAttention::new(dim);
let embeddings = make_test_embeddings(n, dim);
let geom = make_test_geom(n);
let weights = attn.attention_weights(&embeddings, &geom).unwrap();
for w in &weights {
assert!(*w >= 0.0, "attention weight must be non-negative, got {w}");
}
}
#[test]
fn empty_viewpoints_returns_error() {
let attn = CrossViewpointAttention::new(8);
let result = attn.fuse(&[], &[]);
assert!(result.is_err());
}
#[test]
fn dimension_mismatch_returns_error() {
let attn = CrossViewpointAttention::new(8);
let embeddings = vec![vec![1.0_f32; 4]]; // wrong dim
let geom = make_test_geom(1);
let result = attn.fuse(&embeddings, &geom);
assert!(result.is_err());
}
#[test]
fn geometric_bias_pair_computation() {
let bias = GeometricBias::new(1.0, 1.0, 5.0);
// Same position: theta=0, d=0 -> cos(0) + exp(0) = 2.0
let val = bias.compute_pair(0.0, 0.0);
assert!((val - 2.0).abs() < 1e-5, "self-bias should be 2.0, got {val}");
// Orthogonal, far apart: theta=PI/2, d=5.0
let val_orth = bias.compute_pair(std::f32::consts::FRAC_PI_2, 5.0);
// cos(PI/2) ~ 0, and exp(-5.0 / 5.0) = exp(-1) ~ 0.368
assert!(val_orth < 1.0, "orthogonal far-apart viewpoints should have low bias");
}
#[test]
fn geometric_bias_matrix_is_symmetric_for_symmetric_layout() {
let bias = GeometricBias::default();
let geom = make_test_geom(4);
let matrix = bias.build_matrix(&geom);
let n = 4;
for i in 0..n {
for j in 0..n {
assert!(
(matrix[i * n + j] - matrix[j * n + i]).abs() < 1e-5,
"bias matrix must be symmetric for symmetric layout: [{i},{j}]={} vs [{j},{i}]={}",
matrix[i * n + j],
matrix[j * n + i]
);
}
}
}
#[test]
fn single_viewpoint_fuse_returns_projection() {
let dim = 8;
let attn = CrossViewpointAttention::new(dim);
let embeddings = vec![vec![1.0_f32; dim]];
let geom = make_test_geom(1);
let fused = attn.fuse(&embeddings, &geom).unwrap();
// With identity projection and single viewpoint, fused == input.
for (i, v) in fused.iter().enumerate() {
assert!(
(v - 1.0).abs() < 1e-5,
"single-viewpoint fuse should return input, dim {i}: {v}"
);
}
}
#[test]
fn projection_weights_custom_transform() {
// Verify that non-identity weights change the output.
let dim = 4;
// Swap first two dimensions in Q.
let mut w_q = vec![0.0_f32; dim * dim];
w_q[0 * dim + 1] = 1.0; // row 0 picks dim 1
w_q[1 * dim + 0] = 1.0; // row 1 picks dim 0
w_q[2 * dim + 2] = 1.0;
w_q[3 * dim + 3] = 1.0;
let w_id = {
let mut eye = vec![0.0_f32; dim * dim];
for i in 0..dim {
eye[i * dim + i] = 1.0;
}
eye
};
let weights = ProjectionWeights::new(w_q, w_id.clone(), w_id, dim, dim).unwrap();
let queries = weights.project_queries(&[vec![1.0, 2.0, 3.0, 4.0]]);
assert_eq!(queries[0], vec![2.0, 1.0, 3.0, 4.0]);
}
#[test]
fn geometric_bias_with_large_distance_decays() {
let bias = GeometricBias::new(0.0, 1.0, 2.0); // only distance component
let close = bias.compute_pair(0.0, 0.5);
let far = bias.compute_pair(0.0, 10.0);
assert!(close > far, "closer viewpoints should have higher distance bias");
}
}


@@ -0,0 +1,383 @@
//! Coherence gating for environment stability (ADR-031).
//!
//! Phase coherence determines whether the wireless environment is sufficiently
//! stable for a model update. When multipath conditions change rapidly (e.g.
//! doors opening, people entering), phase becomes incoherent and fusion
//! quality degrades. The coherence gate prevents model updates during these
//! transient periods.
//!
//! The core computation is the complex mean of unit phasors:
//!
//! ```text
//! coherence = |mean(exp(j * delta_phi))|
//! = sqrt((mean(cos(delta_phi)))^2 + (mean(sin(delta_phi)))^2)
//! ```
//!
//! A coherence value near 1.0 indicates consistent phase; near 0.0 indicates
//! random phase (incoherent environment).
// ---------------------------------------------------------------------------
// CoherenceState
// ---------------------------------------------------------------------------
/// Rolling coherence state tracking phase consistency over a sliding window.
///
/// Maintains a circular buffer of phase differences and incrementally updates
/// the coherence estimate as new measurements arrive.
#[derive(Debug, Clone)]
pub struct CoherenceState {
/// Circular buffer of phase differences (radians).
phase_diffs: Vec<f32>,
/// Write position in the circular buffer.
write_pos: usize,
/// Number of valid entries in the buffer (may be less than capacity
/// during warm-up).
count: usize,
/// Running sum of cos(phase_diff).
sum_cos: f64,
/// Running sum of sin(phase_diff).
sum_sin: f64,
}
impl CoherenceState {
/// Create a new coherence state with the given window size.
///
/// # Arguments
///
/// - `window_size`: number of phase measurements to retain. Larger windows
/// are more stable but respond more slowly to environment changes.
/// Must be at least 1.
pub fn new(window_size: usize) -> Self {
let size = window_size.max(1);
CoherenceState {
phase_diffs: vec![0.0; size],
write_pos: 0,
count: 0,
sum_cos: 0.0,
sum_sin: 0.0,
}
}
/// Push a new phase difference measurement into the rolling window.
///
/// If the buffer is full, the oldest measurement is evicted and its
/// contribution is subtracted from the running sums.
pub fn push(&mut self, phase_diff: f32) {
let cap = self.phase_diffs.len();
// If buffer is full, subtract the evicted entry.
if self.count == cap {
let old = self.phase_diffs[self.write_pos];
self.sum_cos -= old.cos() as f64;
self.sum_sin -= old.sin() as f64;
} else {
self.count += 1;
}
// Write new entry.
self.phase_diffs[self.write_pos] = phase_diff;
self.sum_cos += phase_diff.cos() as f64;
self.sum_sin += phase_diff.sin() as f64;
self.write_pos = (self.write_pos + 1) % cap;
}
/// Current coherence value in `[0, 1]`.
///
/// Returns 0.0 if no measurements have been pushed yet.
pub fn coherence(&self) -> f32 {
if self.count == 0 {
return 0.0;
}
let n = self.count as f64;
let mean_cos = self.sum_cos / n;
let mean_sin = self.sum_sin / n;
(mean_cos * mean_cos + mean_sin * mean_sin).sqrt() as f32
}
/// Number of measurements currently in the buffer.
pub fn len(&self) -> usize {
self.count
}
/// Returns `true` if no measurements have been pushed.
pub fn is_empty(&self) -> bool {
self.count == 0
}
/// Window capacity.
pub fn capacity(&self) -> usize {
self.phase_diffs.len()
}
/// Reset the coherence state, clearing all measurements.
pub fn reset(&mut self) {
self.write_pos = 0;
self.count = 0;
self.sum_cos = 0.0;
self.sum_sin = 0.0;
}
}
// ---------------------------------------------------------------------------
// CoherenceGate
// ---------------------------------------------------------------------------
/// Coherence gate that controls model updates based on phase stability.
///
/// Only allows model updates when the coherence exceeds a configurable
/// threshold. Provides hysteresis to avoid rapid gate toggling near the
/// threshold boundary.
#[derive(Debug, Clone)]
pub struct CoherenceGate {
/// Coherence threshold for opening the gate.
pub threshold: f32,
/// Hysteresis band: gate opens at `threshold` and closes at
/// `threshold - hysteresis`.
pub hysteresis: f32,
/// Current gate state: `true` = open (updates allowed).
gate_open: bool,
/// Total number of gate evaluations.
total_evaluations: u64,
/// Number of times the gate was open.
open_count: u64,
}
impl CoherenceGate {
/// Create a new coherence gate with the given threshold.
///
/// # Arguments
///
/// - `threshold`: coherence level required for the gate to open (typically 0.7).
/// - `hysteresis`: band below the threshold where the gate stays in its
/// current state (typically 0.05).
pub fn new(threshold: f32, hysteresis: f32) -> Self {
CoherenceGate {
threshold: threshold.clamp(0.0, 1.0),
hysteresis: hysteresis.clamp(0.0, threshold),
gate_open: false,
total_evaluations: 0,
open_count: 0,
}
}
/// Create a gate with default parameters (threshold=0.7, hysteresis=0.05).
pub fn default_params() -> Self {
Self::new(0.7, 0.05)
}
/// Evaluate the gate against the current coherence value.
///
/// Returns `true` if the gate is open (model update allowed).
pub fn evaluate(&mut self, coherence: f32) -> bool {
self.total_evaluations += 1;
if self.gate_open {
// Gate is open: close if coherence drops below threshold - hysteresis.
if coherence < self.threshold - self.hysteresis {
self.gate_open = false;
}
} else {
// Gate is closed: open if coherence exceeds threshold.
if coherence >= self.threshold {
self.gate_open = true;
}
}
if self.gate_open {
self.open_count += 1;
}
self.gate_open
}
/// Whether the gate is currently open.
pub fn is_open(&self) -> bool {
self.gate_open
}
/// Fraction of evaluations where the gate was open.
pub fn duty_cycle(&self) -> f32 {
if self.total_evaluations == 0 {
return 0.0;
}
self.open_count as f32 / self.total_evaluations as f32
}
/// Reset the gate state and counters.
pub fn reset(&mut self) {
self.gate_open = false;
self.total_evaluations = 0;
self.open_count = 0;
}
}
/// Stateless coherence gate function matching the ADR-031 specification.
///
/// Computes the complex mean of unit phasors from the given phase differences
/// and returns `true` when coherence exceeds the threshold.
///
/// # Arguments
///
/// - `phase_diffs`: delta-phi over T recent frames (radians).
/// - `threshold`: coherence threshold (typically 0.7).
///
/// # Returns
///
/// `true` if the phase coherence exceeds the threshold.
pub fn coherence_gate(phase_diffs: &[f32], threshold: f32) -> bool {
if phase_diffs.is_empty() {
return false;
}
compute_coherence(phase_diffs) > threshold
}
/// Compute the raw coherence value from phase differences.
///
/// Returns a value in `[0, 1]` where 1.0 = perfectly coherent phase.
pub fn compute_coherence(phase_diffs: &[f32]) -> f32 {
if phase_diffs.is_empty() {
return 0.0;
}
let (sum_cos, sum_sin) = phase_diffs
.iter()
.fold((0.0_f32, 0.0_f32), |(c, s), &dp| {
(c + dp.cos(), s + dp.sin())
});
let n = phase_diffs.len() as f32;
((sum_cos / n).powi(2) + (sum_sin / n).powi(2)).sqrt()
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn coherent_phase_returns_high_value() {
// All phase diffs are the same -> coherence ~ 1.0
let phase_diffs = vec![0.5_f32; 100];
let c = compute_coherence(&phase_diffs);
assert!(c > 0.99, "identical phases should give coherence ~ 1.0, got {c}");
}
#[test]
fn random_phase_returns_low_value() {
// Uniformly spaced phases around the circle -> coherence ~ 0.0
let n = 1000;
let phase_diffs: Vec<f32> = (0..n)
.map(|i| 2.0 * std::f32::consts::PI * i as f32 / n as f32)
.collect();
let c = compute_coherence(&phase_diffs);
assert!(c < 0.05, "uniformly spread phases should give coherence ~ 0.0, got {c}");
}
#[test]
fn coherence_gate_opens_above_threshold() {
let coherent = vec![0.3_f32; 50]; // same phase -> high coherence
assert!(coherence_gate(&coherent, 0.7));
}
#[test]
fn coherence_gate_closed_below_threshold() {
let n = 500;
let incoherent: Vec<f32> = (0..n)
.map(|i| 2.0 * std::f32::consts::PI * i as f32 / n as f32)
.collect();
assert!(!coherence_gate(&incoherent, 0.7));
}
#[test]
fn coherence_gate_empty_returns_false() {
assert!(!coherence_gate(&[], 0.5));
}
#[test]
fn coherence_state_rolling_window() {
let mut state = CoherenceState::new(10);
// Push coherent measurements.
for _ in 0..10 {
state.push(1.0);
}
let c1 = state.coherence();
assert!(c1 > 0.9, "coherent window should give high coherence");
// Push incoherent measurements to replace the window.
for i in 0..10 {
state.push(i as f32 * 0.628);
}
let c2 = state.coherence();
assert!(c2 < c1, "incoherent updates should reduce coherence");
}
#[test]
fn coherence_state_empty_returns_zero() {
let state = CoherenceState::new(10);
assert_eq!(state.coherence(), 0.0);
assert!(state.is_empty());
}
#[test]
fn gate_hysteresis_prevents_toggling() {
let mut gate = CoherenceGate::new(0.7, 0.1);
// Open the gate.
assert!(gate.evaluate(0.8));
assert!(gate.is_open());
// Coherence drops to 0.65 (below threshold but within hysteresis band).
assert!(gate.evaluate(0.65));
assert!(gate.is_open(), "gate should stay open within hysteresis band");
// Coherence drops below hysteresis boundary (0.7 - 0.1 = 0.6).
assert!(!gate.evaluate(0.55));
assert!(!gate.is_open(), "gate should close below hysteresis boundary");
}
#[test]
fn gate_duty_cycle_tracks_correctly() {
let mut gate = CoherenceGate::new(0.5, 0.0);
gate.evaluate(0.6); // open
gate.evaluate(0.6); // open
gate.evaluate(0.3); // close
gate.evaluate(0.3); // close
let duty = gate.duty_cycle();
assert!(
(duty - 0.5).abs() < 1e-5,
"duty cycle should be 0.5, got {duty}"
);
}
#[test]
fn gate_reset_clears_state() {
let mut gate = CoherenceGate::new(0.5, 0.0);
gate.evaluate(0.6);
assert!(gate.is_open());
gate.reset();
assert!(!gate.is_open());
assert_eq!(gate.duty_cycle(), 0.0);
}
#[test]
fn coherence_state_push_and_len() {
let mut state = CoherenceState::new(5);
assert_eq!(state.len(), 0);
state.push(0.1);
state.push(0.2);
assert_eq!(state.len(), 2);
// Fill past capacity.
for i in 0..10 {
state.push(i as f32 * 0.1);
}
assert_eq!(state.len(), 5, "count should be capped at window size");
}
}


@@ -0,0 +1,696 @@
//! MultistaticArray aggregate root and fusion pipeline orchestrator (ADR-031).
//!
//! [`MultistaticArray`] is the DDD aggregate root for the ViewpointFusion
//! bounded context. It orchestrates the full fusion pipeline:
//!
//! 1. Collect per-viewpoint AETHER embeddings.
//! 2. Compute geometric bias from viewpoint pair geometry.
//! 3. Apply cross-viewpoint attention with geometric bias.
//! 4. Gate the output through coherence check.
//! 5. Emit a fused embedding for the DensePose regression head.
//!
//! Uses [`CrossViewpointAttention`] from `crate::viewpoint::attention` for the
//! attention mechanism and `ruvector-attn-mincut` for optional noise gating on
//! embeddings.
use crate::viewpoint::attention::{
AttentionError, CrossViewpointAttention, GeometricBias, ViewpointGeometry,
};
use crate::viewpoint::coherence::{CoherenceGate, CoherenceState};
use crate::viewpoint::geometry::{GeometricDiversityIndex, NodeId};
// ---------------------------------------------------------------------------
// Domain types
// ---------------------------------------------------------------------------
/// Unique identifier for a multistatic array deployment.
pub type ArrayId = u64;
/// Per-viewpoint embedding with geometric metadata.
///
/// Represents a single CSI observation processed through the per-viewpoint
/// signal pipeline and AETHER encoder into a contrastive embedding.
#[derive(Debug, Clone)]
pub struct ViewpointEmbedding {
/// Source node identifier.
pub node_id: NodeId,
/// AETHER embedding vector (typically 128-d).
pub embedding: Vec<f32>,
/// Azimuth angle from array centroid (radians).
pub azimuth: f32,
/// Elevation angle (radians, 0 for 2-D deployments).
pub elevation: f32,
/// Baseline distance from array centroid (metres).
pub baseline: f32,
/// Node position in metres (x, y).
pub position: (f32, f32),
/// Signal-to-noise ratio at capture time (dB).
pub snr_db: f32,
}
/// Fused embedding output from the cross-viewpoint attention pipeline.
#[derive(Debug, Clone)]
pub struct FusedEmbedding {
/// The fused embedding vector.
pub embedding: Vec<f32>,
/// Geometric Diversity Index at the time of fusion.
pub gdi: f32,
/// Coherence value at the time of fusion.
pub coherence: f32,
/// Number of viewpoints that contributed to the fusion.
pub n_viewpoints: usize,
/// Effective independent viewpoints (after correlation discount).
pub n_effective: f32,
}
/// Configuration for the fusion pipeline.
#[derive(Debug, Clone)]
pub struct FusionConfig {
/// Embedding dimension (must match AETHER output, typically 128).
pub embed_dim: usize,
/// Coherence threshold for gating (typically 0.7).
pub coherence_threshold: f32,
/// Coherence hysteresis band (typically 0.05).
pub coherence_hysteresis: f32,
/// Coherence rolling window size (number of frames).
pub coherence_window: usize,
/// Geometric bias angle weight.
pub w_angle: f32,
/// Geometric bias distance weight.
pub w_dist: f32,
/// Reference distance for geometric bias decay (metres).
pub d_ref: f32,
/// Minimum SNR (dB) for a viewpoint to contribute to fusion.
pub min_snr_db: f32,
}
impl Default for FusionConfig {
fn default() -> Self {
FusionConfig {
embed_dim: 128,
coherence_threshold: 0.7,
coherence_hysteresis: 0.05,
coherence_window: 50,
w_angle: 1.0,
w_dist: 1.0,
d_ref: 5.0,
min_snr_db: 5.0,
}
}
}
// ---------------------------------------------------------------------------
// Fusion errors
// ---------------------------------------------------------------------------
/// Errors produced by the fusion pipeline.
#[derive(Debug, Clone)]
pub enum FusionError {
/// No viewpoint embeddings available for fusion.
NoViewpoints,
/// All viewpoints were filtered out (e.g. by SNR threshold).
AllFiltered {
/// Number of viewpoints that were rejected.
rejected: usize,
},
/// Coherence gate is closed (environment too unstable).
CoherenceGateClosed {
/// Current coherence value.
coherence: f32,
/// Required threshold.
threshold: f32,
},
/// Internal attention computation error.
AttentionError(AttentionError),
/// Embedding dimension mismatch.
DimensionMismatch {
/// Expected dimension.
expected: usize,
/// Actual dimension.
actual: usize,
/// Node that produced the mismatched embedding.
node_id: NodeId,
},
}
impl std::fmt::Display for FusionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
FusionError::NoViewpoints => write!(f, "no viewpoint embeddings available"),
FusionError::AllFiltered { rejected } => {
write!(f, "all {rejected} viewpoints filtered by SNR threshold")
}
FusionError::CoherenceGateClosed { coherence, threshold } => {
write!(
f,
"coherence gate closed: coherence={coherence:.3} < threshold={threshold:.3}"
)
}
FusionError::AttentionError(e) => write!(f, "attention error: {e}"),
FusionError::DimensionMismatch { expected, actual, node_id } => {
write!(
f,
"node {node_id} embedding dim {actual} != expected {expected}"
)
}
}
}
}
impl std::error::Error for FusionError {}
impl From<AttentionError> for FusionError {
fn from(e: AttentionError) -> Self {
FusionError::AttentionError(e)
}
}
// ---------------------------------------------------------------------------
// Domain events
// ---------------------------------------------------------------------------
/// Events emitted by the ViewpointFusion aggregate.
#[derive(Debug, Clone)]
pub enum ViewpointFusionEvent {
/// A viewpoint embedding was received from a node.
ViewpointCaptured {
/// Source node.
node_id: NodeId,
/// Signal quality.
snr_db: f32,
},
/// A TDM cycle completed with all (or some) viewpoints received.
TdmCycleCompleted {
/// Monotonic cycle counter.
cycle_id: u64,
/// Number of viewpoints received this cycle.
viewpoints_received: usize,
},
/// Fusion completed successfully.
FusionCompleted {
/// GDI at the time of fusion.
gdi: f32,
/// Number of viewpoints fused.
n_viewpoints: usize,
},
/// Coherence gate evaluation result.
CoherenceGateTriggered {
/// Current coherence value.
coherence: f32,
/// Whether the gate accepted the update.
accepted: bool,
},
/// Array geometry was updated.
GeometryUpdated {
/// New GDI value.
new_gdi: f32,
/// Effective independent viewpoints.
n_effective: f32,
},
}
// ---------------------------------------------------------------------------
// MultistaticArray (aggregate root)
// ---------------------------------------------------------------------------
/// Aggregate root for the ViewpointFusion bounded context.
///
/// Manages the lifecycle of a multistatic sensor array: collecting viewpoint
/// embeddings, computing geometric diversity, gating on coherence, and
/// producing fused embeddings for downstream pose estimation.
pub struct MultistaticArray {
/// Unique deployment identifier.
id: ArrayId,
/// Active viewpoint embeddings (latest per node).
viewpoints: Vec<ViewpointEmbedding>,
/// Cross-viewpoint attention module.
attention: CrossViewpointAttention,
/// Coherence state tracker.
coherence_state: CoherenceState,
/// Coherence gate.
coherence_gate: CoherenceGate,
/// Pipeline configuration.
config: FusionConfig,
/// Monotonic TDM cycle counter.
cycle_count: u64,
/// Event log (bounded).
events: Vec<ViewpointFusionEvent>,
/// Maximum events to retain.
max_events: usize,
}
impl MultistaticArray {
/// Create a new multistatic array with the given configuration.
pub fn new(id: ArrayId, config: FusionConfig) -> Self {
let attention = CrossViewpointAttention::new(config.embed_dim);
let attention = CrossViewpointAttention::with_params(
attention.weights,
GeometricBias::new(config.w_angle, config.w_dist, config.d_ref),
);
let coherence_state = CoherenceState::new(config.coherence_window);
let coherence_gate =
CoherenceGate::new(config.coherence_threshold, config.coherence_hysteresis);
MultistaticArray {
id,
viewpoints: Vec::new(),
attention,
coherence_state,
coherence_gate,
config,
cycle_count: 0,
events: Vec::new(),
max_events: 1000,
}
}
/// Create with default configuration.
pub fn with_defaults(id: ArrayId) -> Self {
Self::new(id, FusionConfig::default())
}
/// Array deployment identifier.
pub fn id(&self) -> ArrayId {
self.id
}
/// Number of viewpoints currently held.
pub fn n_viewpoints(&self) -> usize {
self.viewpoints.len()
}
/// Current TDM cycle count.
pub fn cycle_count(&self) -> u64 {
self.cycle_count
}
/// Submit a viewpoint embedding from a sensor node.
///
/// Replaces any existing embedding for the same `node_id`.
pub fn submit_viewpoint(&mut self, vp: ViewpointEmbedding) -> Result<(), FusionError> {
// Validate embedding dimension.
if vp.embedding.len() != self.config.embed_dim {
return Err(FusionError::DimensionMismatch {
expected: self.config.embed_dim,
actual: vp.embedding.len(),
node_id: vp.node_id,
});
}
self.emit_event(ViewpointFusionEvent::ViewpointCaptured {
node_id: vp.node_id,
snr_db: vp.snr_db,
});
// Upsert: replace existing embedding for this node.
if let Some(pos) = self.viewpoints.iter().position(|v| v.node_id == vp.node_id) {
self.viewpoints[pos] = vp;
} else {
self.viewpoints.push(vp);
}
Ok(())
}
/// Push a phase-difference measurement for coherence tracking.
pub fn push_phase_diff(&mut self, phase_diff: f32) {
self.coherence_state.push(phase_diff);
}
/// Current coherence value.
pub fn coherence(&self) -> f32 {
self.coherence_state.coherence()
}
/// Compute the Geometric Diversity Index for the current array layout.
pub fn compute_gdi(&self) -> Option<GeometricDiversityIndex> {
let azimuths: Vec<f32> = self.viewpoints.iter().map(|v| v.azimuth).collect();
let ids: Vec<NodeId> = self.viewpoints.iter().map(|v| v.node_id).collect();
GeometricDiversityIndex::compute(&azimuths, &ids)
}
/// Run the full fusion pipeline.
///
/// 1. Filter viewpoints by SNR.
/// 2. Check coherence gate.
/// 3. Compute geometric bias.
/// 4. Apply cross-viewpoint attention.
/// 5. Mean-pool to single fused embedding.
///
/// # Returns
///
/// `Ok(FusedEmbedding)` on success, or an error if the pipeline cannot
/// produce a valid fusion (no viewpoints, gate closed, etc.).
pub fn fuse(&mut self) -> Result<FusedEmbedding, FusionError> {
self.cycle_count += 1;
// Extract all needed data from viewpoints upfront to avoid borrow conflicts.
let min_snr = self.config.min_snr_db;
let total_viewpoints = self.viewpoints.len();
let extracted: Vec<(NodeId, Vec<f32>, f32, (f32, f32))> = self
.viewpoints
.iter()
.filter(|v| v.snr_db >= min_snr)
.map(|v| (v.node_id, v.embedding.clone(), v.azimuth, v.position))
.collect();
let n_valid = extracted.len();
if n_valid == 0 {
if total_viewpoints == 0 {
return Err(FusionError::NoViewpoints);
}
return Err(FusionError::AllFiltered {
rejected: total_viewpoints,
});
}
// Check coherence gate.
let coh = self.coherence_state.coherence();
let gate_open = self.coherence_gate.evaluate(coh);
self.emit_event(ViewpointFusionEvent::CoherenceGateTriggered {
coherence: coh,
accepted: gate_open,
});
if !gate_open {
return Err(FusionError::CoherenceGateClosed {
coherence: coh,
threshold: self.config.coherence_threshold,
});
}
// Prepare embeddings and geometries from extracted data.
let embeddings: Vec<Vec<f32>> = extracted.iter().map(|(_, e, _, _)| e.clone()).collect();
let geom: Vec<ViewpointGeometry> = extracted
.iter()
.map(|(_, _, az, pos)| ViewpointGeometry {
azimuth: *az,
position: *pos,
})
.collect();
// Run cross-viewpoint attention fusion.
let fused_emb = self.attention.fuse(&embeddings, &geom)?;
// Compute GDI.
let azimuths: Vec<f32> = extracted.iter().map(|(_, _, az, _)| *az).collect();
let ids: Vec<NodeId> = extracted.iter().map(|(id, _, _, _)| *id).collect();
let gdi_opt = GeometricDiversityIndex::compute(&azimuths, &ids);
let (gdi_val, n_eff) = match &gdi_opt {
Some(g) => (g.value, g.n_effective),
None => (0.0, n_valid as f32),
};
self.emit_event(ViewpointFusionEvent::TdmCycleCompleted {
cycle_id: self.cycle_count,
viewpoints_received: n_valid,
});
self.emit_event(ViewpointFusionEvent::FusionCompleted {
gdi: gdi_val,
n_viewpoints: n_valid,
});
Ok(FusedEmbedding {
embedding: fused_emb,
gdi: gdi_val,
coherence: coh,
n_viewpoints: n_valid,
n_effective: n_eff,
})
}
/// Run fusion without coherence gating (for testing or forced updates).
pub fn fuse_ungated(&mut self) -> Result<FusedEmbedding, FusionError> {
let min_snr = self.config.min_snr_db;
let total_viewpoints = self.viewpoints.len();
let extracted: Vec<(NodeId, Vec<f32>, f32, (f32, f32))> = self
.viewpoints
.iter()
.filter(|v| v.snr_db >= min_snr)
.map(|v| (v.node_id, v.embedding.clone(), v.azimuth, v.position))
.collect();
let n_valid = extracted.len();
if n_valid == 0 {
if total_viewpoints == 0 {
return Err(FusionError::NoViewpoints);
}
return Err(FusionError::AllFiltered {
rejected: total_viewpoints,
});
}
let embeddings: Vec<Vec<f32>> = extracted.iter().map(|(_, e, _, _)| e.clone()).collect();
let geom: Vec<ViewpointGeometry> = extracted
.iter()
.map(|(_, _, az, pos)| ViewpointGeometry {
azimuth: *az,
position: *pos,
})
.collect();
let fused_emb = self.attention.fuse(&embeddings, &geom)?;
let azimuths: Vec<f32> = extracted.iter().map(|(_, _, az, _)| *az).collect();
let ids: Vec<NodeId> = extracted.iter().map(|(id, _, _, _)| *id).collect();
let gdi_opt = GeometricDiversityIndex::compute(&azimuths, &ids);
let (gdi_val, n_eff) = match &gdi_opt {
Some(g) => (g.value, g.n_effective),
None => (0.0, n_valid as f32),
};
let coh = self.coherence_state.coherence();
Ok(FusedEmbedding {
embedding: fused_emb,
gdi: gdi_val,
coherence: coh,
n_viewpoints: n_valid,
n_effective: n_eff,
})
}
/// Access the event log.
pub fn events(&self) -> &[ViewpointFusionEvent] {
&self.events
}
/// Clear the event log.
pub fn clear_events(&mut self) {
self.events.clear();
}
/// Remove a viewpoint by node ID.
pub fn remove_viewpoint(&mut self, node_id: NodeId) {
self.viewpoints.retain(|v| v.node_id != node_id);
}
/// Clear all viewpoints.
pub fn clear_viewpoints(&mut self) {
self.viewpoints.clear();
}
fn emit_event(&mut self, event: ViewpointFusionEvent) {
if self.events.len() >= self.max_events {
// Drop oldest half to avoid unbounded growth.
let half = self.max_events / 2;
self.events.drain(..half);
}
self.events.push(event);
}
}
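The bounded event log in `emit_event` can be illustrated with a toy sketch. The `push_bounded` helper below is hypothetical and stands in for the aggregate's private method; it shows the same policy of draining the oldest half once the cap is reached.

```rust
// Toy sketch of the bounded event log policy: when the log reaches
// `max_events`, the oldest half is drained, so memory stays bounded
// while the most recent history is always retained.
fn push_bounded(events: &mut Vec<u32>, event: u32, max_events: usize) {
    if events.len() >= max_events {
        events.drain(..max_events / 2); // drop the oldest half
    }
    events.push(event);
}

fn main() {
    let mut log = Vec::new();
    for i in 0..1500 {
        push_bounded(&mut log, i, 1000);
    }
    // The log never exceeds max_events and keeps the newest entry.
    assert!(log.len() <= 1000);
    assert_eq!(*log.last().unwrap(), 1499);
}
```

Draining half at a time amortises the cost: eviction happens once every `max_events / 2` pushes rather than on every push.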
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
fn make_viewpoint(node_id: NodeId, angle_idx: usize, n: usize, dim: usize) -> ViewpointEmbedding {
let angle = 2.0 * std::f32::consts::PI * angle_idx as f32 / n as f32;
let r = 3.0;
ViewpointEmbedding {
node_id,
embedding: (0..dim).map(|d| ((node_id as usize * dim + d) as f32 * 0.01).sin()).collect(),
azimuth: angle,
elevation: 0.0,
baseline: r,
position: (r * angle.cos(), r * angle.sin()),
snr_db: 15.0,
}
}
fn setup_coherent_array(dim: usize) -> MultistaticArray {
let config = FusionConfig {
embed_dim: dim,
coherence_threshold: 0.5,
coherence_hysteresis: 0.0,
min_snr_db: 0.0,
..FusionConfig::default()
};
let mut array = MultistaticArray::new(1, config);
// Push coherent phase diffs to open the gate.
for _ in 0..60 {
array.push_phase_diff(0.1);
}
array
}
#[test]
fn fuse_produces_correct_dimension() {
let dim = 16;
let mut array = setup_coherent_array(dim);
for i in 0..4 {
array.submit_viewpoint(make_viewpoint(i, i as usize, 4, dim)).unwrap();
}
let fused = array.fuse().unwrap();
assert_eq!(fused.embedding.len(), dim);
assert_eq!(fused.n_viewpoints, 4);
}
#[test]
fn fuse_no_viewpoints_returns_error() {
let mut array = setup_coherent_array(16);
assert!(matches!(array.fuse(), Err(FusionError::NoViewpoints)));
}
#[test]
fn fuse_coherence_gate_closed_returns_error() {
let dim = 16;
let config = FusionConfig {
embed_dim: dim,
coherence_threshold: 0.9,
coherence_hysteresis: 0.0,
min_snr_db: 0.0,
..FusionConfig::default()
};
let mut array = MultistaticArray::new(1, config);
// Push incoherent phase diffs.
for i in 0..100 {
array.push_phase_diff(i as f32 * 0.5);
}
array.submit_viewpoint(make_viewpoint(0, 0, 4, dim)).unwrap();
array.submit_viewpoint(make_viewpoint(1, 1, 4, dim)).unwrap();
let result = array.fuse();
assert!(matches!(result, Err(FusionError::CoherenceGateClosed { .. })));
}
#[test]
fn fuse_ungated_bypasses_coherence() {
let dim = 16;
let config = FusionConfig {
embed_dim: dim,
coherence_threshold: 0.99,
coherence_hysteresis: 0.0,
min_snr_db: 0.0,
..FusionConfig::default()
};
let mut array = MultistaticArray::new(1, config);
// Push incoherent diffs -- gate would be closed.
for i in 0..100 {
array.push_phase_diff(i as f32 * 0.5);
}
array.submit_viewpoint(make_viewpoint(0, 0, 4, dim)).unwrap();
array.submit_viewpoint(make_viewpoint(1, 1, 4, dim)).unwrap();
let fused = array.fuse_ungated().unwrap();
assert_eq!(fused.embedding.len(), dim);
}
#[test]
fn submit_replaces_existing_viewpoint() {
let dim = 8;
let mut array = setup_coherent_array(dim);
let vp1 = make_viewpoint(10, 0, 4, dim);
let mut vp2 = make_viewpoint(10, 1, 4, dim);
vp2.snr_db = 25.0;
array.submit_viewpoint(vp1).unwrap();
assert_eq!(array.n_viewpoints(), 1);
array.submit_viewpoint(vp2).unwrap();
assert_eq!(array.n_viewpoints(), 1, "should replace, not add");
}
#[test]
fn dimension_mismatch_returns_error() {
let dim = 16;
let mut array = setup_coherent_array(dim);
let mut vp = make_viewpoint(0, 0, 4, dim);
vp.embedding = vec![1.0; 8]; // wrong dim
assert!(matches!(
array.submit_viewpoint(vp),
Err(FusionError::DimensionMismatch { .. })
));
}
#[test]
fn snr_filter_rejects_low_quality() {
let dim = 16;
let config = FusionConfig {
embed_dim: dim,
coherence_threshold: 0.0,
min_snr_db: 10.0,
..FusionConfig::default()
};
let mut array = MultistaticArray::new(1, config);
for _ in 0..60 {
array.push_phase_diff(0.1);
}
let mut vp = make_viewpoint(0, 0, 4, dim);
vp.snr_db = 3.0; // below threshold
array.submit_viewpoint(vp).unwrap();
assert!(matches!(array.fuse(), Err(FusionError::AllFiltered { .. })));
}
#[test]
fn events_are_emitted_on_fusion() {
let dim = 8;
let mut array = setup_coherent_array(dim);
array.submit_viewpoint(make_viewpoint(0, 0, 4, dim)).unwrap();
array.submit_viewpoint(make_viewpoint(1, 1, 4, dim)).unwrap();
array.clear_events();
let _ = array.fuse();
assert!(!array.events().is_empty(), "fusion should emit events");
}
#[test]
fn remove_viewpoint_works() {
let dim = 8;
let mut array = setup_coherent_array(dim);
array.submit_viewpoint(make_viewpoint(10, 0, 4, dim)).unwrap();
array.submit_viewpoint(make_viewpoint(20, 1, 4, dim)).unwrap();
assert_eq!(array.n_viewpoints(), 2);
array.remove_viewpoint(10);
assert_eq!(array.n_viewpoints(), 1);
}
#[test]
fn fused_embedding_reports_gdi() {
let dim = 16;
let mut array = setup_coherent_array(dim);
for i in 0..4 {
array.submit_viewpoint(make_viewpoint(i, i as usize, 4, dim)).unwrap();
}
let fused = array.fuse().unwrap();
assert!(fused.gdi > 0.0, "GDI should be positive for spread viewpoints");
assert!(fused.n_effective > 1.0, "effective viewpoints should be > 1");
}
#[test]
fn compute_gdi_standalone() {
let dim = 8;
let mut array = setup_coherent_array(dim);
for i in 0..6 {
array.submit_viewpoint(make_viewpoint(i, i as usize, 6, dim)).unwrap();
}
let gdi = array.compute_gdi().unwrap();
assert!(gdi.value > 0.0);
assert!(gdi.n_effective > 1.0);
}
}
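Steps 2-4 of the fusion pipeline (geometric bias, cross-viewpoint attention, pooling) can be sketched without the learned weights. This is a simplified stand-in for `CrossViewpointAttention::fuse`, not the crate's implementation: it uses raw embeddings as queries, keys, and values, and a single additive bias term `w_angle * angular_distance` in place of the full `GeometricBias`; the `fuse` signature here is hypothetical.

```rust
// Sketch of biased cross-viewpoint attention: scores are scaled dot
// products plus a geometric bias rewarding angular separation, softmax-
// normalised per query, then mean-pooled into one fused embedding.
fn angular_distance(a: f32, b: f32) -> f32 {
    let diff = (a - b).abs() % (2.0 * std::f32::consts::PI);
    if diff > std::f32::consts::PI { 2.0 * std::f32::consts::PI - diff } else { diff }
}

fn fuse(embeddings: &[Vec<f32>], azimuths: &[f32], w_angle: f32) -> Vec<f32> {
    let n = embeddings.len();
    let d = embeddings[0].len();
    let scale = (d as f32).sqrt();
    let mut fused = vec![0.0_f32; d];
    for i in 0..n {
        // Scaled dot-product scores with an additive geometric bias.
        let scores: Vec<f32> = (0..n)
            .map(|j| {
                let dot: f32 =
                    embeddings[i].iter().zip(&embeddings[j]).map(|(a, b)| a * b).sum();
                dot / scale + w_angle * angular_distance(azimuths[i], azimuths[j])
            })
            .collect();
        // Numerically stable softmax over the scores for query i.
        let max = scores.iter().cloned().fold(f32::MIN, f32::max);
        let exps: Vec<f32> = scores.iter().map(|s| (s - max).exp()).collect();
        let sum: f32 = exps.iter().sum();
        // Accumulate attention-weighted values, mean-pooled across queries.
        for j in 0..n {
            let w = exps[j] / sum;
            for k in 0..d {
                fused[k] += w * embeddings[j][k] / n as f32;
            }
        }
    }
    fused
}

fn main() {
    let embs = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 1.0]];
    let out = fuse(&embs, &[0.0, 2.1, 4.2], 1.0);
    assert_eq!(out.len(), 2); // fused dimension matches the input dimension
}
```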


@@ -0,0 +1,499 @@
//! Geometric Diversity Index and Cramer-Rao bound estimation (ADR-031).
//!
//! Provides two key computations for array geometry quality assessment:
//!
//! 1. **Geometric Diversity Index (GDI)**: measures how well the viewpoints
//! are spread around the sensing area. Higher GDI = better spatial coverage.
//!
//! 2. **Cramer-Rao Bound (CRB)**: lower bound on the position estimation
//! variance achievable by any unbiased estimator given the array geometry.
//! Used to predict theoretical localisation accuracy.
//!
//! Uses `ruvector_solver` for matrix operations in the Fisher information
//! matrix inversion required by the Cramer-Rao bound.
use ruvector_solver::neumann::NeumannSolver;
use ruvector_solver::types::CsrMatrix;
// ---------------------------------------------------------------------------
// Node identifier
// ---------------------------------------------------------------------------
/// Unique identifier for a sensor node in the multistatic array.
pub type NodeId = u32;
// ---------------------------------------------------------------------------
// GeometricDiversityIndex
// ---------------------------------------------------------------------------
/// Geometric Diversity Index measuring array viewpoint spread.
///
/// GDI is computed as the mean minimum angular separation across all viewpoints:
///
/// ```text
/// GDI = (1/N) * sum_i min_{j != i} |theta_i - theta_j|
/// ```
///
/// A GDI close to `2*PI/N` (uniform spacing) indicates optimal diversity.
/// A GDI near zero means viewpoints are clustered.
///
/// The `n_effective` field estimates the number of independent viewpoints
/// after accounting for angular correlation between nearby viewpoints.
#[derive(Debug, Clone)]
pub struct GeometricDiversityIndex {
/// GDI value (radians). Higher is better.
pub value: f32,
/// Effective independent viewpoints after correlation discount.
pub n_effective: f32,
/// Worst (most redundant) viewpoint pair.
pub worst_pair: (NodeId, NodeId),
/// Number of physical viewpoints in the array.
pub n_physical: usize,
}
impl GeometricDiversityIndex {
/// Compute the GDI from viewpoint azimuth angles.
///
/// # Arguments
///
/// - `azimuths`: per-viewpoint azimuth angle in radians from the array
/// centroid. Must have at least 2 elements.
/// - `node_ids`: per-viewpoint node identifier (same length as `azimuths`).
///
/// # Returns
///
/// `None` if fewer than 2 viewpoints are provided.
pub fn compute(azimuths: &[f32], node_ids: &[NodeId]) -> Option<Self> {
let n = azimuths.len();
if n < 2 || node_ids.len() != n {
return None;
}
// Find the minimum angular separation for each viewpoint.
let mut min_seps = Vec::with_capacity(n);
let mut worst_sep = f32::MAX;
let mut worst_i = 0_usize;
let mut worst_j = 1_usize;
for i in 0..n {
let mut min_sep = f32::MAX;
let mut min_j = (i + 1) % n;
for j in 0..n {
if i == j {
continue;
}
let sep = angular_distance(azimuths[i], azimuths[j]);
if sep < min_sep {
min_sep = sep;
min_j = j;
}
}
min_seps.push(min_sep);
if min_sep < worst_sep {
worst_sep = min_sep;
worst_i = i;
worst_j = min_j;
}
}
let gdi = min_seps.iter().sum::<f32>() / n as f32;
// Effective viewpoints: discount correlated viewpoints.
// Correlation model: rho(theta) = exp(-theta^2 / (2 * sigma^2))
// with sigma = PI/6 (30 degrees).
let sigma = std::f32::consts::PI / 6.0;
let n_effective = compute_effective_viewpoints(azimuths, sigma);
Some(GeometricDiversityIndex {
value: gdi,
n_effective,
worst_pair: (node_ids[worst_i], node_ids[worst_j]),
n_physical: n,
})
}
/// Returns `true` if the array has sufficient geometric diversity for
/// reliable multi-viewpoint fusion.
///
/// Threshold: GDI >= PI / N (at least half the uniform-spacing ideal of 2*PI/N).
pub fn is_sufficient(&self) -> bool {
if self.n_physical == 0 {
return false;
}
let ideal = std::f32::consts::PI * 2.0 / self.n_physical as f32;
self.value >= ideal * 0.5
}
/// Ratio of effective to physical viewpoints.
pub fn efficiency(&self) -> f32 {
if self.n_physical == 0 {
return 0.0;
}
self.n_effective / self.n_physical as f32
}
}
/// Compute the shortest angular distance between two angles (radians).
///
/// Returns a value in `[0, PI]`.
fn angular_distance(a: f32, b: f32) -> f32 {
let diff = (a - b).abs() % (2.0 * std::f32::consts::PI);
if diff > std::f32::consts::PI {
2.0 * std::f32::consts::PI - diff
} else {
diff
}
}
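The GDI formula from the doc comment above (mean, over all viewpoints, of each viewpoint's minimum angular separation) can be sketched directly. The free-standing `gdi` function below is illustrative and omits node IDs, the worst-pair bookkeeping, and the effective-viewpoint discount of the full `GeometricDiversityIndex::compute`.

```rust
// Sketch of GDI = (1/N) * sum_i min_{j != i} |theta_i - theta_j|,
// using the same wrap-around angular distance as the module.
fn angular_distance(a: f32, b: f32) -> f32 {
    let diff = (a - b).abs() % (2.0 * std::f32::consts::PI);
    if diff > std::f32::consts::PI { 2.0 * std::f32::consts::PI - diff } else { diff }
}

fn gdi(azimuths: &[f32]) -> f32 {
    let n = azimuths.len();
    let sum: f32 = (0..n)
        .map(|i| {
            (0..n)
                .filter(|&j| j != i)
                .map(|j| angular_distance(azimuths[i], azimuths[j]))
                .fold(f32::MAX, f32::min) // minimum separation for viewpoint i
        })
        .sum();
    sum / n as f32
}

fn main() {
    use std::f32::consts::{FRAC_PI_2, PI};
    // Uniform 4-node layout: every minimum separation is PI/2, so GDI = PI/2.
    let uniform = gdi(&[0.0, FRAC_PI_2, PI, 1.5 * PI]);
    assert!((uniform - FRAC_PI_2).abs() < 1e-4);
    // Clustered layout: GDI collapses toward zero.
    assert!(gdi(&[0.0, 0.05, 0.08, 0.12]) < 0.15);
}
```

These two cases mirror the `gdi_uniform_spacing_is_optimal` and `gdi_clustered_viewpoints_have_low_value` tests below.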
/// Compute effective independent viewpoints using a Gaussian angular correlation
/// model and eigenvalue analysis of the correlation matrix.
///
/// The effective count is `N_eff = (sum lambda_i)^2 / sum(lambda_i^2)` where
/// `lambda_i` are the eigenvalues of the angular correlation matrix. Because
/// `R` is symmetric, `trace(R) = sum lambda_i` and `trace(R^2) = sum lambda_i^2`
/// hold exactly, so we compute `N_eff = trace(R)^2 / trace(R^2)` without an
/// eigendecomposition.
fn compute_effective_viewpoints(azimuths: &[f32], sigma: f32) -> f32 {
let n = azimuths.len();
if n == 0 {
return 0.0;
}
if n == 1 {
return 1.0;
}
let two_sigma_sq = 2.0 * sigma * sigma;
// Build correlation matrix R[i,j] = exp(-angular_dist(i,j)^2 / (2*sigma^2))
// and compute trace(R) and trace(R^2) simultaneously.
// For trace(R^2) = sum_i sum_j R[i,j]^2, we need the full matrix.
let mut r_matrix = vec![0.0_f32; n * n];
for i in 0..n {
r_matrix[i * n + i] = 1.0;
for j in (i + 1)..n {
let d = angular_distance(azimuths[i], azimuths[j]);
let rho = (-d * d / two_sigma_sq).exp();
r_matrix[i * n + j] = rho;
r_matrix[j * n + i] = rho;
}
}
// trace(R) = n (all diagonal entries are 1.0).
let trace_r = n as f32;
// trace(R^2) = sum_{i,j} R[i,j]^2
let trace_r2: f32 = r_matrix.iter().map(|v| v * v).sum();
// N_eff = trace(R)^2 / trace(R^2)
let n_eff = (trace_r * trace_r) / trace_r2.max(f32::EPSILON);
n_eff.min(n as f32).max(1.0)
}
// ---------------------------------------------------------------------------
// Cramer-Rao Bound
// ---------------------------------------------------------------------------
/// Cramer-Rao lower bound on position estimation variance.
///
/// The CRB provides the theoretical minimum variance achievable by any
/// unbiased estimator for the target position given the array geometry.
/// Lower CRB = better localisation potential.
#[derive(Debug, Clone)]
pub struct CramerRaoBound {
/// CRB for x-coordinate estimation (metres squared).
pub crb_x: f32,
/// CRB for y-coordinate estimation (metres squared).
pub crb_y: f32,
/// Root-mean-square position error lower bound (metres).
pub rmse_lower_bound: f32,
/// Geometric dilution of precision (GDOP).
pub gdop: f32,
}
/// A viewpoint position for CRB computation.
#[derive(Debug, Clone)]
pub struct ViewpointPosition {
/// X coordinate in metres.
pub x: f32,
/// Y coordinate in metres.
pub y: f32,
/// Per-measurement noise standard deviation (metres).
pub noise_std: f32,
}
impl CramerRaoBound {
/// Estimate the Cramer-Rao bound for a target at `(tx, ty)` observed by
/// the given viewpoints.
///
/// # Arguments
///
/// - `target`: target position `(x, y)` in metres.
/// - `viewpoints`: sensor node positions with per-node noise levels.
///
/// # Returns
///
/// `None` if fewer than 3 viewpoints are provided (under-determined).
pub fn estimate(target: (f32, f32), viewpoints: &[ViewpointPosition]) -> Option<Self> {
let n = viewpoints.len();
if n < 3 {
return None;
}
// Build the 2x2 Fisher Information Matrix (FIM).
// FIM = sum_i (1/sigma_i^2) * [cos^2(phi_i), cos(phi_i)*sin(phi_i);
// cos(phi_i)*sin(phi_i), sin^2(phi_i)]
// where phi_i is the bearing angle from viewpoint i to the target.
let mut fim_00 = 0.0_f32;
let mut fim_01 = 0.0_f32;
let mut fim_11 = 0.0_f32;
for vp in viewpoints {
let dx = target.0 - vp.x;
let dy = target.1 - vp.y;
let r = (dx * dx + dy * dy).sqrt().max(1e-6);
let cos_phi = dx / r;
let sin_phi = dy / r;
let inv_var = 1.0 / (vp.noise_std * vp.noise_std).max(1e-10);
fim_00 += inv_var * cos_phi * cos_phi;
fim_01 += inv_var * cos_phi * sin_phi;
fim_11 += inv_var * sin_phi * sin_phi;
}
// Invert the 2x2 FIM analytically: CRB = FIM^{-1}.
let det = fim_00 * fim_11 - fim_01 * fim_01;
if det.abs() < 1e-12 {
return None;
}
let crb_x = fim_11 / det;
let crb_y = fim_00 / det;
let rmse = (crb_x + crb_y).sqrt();
// GDOP uses the same trace form here because per-node noise is already
// folded into the FIM weights.
let gdop = rmse;
Some(CramerRaoBound {
crb_x,
crb_y,
rmse_lower_bound: rmse,
gdop,
})
}
/// Compute the CRB with Tikhonov regularisation, solving the 2x2 system
/// via the `ruvector-solver` Neumann series solver. This is useful for
/// ill-conditioned geometries where the analytic inversion is unstable.
///
/// # Arguments
///
/// - `target`: target position `(x, y)` in metres.
/// - `viewpoints`: sensor node positions with per-node noise levels.
/// - `regularisation`: Tikhonov regularisation parameter (typically 1e-4).
///
/// # Returns
///
/// `None` if fewer than 3 viewpoints or the solver fails.
pub fn estimate_regularised(
target: (f32, f32),
viewpoints: &[ViewpointPosition],
regularisation: f32,
) -> Option<Self> {
let n = viewpoints.len();
if n < 3 {
return None;
}
let mut fim_00 = regularisation;
let mut fim_01 = 0.0_f32;
let mut fim_11 = regularisation;
for vp in viewpoints {
let dx = target.0 - vp.x;
let dy = target.1 - vp.y;
let r = (dx * dx + dy * dy).sqrt().max(1e-6);
let cos_phi = dx / r;
let sin_phi = dy / r;
let inv_var = 1.0 / (vp.noise_std * vp.noise_std).max(1e-10);
fim_00 += inv_var * cos_phi * cos_phi;
fim_01 += inv_var * cos_phi * sin_phi;
fim_11 += inv_var * sin_phi * sin_phi;
}
// Use Neumann solver for the regularised system.
let ata = CsrMatrix::<f32>::from_coo(
2,
2,
vec![
(0, 0, fim_00),
(0, 1, fim_01),
(1, 0, fim_01),
(1, 1, fim_11),
],
);
// Solve FIM * x = e_1 and FIM * x = e_2 to get the CRB diagonal.
let solver = NeumannSolver::new(1e-6, 500);
let crb_x = solver
.solve(&ata, &[1.0, 0.0])
.ok()
.map(|r| r.solution[0])?;
let crb_y = solver
.solve(&ata, &[0.0, 1.0])
.ok()
.map(|r| r.solution[1])?;
let rmse = (crb_x.abs() + crb_y.abs()).sqrt();
Some(CramerRaoBound {
crb_x,
crb_y,
rmse_lower_bound: rmse,
gdop: rmse,
})
}
}
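The FIM construction and analytic inversion in `estimate` can be condensed into a sketch. The `crb` function below is illustrative (nodes are plain `(x, y, noise_std)` tuples rather than `ViewpointPosition`, and only the CRB diagonal is returned), but the arithmetic mirrors the method above.

```rust
// Sketch of the bearing-only Cramer-Rao bound: build the 2x2 Fisher
// information matrix FIM = sum_i (1/sigma_i^2) * u_i u_i^T, where u_i is
// the unit bearing vector from viewpoint i to the target, then invert it
// analytically. CRB diagonal = (FIM^{-1})[0][0], (FIM^{-1})[1][1].
fn crb(target: (f32, f32), nodes: &[(f32, f32, f32)]) -> Option<(f32, f32)> {
    let (mut f00, mut f01, mut f11) = (0.0_f32, 0.0_f32, 0.0_f32);
    for &(x, y, noise_std) in nodes {
        let (dx, dy) = (target.0 - x, target.1 - y);
        let r = (dx * dx + dy * dy).sqrt().max(1e-6);
        let (c, s) = (dx / r, dy / r); // unit bearing vector components
        let inv_var = 1.0 / (noise_std * noise_std);
        f00 += inv_var * c * c;
        f01 += inv_var * c * s;
        f11 += inv_var * s * s;
    }
    // Analytic 2x2 inverse: the CRB diagonal is (f11/det, f00/det).
    let det = f00 * f11 - f01 * f01;
    if det.abs() < 1e-12 {
        return None;
    }
    Some((f11 / det, f00 / det))
}

fn main() {
    // Three viewpoints on a circle around the target, 0.1 m noise each.
    let nodes: Vec<(f32, f32, f32)> = (0..3)
        .map(|i| {
            let a = 2.0 * std::f32::consts::PI * i as f32 / 3.0;
            (5.0 * a.cos(), 5.0 * a.sin(), 0.1)
        })
        .collect();
    let (cx, cy) = crb((0.0, 0.0), &nodes).unwrap();
    assert!(cx > 0.0 && cy > 0.0);
    // Collinear viewpoints make the FIM singular: no finite bound exists.
    let line = [(1.0, 0.0, 0.1), (2.0, 0.0, 0.1), (3.0, 0.0, 0.1)];
    assert!(crb((0.0, 0.0), &line).is_none());
}
```

The collinear case shows why `estimate_regularised` exists: adding `regularisation` to the diagonal keeps the system invertible even for degenerate layouts.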
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn gdi_uniform_spacing_is_optimal() {
// 4 viewpoints at 0, 90, 180, 270 degrees
let azimuths = vec![0.0, std::f32::consts::FRAC_PI_2, std::f32::consts::PI, 3.0 * std::f32::consts::FRAC_PI_2];
let ids = vec![0, 1, 2, 3];
let gdi = GeometricDiversityIndex::compute(&azimuths, &ids).unwrap();
// Minimum separation = PI/2 for each viewpoint, so GDI = PI/2
let expected = std::f32::consts::FRAC_PI_2;
assert!(
(gdi.value - expected).abs() < 0.01,
"uniform spacing GDI should be PI/2={expected:.3}, got {:.3}",
gdi.value
);
}
#[test]
fn gdi_clustered_viewpoints_have_low_value() {
// 4 viewpoints clustered within 10 degrees
let azimuths = vec![0.0, 0.05, 0.08, 0.12];
let ids = vec![0, 1, 2, 3];
let gdi = GeometricDiversityIndex::compute(&azimuths, &ids).unwrap();
assert!(
gdi.value < 0.15,
"clustered viewpoints should have low GDI, got {:.3}",
gdi.value
);
}
#[test]
fn gdi_insufficient_viewpoints_returns_none() {
assert!(GeometricDiversityIndex::compute(&[0.0], &[0]).is_none());
assert!(GeometricDiversityIndex::compute(&[], &[]).is_none());
}
#[test]
fn gdi_efficiency_is_bounded() {
let azimuths = vec![0.0, 1.0, 2.0, 3.0];
let ids = vec![0, 1, 2, 3];
let gdi = GeometricDiversityIndex::compute(&azimuths, &ids).unwrap();
assert!(gdi.efficiency() > 0.0 && gdi.efficiency() <= 1.0,
"efficiency should be in (0, 1], got {}", gdi.efficiency());
}
#[test]
fn gdi_is_sufficient_for_uniform_layout() {
let azimuths = vec![0.0, std::f32::consts::FRAC_PI_2, std::f32::consts::PI, 3.0 * std::f32::consts::FRAC_PI_2];
let ids = vec![0, 1, 2, 3];
let gdi = GeometricDiversityIndex::compute(&azimuths, &ids).unwrap();
assert!(gdi.is_sufficient(), "uniform layout should be sufficient");
}
#[test]
fn gdi_worst_pair_is_closest() {
// Viewpoints at 0, 0.1, PI, 1.5*PI
let azimuths = vec![0.0, 0.1, std::f32::consts::PI, 1.5 * std::f32::consts::PI];
let ids = vec![10, 20, 30, 40];
let gdi = GeometricDiversityIndex::compute(&azimuths, &ids).unwrap();
// Worst pair should be (10, 20) as they are only 0.1 rad apart
assert!(
(gdi.worst_pair == (10, 20)) || (gdi.worst_pair == (20, 10)),
"worst pair should be nodes 10 and 20, got {:?}",
gdi.worst_pair
);
}
#[test]
fn angular_distance_wraps_correctly() {
let d = angular_distance(0.1, 2.0 * std::f32::consts::PI - 0.1);
assert!(
(d - 0.2).abs() < 1e-4,
"angular distance across 0/2PI boundary should be 0.2, got {d}"
);
}
#[test]
fn effective_viewpoints_all_identical_equals_one() {
let azimuths = vec![0.0, 0.0, 0.0, 0.0];
let sigma = std::f32::consts::PI / 6.0;
let n_eff = compute_effective_viewpoints(&azimuths, sigma);
assert!(
(n_eff - 1.0).abs() < 0.1,
"4 identical viewpoints should have n_eff ~ 1.0, got {n_eff}"
);
}
#[test]
fn crb_decreases_with_more_viewpoints() {
let target = (0.0, 0.0);
let vp3: Vec<ViewpointPosition> = (0..3)
.map(|i| {
let a = 2.0 * std::f32::consts::PI * i as f32 / 3.0;
ViewpointPosition { x: 5.0 * a.cos(), y: 5.0 * a.sin(), noise_std: 0.1 }
})
.collect();
let vp6: Vec<ViewpointPosition> = (0..6)
.map(|i| {
let a = 2.0 * std::f32::consts::PI * i as f32 / 6.0;
ViewpointPosition { x: 5.0 * a.cos(), y: 5.0 * a.sin(), noise_std: 0.1 }
})
.collect();
let crb3 = CramerRaoBound::estimate(target, &vp3).unwrap();
let crb6 = CramerRaoBound::estimate(target, &vp6).unwrap();
assert!(
crb6.rmse_lower_bound < crb3.rmse_lower_bound,
"6 viewpoints should give lower CRB than 3: {:.4} vs {:.4}",
crb6.rmse_lower_bound,
crb3.rmse_lower_bound
);
}
#[test]
fn crb_too_few_viewpoints_returns_none() {
let target = (0.0, 0.0);
let vps = vec![
ViewpointPosition { x: 1.0, y: 0.0, noise_std: 0.1 },
ViewpointPosition { x: 0.0, y: 1.0, noise_std: 0.1 },
];
assert!(CramerRaoBound::estimate(target, &vps).is_none());
}
#[test]
fn crb_regularised_returns_result() {
let target = (0.0, 0.0);
let vps: Vec<ViewpointPosition> = (0..4)
.map(|i| {
let a = 2.0 * std::f32::consts::PI * i as f32 / 4.0;
ViewpointPosition { x: 3.0 * a.cos(), y: 3.0 * a.sin(), noise_std: 0.1 }
})
.collect();
let crb = CramerRaoBound::estimate_regularised(target, &vps, 1e-4);
// May return None if Neumann solver doesn't converge, but should not panic.
if let Some(crb) = crb {
assert!(crb.rmse_lower_bound >= 0.0, "RMSE bound must be non-negative");
}
}
}


@@ -0,0 +1,27 @@
//! Cross-viewpoint embedding fusion for multistatic WiFi sensing (ADR-031).
//!
//! This module implements the RuView fusion pipeline that combines per-viewpoint
//! AETHER embeddings into a single fused embedding using learned cross-viewpoint
//! attention with geometric bias.
//!
//! # Submodules
//!
//! - [`attention`]: Cross-viewpoint scaled dot-product attention with geometric
//! bias encoding angular separation and baseline distance between viewpoint pairs.
//! - [`geometry`]: Geometric Diversity Index (GDI) computation and Cramer-Rao
//! bound estimation for array geometry quality assessment.
//! - [`coherence`]: Coherence gating that determines whether the environment is
//! stable enough for a model update based on phase consistency.
//! - [`fusion`]: `MultistaticArray` aggregate root that orchestrates the full
//! fusion pipeline from per-viewpoint embeddings to a single fused output.
pub mod attention;
pub mod coherence;
pub mod fusion;
pub mod geometry;
// Re-export primary types at the module root for ergonomic imports.
pub use attention::{CrossViewpointAttention, GeometricBias};
pub use coherence::{CoherenceGate, CoherenceState};
pub use fusion::{FusedEmbedding, FusionConfig, MultistaticArray, ViewpointEmbedding};
pub use geometry::{CramerRaoBound, GeometricDiversityIndex};