Merge commit 'd803bfe2b1fe7f5e219e50ac20d6801a0a58ac75' as 'vendor/ruvector'
233
vendor/ruvector/crates/ruvector-nervous-system/examples/README.md
vendored
Normal file
@@ -0,0 +1,233 @@
# Nervous System Examples

[](https://www.rust-lang.org/)
[](../LICENSE)

Bio-inspired nervous system architecture examples demonstrating the transition from **"How do we make machines smarter?"** to **"What kind of organism are we building?"**
## Overview

These examples show how nervous system thinking unlocks new products, markets, and research categories. The architecture enables systems that **age well** instead of breaking under complexity.

All tier examples are organized in the unified `tiers/` folder with prefixed names for easy navigation.
## Application Tiers

### Tier 1: Immediate Practical Applications

*Shippable with current architecture*

| Example | Domain | Key Benefit |
|---------|--------|-------------|
| [t1_anomaly_detection](tiers/t1_anomaly_detection.rs) | Infrastructure, Finance, Security | Detection before failure, microsecond response |
| [t1_edge_autonomy](tiers/t1_edge_autonomy.rs) | Drones, Vehicles, Robotics | Lower power, certified reflex paths |
| [t1_medical_wearable](tiers/t1_medical_wearable.rs) | Monitoring, Assistive Devices | Adapts to the person, always-on, private |

### Tier 2: Near-Term Transformative Applications

*Possible once local learning and coherence routing mature*

| Example | Domain | Key Benefit |
|---------|--------|-------------|
| [t2_self_optimizing](tiers/t2_self_optimizing.rs) | Agents Monitoring Agents | Self-stabilizing software, structural witnesses |
| [t2_swarm_intelligence](tiers/t2_swarm_intelligence.rs) | IoT Fleets, Sensor Meshes | Scale without fragility, emergent intelligence |
| [t2_adaptive_simulation](tiers/t2_adaptive_simulation.rs) | Digital Twins, Logistics | Always-warm simulation, costs scale with relevance |

### Tier 3: Exotic But Real Applications

*Technically grounded, novel research directions*

| Example | Domain | Key Benefit |
|---------|--------|-------------|
| [t3_self_awareness](tiers/t3_self_awareness.rs) | Structural Self-Sensing | Systems say "I am becoming unstable" |
| [t3_synthetic_nervous](tiers/t3_synthetic_nervous.rs) | Buildings, Factories, Cities | Environments respond like organisms |
| [t3_bio_machine](tiers/t3_bio_machine.rs) | Prosthetics, Rehabilitation | Machines stop fighting biology |

### Tier 4: SOTA & Exotic Research Applications

*Cutting-edge research directions pushing neuromorphic boundaries*

| Example | Domain | Key Benefit |
|---------|--------|-------------|
| [t4_neuromorphic_rag](tiers/t4_neuromorphic_rag.rs) | LLM Memory, Retrieval | Coherence-gated retrieval, 100x compute reduction |
| [t4_agentic_self_model](tiers/t4_agentic_self_model.rs) | Agentic AI, Self-Awareness | Agent models its own cognition, knows when it is capable |
| [t4_collective_dreaming](tiers/t4_collective_dreaming.rs) | Swarm Consolidation | Hippocampal replay, cross-agent memory transfer |
| [t4_compositional_hdc](tiers/t4_compositional_hdc.rs) | Zero-Shot Reasoning | HDC binding for analogy and composition |
## Quick Start

```bash
# Run a Tier 1 example
cargo run --example t1_anomaly_detection

# Run a Tier 2 example
cargo run --example t2_swarm_intelligence

# Run a Tier 3 example
cargo run --example t3_self_awareness

# Run a Tier 4 example
cargo run --example t4_neuromorphic_rag
```
## Architecture Principles

Each example demonstrates the same five-layer architecture:

```
┌─────────────────────────────────────────────────────────────┐
│                       COHERENCE LAYER                       │
│ Global Workspace • Oscillatory Routing • Predictive Coding  │
└─────────────────────────────────────────────────────────────┘
                              ↑
┌─────────────────────────────────────────────────────────────┐
│                        LEARNING LAYER                       │
│      BTSP One-Shot • E-prop Online • EWC Consolidation      │
└─────────────────────────────────────────────────────────────┘
                              ↑
┌─────────────────────────────────────────────────────────────┐
│                         MEMORY LAYER                        │
│     Hopfield Networks • HDC Vectors • Pattern Separation    │
└─────────────────────────────────────────────────────────────┘
                              ↑
┌─────────────────────────────────────────────────────────────┐
│                         REFLEX LAYER                        │
│      K-WTA Competition • Dendritic Coincidence • Safety     │
└─────────────────────────────────────────────────────────────┘
                              ↑
┌─────────────────────────────────────────────────────────────┐
│                        SENSING LAYER                        │
│       Event Bus • Sparse Spikes • Backpressure Control      │
└─────────────────────────────────────────────────────────────┘
```
## Key Concepts Demonstrated

### Reflex Arcs

Fast, deterministic responses with bounded execution:

- Latency: <100μs
- Certifiable: maximum iteration counts
- Safety: witness logging for every decision
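
The bounded-execution idea can be sketched directly. This is an illustrative fragment, not the crate's API; `reflex_check` and `max_iters` are names invented here:

```rust
/// Minimal sketch of a certifiable reflex check (illustrative, not the
/// ruvector-nervous-system API): work is capped at `max_iters`, so the
/// worst-case latency is a provable constant.
fn reflex_check(inputs: &[f32], threshold: f32, max_iters: usize) -> Option<(usize, f32)> {
    inputs
        .iter()
        .take(max_iters) // hard bound on how many inputs are ever examined
        .enumerate()
        .find(|&(_, &v)| v >= threshold)
        .map(|(i, &v)| (i, v)) // (index, value) doubles as a witness record
}
```

Because the scan is capped by `max_iters`, the worst-case work is a fixed constant, which is what makes a reflex path amenable to certification.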
### Homeostasis

Self-regulation instead of static thresholds:

- Adaptive learning from normal operation
- Graceful degradation under stress
- Anticipatory maintenance
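
A minimal sketch of the self-regulation idea, assuming a plain exponential moving average stands in for the crate's richer adaptation machinery (`homeostatic_update` is a name invented here):

```rust
/// Homeostatic set-point sketch (illustrative only): the threshold drifts
/// toward recent normal activity instead of sitting at a hand-tuned constant.
fn homeostatic_update(threshold: f32, observed: f32, rate: f32) -> f32 {
    // Exponential moving average: a small `rate` gives slow, stable adaptation.
    threshold + rate * (observed - threshold)
}
```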
### Coherence Gating

Synchronize only when needed:

- Kuramoto oscillators for phase coupling
- Communication gain based on phase coherence
- 90-99% bandwidth reduction via prediction
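
The gating rule can be illustrated with a toy phase model; `communication_gain` is an invented name and a deliberate simplification of Kuramoto-style coupling:

```rust
/// Phase-coherence gating sketch (illustrative, not the crate's API): two
/// oscillators exchange information with a gain set by phase alignment.
fn communication_gain(phase_a: f32, phase_b: f32) -> f32 {
    // cos(dphi) is 1.0 in phase and -1.0 in antiphase; clamping to [0, 1]
    // means misaligned units simply do not communicate.
    (phase_a - phase_b).cos().max(0.0)
}
```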
### One-Shot Learning

Learn immediately from single examples:

- BTSP: seconds-scale eligibility traces
- No batch retraining required
- Personalization through use
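
A sketch of the eligibility-trace mechanic, assuming a simplified form of BTSP (the `EligibilityTrace` type here is invented for illustration; the crate's `BTSPSynapse` is the actual interface):

```rust
/// Simplified BTSP-style trace (illustrative, not the crate's API): a plateau
/// signal arriving while the trace is non-zero commits a one-shot weight
/// change, with no batch retraining.
struct EligibilityTrace {
    value: f32,
    tau_s: f32, // seconds-scale time constant
}

impl EligibilityTrace {
    fn step(&mut self, dt_s: f32, presynaptic_active: bool) {
        self.value *= (-dt_s / self.tau_s).exp(); // slow exponential decay
        if presynaptic_active {
            self.value += 1.0; // mark this synapse as eligible
        }
    }

    fn commit(&self, weight: &mut f32, plateau: bool, learning_rate: f32) {
        if plateau {
            *weight += learning_rate * self.value; // one-shot potentiation
        }
    }
}
```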
## Tutorial: Building a Custom Application

### Step 1: Define Your Sensing Layer

```rust
use ruvector_nervous_system::eventbus::{DVSEvent, EventRingBuffer};

// Create an event buffer with backpressure
let buffer = EventRingBuffer::new(1024);

// Process events sparsely
if let Some(event) = buffer.pop() {
    // Only significant changes generate events
}
```
### Step 2: Add Reflex Gates

```rust
use ruvector_nervous_system::compete::WTALayer;

// Winner-take-all for fast decisions
let mut wta = WTALayer::new(100, 0.5, 0.8);

// <1μs for 1000 neurons
if let Some(winner) = wta.compete(&inputs) {
    trigger_immediate_response(winner);
}
```
### Step 3: Implement Memory

```rust
use ruvector_nervous_system::hopfield::ModernHopfield;
use ruvector_nervous_system::hdc::Hypervector;

// Hopfield for associative retrieval
let mut hopfield = ModernHopfield::new(512, 10.0);
hopfield.store(pattern);

// HDC for ultra-fast similarity
let similarity = v1.similarity(&v2); // <100ns
```
### Step 4: Enable Learning

```rust
use ruvector_nervous_system::plasticity::btsp::BTSPSynapse;

// One-shot learning
let mut synapse = BTSPSynapse::new(0.5, 2000.0); // 2s time constant
synapse.update(presynaptic_active, plateau_signal, dt);
```
### Step 5: Add Coherence

```rust
use ruvector_nervous_system::routing::{OscillatoryRouter, GlobalWorkspace};

// Phase-coupled routing
let mut router = OscillatoryRouter::new(10, 40.0); // 40Hz gamma
let gain = router.communication_gain(sender, receiver);

// Global workspace (4-7 items)
let mut workspace = GlobalWorkspace::new(7);
workspace.broadcast(representation);
```
## Performance Targets

| Component | Latency | Throughput |
|-----------|---------|------------|
| Event Bus | <100ns push/pop | 10,000+ events/ms |
| WTA | <1μs | 1M+ decisions/sec |
| HDC Similarity | <100ns | 10M+ comparisons/sec |
| Hopfield Retrieval | <1ms | 1000+ queries/sec |
| BTSP Update | <100ns | 10M+ synapses/sec |
## From Practical to SOTA

The same architecture scales from:

1. **Practical**: Anomaly detection with microsecond response
2. **Transformative**: Self-optimizing software systems
3. **Exotic**: Machines that sense their own coherence
4. **SOTA**: Neuromorphic RAG, self-modeling agents, collective dreaming

The difference is how much reflex, learning, and coherence you turn on.
## Further Reading

- [Architecture Documentation](../../docs/nervous-system/architecture.md)
- [Deployment Guide](../../docs/nervous-system/deployment.md)
- [Test Plan](../../docs/nervous-system/test-plan.md)
- [Main Crate Documentation](../README.md)
## Contributing

Examples welcome! Each should demonstrate:

1. A clear use case
2. The nervous system architecture
3. Performance characteristics
4. Tests and documentation
## License

MIT License - See [LICENSE](../LICENSE)
133
vendor/ruvector/crates/ruvector-nervous-system/examples/hopfield_demo.rs
vendored
Normal file
@@ -0,0 +1,133 @@
//! Demonstration of Modern Hopfield Networks
//!
//! This example shows the basic usage of Modern Hopfield Networks
//! for associative memory and pattern retrieval.

use ruvector_nervous_system::hopfield::ModernHopfield;

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();

    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    println!("=== Modern Hopfield Networks Demo ===\n");

    // Create a Modern Hopfield network
    let dimension = 128;
    let beta = 2.0;
    let mut hopfield = ModernHopfield::new(dimension, beta);

    println!("Created Hopfield network:");
    println!("  Dimension: {}", hopfield.dimension());
    println!("  Beta (temperature): {}", hopfield.beta());
    println!("  Theoretical capacity: 2^{} patterns\n", dimension / 2);

    // Store some patterns
    println!("Storing 3 orthogonal patterns...");

    let mut pattern1 = vec![0.0; dimension];
    pattern1[0] = 1.0;

    let mut pattern2 = vec![0.0; dimension];
    pattern2[1] = 1.0;

    let mut pattern3 = vec![0.0; dimension];
    pattern3[2] = 1.0;

    hopfield
        .store(pattern1.clone())
        .expect("Failed to store pattern1");
    hopfield
        .store(pattern2.clone())
        .expect("Failed to store pattern2");
    hopfield
        .store(pattern3.clone())
        .expect("Failed to store pattern3");

    println!("Stored {} patterns\n", hopfield.num_patterns());

    // Test perfect retrieval
    println!("Test 1: Perfect Retrieval");
    println!("-------------------------");
    let retrieved1 = hopfield.retrieve(&pattern1).expect("Retrieval failed");
    let similarity1 = cosine_similarity(&pattern1, &retrieved1);
    println!("Pattern 1 similarity: {:.6}", similarity1);
    assert!(similarity1 > 0.99, "Perfect retrieval failed");
    println!("✓ Perfect retrieval works!\n");

    // Test retrieval with noise
    println!("Test 2: Noisy Retrieval");
    println!("-----------------------");
    let mut noisy_pattern = pattern1.clone();
    noisy_pattern[0] = 0.95; // Attenuate the signal dimension
    noisy_pattern[10] = 0.05; // Add spurious activity

    let retrieved_noisy = hopfield.retrieve(&noisy_pattern).expect("Retrieval failed");
    let similarity_noisy = cosine_similarity(&pattern1, &retrieved_noisy);
    println!(
        "Noisy query similarity to original: {:.6}",
        similarity_noisy
    );
    assert!(similarity_noisy > 0.90, "Noisy retrieval failed");
    println!("✓ Noise-tolerant retrieval works!\n");

    // Test top-k retrieval
    println!("Test 3: Top-K Retrieval");
    println!("-----------------------");
    let query = pattern1.clone();
    let top_k = hopfield
        .retrieve_k(&query, 2)
        .expect("Top-k retrieval failed");

    println!("Top 2 patterns by attention:");
    for (i, (idx, _pattern, attention)) in top_k.iter().enumerate() {
        println!("  {}. Pattern {} - Attention: {:.6}", i + 1, idx, attention);
    }
    assert_eq!(top_k[0].0, 0, "Top match should be pattern 0");
    println!("✓ Top-K retrieval works!\n");

    // Test capacity calculation
    println!("Test 4: Capacity Demonstration");
    println!("--------------------------------");
    let capacity = hopfield.capacity();
    println!(
        "Theoretical capacity for {}D: 2^{} = {}",
        dimension,
        dimension / 2,
        capacity
    );
    println!("✓ Capacity calculation works!\n");

    // Demonstrate beta parameter effect
    println!("Test 5: Beta Parameter Effect");
    println!("------------------------------");

    let mut hopfield_low = ModernHopfield::new(dimension, 0.5);
    let mut hopfield_high = ModernHopfield::new(dimension, 5.0);

    hopfield_low.store(pattern1.clone()).unwrap();
    hopfield_low.store(pattern2.clone()).unwrap();

    hopfield_high.store(pattern1.clone()).unwrap();
    hopfield_high.store(pattern2.clone()).unwrap();

    let retrieved_low = hopfield_low.retrieve(&pattern1).unwrap();
    let retrieved_high = hopfield_high.retrieve(&pattern1).unwrap();

    let sim_low = cosine_similarity(&pattern1, &retrieved_low);
    let sim_high = cosine_similarity(&pattern1, &retrieved_high);

    println!("Low beta (0.5) similarity: {:.6}", sim_low);
    println!("High beta (5.0) similarity: {:.6}", sim_high);
    println!("✓ Higher beta gives sharper retrieval!\n");

    println!("=== All Tests Passed! ===");
}
512
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t1_anomaly_detection.rs
vendored
Normal file
@@ -0,0 +1,512 @@
//! # Tier 1: Always-On Anomaly Detection
//!
//! Infrastructure, finance, security, medical telemetry.
//!
//! ## What Changes
//! - Event streams replace batch logs
//! - Reflex gates fire on structural or temporal anomalies
//! - Learning tightens thresholds over time
//!
//! ## Why This Matters
//! - Detection happens before failure, not after symptoms
//! - Microsecond to millisecond response
//! - Explainable witness logs for every trigger
//!
//! This is a direct fit for RuVector + Cognitum v0.

use std::collections::VecDeque;

// Simulated imports from the nervous system crate.
// In production: use ruvector_nervous_system::*;

/// Event from a monitored system (infrastructure, finance, medical, etc.)
#[derive(Clone, Debug)]
pub struct TelemetryEvent {
    pub timestamp: u64,
    pub source_id: u16,
    pub metric_id: u32,
    pub value: f32,
    pub metadata: Option<String>,
}
/// Anomaly detection result with witness log
#[derive(Clone, Debug)]
pub struct AnomalyAlert {
    pub event: TelemetryEvent,
    pub anomaly_type: AnomalyType,
    pub severity: f32,
    pub witness_log: WitnessLog,
}

#[derive(Clone, Debug)]
pub enum AnomalyType {
    /// Value outside learned bounds
    ValueAnomaly {
        expected_range: (f32, f32),
        actual: f32,
    },
    /// Temporal pattern violation
    TemporalAnomaly {
        expected_interval_ms: u64,
        actual_interval_ms: u64,
    },
    /// Structural change in event relationships
    StructuralAnomaly {
        pattern_signature: u64,
        deviation: f32,
    },
    /// Cascade detected across multiple sources
    CascadeAnomaly { affected_sources: Vec<u16> },
}

/// Explainable witness log for every trigger
#[derive(Clone, Debug)]
pub struct WitnessLog {
    pub trigger_timestamp: u64,
    pub reflex_gate_id: u32,
    pub input_snapshot: Vec<f32>,
    pub threshold_at_trigger: f32,
    pub decision_path: Vec<String>,
}

/// Reflex gate for immediate anomaly detection
pub struct ReflexGate {
    pub id: u32,
    pub threshold: f32,
    pub membrane_potential: f32,
    pub last_spike: u64,
    pub refractory_period_ms: u64,
}
impl ReflexGate {
    pub fn new(id: u32, threshold: f32) -> Self {
        Self {
            id,
            threshold,
            membrane_potential: 0.0,
            last_spike: 0,
            refractory_period_ms: 10, // 10ms refractory
        }
    }

    /// Process input and return true if the gate fires
    pub fn process(&mut self, input: f32, timestamp: u64) -> bool {
        // Check refractory period. The gate must have spiked at least once;
        // otherwise a fresh gate (last_spike == 0) would be refractory for
        // its first `refractory_period_ms` of existence.
        if self.last_spike > 0 && timestamp < self.last_spike + self.refractory_period_ms {
            return false;
        }

        // Integrate input
        self.membrane_potential += input;

        // Fire if threshold exceeded
        if self.membrane_potential >= self.threshold {
            self.last_spike = timestamp;
            self.membrane_potential = 0.0; // Reset
            return true;
        }

        // Leak (decay)
        self.membrane_potential *= 0.95;
        false
    }
}
/// Adaptive threshold using BTSP-style learning
pub struct AdaptiveThreshold {
    pub baseline: f32,
    pub current: f32,
    pub eligibility_trace: f32,
    pub tau_seconds: f32,
    pub learning_rate: f32,
}

impl AdaptiveThreshold {
    pub fn new(baseline: f32) -> Self {
        Self {
            baseline,
            current: baseline,
            eligibility_trace: 0.0,
            tau_seconds: 60.0, // 1 minute adaptation window
            learning_rate: 0.01,
        }
    }

    /// Update threshold based on observed values
    pub fn adapt(&mut self, observed: f32, was_anomaly: bool, dt_seconds: f32) {
        // Decay eligibility trace
        self.eligibility_trace *= (-dt_seconds / self.tau_seconds).exp();

        if was_anomaly {
            // If we flagged an anomaly, become slightly more tolerant
            // to avoid alert fatigue
            self.current += self.learning_rate * self.eligibility_trace;
        } else {
            // Normal observation - tighten threshold over time
            let error = (observed - self.current).abs();
            self.eligibility_trace += error;
            self.current -= self.learning_rate * 0.1 * self.eligibility_trace;
        }

        // Clamp to reasonable bounds
        self.current = self.current.clamp(self.baseline * 0.5, self.baseline * 2.0);
    }
}
/// Temporal pattern detector using spike timing
pub struct TemporalPatternDetector {
    pub expected_interval_ms: u64,
    pub tolerance_ms: u64,
    pub last_event_time: u64,
    pub interval_history: VecDeque<u64>,
    pub max_history: usize,
}

impl TemporalPatternDetector {
    pub fn new(expected_interval_ms: u64, tolerance_ms: u64) -> Self {
        Self {
            expected_interval_ms,
            tolerance_ms,
            last_event_time: 0,
            interval_history: VecDeque::new(),
            max_history: 100,
        }
    }

    /// Check if event timing is anomalous
    pub fn check(&mut self, timestamp: u64) -> Option<AnomalyType> {
        if self.last_event_time == 0 {
            self.last_event_time = timestamp;
            return None;
        }

        let interval = timestamp - self.last_event_time;
        self.last_event_time = timestamp;

        // Track history
        self.interval_history.push_back(interval);
        if self.interval_history.len() > self.max_history {
            self.interval_history.pop_front();
        }

        // Update expected interval (online learning)
        if self.interval_history.len() > 10 {
            let avg: u64 =
                self.interval_history.iter().sum::<u64>() / self.interval_history.len() as u64;
            self.expected_interval_ms = (self.expected_interval_ms + avg) / 2;
        }

        // Check for anomaly
        let diff = (interval as i64 - self.expected_interval_ms as i64).unsigned_abs();
        if diff > self.tolerance_ms {
            Some(AnomalyType::TemporalAnomaly {
                expected_interval_ms: self.expected_interval_ms,
                actual_interval_ms: interval,
            })
        } else {
            None
        }
    }
}
/// Main anomaly detection system
pub struct AnomalyDetectionSystem {
    /// Reflex gates for immediate detection
    pub reflex_gates: Vec<ReflexGate>,
    /// Adaptive thresholds per metric
    pub thresholds: Vec<AdaptiveThreshold>,
    /// Temporal pattern detectors per source
    pub temporal_detectors: Vec<TemporalPatternDetector>,
    /// Alert history for cascade detection
    pub recent_alerts: VecDeque<AnomalyAlert>,
    /// Witness log buffer
    pub witness_buffer: VecDeque<WitnessLog>,
}

impl AnomalyDetectionSystem {
    pub fn new(num_sources: usize, num_metrics: usize) -> Self {
        Self {
            reflex_gates: (0..num_sources)
                .map(|i| ReflexGate::new(i as u32, 1.0))
                .collect(),
            thresholds: (0..num_metrics)
                .map(|_| AdaptiveThreshold::new(1.0))
                .collect(),
            temporal_detectors: (0..num_sources)
                .map(|_| TemporalPatternDetector::new(1000, 100))
                .collect(),
            recent_alerts: VecDeque::new(),
            witness_buffer: VecDeque::new(),
        }
    }
    /// Process a telemetry event through the nervous system.
    /// Returns an anomaly alert if detected, with full witness log.
    pub fn process_event(&mut self, event: TelemetryEvent) -> Option<AnomalyAlert> {
        let source_idx = event.source_id as usize % self.reflex_gates.len();
        let metric_idx = event.metric_id as usize % self.thresholds.len();

        // 1. Check temporal pattern (fast reflex)
        if let Some(temporal_anomaly) = self.temporal_detectors[source_idx].check(event.timestamp) {
            return Some(self.create_alert(event, temporal_anomaly, 0.7));
        }

        // 2. Check value against adaptive threshold
        let threshold = &self.thresholds[metric_idx];
        if event.value > threshold.current * 2.0 || event.value < threshold.current * 0.5 {
            return Some(self.create_alert(
                event.clone(),
                AnomalyType::ValueAnomaly {
                    expected_range: (threshold.current * 0.5, threshold.current * 2.0),
                    actual: event.value,
                },
                0.8,
            ));
        }

        // 3. Check reflex gate (integrates over time)
        let normalized = (event.value - threshold.current).abs() / threshold.current;
        if self.reflex_gates[source_idx].process(normalized, event.timestamp) {
            return Some(self.create_alert(
                event,
                AnomalyType::StructuralAnomaly {
                    pattern_signature: source_idx as u64,
                    deviation: normalized,
                },
                0.6,
            ));
        }

        // 4. Cascade detection: multiple sources alerting
        self.check_cascade(event.timestamp)
    }
    fn create_alert(
        &mut self,
        event: TelemetryEvent,
        anomaly_type: AnomalyType,
        severity: f32,
    ) -> AnomalyAlert {
        let witness = WitnessLog {
            trigger_timestamp: event.timestamp,
            reflex_gate_id: event.source_id as u32,
            input_snapshot: vec![event.value],
            threshold_at_trigger: self
                .thresholds
                .get(event.metric_id as usize % self.thresholds.len())
                .map(|t| t.current)
                .unwrap_or(1.0),
            decision_path: vec![
                format!(
                    "Event received: source={}, metric={}",
                    event.source_id, event.metric_id
                ),
                format!("Anomaly type: {:?}", anomaly_type),
                format!("Severity: {:.2}", severity),
            ],
        };

        self.witness_buffer.push_back(witness.clone());
        if self.witness_buffer.len() > 1000 {
            self.witness_buffer.pop_front();
        }

        let alert = AnomalyAlert {
            event,
            anomaly_type,
            severity,
            witness_log: witness,
        };

        self.recent_alerts.push_back(alert.clone());
        if self.recent_alerts.len() > 100 {
            self.recent_alerts.pop_front();
        }

        alert
    }
    fn check_cascade(&self, timestamp: u64) -> Option<AnomalyAlert> {
        // Check whether multiple sources alerted within a 100ms window
        let window_start = timestamp.saturating_sub(100);
        let recent: Vec<_> = self
            .recent_alerts
            .iter()
            .filter(|a| a.event.timestamp >= window_start)
            .collect();

        if recent.len() >= 3 {
            let affected: Vec<u16> = recent.iter().map(|a| a.event.source_id).collect();
            let event = recent.last()?.event.clone();

            Some(AnomalyAlert {
                event: event.clone(),
                anomaly_type: AnomalyType::CascadeAnomaly {
                    affected_sources: affected,
                },
                severity: 0.95,
                witness_log: WitnessLog {
                    trigger_timestamp: timestamp,
                    reflex_gate_id: 0,
                    input_snapshot: vec![],
                    threshold_at_trigger: 0.0,
                    decision_path: vec![
                        "Cascade detected".to_string(),
                        "Multiple sources alerting within 100ms".to_string(),
                    ],
                },
            })
        } else {
            None
        }
    }
    /// Learn from feedback: was the alert valid?
    pub fn learn_from_feedback(&mut self, alert: &AnomalyAlert, was_valid: bool) {
        let metric_idx = alert.event.metric_id as usize % self.thresholds.len();
        self.thresholds[metric_idx].adapt(
            alert.event.value,
            !was_valid, // If invalid, treat as normal (tighten threshold)
            0.1,
        );
    }
}

// =============================================================================
// Example Usage
// =============================================================================
fn main() {
    println!("=== Tier 1: Always-On Anomaly Detection ===\n");

    // Create detection system for 10 sources, 5 metrics
    let mut detector = AnomalyDetectionSystem::new(10, 5);

    // Simulate normal telemetry
    println!("Processing normal telemetry...");
    for i in 0..100 {
        let event = TelemetryEvent {
            timestamp: i * 1000 + (i % 10) * 10, // Slight jitter
            source_id: (i % 10) as u16,
            metric_id: (i % 5) as u32,
            value: 1.0 + (i as f32 * 0.01).sin() * 0.1, // Normal variation
            metadata: None,
        };

        if let Some(alert) = detector.process_event(event) {
            println!("  Alert: {:?}", alert.anomaly_type);
        }
    }
    println!("  Normal events processed with adaptive learning\n");

    // Simulate anomalies
    println!("Injecting anomalies...");

    // Value anomaly
    let value_spike = TelemetryEvent {
        timestamp: 101_000,
        source_id: 0,
        metric_id: 0,
        value: 5.0, // Way above the normal ~1.0
        metadata: Some("CPU spike".to_string()),
    };
    if let Some(alert) = detector.process_event(value_spike) {
        println!("  VALUE ANOMALY DETECTED!");
        println!("  Type: {:?}", alert.anomaly_type);
        println!("  Severity: {:.2}", alert.severity);
        println!("  Witness: {:?}", alert.witness_log.decision_path);
    }

    // Temporal anomaly (delayed event)
    let delayed = TelemetryEvent {
        timestamp: 105_000, // Much longer gap than the learned cadence
        source_id: 1,
        metric_id: 1,
        value: 1.0,
        metadata: Some("Delayed heartbeat".to_string()),
    };
    if let Some(alert) = detector.process_event(delayed) {
        println!("\n  TEMPORAL ANOMALY DETECTED!");
        println!("  Type: {:?}", alert.anomaly_type);
    }

    println!("\n=== Key Benefits ===");
    println!("- Detection before failure (microsecond response)");
    println!("- Adaptive thresholds reduce false positives");
    println!("- Explainable witness logs for every trigger");
    println!("- Cascade detection across multiple sources");
    println!("\nDirect fit for RuVector + Cognitum v0");
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_reflex_gate_fires() {
        let mut gate = ReflexGate::new(0, 1.0);

        // Should not fire on small inputs
        assert!(!gate.process(0.3, 0));
        assert!(!gate.process(0.3, 1));

        // Should fire once enough input has accumulated
        assert!(gate.process(0.5, 2));
    }

    #[test]
    fn test_adaptive_threshold() {
        let mut threshold = AdaptiveThreshold::new(1.0);

        // Slightly-below-threshold normal observations should tighten the
        // threshold (identical observations would leave the error at zero
        // and the threshold unchanged).
        for _ in 0..10 {
            threshold.adapt(0.9, false, 0.1);
        }

        assert!(threshold.current < 1.0);
    }

    #[test]
    fn test_temporal_pattern_detection() {
        let mut detector = TemporalPatternDetector::new(1000, 100);

        // Normal intervals
        assert!(detector.check(0).is_none());
        assert!(detector.check(1000).is_none());
        assert!(detector.check(2000).is_none());

        // Anomalous interval (500ms instead of 1000ms)
        let result = detector.check(2500);
        assert!(result.is_some());
    }

    #[test]
    fn test_value_anomaly_detection() {
        let mut system = AnomalyDetectionSystem::new(1, 1);

        // Establish baseline
        for i in 0..10 {
            let event = TelemetryEvent {
                timestamp: i * 1000,
                source_id: 0,
                metric_id: 0,
                value: 1.0,
                metadata: None,
            };
            system.process_event(event);
        }

        // Inject anomaly
        let anomaly = TelemetryEvent {
            timestamp: 10_000,
            source_id: 0,
            metric_id: 0,
            value: 10.0, // 10x normal
            metadata: None,
        };

        let result = system.process_event(anomaly);
        assert!(result.is_some());
    }
}
504
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t1_edge_autonomy.rs
vendored
Normal file
@@ -0,0 +1,504 @@
|
||||
//! # Tier 1: Edge Autonomy and Control
//!
//! Drones, vehicles, robotics, industrial automation.
//!
//! ## What Changes
//! - Reflex arcs handle safety and stabilization
//! - Policy loops run slower and only when needed
//! - Bullet-time bursts replace constant compute
//!
//! ## Why This Matters
//! - Lower power, faster reactions
//! - Systems degrade gracefully instead of catastrophically
//! - Certification becomes possible because reflex paths are bounded
//!
//! This is where Cognitum shines immediately.

/// Sensor reading from edge device
#[derive(Clone, Debug)]
pub struct SensorReading {
    pub timestamp_us: u64,
    pub sensor_type: SensorType,
    pub value: f32,
    pub confidence: f32,
}

#[derive(Clone, Debug, PartialEq)]
pub enum SensorType {
    Accelerometer,
    Gyroscope,
    Proximity,
    Temperature,
    Battery,
    Motor,
}

/// Control action output
#[derive(Clone, Debug)]
pub struct ControlAction {
    pub actuator_id: u32,
    pub command: ActuatorCommand,
    pub priority: Priority,
    pub deadline_us: u64,
}

#[derive(Clone, Debug)]
pub enum ActuatorCommand {
    SetMotorSpeed(f32),
    ApplyBrake(f32),
    AdjustPitch(f32),
    EmergencyStop,
    Idle,
}

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum Priority {
    Safety,     // Immediate, preempts everything
    Stability,  // Fast reflex response
    Efficiency, // Slower optimization
    Background, // When idle
}

/// Reflex arc for immediate safety responses.
/// Runs on Cognitum worker tiles with deterministic timing.
pub struct ReflexArc {
    pub name: String,
    pub trigger_threshold: f32,
    pub response_action: ActuatorCommand,
    pub max_latency_us: u64,
    pub last_activation: u64,
    pub activation_count: u64,
}

impl ReflexArc {
    pub fn new(name: &str, threshold: f32, action: ActuatorCommand, max_latency_us: u64) -> Self {
        Self {
            name: name.to_string(),
            trigger_threshold: threshold,
            response_action: action,
            max_latency_us,
            last_activation: 0,
            activation_count: 0,
        }
    }

    /// Check if reflex should fire - deterministic, bounded execution
    pub fn check(&mut self, reading: &SensorReading) -> Option<ControlAction> {
        if reading.value.abs() > self.trigger_threshold {
            self.last_activation = reading.timestamp_us;
            self.activation_count += 1;

            Some(ControlAction {
                actuator_id: 0,
                command: self.response_action.clone(),
                priority: Priority::Safety,
                deadline_us: reading.timestamp_us + self.max_latency_us,
            })
        } else {
            None
        }
    }
}

/// Stability controller using dendritic coincidence detection.
/// Detects correlated sensor patterns requiring stabilization.
pub struct StabilityController {
    pub imu_history: Vec<(f32, f32, f32)>, // accel, gyro, proximity
    pub coincidence_window_us: u64,
    pub stability_threshold: f32,
    pub membrane_potential: f32,
}

impl StabilityController {
    pub fn new(coincidence_window_us: u64, threshold: f32) -> Self {
        Self {
            imu_history: Vec::with_capacity(100),
            coincidence_window_us,
            stability_threshold: threshold,
            membrane_potential: 0.0,
        }
    }

    /// Process sensor fusion for stability
    pub fn process(&mut self, readings: &[SensorReading]) -> Option<ControlAction> {
        // Extract relevant sensors
        let accel = readings
            .iter()
            .find(|r| r.sensor_type == SensorType::Accelerometer)
            .map(|r| r.value);
        let gyro = readings
            .iter()
            .find(|r| r.sensor_type == SensorType::Gyroscope)
            .map(|r| r.value);

        if let (Some(a), Some(g)) = (accel, gyro) {
            // Coincidence detection: both accelerating and rotating
            let instability = a.abs() * g.abs();

            // Integrate over time (dendritic membrane)
            self.membrane_potential += instability;
            self.membrane_potential *= 0.9; // Decay

            if self.membrane_potential > self.stability_threshold {
                self.membrane_potential = 0.0; // Reset after spike

                // Compute corrective action
                let correction = -g * 0.1; // Counter-rotate
                return Some(ControlAction {
                    actuator_id: 1,
                    command: ActuatorCommand::AdjustPitch(correction),
                    priority: Priority::Stability,
                    deadline_us: readings[0].timestamp_us + 1000, // 1ms deadline
                });
            }
        }

        None
    }
}

/// Bullet-time burst controller.
/// Activates high-fidelity processing only during critical moments.
pub struct BulletTimeController {
    pub is_active: bool,
    pub activation_threshold: f32,
    pub deactivation_threshold: f32,
    pub burst_duration_us: u64,
    pub burst_start: u64,
    pub normal_sample_rate_hz: u32,
    pub burst_sample_rate_hz: u32,
}

impl BulletTimeController {
    pub fn new() -> Self {
        Self {
            is_active: false,
            activation_threshold: 0.8,
            deactivation_threshold: 0.3,
            burst_duration_us: 100_000, // 100ms max burst
            burst_start: 0,
            normal_sample_rate_hz: 100,
            burst_sample_rate_hz: 10_000,
        }
    }

    /// Check if bullet-time should activate
    pub fn should_activate(&mut self, urgency: f32, timestamp_us: u64) -> bool {
        if !self.is_active && urgency > self.activation_threshold {
            self.is_active = true;
            self.burst_start = timestamp_us;
            println!("  [BULLET TIME] Activated! Urgency: {:.2}", urgency);
            return true;
        }

        if self.is_active {
            // Check deactivation conditions
            let elapsed = timestamp_us - self.burst_start;
            if urgency < self.deactivation_threshold || elapsed > self.burst_duration_us {
                self.is_active = false;
                println!("  [BULLET TIME] Deactivated after {}us", elapsed);
            }
        }

        self.is_active
    }

    pub fn current_sample_rate(&self) -> u32 {
        if self.is_active {
            self.burst_sample_rate_hz
        } else {
            self.normal_sample_rate_hz
        }
    }
}

/// Policy loop for slower optimization.
/// Runs when reflexes and stability are not active.
pub struct PolicyLoop {
    pub energy_budget: f32,
    pub target_efficiency: f32,
    pub update_interval_ms: u64,
    pub last_update: u64,
}

impl PolicyLoop {
    pub fn new(energy_budget: f32) -> Self {
        Self {
            energy_budget,
            target_efficiency: 0.9,
            update_interval_ms: 100, // Run at 10Hz
            last_update: 0,
        }
    }

    /// Optimize for efficiency when safe
    pub fn optimize(
        &mut self,
        readings: &[SensorReading],
        timestamp_us: u64,
    ) -> Option<ControlAction> {
        let timestamp_ms = timestamp_us / 1000;
        if timestamp_ms < self.last_update + self.update_interval_ms {
            return None;
        }
        self.last_update = timestamp_ms;

        // Check battery level
        let battery = readings
            .iter()
            .find(|r| r.sensor_type == SensorType::Battery)
            .map(|r| r.value)
            .unwrap_or(1.0);

        if battery < 0.2 {
            // Low power mode
            Some(ControlAction {
                actuator_id: 0,
                command: ActuatorCommand::SetMotorSpeed(0.5), // Reduce speed
                priority: Priority::Efficiency,
                deadline_us: timestamp_us + 10_000,
            })
        } else {
            None
        }
    }
}

/// Main edge autonomy system
pub struct EdgeAutonomySystem {
    /// Safety reflexes (always active, highest priority)
    pub reflexes: Vec<ReflexArc>,
    /// Stability controller (fast, second priority)
    pub stability: StabilityController,
    /// Bullet-time for critical moments
    pub bullet_time: BulletTimeController,
    /// Policy optimization (slow, lowest priority)
    pub policy: PolicyLoop,
    /// Graceful degradation state
    pub degradation_level: u8,
}

impl EdgeAutonomySystem {
    pub fn new() -> Self {
        Self {
            reflexes: vec![
                ReflexArc::new(
                    "collision_avoidance",
                    0.5, // Proximity threshold
                    ActuatorCommand::EmergencyStop,
                    100, // 100us max latency
                ),
                ReflexArc::new(
                    "overheat_protection",
                    85.0, // Temperature threshold
                    ActuatorCommand::SetMotorSpeed(0.0),
                    1000, // 1ms max latency
                ),
            ],
            stability: StabilityController::new(10_000, 2.0),
            bullet_time: BulletTimeController::new(),
            policy: PolicyLoop::new(100.0),
            degradation_level: 0,
        }
    }

    /// Process sensor readings through the nervous system hierarchy
    pub fn process(&mut self, readings: Vec<SensorReading>) -> Vec<ControlAction> {
        let mut actions = Vec::new();
        let timestamp = readings.first().map(|r| r.timestamp_us).unwrap_or(0);

        // 1. Safety reflexes (always checked first, deterministic)
        for reflex in &mut self.reflexes {
            for reading in &readings {
                if let Some(action) = reflex.check(reading) {
                    println!("  REFLEX [{}]: {:?}", reflex.name, action.command);
                    actions.push(action);
                    // Safety actions preempt everything
                    return actions;
                }
            }
        }

        // 2. Stability control (fast, dendritic integration)
        if let Some(action) = self.stability.process(&readings) {
            println!("  STABILITY: {:?}", action.command);
            actions.push(action);
        }

        // 3. Bullet-time activation check
        let urgency = self.compute_urgency(&readings);
        if self.bullet_time.should_activate(urgency, timestamp) {
            println!(
                "  Sample rate: {}Hz",
                self.bullet_time.current_sample_rate()
            );
        }

        // 4. Policy optimization (only if stable)
        if actions.is_empty() {
            if let Some(action) = self.policy.optimize(&readings, timestamp) {
                println!("  POLICY: {:?}", action.command);
                actions.push(action);
            }
        }

        actions
    }

    fn compute_urgency(&self, readings: &[SensorReading]) -> f32 {
        readings
            .iter()
            .map(|r| r.value.abs() * (1.0 - r.confidence))
            .sum::<f32>()
            / readings.len().max(1) as f32
    }

    /// Handle graceful degradation
    pub fn degrade(&mut self) {
        self.degradation_level += 1;
        match self.degradation_level {
            1 => {
                println!("  DEGRADATION 1: Disabling policy optimization");
            }
            2 => {
                println!("  DEGRADATION 2: Reducing stability bandwidth");
                self.stability.stability_threshold *= 1.5;
            }
            3 => {
                println!("  DEGRADATION 3: Safety reflexes only");
            }
            _ => {
                println!("  CRITICAL: Maximum degradation reached");
            }
        }
    }
}

fn main() {
    println!("=== Tier 1: Edge Autonomy and Control ===\n");

    let mut system = EdgeAutonomySystem::new();

    // Simulate normal operation
    println!("Normal operation...");
    for i in 0..10 {
        let readings = vec![
            SensorReading {
                timestamp_us: i * 10_000,
                sensor_type: SensorType::Accelerometer,
                value: 0.1,
                confidence: 0.95,
            },
            SensorReading {
                timestamp_us: i * 10_000,
                sensor_type: SensorType::Gyroscope,
                value: 0.05,
                confidence: 0.95,
            },
            SensorReading {
                timestamp_us: i * 10_000,
                sensor_type: SensorType::Battery,
                value: 0.8,
                confidence: 1.0,
            },
        ];
        let _ = system.process(readings);
    }
    println!("  10 cycles processed, system stable\n");

    // Simulate instability (triggers stability controller)
    println!("Simulating instability...");
    for i in 0..5 {
        let readings = vec![
            SensorReading {
                timestamp_us: 100_000 + i * 1000,
                sensor_type: SensorType::Accelerometer,
                value: 2.0 + i as f32 * 0.5,
                confidence: 0.8,
            },
            SensorReading {
                timestamp_us: 100_000 + i * 1000,
                sensor_type: SensorType::Gyroscope,
                value: 1.5 + i as f32 * 0.3,
                confidence: 0.8,
            },
        ];
        let actions = system.process(readings);
        for action in actions {
            println!(
                "  Action: {:?} (deadline: {}us)",
                action.command, action.deadline_us
            );
        }
    }

    // Simulate collision (triggers safety reflex)
    println!("\nSimulating collision warning...");
    let emergency = vec![SensorReading {
        timestamp_us: 200_000,
        sensor_type: SensorType::Proximity,
        value: 0.9, // Very close!
        confidence: 0.99,
    }];
    let _ = system.process(emergency);
    println!("  Emergency response latency: <100us guaranteed");

    // Demonstrate graceful degradation
    println!("\nDemonstrating graceful degradation...");
    for _ in 0..3 {
        system.degrade();
    }

    println!("\n=== Key Benefits ===");
    println!("- Reflex latency: <100μs (deterministic)");
    println!("- Stability control: <1ms response");
    println!("- Bullet-time: 100x sample rate during critical moments");
    println!("- Graceful degradation prevents catastrophic failure");
    println!("- Certifiable: bounded execution paths");
    println!("\nThis is where Cognitum shines immediately.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_reflex_arc_fires() {
        let mut reflex = ReflexArc::new("test", 0.5, ActuatorCommand::EmergencyStop, 100);

        let reading = SensorReading {
            timestamp_us: 0,
            sensor_type: SensorType::Proximity,
            value: 0.9,
            confidence: 1.0,
        };

        let result = reflex.check(&reading);
        assert!(result.is_some());
        assert_eq!(result.unwrap().priority, Priority::Safety);
    }

    #[test]
    fn test_bullet_time_activation() {
        let mut bt = BulletTimeController::new();

        assert!(!bt.is_active);
        assert!(bt.should_activate(0.9, 0));
        assert!(bt.is_active);
        assert_eq!(bt.current_sample_rate(), 10_000);
    }

    #[test]
    fn test_graceful_degradation() {
        let mut system = EdgeAutonomySystem::new();

        assert_eq!(system.degradation_level, 0);
        system.degrade();
        assert_eq!(system.degradation_level, 1);
        system.degrade();
        system.degrade();
        assert_eq!(system.degradation_level, 3);
    }
}
582 vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t1_medical_wearable.rs vendored Normal file
@@ -0,0 +1,582 @@
//! # Tier 1: Medical and Wearable Systems
//!
//! Monitoring, assistive devices, prosthetics.
//!
//! ## What Changes
//! - Continuous sensing with sparse spikes
//! - One-shot learning for personalization
//! - Homeostasis instead of static thresholds
//!
//! ## Why This Matters
//! - Devices adapt to the person, not the average
//! - Low energy, always-on, private by default
//! - Early detection beats intervention
//!
//! This is practical and defensible.

use std::collections::HashMap;

/// Physiological measurement
#[derive(Clone, Debug)]
pub struct BioSignal {
    pub timestamp_ms: u64,
    pub signal_type: SignalType,
    pub value: f32,
    pub source: SignalSource,
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum SignalType {
    HeartRate,
    HeartRateVariability,
    SpO2,
    SkinConductance,
    Temperature,
    Motion,
    Sleep,
    Stress,
}

#[derive(Clone, Debug, PartialEq)]
pub enum SignalSource {
    Wrist,
    Chest,
    Finger,
    Derived,
}

/// Alert for user or medical professional
#[derive(Clone, Debug)]
pub struct HealthAlert {
    pub signal: BioSignal,
    pub alert_type: AlertType,
    pub severity: AlertSeverity,
    pub recommendation: String,
    pub confidence: f32,
}

#[derive(Clone, Debug)]
pub enum AlertType {
    /// Immediate attention needed
    Acute { condition: String },
    /// Trend requiring monitoring
    Trend {
        direction: TrendDirection,
        duration_hours: f32,
    },
    /// Deviation from personal baseline
    PersonalAnomaly { baseline: f32, deviation: f32 },
    /// Lifestyle recommendation
    Wellness { category: String },
}

#[derive(Clone, Debug)]
pub enum TrendDirection {
    Rising,
    Falling,
    Unstable,
}

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum AlertSeverity {
    Info,
    Warning,
    Urgent,
    Emergency,
}

/// Personal baseline learned through one-shot learning
#[derive(Clone, Debug)]
pub struct PersonalBaseline {
    pub signal_type: SignalType,
    pub mean: f32,
    pub std_dev: f32,
    pub circadian_pattern: Vec<f32>, // 24 hourly values
    pub adaptation_rate: f32,
    pub samples_seen: u64,
}

impl PersonalBaseline {
    pub fn new(signal_type: SignalType) -> Self {
        Self {
            signal_type,
            mean: 0.0,
            std_dev: 1.0,
            circadian_pattern: vec![0.0; 24],
            adaptation_rate: 0.1,
            samples_seen: 0,
        }
    }

    /// One-shot learning update using BTSP-style adaptation
    pub fn learn_one_shot(&mut self, value: f32, hour_of_day: usize) {
        // Fast initial learning, slower adaptation later
        let rate = if self.samples_seen < 100 {
            0.5 // Fast initialization
        } else {
            self.adaptation_rate
        };

        // Update mean (eligibility trace style)
        let error = value - self.mean;
        self.mean += rate * error;

        // Update std dev
        let variance_error = error.abs() - self.std_dev;
        self.std_dev += rate * 0.5 * variance_error;
        self.std_dev = self.std_dev.max(0.1); // Minimum std dev

        // Update circadian pattern
        if hour_of_day < 24 {
            self.circadian_pattern[hour_of_day] =
                self.circadian_pattern[hour_of_day] * (1.0 - rate) + value * rate;
        }

        self.samples_seen += 1;
    }

    /// Check if value is anomalous for this person
    pub fn is_anomalous(&self, value: f32, hour_of_day: usize) -> Option<f32> {
        let expected = if hour_of_day < 24 && self.samples_seen > 100 {
            self.circadian_pattern[hour_of_day]
        } else {
            self.mean
        };

        let z_score = (value - expected).abs() / self.std_dev;

        if z_score > 2.5 {
            Some(z_score)
        } else {
            None
        }
    }
}

/// Homeostatic controller that maintains optimal ranges
pub struct HomeostaticController {
    pub target: f32,
    pub tolerance: f32,
    pub integral: f32,
    pub last_error: f32,
    pub kp: f32,
    pub ki: f32,
    pub kd: f32,
}

impl HomeostaticController {
    pub fn new(target: f32, tolerance: f32) -> Self {
        Self {
            target,
            tolerance,
            integral: 0.0,
            last_error: 0.0,
            kp: 1.0,
            ki: 0.1,
            kd: 0.05,
        }
    }

    /// Compute homeostatic response
    pub fn respond(&mut self, current: f32) -> HomeostasisResponse {
        let error = current - self.target;

        // Within tolerance - no action needed
        if error.abs() <= self.tolerance {
            self.integral *= 0.9; // Decay integral
            return HomeostasisResponse::Stable;
        }

        // PID-style response
        self.integral += error;
        self.integral = self.integral.clamp(-10.0, 10.0);

        let derivative = error - self.last_error;
        self.last_error = error;

        let response = self.kp * error + self.ki * self.integral + self.kd * derivative;

        if response.abs() > 5.0 {
            HomeostasisResponse::Urgent(response)
        } else if response.abs() > 2.0 {
            HomeostasisResponse::Adjust(response)
        } else {
            HomeostasisResponse::Monitor
        }
    }
}

#[derive(Clone, Debug)]
pub enum HomeostasisResponse {
    Stable,
    Monitor,
    Adjust(f32),
    Urgent(f32),
}

/// Sparse spike encoder for low-power continuous sensing
pub struct SparseEncoder {
    pub last_value: Option<f32>,
    pub threshold: f32,
    pub spike_count: u64,
}

impl SparseEncoder {
    pub fn new(threshold: f32) -> Self {
        Self {
            last_value: None,
            threshold,
            spike_count: 0,
        }
    }

    /// Only emit a spike if the change exceeds the threshold.
    /// The first sample always spikes, since there is no prior value to compare.
    pub fn encode(&mut self, value: f32) -> Option<f32> {
        let significant = match self.last_value {
            None => true, // First value always spikes
            Some(last) => (value - last).abs() > self.threshold,
        };

        if significant {
            self.last_value = Some(value);
            self.spike_count += 1;
            Some(value)
        } else {
            None
        }
    }

    pub fn compression_ratio(&self, total_samples: u64) -> f32 {
        if self.spike_count == 0 {
            return f32::INFINITY;
        }
        total_samples as f32 / self.spike_count as f32
    }
}

/// Main medical wearable system
pub struct MedicalWearableSystem {
    /// Personal baselines per signal type (one-shot learned)
    pub baselines: HashMap<SignalType, PersonalBaseline>,
    /// Homeostatic controllers
    pub homeostasis: HashMap<SignalType, HomeostaticController>,
    /// Sparse encoders for low power
    pub encoders: HashMap<SignalType, SparseEncoder>,
    /// Recent alerts
    pub alert_history: Vec<HealthAlert>,
    /// Privacy: all processing local
    pub samples_processed: u64,
}

impl MedicalWearableSystem {
    pub fn new() -> Self {
        let mut baselines = HashMap::new();
        let mut homeostasis = HashMap::new();
        let mut encoders = HashMap::new();

        // Initialize for common signals
        for signal_type in [
            SignalType::HeartRate,
            SignalType::SpO2,
            SignalType::Temperature,
            SignalType::SkinConductance,
        ] {
            baselines.insert(
                signal_type.clone(),
                PersonalBaseline::new(signal_type.clone()),
            );

            let (target, tolerance) = match signal_type {
                SignalType::HeartRate => (70.0, 15.0),
                SignalType::SpO2 => (98.0, 3.0),
                SignalType::Temperature => (36.5, 0.5),
                SignalType::SkinConductance => (5.0, 2.0),
                _ => (0.0, 1.0),
            };
            homeostasis.insert(
                signal_type.clone(),
                HomeostaticController::new(target, tolerance),
            );

            let threshold = match signal_type {
                SignalType::HeartRate => 3.0,
                SignalType::SpO2 => 1.0,
                SignalType::Temperature => 0.1,
                _ => 0.5,
            };
            encoders.insert(signal_type, SparseEncoder::new(threshold));
        }

        Self {
            baselines,
            homeostasis,
            encoders,
            alert_history: Vec::new(),
            samples_processed: 0,
        }
    }

    /// Process a biosignal through the nervous system
    pub fn process(&mut self, signal: BioSignal) -> Option<HealthAlert> {
        self.samples_processed += 1;
        let hour = ((signal.timestamp_ms / 3_600_000) % 24) as usize;

        // 1. Sparse encoding (low power)
        let significant = self
            .encoders
            .get_mut(&signal.signal_type)
            .and_then(|e| e.encode(signal.value));

        if significant.is_none() {
            // No significant change - save power
            return None;
        }

        // 2. One-shot learning to update personal baseline
        if let Some(baseline) = self.baselines.get_mut(&signal.signal_type) {
            baseline.learn_one_shot(signal.value, hour);

            // 3. Check for personal anomaly
            if let Some(z_score) = baseline.is_anomalous(signal.value, hour) {
                let alert = HealthAlert {
                    signal: signal.clone(),
                    alert_type: AlertType::PersonalAnomaly {
                        baseline: baseline.mean,
                        deviation: z_score,
                    },
                    severity: if z_score > 4.0 {
                        AlertSeverity::Urgent
                    } else {
                        AlertSeverity::Warning
                    },
                    recommendation: format!(
                        "{:?} is {:.1} std devs from your personal baseline",
                        signal.signal_type, z_score
                    ),
                    confidence: 0.7 + 0.3 * (baseline.samples_seen as f32 / 1000.0).min(1.0),
                };
                self.alert_history.push(alert.clone());
                return Some(alert);
            }
        }

        // 4. Homeostatic check
        if let Some(controller) = self.homeostasis.get_mut(&signal.signal_type) {
            match controller.respond(signal.value) {
                HomeostasisResponse::Urgent(response) => {
                    let alert = HealthAlert {
                        signal: signal.clone(),
                        alert_type: AlertType::Acute {
                            condition: format!("{:?} critical", signal.signal_type),
                        },
                        severity: AlertSeverity::Emergency,
                        recommendation: format!(
                            "Immediate attention: response magnitude {:.1}",
                            response
                        ),
                        confidence: 0.9,
                    };
                    self.alert_history.push(alert.clone());
                    return Some(alert);
                }
                HomeostasisResponse::Adjust(response) => {
                    let alert = HealthAlert {
                        signal: signal.clone(),
                        alert_type: AlertType::Wellness {
                            category: "homeostasis".to_string(),
                        },
                        severity: AlertSeverity::Info,
                        recommendation: format!(
                            "Consider adjustment: {:?} trending {}",
                            signal.signal_type,
                            if response > 0.0 { "high" } else { "low" }
                        ),
                        confidence: 0.6,
                    };
                    return Some(alert);
                }
                _ => {}
            }
        }

        None
    }

    /// Get power savings from sparse encoding
    pub fn power_efficiency(&self) -> HashMap<SignalType, f32> {
        self.encoders
            .iter()
            .map(|(st, enc)| (st.clone(), enc.compression_ratio(self.samples_processed)))
            .collect()
    }

    /// Get personalization status
    pub fn personalization_status(&self) -> HashMap<SignalType, String> {
        self.baselines
            .iter()
            .map(|(st, bl)| {
                let status = if bl.samples_seen < 10 {
                    "Initializing"
                } else if bl.samples_seen < 100 {
                    "Learning"
                } else if bl.samples_seen < 1000 {
                    "Adapting"
                } else {
                    "Personalized"
                };
                (
                    st.clone(),
                    format!("{} ({} samples)", status, bl.samples_seen),
                )
            })
            .collect()
    }
}

fn main() {
    println!("=== Tier 1: Medical and Wearable Systems ===\n");

    let mut system = MedicalWearableSystem::new();

    // Simulate a day of normal readings (personalization phase)
    println!("Personalization phase (simulating 24 hours)...");
    for hour in 0..24 {
        for minute in 0..60 {
            let timestamp = (hour * 3600 + minute * 60) * 1000;

            // Heart rate varies by time of day
            let base_hr = 60.0 + 10.0 * (hour as f32 / 24.0 * std::f32::consts::PI).sin();
            let hr_noise = (minute as f32 * 0.1).sin() * 5.0;

            let signal = BioSignal {
                timestamp_ms: timestamp,
                signal_type: SignalType::HeartRate,
                value: base_hr + hr_noise,
                source: SignalSource::Wrist,
            };

            let _ = system.process(signal);
        }
    }

    let status = system.personalization_status();
    println!("  Personalization status:");
    for (signal, s) in &status {
        println!("    {:?}: {}", signal, s);
    }

    let efficiency = system.power_efficiency();
    println!("\n  Power efficiency (compression ratio):");
    for (signal, ratio) in &efficiency {
        println!("    {:?}: {:.1}x reduction", signal, ratio);
    }

    // Simulate anomaly detection
    println!("\nAnomaly detection phase...");

    // Normal reading - should not alert
    let normal = BioSignal {
        timestamp_ms: 86_400_000 + 3_600_000 * 10, // 10am next day
        signal_type: SignalType::HeartRate,
        value: 72.0,
        source: SignalSource::Wrist,
    };
    if let Some(alert) = system.process(normal) {
        println!("  Unexpected alert: {:?}", alert);
    } else {
        println!("  Normal reading - no alert (as expected)");
    }

    // Anomalous reading - should alert
    let anomaly = BioSignal {
        timestamp_ms: 86_400_000 + 3_600_000 * 10 + 1000,
        signal_type: SignalType::HeartRate,
        value: 120.0, // Much higher than personal baseline
        source: SignalSource::Wrist,
    };
    if let Some(alert) = system.process(anomaly) {
        println!("\n  PERSONAL ANOMALY DETECTED!");
        println!("    Type: {:?}", alert.alert_type);
        println!("    Severity: {:?}", alert.severity);
        println!("    Recommendation: {}", alert.recommendation);
        println!("    Confidence: {:.1}%", alert.confidence * 100.0);
    }

    // Emergency - low SpO2
    println!("\nEmergency scenario...");
    let emergency = BioSignal {
        timestamp_ms: 86_400_000 + 3_600_000 * 10 + 2000,
        signal_type: SignalType::SpO2,
        value: 88.0, // Dangerously low
        source: SignalSource::Finger,
    };
    if let Some(alert) = system.process(emergency) {
        println!("  EMERGENCY ALERT!");
        println!("    Type: {:?}", alert.alert_type);
        println!("    Severity: {:?}", alert.severity);
        println!("    Recommendation: {}", alert.recommendation);
    }

    println!("\n=== Key Benefits ===");
    println!("- Adapts to the person, not population averages");
    println!("- Low power through sparse spike encoding");
    println!("- Privacy by default (all processing local)");
    println!("- Early detection through personal baselines");
    println!("- Circadian-aware anomaly detection");
    println!("\nThis is practical and defensible.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_one_shot_learning() {
        let mut baseline = PersonalBaseline::new(SignalType::HeartRate);

        // Fast initial learning
        for _ in 0..10 {
            baseline.learn_one_shot(70.0, 12);
        }

        assert!((baseline.mean - 70.0).abs() < 5.0);
    }

    #[test]
    fn test_sparse_encoding() {
        let mut encoder = SparseEncoder::new(5.0);

        // Small changes should not generate spikes
        assert!(encoder.encode(0.0).is_some()); // First value always spikes
        assert!(encoder.encode(2.0).is_none()); // Below threshold
        assert!(encoder.encode(10.0).is_some()); // Above threshold
    }

    #[test]
    fn test_homeostasis() {
        let mut controller = HomeostaticController::new(98.0, 3.0);

        // Within tolerance
        assert!(matches!(
            controller.respond(97.0),
            HomeostasisResponse::Stable
        ));

        // Outside tolerance
        assert!(matches!(
            controller.respond(85.0),
            HomeostasisResponse::Urgent(_)
        ));
    }

    #[test]
    fn test_personal_anomaly_detection() {
        let mut baseline = PersonalBaseline::new(SignalType::HeartRate);

        // Train baseline
        for i in 0..200 {
            baseline.learn_one_shot(70.0 + (i % 10) as f32 * 0.5, 12);
        }

        // Normal should not be anomalous
        assert!(baseline.is_anomalous(72.0, 12).is_none());

        // Extreme value should be anomalous
        assert!(baseline.is_anomalous(150.0, 12).is_some());
    }
}

545
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t2_adaptive_simulation.rs
vendored
Normal file
@@ -0,0 +1,545 @@
//! # Tier 2: Adaptive Simulation and Digital Twins
//!
//! Industrial systems, cities, logistics.
//!
//! ## What Changes
//! - Simulation runs continuously at low fidelity
//! - High fidelity kicks in during "bullet time"
//! - Learning improves predictive accuracy
//!
//! ## Why This Matters
//! - Prediction becomes proactive
//! - Simulation is always warm, never cold-started
//! - Costs scale with relevance, not size
//!
//! This is underexplored and powerful.

use std::collections::{HashMap, VecDeque};

/// A digital twin component
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ComponentId(pub String);

/// Fidelity level of simulation
#[derive(Clone, Debug, PartialEq)]
pub enum FidelityLevel {
    /// Coarse-grained, fast, low accuracy
    Low { time_step_ms: u64, accuracy: f32 },
    /// Moderate detail
    Medium { time_step_ms: u64, accuracy: f32 },
    /// Full physics simulation
    High { time_step_ms: u64, accuracy: f32 },
    /// Maximum fidelity for critical moments
    BulletTime { time_step_ms: u64, accuracy: f32 },
}

impl FidelityLevel {
    pub fn compute_cost(&self) -> f32 {
        match self {
            FidelityLevel::Low { .. } => 1.0,
            FidelityLevel::Medium { .. } => 10.0,
            FidelityLevel::High { .. } => 100.0,
            FidelityLevel::BulletTime { .. } => 1000.0,
        }
    }

    pub fn time_step_ms(&self) -> u64 {
        match self {
            FidelityLevel::Low { time_step_ms, .. } => *time_step_ms,
            FidelityLevel::Medium { time_step_ms, .. } => *time_step_ms,
            FidelityLevel::High { time_step_ms, .. } => *time_step_ms,
            FidelityLevel::BulletTime { time_step_ms, .. } => *time_step_ms,
        }
    }
}

/// State of a simulated component
#[derive(Clone, Debug)]
pub struct ComponentState {
    pub id: ComponentId,
    pub position: (f32, f32, f32),
    pub velocity: (f32, f32, f32),
    pub properties: HashMap<String, f32>,
    pub predicted_trajectory: Vec<(f32, f32, f32)>,
}

/// Prediction from the simulation
#[derive(Clone, Debug)]
pub struct Prediction {
    pub component: ComponentId,
    pub timestamp: u64,
    pub predicted_value: f32,
    pub confidence: f32,
    pub horizon_ms: u64,
}

/// Actual measurement from the real system
#[derive(Clone, Debug)]
pub struct Measurement {
    pub component: ComponentId,
    pub timestamp: u64,
    pub actual_value: f32,
    pub sensor_id: String,
}

/// Prediction error for learning
#[derive(Clone, Debug)]
pub struct PredictionError {
    pub component: ComponentId,
    pub timestamp: u64,
    pub predicted: f32,
    pub actual: f32,
    pub error: f32,
    pub fidelity_at_prediction: FidelityLevel,
}

/// Adaptive fidelity controller
pub struct FidelityController {
    pub current_fidelity: FidelityLevel,
    pub urgency_threshold_high: f32,
    pub urgency_threshold_low: f32,
    pub bullet_time_until: u64,
    pub error_history: VecDeque<f32>,
}

impl FidelityController {
    pub fn new() -> Self {
        Self {
            current_fidelity: FidelityLevel::Low {
                time_step_ms: 100,
                accuracy: 0.7,
            },
            urgency_threshold_high: 0.8,
            urgency_threshold_low: 0.3,
            bullet_time_until: 0,
            error_history: VecDeque::new(),
        }
    }

    /// Decide fidelity based on system state
    pub fn decide(&mut self, urgency: f32, timestamp: u64) -> FidelityLevel {
        // Bullet time takes priority
        if timestamp < self.bullet_time_until {
            return FidelityLevel::BulletTime {
                time_step_ms: 1,
                accuracy: 0.99,
            };
        }

        // Adapt based on urgency
        if urgency > self.urgency_threshold_high {
            self.current_fidelity = FidelityLevel::High {
                time_step_ms: 10,
                accuracy: 0.95,
            };
        } else if urgency > 0.5 {
            self.current_fidelity = FidelityLevel::Medium {
                time_step_ms: 50,
                accuracy: 0.85,
            };
        } else if urgency < self.urgency_threshold_low {
            self.current_fidelity = FidelityLevel::Low {
                time_step_ms: 100,
                accuracy: 0.7,
            };
        }

        self.current_fidelity.clone()
    }

    /// Activate bullet time for a duration
    pub fn activate_bullet_time(&mut self, duration_ms: u64, current_time: u64) {
        self.bullet_time_until = current_time + duration_ms;
        println!("  [BULLET TIME] Activated for {}ms", duration_ms);
    }

    /// Track prediction error for adaptive learning
    pub fn record_error(&mut self, error: f32) {
        self.error_history.push_back(error.abs());
        if self.error_history.len() > 100 {
            self.error_history.pop_front();
        }
    }

    /// Get average recent error
    pub fn average_error(&self) -> f32 {
        if self.error_history.is_empty() {
            return 0.0;
        }
        self.error_history.iter().sum::<f32>() / self.error_history.len() as f32
    }
}

/// Predictive model that learns from errors
pub struct PredictiveModel {
    pub weights: HashMap<String, f32>,
    pub learning_rate: f32,
    pub bias: f32,
    pub predictions_made: u64,
    pub cumulative_error: f32,
}

impl PredictiveModel {
    pub fn new() -> Self {
        Self {
            weights: HashMap::new(),
            learning_rate: 0.01,
            bias: 0.0,
            predictions_made: 0,
            cumulative_error: 0.0,
        }
    }

    /// Make a prediction based on current state
    pub fn predict(&mut self, state: &ComponentState, horizon_ms: u64) -> Prediction {
        self.predictions_made += 1;

        // Simple linear extrapolation along x, plus the learned bias
        let (vx, _, _) = state.velocity;
        let dt = horizon_ms as f32 / 1000.0;

        let predicted = state.position.0 + vx * dt + self.bias;

        Prediction {
            component: state.id.clone(),
            timestamp: 0, // Will be set by caller
            predicted_value: predicted,
            confidence: 0.8 - (self.average_error() * 0.5).min(0.5),
            horizon_ms,
        }
    }

    /// Learn from prediction error
    pub fn learn(&mut self, error: &PredictionError) {
        // Simple gradient descent on the bias term
        self.bias -= self.learning_rate * error.error;
        self.cumulative_error += error.error.abs();
    }

    pub fn average_error(&self) -> f32 {
        if self.predictions_made == 0 {
            return 0.0;
        }
        self.cumulative_error / self.predictions_made as f32
    }
}

/// Digital twin simulation system
pub struct DigitalTwin {
    pub name: String,
    pub components: HashMap<ComponentId, ComponentState>,
    pub fidelity: FidelityController,
    pub model: PredictiveModel,
    pub predictions: VecDeque<Prediction>,
    pub simulation_time: u64,
    pub real_time: u64,
    pub total_compute_cost: f32,
}

impl DigitalTwin {
    pub fn new(name: &str) -> Self {
        Self {
            name: name.to_string(),
            components: HashMap::new(),
            fidelity: FidelityController::new(),
            model: PredictiveModel::new(),
            predictions: VecDeque::new(),
            simulation_time: 0,
            real_time: 0,
            total_compute_cost: 0.0,
        }
    }

    /// Add component to simulation
    pub fn add_component(
        &mut self,
        id: &str,
        position: (f32, f32, f32),
        velocity: (f32, f32, f32),
    ) {
        self.components.insert(
            ComponentId(id.to_string()),
            ComponentState {
                id: ComponentId(id.to_string()),
                position,
                velocity,
                properties: HashMap::new(),
                predicted_trajectory: Vec::new(),
            },
        );
    }

    /// Compute urgency based on system state
    pub fn compute_urgency(&self) -> f32 {
        let mut max_urgency = 0.0f32;

        for state in self.components.values() {
            // High velocity = high urgency
            let speed =
                (state.velocity.0.powi(2) + state.velocity.1.powi(2) + state.velocity.2.powi(2))
                    .sqrt();

            max_urgency = max_urgency.max(speed / 100.0); // Normalize

            // Check for collision risk
            for other in self.components.values() {
                if state.id != other.id {
                    let dist = ((state.position.0 - other.position.0).powi(2)
                        + (state.position.1 - other.position.1).powi(2))
                    .sqrt();

                    if dist < 10.0 {
                        max_urgency = max_urgency.max(1.0 - dist / 10.0);
                    }
                }
            }
        }

        max_urgency.min(1.0)
    }

    /// Step simulation forward
    pub fn step(&mut self, real_dt_ms: u64) {
        self.real_time += real_dt_ms;

        // Compute urgency and decide fidelity
        let urgency = self.compute_urgency();
        let fidelity = self.fidelity.decide(urgency, self.real_time);

        let sim_dt = fidelity.time_step_ms();
        let cost = fidelity.compute_cost();
        self.total_compute_cost += cost * (real_dt_ms as f32 / sim_dt as f32);

        // Update simulation
        self.simulation_time += sim_dt;

        for state in self.components.values_mut() {
            let dt = sim_dt as f32 / 1000.0;
            state.position.0 += state.velocity.0 * dt;
            state.position.1 += state.velocity.1 * dt;
            state.position.2 += state.velocity.2 * dt;

            // Generate prediction
            let prediction = self.model.predict(state, 1000);
            state.predicted_trajectory.push((
                prediction.predicted_value,
                state.position.1,
                state.position.2,
            ));

            // Keep trajectory bounded
            if state.predicted_trajectory.len() > 100 {
                state.predicted_trajectory.remove(0);
            }
        }

        // Check for bullet time triggers
        if urgency > 0.9 {
            self.fidelity.activate_bullet_time(100, self.real_time);
        }
    }

    /// Receive real measurement and learn
    pub fn receive_measurement(&mut self, measurement: Measurement) {
        // Find matching prediction
        if let Some(prediction) = self.predictions.iter().find(|p| {
            p.component == measurement.component
                && (measurement.timestamp as i64 - p.timestamp as i64).abs() < 100
        }) {
            let error = PredictionError {
                component: measurement.component.clone(),
                timestamp: measurement.timestamp,
                predicted: prediction.predicted_value,
                actual: measurement.actual_value,
                error: prediction.predicted_value - measurement.actual_value,
                fidelity_at_prediction: self.fidelity.current_fidelity.clone(),
            };

            // Learn from error
            self.model.learn(&error);
            self.fidelity.record_error(error.error);

            // Update component state with actual
            if let Some(state) = self.components.get_mut(&measurement.component) {
                state.position.0 = measurement.actual_value;
            }
        }
    }

    /// Get simulation efficiency
    pub fn efficiency_ratio(&self) -> f32 {
        // Compare actual compute to always-high-fidelity
        let always_high_cost = self.real_time as f32 * 100.0;
        if self.total_compute_cost > 0.0 {
            always_high_cost / self.total_compute_cost
        } else {
            1.0
        }
    }
}

fn main() {
    println!("=== Tier 2: Adaptive Simulation and Digital Twins ===\n");

    let mut twin = DigitalTwin::new("Industrial System");

    // Add components
    twin.add_component("conveyor_1", (0.0, 0.0, 0.0), (10.0, 0.0, 0.0));
    twin.add_component("robot_arm", (50.0, 10.0, 0.0), (0.0, 5.0, 0.0));
    twin.add_component("package_a", (0.0, 0.0, 1.0), (15.0, 0.0, 0.0));

    println!(
        "Digital twin initialized with {} components",
        twin.components.len()
    );

    // Simulate normal operation (low fidelity, low cost)
    println!("\nNormal operation (low fidelity)...");
    for i in 0..100 {
        twin.step(10);

        if i % 20 == 0 {
            let urgency = twin.compute_urgency();
            println!(
                "  t={}: urgency={:.2}, fidelity={:?}",
                twin.simulation_time, urgency, twin.fidelity.current_fidelity
            );
        }
    }

    println!("\n  Compute cost so far: {:.1}", twin.total_compute_cost);
    println!(
        "  Efficiency vs always-high: {:.1}x",
        twin.efficiency_ratio()
    );

    // Create collision scenario (triggers high fidelity)
    println!("\nCreating collision scenario...");
    if let Some(pkg) = twin.components.get_mut(&ComponentId("package_a".into())) {
        pkg.velocity = (50.0, 0.0, 0.0); // Fast moving
    }
    if let Some(robot) = twin.components.get_mut(&ComponentId("robot_arm".into())) {
        robot.position = (55.0, 5.0, 0.0); // In path
    }

    for i in 0..20 {
        twin.step(10);

        let urgency = twin.compute_urgency();
        if i % 5 == 0 || urgency > 0.5 {
            println!(
                "  t={}: urgency={:.2}, fidelity={:?}",
                twin.simulation_time, urgency, twin.fidelity.current_fidelity
            );
        }
    }

    // Simulate receiving real measurements
    println!("\nReceiving real measurements (learning)...");
    for i in 0..10 {
        let measurement = Measurement {
            component: ComponentId("conveyor_1".into()),
            timestamp: twin.real_time,
            actual_value: 100.0 + i as f32 * 10.0 + (i as f32 * 0.1).sin() * 2.0,
            sensor_id: "sensor_1".to_string(),
        };

        // First make a prediction
        if let Some(state) = twin.components.get(&ComponentId("conveyor_1".into())) {
            let prediction = twin.model.predict(state, 100);
            twin.predictions.push_back(Prediction {
                timestamp: twin.real_time,
                ..prediction
            });
        }

        twin.receive_measurement(measurement);
        twin.step(100);
    }

    println!("  Model average error: {:.3}", twin.model.average_error());
    println!("  Predictions made: {}", twin.model.predictions_made);

    // Summary
    println!("\n=== Final Statistics ===");
    println!("  Real time simulated: {}ms", twin.real_time);
    println!("  Simulation time: {}ms", twin.simulation_time);
    println!("  Total compute cost: {:.1}", twin.total_compute_cost);
    println!("  Efficiency ratio: {:.1}x", twin.efficiency_ratio());
    println!("  Current fidelity: {:?}", twin.fidelity.current_fidelity);

    println!("\n=== Key Benefits ===");
    println!("- Simulation always warm, never cold-started");
    println!("- Costs scale with relevance, not system size");
    println!("- Bullet time for critical moments");
    println!("- Continuous learning improves predictions");
    println!("- Proactive prediction instead of reactive analysis");
    println!("\nThis is underexplored and powerful.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_fidelity_controller() {
        let mut controller = FidelityController::new();

        // Low urgency = low fidelity
        let fidelity = controller.decide(0.1, 0);
        assert!(matches!(fidelity, FidelityLevel::Low { .. }));

        // High urgency = high fidelity
        let fidelity = controller.decide(0.9, 1);
        assert!(matches!(fidelity, FidelityLevel::High { .. }));
    }

    #[test]
    fn test_bullet_time() {
        let mut controller = FidelityController::new();

        controller.activate_bullet_time(100, 0);
        let fidelity = controller.decide(0.1, 50); // Still in bullet time
        assert!(matches!(fidelity, FidelityLevel::BulletTime { .. }));

        let fidelity = controller.decide(0.1, 150); // After bullet time
        assert!(!matches!(fidelity, FidelityLevel::BulletTime { .. }));
    }

    #[test]
    fn test_predictive_model_learning() {
        let mut model = PredictiveModel::new();

        // Make predictions and learn from errors
        for _ in 0..10 {
            let error = PredictionError {
                component: ComponentId("test".into()),
                timestamp: 0,
                predicted: 1.0,
                actual: 0.9,
                error: 0.1,
                fidelity_at_prediction: FidelityLevel::Low {
                    time_step_ms: 100,
                    accuracy: 0.7,
                },
            };
            model.learn(&error);
        }

        // Bias should have adjusted
        assert!(model.bias != 0.0);
    }

    #[test]
    fn test_digital_twin_efficiency() {
        let mut twin = DigitalTwin::new("test");
        twin.add_component("a", (0.0, 0.0, 0.0), (1.0, 0.0, 0.0));

        // Low urgency operation should be efficient
        for _ in 0..100 {
            twin.step(10);
        }

        assert!(twin.efficiency_ratio() > 5.0); // Should be much more efficient
    }
}
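
    // Extra sanity checks, a minimal sketch not in the original example: they
    // exercise the FidelityLevel accessors, the absolute-error history in
    // FidelityController, and the proximity-urgency heuristic defined above.
    #[test]
    fn test_fidelity_cost_and_step() {
        let low = FidelityLevel::Low { time_step_ms: 100, accuracy: 0.7 };
        let bullet = FidelityLevel::BulletTime { time_step_ms: 1, accuracy: 0.99 };
        // Cost spans three orders of magnitude between the extremes
        assert_eq!(low.compute_cost(), 1.0);
        assert_eq!(bullet.compute_cost(), 1000.0);
        assert_eq!(low.time_step_ms(), 100);
    }

    #[test]
    fn test_error_history_average() {
        let mut controller = FidelityController::new();
        assert_eq!(controller.average_error(), 0.0);
        controller.record_error(-2.0); // stored as an absolute value
        controller.record_error(4.0);
        assert_eq!(controller.average_error(), 3.0);
    }

    #[test]
    fn test_proximity_raises_urgency() {
        let mut twin = DigitalTwin::new("test");
        // Two stationary components one unit apart: urgency = 1 - dist/10 = 0.9
        twin.add_component("a", (0.0, 0.0, 0.0), (0.0, 0.0, 0.0));
        twin.add_component("b", (1.0, 0.0, 0.0), (0.0, 0.0, 0.0));
        assert!(twin.compute_urgency() > 0.8);
    }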
}

644
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t2_self_optimizing.rs
vendored
Normal file
@@ -0,0 +1,644 @@
//! # Tier 2: Self-Optimizing Software and Workflows
//!
//! Agents that monitor agents.
//!
//! ## What Changes
//! - Systems watch structure and timing, not just outputs
//! - Learning adjusts coordination patterns
//! - Reflex gates prevent cascading failures
//!
//! ## Why This Matters
//! - Software becomes self-stabilizing
//! - Less ops, fewer incidents
//! - Debugging shifts from logs to structural witnesses
//!
//! This is a natural extension of RuVector as connective tissue.

use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

/// A software component being monitored
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ComponentId(pub String);

/// Structural observation about a component
#[derive(Clone, Debug)]
pub struct StructuralEvent {
    pub timestamp_us: u64,
    pub component: ComponentId,
    pub event_type: StructuralEventType,
    pub latency_us: Option<u64>,
    pub error: Option<String>,
}

#[derive(Clone, Debug)]
pub enum StructuralEventType {
    /// Request received
    RequestStart { request_id: u64 },
    /// Request completed
    RequestEnd { request_id: u64, success: bool },
    /// Component called another
    Call {
        target: ComponentId,
        request_id: u64,
    },
    /// Component received call result
    CallReturn {
        source: ComponentId,
        request_id: u64,
        success: bool,
    },
    /// Resource usage spike
    ResourceSpike { resource: String, value: f32 },
    /// Queue depth changed
    QueueDepth { depth: usize },
    /// Circuit breaker state change
    CircuitBreaker { state: CircuitState },
}

#[derive(Clone, Debug, PartialEq)]
pub enum CircuitState {
    Closed,
    Open,
    HalfOpen,
}

/// Witness log for structural debugging
#[derive(Clone, Debug)]
pub struct StructuralWitness {
    pub timestamp: u64,
    pub trigger: String,
    pub component_states: HashMap<ComponentId, ComponentState>,
    pub causal_chain: Vec<(ComponentId, StructuralEventType)>,
    pub decision: String,
    pub action_taken: Option<String>,
}

#[derive(Clone, Debug)]
pub struct ComponentState {
    pub latency_p99_us: u64,
    pub error_rate: f32,
    pub queue_depth: usize,
    pub circuit_state: CircuitState,
}

/// Coordination pattern learned over time
#[derive(Clone, Debug)]
pub struct CoordinationPattern {
    pub name: String,
    pub participants: Vec<ComponentId>,
    pub expected_sequence: Vec<(ComponentId, ComponentId)>,
    pub expected_latency_us: u64,
    pub tolerance: f32,
    pub occurrences: u64,
}

/// Reflex gate to prevent cascading failures
pub struct CascadeReflex {
    pub trigger_threshold: f32, // Error rate threshold
    pub propagation_window_us: u64,
    pub recent_errors: VecDeque<(u64, ComponentId)>,
    pub circuit_breakers: HashMap<ComponentId, CircuitBreaker>,
}

pub struct CircuitBreaker {
    pub state: CircuitState,
    pub failure_count: u32,
    pub failure_threshold: u32,
    pub reset_timeout_us: u64,
    pub last_failure: u64,
}

impl CircuitBreaker {
    pub fn new(threshold: u32, timeout_us: u64) -> Self {
        Self {
            state: CircuitState::Closed,
            failure_count: 0,
            failure_threshold: threshold,
            reset_timeout_us: timeout_us,
            last_failure: 0,
        }
    }

    pub fn record_failure(&mut self, timestamp: u64) {
        self.failure_count += 1;
        self.last_failure = timestamp;

        if self.failure_count >= self.failure_threshold {
            self.state = CircuitState::Open;
        }
    }

    pub fn record_success(&mut self) {
        if self.state == CircuitState::HalfOpen {
            self.state = CircuitState::Closed;
            self.failure_count = 0;
        }
    }

    pub fn check(&mut self, timestamp: u64) -> bool {
        match self.state {
            CircuitState::Closed => true,
            CircuitState::Open => {
                if timestamp - self.last_failure > self.reset_timeout_us {
                    self.state = CircuitState::HalfOpen;
                    true
                } else {
                    false
                }
            }
            CircuitState::HalfOpen => true,
        }
    }
}
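
// A small sanity check, a sketch not in the original example, for the
// Closed -> Open -> HalfOpen -> Closed breaker lifecycle implemented above.
#[cfg(test)]
mod circuit_breaker_tests {
    use super::*;

    #[test]
    fn test_breaker_lifecycle() {
        let mut cb = CircuitBreaker::new(2, 1_000);

        // Two failures reach the threshold and open the circuit
        cb.record_failure(0);
        cb.record_failure(10);
        assert_eq!(cb.state, CircuitState::Open);
        assert!(!cb.check(100)); // Still within the reset timeout

        // After the timeout the breaker probes via HalfOpen
        assert!(cb.check(2_000));
        assert_eq!(cb.state, CircuitState::HalfOpen);

        // A success while half-open closes it again
        cb.record_success();
        assert_eq!(cb.state, CircuitState::Closed);
    }
}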

impl CascadeReflex {
    pub fn new(threshold: f32, window_us: u64) -> Self {
        Self {
            trigger_threshold: threshold,
            propagation_window_us: window_us,
            recent_errors: VecDeque::new(),
            circuit_breakers: HashMap::new(),
        }
    }

    /// Check for cascading failure pattern
    pub fn check(&mut self, event: &StructuralEvent) -> Option<StructuralWitness> {
        // Track errors
        if matches!(&event.event_type, StructuralEventType::RequestEnd { success, .. } if !success)
        {
            self.recent_errors
                .push_back((event.timestamp_us, event.component.clone()));

            // Record in circuit breaker
            self.circuit_breakers
                .entry(event.component.clone())
                .or_insert_with(|| CircuitBreaker::new(5, 30_000_000))
                .record_failure(event.timestamp_us);
        }

        // Clean old errors
        let cutoff = event
            .timestamp_us
            .saturating_sub(self.propagation_window_us);
        while self
            .recent_errors
            .front()
            .map(|e| e.0 < cutoff)
            .unwrap_or(false)
        {
            self.recent_errors.pop_front();
        }

        // Count affected components
        let mut affected: HashMap<ComponentId, u32> = HashMap::new();
        for (_, comp) in &self.recent_errors {
            *affected.entry(comp.clone()).or_default() += 1;
        }

        // Detect cascade (multiple components failing together)
        if affected.len() >= 3 {
            let witness = StructuralWitness {
                timestamp: event.timestamp_us,
                trigger: "Cascade detected".to_string(),
                component_states: affected
                    .keys()
                    .map(|c| {
                        (
                            c.clone(),
                            ComponentState {
                                latency_p99_us: 0,
                                error_rate: *affected.get(c).unwrap_or(&0) as f32 / 10.0,
                                queue_depth: 0,
                                circuit_state: self
                                    .circuit_breakers
                                    .get(c)
                                    .map(|cb| cb.state.clone())
                                    .unwrap_or(CircuitState::Closed),
                            },
                        )
                    })
                    .collect(),
                causal_chain: self
                    .recent_errors
                    .iter()
                    .map(|(_, c)| {
                        (
                            c.clone(),
                            StructuralEventType::RequestEnd {
                                request_id: 0,
                                success: false,
                            },
                        )
                    })
                    .collect(),
                decision: format!("Open circuit breakers for {} components", affected.len()),
                action_taken: Some("SHED_LOAD".to_string()),
            };

            // Open all affected circuit breakers
            for comp in affected.keys() {
                if let Some(cb) = self.circuit_breakers.get_mut(comp) {
                    cb.state = CircuitState::Open;
                }
            }

            return Some(witness);
        }

        None
    }
}
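
// A hedged sanity check, a sketch not in the original example: three distinct
// components failing inside one propagation window should produce a witness
// with the SHED_LOAD action, per the cascade rule above.
#[cfg(test)]
mod cascade_reflex_tests {
    use super::*;

    fn failure(component: &str, timestamp_us: u64) -> StructuralEvent {
        StructuralEvent {
            timestamp_us,
            component: ComponentId(component.to_string()),
            event_type: StructuralEventType::RequestEnd {
                request_id: 0,
                success: false,
            },
            latency_us: None,
            error: None,
        }
    }

    #[test]
    fn test_three_failing_components_trigger_witness() {
        let mut reflex = CascadeReflex::new(0.1, 1_000_000);
        assert!(reflex.check(&failure("a", 0)).is_none());
        assert!(reflex.check(&failure("b", 10)).is_none());
        let witness = reflex.check(&failure("c", 20)).expect("cascade witness");
        assert_eq!(witness.action_taken.as_deref(), Some("SHED_LOAD"));
    }
}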

/// Pattern learner that discovers coordination patterns
pub struct PatternLearner {
    pub observed_sequences: HashMap<String, CoordinationPattern>,
    pub current_traces: HashMap<u64, Vec<(u64, ComponentId, ComponentId)>>,
    pub learning_rate: f32,
}

impl PatternLearner {
    pub fn new() -> Self {
        Self {
            observed_sequences: HashMap::new(),
            current_traces: HashMap::new(),
            learning_rate: 0.1,
        }
    }

    /// Observe a call between components
    pub fn observe_call(
        &mut self,
        caller: ComponentId,
        callee: ComponentId,
        request_id: u64,
        timestamp: u64,
    ) {
        self.current_traces
            .entry(request_id)
            .or_default()
            .push((timestamp, caller, callee));
    }

    /// Complete a trace and learn from it
    pub fn complete_trace(&mut self, request_id: u64) -> Option<String> {
        let trace = self.current_traces.remove(&request_id)?;

        if trace.len() < 2 {
            return None;
        }

        // Create pattern signature
        let participants: Vec<ComponentId> = trace
            .iter()
            .flat_map(|(_, from, to)| vec![from.clone(), to.clone()])
            .collect();

        let sequence: Vec<(ComponentId, ComponentId)> = trace
            .iter()
            .map(|(_, from, to)| (from.clone(), to.clone()))
            .collect();

        let total_latency =
            trace.last().map(|l| l.0).unwrap_or(0) - trace.first().map(|f| f.0).unwrap_or(0);

        let signature = format!("{:?}", sequence);

        // Update or create pattern
        let next_pattern_id = self.observed_sequences.len();
        let pattern = self
            .observed_sequences
            .entry(signature.clone())
            .or_insert_with(|| CoordinationPattern {
                name: format!("Pattern_{}", next_pattern_id),
                participants: participants.clone(),
                expected_sequence: sequence.clone(),
                expected_latency_us: total_latency,
                tolerance: 0.5,
                occurrences: 0,
            });

        pattern.occurrences += 1;
        pattern.expected_latency_us =
            ((1.0 - self.learning_rate) * pattern.expected_latency_us as f32
                + self.learning_rate * total_latency as f32) as u64;

        Some(pattern.name.clone())
    }

    /// Check if a trace violates learned patterns
    pub fn check_violation(&self, trace: &[(u64, ComponentId, ComponentId)]) -> Option<String> {
        if trace.len() < 2 {
            return None;
        }

        let sequence: Vec<(ComponentId, ComponentId)> = trace
            .iter()
            .map(|(_, from, to)| (from.clone(), to.clone()))
            .collect();

        let signature = format!("{:?}", sequence);

        if let Some(pattern) = self.observed_sequences.get(&signature) {
            let latency =
                trace.last().map(|l| l.0).unwrap_or(0) - trace.first().map(|f| f.0).unwrap_or(0);

            let deviation = (latency as f32 - pattern.expected_latency_us as f32).abs()
                / pattern.expected_latency_us as f32;

            if deviation > pattern.tolerance {
                return Some(format!(
                    "{} latency deviation: expected {}us, got {}us ({:.0}%)",
                    pattern.name,
                    pattern.expected_latency_us,
                    latency,
                    deviation * 100.0
                ));
            }
        }

        None
    }
}
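
// A hedged sanity check, a sketch not in the original example: a two-hop
// trace is learned as a named pattern, and a much slower replay of the same
// sequence is flagged as a latency violation by the tolerance rule above.
#[cfg(test)]
mod pattern_learner_tests {
    use super::*;

    #[test]
    fn test_learn_and_violate_pattern() {
        let mut learner = PatternLearner::new();
        let (a, b, c) = (
            ComponentId("a".into()),
            ComponentId("b".into()),
            ComponentId("c".into()),
        );

        learner.observe_call(a.clone(), b.clone(), 1, 0);
        learner.observe_call(b.clone(), c.clone(), 1, 100);
        let name = learner.complete_trace(1).expect("pattern learned");
        assert_eq!(name, "Pattern_0");

        // Same sequence at 10x the latency: deviation 9.0 > tolerance 0.5
        let slow = vec![(0, a, b.clone()), (1_000, b, c)];
        assert!(learner.check_violation(&slow).is_some());
    }
}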
|
||||
|
||||
/// Main self-optimizing system
|
||||
pub struct SelfOptimizingSystem {
|
||||
/// Reflex gate for cascade prevention
|
||||
pub cascade_reflex: CascadeReflex,
|
||||
/// Pattern learner for coordination
|
||||
pub pattern_learner: PatternLearner,
|
||||
/// Component latency trackers
|
||||
pub latency_trackers: HashMap<ComponentId, VecDeque<u64>>,
|
||||
/// Witness log for debugging
|
||||
pub witnesses: Vec<StructuralWitness>,
|
||||
/// Optimization actions taken
|
||||
pub optimizations: Vec<String>,
|
||||
}

impl SelfOptimizingSystem {
    pub fn new() -> Self {
        Self {
            cascade_reflex: CascadeReflex::new(0.1, 1_000_000),
            pattern_learner: PatternLearner::new(),
            latency_trackers: HashMap::new(),
            witnesses: Vec::new(),
            optimizations: Vec::new(),
        }
    }

    /// Process a structural event
    pub fn observe(&mut self, event: StructuralEvent) -> Option<StructuralWitness> {
        // 1. Check reflex (cascade prevention)
        if let Some(witness) = self.cascade_reflex.check(&event) {
            self.witnesses.push(witness.clone());
            return Some(witness);
        }

        // 2. Track patterns
        match &event.event_type {
            StructuralEventType::Call { target, request_id } => {
                self.pattern_learner.observe_call(
                    event.component.clone(),
                    target.clone(),
                    *request_id,
                    event.timestamp_us,
                );
            }
            StructuralEventType::RequestEnd {
                request_id,
                success: true,
            } => {
                if let Some(pattern_name) = self.pattern_learner.complete_trace(*request_id) {
                    // Pattern learned/reinforced
                    if self
                        .pattern_learner
                        .observed_sequences
                        .get(&pattern_name)
                        .map(|p| p.occurrences == 10)
                        .unwrap_or(false)
                    {
                        self.optimizations
                            .push(format!("Learned pattern: {}", pattern_name));
                    }
                }
            }
            _ => {}
        }

        // 3. Track latency
        if let Some(latency) = event.latency_us {
            let tracker = self
                .latency_trackers
                .entry(event.component.clone())
                .or_insert_with(|| VecDeque::with_capacity(100));
            tracker.push_back(latency);
            if tracker.len() > 100 {
                tracker.pop_front();
            }

            // Check for latency regression: compare the 10 newest samples
            // against the 10 oldest samples in the window
            if tracker.len() >= 10 {
                let avg: u64 = tracker.iter().rev().take(10).sum::<u64>() / 10;
                let old_avg: u64 = tracker.iter().take(10).sum::<u64>() / 10;

                if avg > old_avg * 2 {
                    let witness = StructuralWitness {
                        timestamp: event.timestamp_us,
                        trigger: format!("Latency regression: {:?}", event.component),
                        component_states: HashMap::new(),
                        causal_chain: vec![],
                        decision: "Investigate latency spike".to_string(),
                        action_taken: None,
                    };
                    self.witnesses.push(witness.clone());
                    return Some(witness);
                }
            }
        }

        None
    }

    /// Get a system health summary
    pub fn health_summary(&self) -> SystemHealth {
        let open_circuits: Vec<_> = self
            .cascade_reflex
            .circuit_breakers
            .iter()
            .filter(|(_, cb)| cb.state == CircuitState::Open)
            .map(|(id, _)| id.clone())
            .collect();

        SystemHealth {
            components_monitored: self.latency_trackers.len(),
            patterns_learned: self.pattern_learner.observed_sequences.len(),
            open_circuit_breakers: open_circuits,
            recent_witnesses: self.witnesses.len(),
            optimizations_applied: self.optimizations.len(),
        }
    }
}

#[derive(Debug)]
pub struct SystemHealth {
    pub components_monitored: usize,
    pub patterns_learned: usize,
    pub open_circuit_breakers: Vec<ComponentId>,
    pub recent_witnesses: usize,
    pub optimizations_applied: usize,
}

fn main() {
    println!("=== Tier 2: Self-Optimizing Software and Workflows ===\n");

    let mut system = SelfOptimizingSystem::new();

    // Simulate normal operation - learning coordination patterns
    println!("Learning phase - observing normal coordination...");
    for req in 0..50 {
        let base_time = req * 10_000;

        // Simulate: API -> Auth -> DB pattern
        system.observe(StructuralEvent {
            timestamp_us: base_time,
            component: ComponentId("api".into()),
            event_type: StructuralEventType::RequestStart { request_id: req },
            latency_us: None,
            error: None,
        });

        system.observe(StructuralEvent {
            timestamp_us: base_time + 100,
            component: ComponentId("api".into()),
            event_type: StructuralEventType::Call {
                target: ComponentId("auth".into()),
                request_id: req,
            },
            latency_us: None,
            error: None,
        });

        system.observe(StructuralEvent {
            timestamp_us: base_time + 500,
            component: ComponentId("auth".into()),
            event_type: StructuralEventType::Call {
                target: ComponentId("db".into()),
                request_id: req,
            },
            latency_us: None,
            error: None,
        });

        system.observe(StructuralEvent {
            timestamp_us: base_time + 2000,
            component: ComponentId("api".into()),
            event_type: StructuralEventType::RequestEnd {
                request_id: req,
                success: true,
            },
            latency_us: Some(2000),
            error: None,
        });
    }

    let health = system.health_summary();
    println!("  Patterns learned: {}", health.patterns_learned);
    println!("  Components monitored: {}", health.components_monitored);
    println!("  Optimizations: {:?}", system.optimizations);

    // Simulate cascade failure
    println!("\nSimulating cascade failure...");
    for req in 50..60 {
        let base_time = 500_000 + req * 1_000;

        // Multiple components fail together
        for comp in ["api", "auth", "db", "cache"] {
            system.observe(StructuralEvent {
                timestamp_us: base_time + 100,
                component: ComponentId(comp.into()),
                event_type: StructuralEventType::RequestEnd {
                    request_id: req,
                    success: false,
                },
                latency_us: Some(50_000), // Slow failure
                error: Some("Connection timeout".into()),
            });
        }
    }

    // Check for cascade detection
    if let Some(last_witness) = system.witnesses.last() {
        println!("\n  CASCADE DETECTED!");
        println!("  Trigger: {}", last_witness.trigger);
        println!("  Decision: {}", last_witness.decision);
        println!("  Action: {:?}", last_witness.action_taken);
    }

    let health = system.health_summary();
    println!(
        "\n  Circuit breakers opened: {:?}",
        health.open_circuit_breakers
    );
    println!("  Witnesses logged: {}", health.recent_witnesses);

    println!("\n=== Key Benefits ===");
    println!("- Systems watch structure and timing, not just outputs");
    println!("- Reflex gates prevent cascading failures");
    println!("- Structural witnesses replace log diving");
    println!("- Patterns learned automatically for anomaly detection");
    println!("\nRuVector as connective tissue for self-stabilizing software.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_circuit_breaker() {
        let mut cb = CircuitBreaker::new(3, 1000);

        assert!(cb.check(0));
        cb.record_failure(0);
        cb.record_failure(1);
        assert!(cb.check(2));
        cb.record_failure(2);
        assert!(!cb.check(3)); // Now open
        assert!(cb.check(1004)); // After timeout, half-open
    }

    #[test]
    fn test_pattern_learning() {
        let mut learner = PatternLearner::new();

        learner.observe_call(ComponentId("a".into()), ComponentId("b".into()), 1, 0);
        learner.observe_call(ComponentId("b".into()), ComponentId("c".into()), 1, 100);

        let pattern = learner.complete_trace(1);
        assert!(pattern.is_some());
    }

    #[test]
    fn test_cascade_detection() {
        let mut system = SelfOptimizingSystem::new();

        // Create a cascade of failures
        for i in 0..5 {
            for comp in ["a", "b", "c", "d"] {
                system.observe(StructuralEvent {
                    timestamp_us: i * 100,
                    component: ComponentId(comp.into()),
                    event_type: StructuralEventType::RequestEnd {
                        request_id: i,
                        success: false,
                    },
                    latency_us: None,
                    error: None,
                });
            }
        }

        assert!(!system.witnesses.is_empty());
    }
}
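The latency check inside `observe` above reduces to comparing two 10-sample window means over a bounded history and flagging a 2x slowdown. A minimal standalone sketch of that rule (the helper name `regressed` is illustrative, not part of the crate):

```rust
use std::collections::VecDeque;

/// Flag a regression when the mean of the 10 newest samples exceeds
/// twice the mean of the 10 oldest samples in the window.
fn regressed(window: &VecDeque<u64>) -> bool {
    if window.len() < 10 {
        return false;
    }
    let recent: u64 = window.iter().rev().take(10).sum::<u64>() / 10;
    let oldest: u64 = window.iter().take(10).sum::<u64>() / 10;
    recent > oldest * 2
}

fn main() {
    let mut w: VecDeque<u64> = (0..10).map(|_| 100).collect();
    assert!(!regressed(&w)); // flat latency: no regression

    for _ in 0..10 {
        w.push_back(500); // sustained 5x slowdown
        if w.len() > 100 {
            w.pop_front();
        }
    }
    assert!(regressed(&w));
    println!("regression detected");
}
```

Comparing the two ends of the same bounded window keeps the check O(1) in memory and makes it self-relative: it adapts to whatever baseline the component normally runs at.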
536 vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t2_swarm_intelligence.rs (vendored, new file)
@@ -0,0 +1,536 @@
//! # Tier 2: Swarm Intelligence Without Central Control
//!
//! IoT fleets, sensor meshes, distributed robotics.
//!
//! ## What Changes
//! - Local reflexes handle local events
//! - Coherence gates synchronize only when needed
//! - No always-on coordinator
//!
//! ## Why This Matters
//! - Scale without fragility
//! - Partial failure is normal, not fatal
//! - Intelligence emerges from coordination, not command
//!
//! This is where this architecture beats cloud-centric designs.

use std::collections::{HashMap, HashSet};
use std::f32::consts::PI;

/// A node in the swarm
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct NodeId(pub u32);

/// Message between swarm nodes
#[derive(Clone, Debug)]
pub struct SwarmMessage {
    pub from: NodeId,
    pub to: Option<NodeId>, // None = broadcast
    pub timestamp: u64,
    pub content: MessageContent,
    pub priority: u8,
}

#[derive(Clone, Debug)]
pub enum MessageContent {
    /// Sensory observation
    Observation { sensor_type: String, value: f32 },
    /// Coordination request
    CoordinationRequest { task_id: u64, urgency: f32 },
    /// Phase synchronization pulse
    PhasePulse { phase: f32, frequency: f32 },
    /// Local decision announcement
    LocalDecision { action: String, confidence: f32 },
    /// Collective decision vote
    Vote { proposal_id: u64, support: bool },
}

/// Local reflex controller for each node
pub struct LocalReflex {
    pub node_id: NodeId,
    pub threshold: f32,
    pub membrane_potential: f32,
    pub refractory_until: u64,
}

impl LocalReflex {
    pub fn new(node_id: NodeId, threshold: f32) -> Self {
        Self {
            node_id,
            threshold,
            membrane_potential: 0.0,
            refractory_until: 0,
        }
    }

    /// Process a local observation; return an action if the threshold is exceeded
    pub fn process(&mut self, value: f32, timestamp: u64) -> Option<String> {
        if timestamp < self.refractory_until {
            return None;
        }

        self.membrane_potential += value;
        self.membrane_potential *= 0.9; // Leak

        if self.membrane_potential > self.threshold {
            self.refractory_until = timestamp + 100;
            self.membrane_potential = 0.0;
            Some(format!("local_action_{}", self.node_id.0))
        } else {
            None
        }
    }
}

/// Coherence gate using the Kuramoto oscillator model
pub struct CoherenceGate {
    pub phase: f32,
    pub natural_frequency: f32,
    pub coupling_strength: f32,
    pub neighbor_phases: HashMap<NodeId, f32>,
}

impl CoherenceGate {
    pub fn new(natural_frequency: f32, coupling_strength: f32) -> Self {
        Self {
            phase: rand_float() * 2.0 * PI,
            natural_frequency,
            coupling_strength,
            neighbor_phases: HashMap::new(),
        }
    }

    /// Update phase based on neighbor phases
    pub fn step(&mut self, dt: f32) {
        if self.neighbor_phases.is_empty() {
            self.phase += self.natural_frequency * dt;
            self.phase %= 2.0 * PI;
            return;
        }

        // Kuramoto model: dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j - θ_i)
        let mut phase_coupling = 0.0;
        for neighbor_phase in self.neighbor_phases.values() {
            phase_coupling += (neighbor_phase - self.phase).sin();
        }

        let d_phase = self.natural_frequency
            + self.coupling_strength * phase_coupling / self.neighbor_phases.len() as f32;

        self.phase += d_phase * dt;
        self.phase %= 2.0 * PI;
    }

    /// Receive a phase from a neighbor
    pub fn receive_phase(&mut self, from: NodeId, phase: f32) {
        self.neighbor_phases.insert(from, phase);
    }

    /// Check whether we're synchronized enough to coordinate
    pub fn is_synchronized(&self, threshold: f32) -> bool {
        if self.neighbor_phases.is_empty() {
            return false;
        }

        // Compute the Kuramoto order parameter r = |(1/N) Σ_j e^{iθ_j}|
        let n = self.neighbor_phases.len() as f32;
        let sum_x: f32 = self.neighbor_phases.values().map(|p| p.cos()).sum();
        let sum_y: f32 = self.neighbor_phases.values().map(|p| p.sin()).sum();

        let r = (sum_x * sum_x + sum_y * sum_y).sqrt() / n;
        r > threshold
    }

    /// Compute the communication gain to a specific neighbor
    pub fn communication_gain(&self, neighbor: &NodeId) -> f32 {
        match self.neighbor_phases.get(neighbor) {
            Some(neighbor_phase) => {
                // Higher gain when phases are aligned
                (1.0 + (neighbor_phase - self.phase).cos()) / 2.0
            }
            None => 0.0,
        }
    }
}

/// Collective decision making through emergent consensus
pub struct CollectiveDecision {
    pub proposal_id: u64,
    pub votes: HashMap<NodeId, bool>,
    pub quorum_fraction: f32,
    pub deadline: u64,
}

impl CollectiveDecision {
    pub fn new(proposal_id: u64, quorum_fraction: f32, deadline: u64) -> Self {
        Self {
            proposal_id,
            votes: HashMap::new(),
            quorum_fraction,
            deadline,
        }
    }

    pub fn record_vote(&mut self, node: NodeId, support: bool) {
        self.votes.insert(node, support);
    }

    pub fn result(&self, total_nodes: usize, current_time: u64) -> Option<bool> {
        let votes_needed = (total_nodes as f32 * self.quorum_fraction).ceil() as usize;

        if self.votes.len() >= votes_needed {
            let support_count = self.votes.values().filter(|&&v| v).count();
            Some(support_count > self.votes.len() / 2)
        } else if current_time > self.deadline {
            // Timeout - no quorum
            None
        } else {
            // Still waiting
            None
        }
    }
}

/// A single swarm node
pub struct SwarmNode {
    pub id: NodeId,
    pub reflex: LocalReflex,
    pub coherence: CoherenceGate,
    pub neighbors: HashSet<NodeId>,
    pub observations: Vec<(u64, f32)>,
    pub pending_decisions: HashMap<u64, CollectiveDecision>,
}

impl SwarmNode {
    pub fn new(id: u32) -> Self {
        Self {
            id: NodeId(id),
            reflex: LocalReflex::new(NodeId(id), 1.0),
            coherence: CoherenceGate::new(1.0, 0.5),
            neighbors: HashSet::new(),
            observations: Vec::new(),
            pending_decisions: HashMap::new(),
        }
    }

    /// Process an incoming message
    pub fn receive(&mut self, msg: SwarmMessage, timestamp: u64) -> Vec<SwarmMessage> {
        let mut responses = Vec::new();

        match msg.content {
            MessageContent::Observation { value, .. } => {
                // Local reflex response
                if let Some(action) = self.reflex.process(value, timestamp) {
                    responses.push(SwarmMessage {
                        from: self.id.clone(),
                        to: None,
                        timestamp,
                        content: MessageContent::LocalDecision {
                            action,
                            confidence: 0.8,
                        },
                        priority: 1,
                    });
                }
            }
            MessageContent::PhasePulse { phase, .. } => {
                self.coherence.receive_phase(msg.from, phase);
            }
            MessageContent::CoordinationRequest { task_id, urgency } => {
                // Only respond if synchronized and urgent enough
                if self.coherence.is_synchronized(0.7) && urgency > 0.5 {
                    responses.push(SwarmMessage {
                        from: self.id.clone(),
                        to: Some(msg.from),
                        timestamp,
                        content: MessageContent::Vote {
                            proposal_id: task_id,
                            support: true,
                        },
                        priority: 2,
                    });
                }
            }
            MessageContent::Vote {
                proposal_id,
                support,
            } => {
                if let Some(decision) = self.pending_decisions.get_mut(&proposal_id) {
                    decision.record_vote(msg.from, support);
                }
            }
            _ => {}
        }

        responses
    }

    /// Generate a phase synchronization pulse
    pub fn emit_phase_pulse(&self, timestamp: u64) -> SwarmMessage {
        SwarmMessage {
            from: self.id.clone(),
            to: None,
            timestamp,
            content: MessageContent::PhasePulse {
                phase: self.coherence.phase,
                frequency: self.coherence.natural_frequency,
            },
            priority: 0,
        }
    }

    /// Step the simulation
    pub fn step(&mut self, dt: f32) {
        self.coherence.step(dt);
    }
}

/// The swarm network (exists only for simulation, not central control)
pub struct SwarmNetwork {
    pub nodes: HashMap<NodeId, SwarmNode>,
    pub message_queue: Vec<SwarmMessage>,
    pub timestamp: u64,
}

impl SwarmNetwork {
    pub fn new(num_nodes: usize, connectivity: f32) -> Self {
        let mut nodes = HashMap::new();

        for i in 0..num_nodes {
            let mut node = SwarmNode::new(i as u32);

            // Random neighbors based on connectivity
            for j in 0..num_nodes {
                if i != j && rand_float() < connectivity {
                    node.neighbors.insert(NodeId(j as u32));
                }
            }

            nodes.insert(NodeId(i as u32), node);
        }

        Self {
            nodes,
            message_queue: Vec::new(),
            timestamp: 0,
        }
    }

    /// Simulate one step
    pub fn step(&mut self, dt: f32) {
        self.timestamp += (dt * 1000.0) as u64;

        // Process the message queue
        let messages = std::mem::take(&mut self.message_queue);
        for msg in messages {
            let targets: Vec<NodeId> = match &msg.to {
                Some(target) => vec![target.clone()],
                None => self.nodes.keys().cloned().collect(),
            };

            for target in targets {
                // Skip the sender only for broadcasts; directed self-messages
                // (e.g. injected observations) must still be delivered
                if msg.to.is_none() && target == msg.from {
                    continue;
                }
                if let Some(node) = self.nodes.get_mut(&target) {
                    let responses = node.receive(msg.clone(), self.timestamp);
                    self.message_queue.extend(responses);
                }
            }
        }

        // Step all nodes and emit phase pulses periodically
        let mut new_messages = Vec::new();
        for node in self.nodes.values_mut() {
            node.step(dt);

            // Emit a phase pulse every 100 ms
            if self.timestamp % 100 == 0 {
                new_messages.push(node.emit_phase_pulse(self.timestamp));
            }
        }
        self.message_queue.extend(new_messages);
    }

    /// Inject an observation at a node
    pub fn inject_observation(&mut self, node_id: &NodeId, value: f32) {
        self.message_queue.push(SwarmMessage {
            from: node_id.clone(),
            to: Some(node_id.clone()),
            timestamp: self.timestamp,
            content: MessageContent::Observation {
                sensor_type: "generic".to_string(),
                value,
            },
            priority: 1,
        });
    }

    /// Check the global synchronization level
    pub fn synchronization_order_parameter(&self) -> f32 {
        let n = self.nodes.len() as f32;
        let sum_x: f32 = self.nodes.values().map(|n| n.coherence.phase.cos()).sum();
        let sum_y: f32 = self.nodes.values().map(|n| n.coherence.phase.sin()).sum();

        (sum_x * sum_x + sum_y * sum_y).sqrt() / n
    }

    /// Count nodes that would respond to coordination
    pub fn responsive_nodes(&self, threshold: f32) -> usize {
        self.nodes
            .values()
            .filter(|n| n.coherence.is_synchronized(threshold))
            .count()
    }
}

fn rand_float() -> f32 {
    // Simple LCG for the example (not cryptographic).
    // An AtomicU32 avoids the unsafe `static mut` access.
    use std::sync::atomic::{AtomicU32, Ordering};
    static SEED: AtomicU32 = AtomicU32::new(12345);
    let next = SEED
        .load(Ordering::Relaxed)
        .wrapping_mul(1_103_515_245)
        .wrapping_add(12345);
    SEED.store(next, Ordering::Relaxed);
    (next as f32) / (u32::MAX as f32)
}

fn main() {
    println!("=== Tier 2: Swarm Intelligence Without Central Control ===\n");

    // Create a swarm with 100 nodes at 20% connectivity
    let mut swarm = SwarmNetwork::new(100, 0.2);

    println!("Swarm initialized: {} nodes", swarm.nodes.len());
    println!(
        "Initial synchronization: {:.2}",
        swarm.synchronization_order_parameter()
    );

    // Let the swarm synchronize
    println!("\nPhase synchronization emerging...");
    for step in 0..50 {
        swarm.step(0.1);

        if step % 10 == 0 {
            println!(
                "  Step {}: sync = {:.3}, responsive = {}",
                step,
                swarm.synchronization_order_parameter(),
                swarm.responsive_nodes(0.7)
            );
        }
    }

    println!(
        "\nFinal synchronization: {:.2}",
        swarm.synchronization_order_parameter()
    );
    println!(
        "Nodes ready for coordination: {}",
        swarm.responsive_nodes(0.7)
    );

    // Inject a local event - triggers a local reflex
    println!("\nInjecting local event at node 5...");
    swarm.inject_observation(&NodeId(5), 2.0);
    swarm.step(0.1);

    // Check for local decisions
    let decisions: usize = swarm
        .message_queue
        .iter()
        .filter(|m| matches!(m.content, MessageContent::LocalDecision { .. }))
        .count();
    println!("  Local decisions triggered: {}", decisions);

    // Simulate partial failure
    println!("\nSimulating partial failure (removing 30% of nodes)...");
    let nodes_to_remove: Vec<NodeId> = swarm.nodes.keys().take(30).cloned().collect();

    for node_id in nodes_to_remove {
        swarm.nodes.remove(&node_id);
    }

    println!("  Remaining nodes: {}", swarm.nodes.len());

    // Let the swarm recover
    println!("\nRecovery phase...");
    for step in 0..30 {
        swarm.step(0.1);

        if step % 10 == 0 {
            println!(
                "  Step {}: sync = {:.3}, responsive = {}",
                step,
                swarm.synchronization_order_parameter(),
                swarm.responsive_nodes(0.7)
            );
        }
    }

    println!(
        "\nPost-failure synchronization: {:.2}",
        swarm.synchronization_order_parameter()
    );
    println!("System continues operating with reduced capacity");

    println!("\n=== Key Benefits ===");
    println!("- No central coordinator - emergent synchronization");
    println!("- Local reflexes handle local events");
    println!("- Coherence gates synchronize only when needed");
    println!("- Partial failure is normal, not catastrophic");
    println!("- Intelligence emerges from coordination, not command");
    println!("\nThis beats cloud-centric designs for scale and resilience.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_local_reflex() {
        let mut reflex = LocalReflex::new(NodeId(0), 1.0);

        // Below threshold
        assert!(reflex.process(0.3, 0).is_none());
        assert!(reflex.process(0.3, 1).is_none());

        // Accumulates and fires
        let result = reflex.process(1.0, 2);
        assert!(result.is_some());

        // Refractory
        assert!(reflex.process(2.0, 3).is_none());
    }

    #[test]
    fn test_coherence_synchronization() {
        let mut gate = CoherenceGate::new(1.0, 2.0);

        // Not synchronized without neighbors
        assert!(!gate.is_synchronized(0.5));

        // Add synchronized neighbors
        gate.receive_phase(NodeId(1), gate.phase);
        gate.receive_phase(NodeId(2), gate.phase + 0.1);

        assert!(gate.is_synchronized(0.9));
    }

    #[test]
    fn test_collective_decision() {
        let mut decision = CollectiveDecision::new(1, 0.5, 1000);

        // Not enough votes
        decision.record_vote(NodeId(0), true);
        assert!(decision.result(4, 0).is_none());

        // Quorum reached
        decision.record_vote(NodeId(1), true);
        assert_eq!(decision.result(4, 0), Some(true));
    }

    #[test]
    fn test_swarm_network_creation() {
        let swarm = SwarmNetwork::new(10, 0.3);
        assert_eq!(swarm.nodes.len(), 10);
    }
}
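Both `is_synchronized` and `synchronization_order_parameter` above reduce to the Kuramoto order parameter r = |(1/N) Σ_j e^{iθ_j}|, which is 1 for perfectly aligned phases and near 0 for phases spread evenly around the circle. A minimal standalone check of the two extremes (the helper name `order_parameter` is illustrative):

```rust
use std::f32::consts::PI;

/// Kuramoto order parameter: magnitude of the mean phase vector.
fn order_parameter(phases: &[f32]) -> f32 {
    let n = phases.len() as f32;
    let x: f32 = phases.iter().map(|p| p.cos()).sum();
    let y: f32 = phases.iter().map(|p| p.sin()).sum();
    (x * x + y * y).sqrt() / n
}

fn main() {
    // All oscillators at the same phase: fully coherent
    let aligned = vec![0.3_f32; 8];
    // Eight phases evenly spaced around the circle: incoherent
    let spread: Vec<f32> = (0..8).map(|i| i as f32 * PI / 4.0).collect();

    assert!(order_parameter(&aligned) > 0.99);
    assert!(order_parameter(&spread) < 0.1);
    println!("aligned r = {:.3}", order_parameter(&aligned));
}
```

This single scalar is what lets a node decide locally whether the swarm is coherent enough to coordinate, without any global view.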
705 vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t3_bio_machine.rs (vendored, new file)
@@ -0,0 +1,705 @@
//! # Tier 3: Hybrid Biological-Machine Interfaces
//!
//! Assistive tech, rehabilitation, augmentation.
//!
//! ## What Changes
//! - Machine learning adapts to biological timing
//! - Reflex loops integrate with human reflexes
//! - Learning happens through use, not retraining
//!
//! ## Why This Matters
//! - Machines stop fighting biology
//! - Interfaces become intuitive
//! - Ethical and technical alignment improves
//!
//! This is cutting-edge but real.

use std::collections::{HashMap, VecDeque};

/// Biological signal from the user
#[derive(Clone, Debug)]
pub struct BioSignal {
    pub timestamp_ms: u64,
    pub signal_type: BioSignalType,
    pub channel: u8,
    pub amplitude: f32,
    pub frequency: Option<f32>,
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum BioSignalType {
    /// Electromyography (muscle)
    EMG,
    /// Electroencephalography (brain)
    EEG,
    /// Electrooculography (eye)
    EOG,
    /// Force sensor
    Force,
    /// Position sensor
    Position,
    /// User intent estimate
    Intent,
}

/// Machine action output
#[derive(Clone, Debug)]
pub struct MachineAction {
    pub timestamp_ms: u64,
    pub action_type: ActionType,
    pub magnitude: f32,
    pub velocity: f32,
    pub duration_ms: u64,
}

#[derive(Clone, Debug)]
pub enum ActionType {
    /// Motor movement
    Motor { joint: String, target: f32 },
    /// Haptic feedback
    Haptic { pattern: String, intensity: f32 },
    /// Visual feedback
    Visual { indicator: String },
    /// Force assist
    ForceAssist {
        direction: (f32, f32, f32),
        magnitude: f32,
    },
}

/// Biological timing adapter - matches machine timing to neural rhythms
pub struct BiologicalTimingAdapter {
    /// User's natural reaction time (learned)
    pub reaction_time_ms: f32,
    /// User's preferred movement duration
    pub movement_duration_ms: f32,
    /// Natural rhythm frequency (Hz)
    pub natural_rhythm_hz: f32,
    /// Adaptation rate
    pub learning_rate: f32,
    /// Timing history for learning
    pub timing_history: VecDeque<(u64, u64)>, // (stimulus, response)
}

impl BiologicalTimingAdapter {
    pub fn new() -> Self {
        Self {
            reaction_time_ms: 200.0, // Default human reaction time
            movement_duration_ms: 500.0,
            natural_rhythm_hz: 1.0, // 1 Hz natural movement
            learning_rate: 0.1,
            timing_history: VecDeque::new(),
        }
    }

    /// Learn from observed stimulus-response timing
    pub fn observe_timing(&mut self, stimulus_time: u64, response_time: u64) {
        // saturating_sub guards against out-of-order timestamps
        let observed_rt = response_time.saturating_sub(stimulus_time) as f32;

        // Update the reaction-time estimate (exponential moving average)
        self.reaction_time_ms =
            self.reaction_time_ms * (1.0 - self.learning_rate) + observed_rt * self.learning_rate;

        self.timing_history
            .push_back((stimulus_time, response_time));
        if self.timing_history.len() > 100 {
            self.timing_history.pop_front();
        }

        // Learn the natural rhythm from inter-response intervals
        if self.timing_history.len() > 2 {
            let history: Vec<_> = self.timing_history.iter().cloned().collect();
            let intervals: Vec<_> = history
                .windows(2)
                .map(|w| w[1].1.saturating_sub(w[0].1) as f32)
                .collect();

            if !intervals.is_empty() {
                let avg_interval: f32 = intervals.iter().sum::<f32>() / intervals.len() as f32;
                self.natural_rhythm_hz = 1000.0 / avg_interval;
            }
        }
    }

    /// Get the optimal delay for a machine response
    pub fn optimal_response_delay(&self, urgency: f32) -> u64 {
        // Higher urgency = faster response, but respect biological limits
        let min_delay = 20.0; // 20 ms minimum
        let delay = self.reaction_time_ms * (1.0 - urgency * 0.5);
        delay.max(min_delay) as u64
    }

    /// Get a movement duration matched to the user
    pub fn matched_duration(&self, distance: f32) -> u64 {
        // Fitts' law inspired: longer movements take longer
        let base = self.movement_duration_ms;
        (base * (1.0 + distance.ln().max(0.0))) as u64
    }
}
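// The update rule in `observe_timing` above (and in `learn_user_reflex`
// below) is a plain exponential moving average:
//     estimate = estimate * (1 - lr) + observation * lr
// A minimal standalone sketch of its convergence behavior (the `ema`
// helper is illustrative, not part of the crate):
//
// ```rust
// /// One exponential-moving-average update step.
// fn ema(estimate: f32, observation: f32, lr: f32) -> f32 {
//     estimate * (1.0 - lr) + observation * lr
// }
//
// fn main() {
//     let mut rt = 200.0_f32; // default reaction time (ms)
//     for _ in 0..50 {
//         // user consistently responds in 150 ms
//         rt = ema(rt, 150.0, 0.1);
//     }
//     // the estimate converges toward the observed value
//     assert!((rt - 150.0).abs() < 1.0);
//     println!("learned reaction time: {rt:.1} ms");
// }
// ```
//
// The learning rate trades adaptation speed against noise rejection:
// after n updates the initial estimate's weight decays as (1 - lr)^n.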

/// Reflex integrator - coordinates machine reflexes with user reflexes
pub struct ReflexIntegrator {
    /// User reflex patterns (learned)
    pub user_reflexes: HashMap<String, UserReflexPattern>,
    /// Machine reflex responses
    pub machine_reflexes: Vec<MachineReflex>,
    /// Integration mode
    pub mode: IntegrationMode,
}

#[derive(Clone, Debug)]
pub struct UserReflexPattern {
    pub trigger_signal: BioSignalType,
    pub trigger_threshold: f32,
    pub typical_response_time_ms: f32,
    pub typical_response_magnitude: f32,
    pub observations: u64,
}

pub struct MachineReflex {
    pub name: String,
    pub trigger_threshold: f32,
    pub response: ActionType,
    pub latency_ms: u64,
    pub enabled: bool,
}

#[derive(Clone, Debug, PartialEq)]
pub enum IntegrationMode {
    /// Machine assists user reflexes
    Assist,
    /// Machine complements (fills gaps)
    Complement,
    /// Machine amplifies the user's response
    Amplify { gain: f32 },
    /// Machine takes over (user exhausted)
    Takeover,
}

impl ReflexIntegrator {
    pub fn new() -> Self {
        Self {
            user_reflexes: HashMap::new(),
            machine_reflexes: Vec::new(),
            mode: IntegrationMode::Assist,
        }
    }

    /// Learn a user reflex pattern
    pub fn learn_user_reflex(&mut self, signal: &BioSignal, response_time: f32, response_mag: f32) {
        let pattern = self
            .user_reflexes
            .entry(format!("{:?}_{}", signal.signal_type, signal.channel))
            .or_insert_with(|| UserReflexPattern {
                trigger_signal: signal.signal_type.clone(),
                trigger_threshold: signal.amplitude,
                typical_response_time_ms: response_time,
                typical_response_magnitude: response_mag,
                observations: 0,
            });

        // Online learning
        let lr = 0.1;
        pattern.typical_response_time_ms =
            pattern.typical_response_time_ms * (1.0 - lr) + response_time * lr;
        pattern.typical_response_magnitude =
            pattern.typical_response_magnitude * (1.0 - lr) + response_mag * lr;
        pattern.observations += 1;
    }

    /// Determine the machine response based on the user's reflex state
    pub fn integrate(&self, signal: &BioSignal, user_responding: bool) -> Option<MachineAction> {
        let pattern_key = format!("{:?}_{}", signal.signal_type, signal.channel);

        match &self.mode {
            IntegrationMode::Assist => {
                // Only help if the user is slow to respond
                if !user_responding {
                    if let Some(pattern) = self.user_reflexes.get(&pattern_key) {
                        return Some(MachineAction {
                            timestamp_ms: signal.timestamp_ms,
                            action_type: ActionType::ForceAssist {
                                direction: (0.0, 0.0, 1.0),
                                magnitude: pattern.typical_response_magnitude * 0.5,
                            },
                            magnitude: pattern.typical_response_magnitude * 0.5,
                            velocity: 1.0,
                            duration_ms: 100,
                        });
                    }
                }
            }
            IntegrationMode::Amplify { gain } => {
                // Always amplify the user's response
                if user_responding {
                    if let Some(pattern) = self.user_reflexes.get(&pattern_key) {
                        return Some(MachineAction {
                            timestamp_ms: signal.timestamp_ms,
                            action_type: ActionType::ForceAssist {
                                direction: (0.0, 0.0, 1.0),
                                magnitude: pattern.typical_response_magnitude * gain,
                            },
                            magnitude: pattern.typical_response_magnitude * gain,
                            velocity: 1.0,
                            duration_ms: 50,
                        });
                    }
                }
            }
            IntegrationMode::Takeover => {
                // Machine handles everything
                return Some(MachineAction {
                    timestamp_ms: signal.timestamp_ms,
                    action_type: ActionType::Motor {
                        joint: "default".to_string(),
                        target: 0.0,
                    },
                    magnitude: 1.0,
                    velocity: 0.5,
                    duration_ms: 200,
                });
            }
            _ => {}
        }

        None
    }
}
/// Intent decoder - learns user intention from patterns
pub struct IntentDecoder {
    /// Signal patterns associated with each intent
    pub intent_patterns: HashMap<String, IntentPattern>,
    /// Recent signals for pattern matching
    pub signal_buffer: VecDeque<BioSignal>,
    /// Confidence threshold for action
    pub confidence_threshold: f32,
}

#[derive(Clone, Debug)]
pub struct IntentPattern {
    pub name: String,
    pub template: Vec<(BioSignalType, f32, f32)>, // (type, amplitude_mean, amplitude_std)
    pub occurrences: u64,
    pub success_rate: f32,
}

impl IntentDecoder {
    pub fn new() -> Self {
        Self {
            intent_patterns: HashMap::new(),
            signal_buffer: VecDeque::new(),
            confidence_threshold: 0.7,
        }
    }

    /// Add signal to buffer
    pub fn observe(&mut self, signal: BioSignal) {
        self.signal_buffer.push_back(signal);
        if self.signal_buffer.len() > 50 {
            self.signal_buffer.pop_front();
        }
    }

    /// Learn intent from labeled example
    pub fn learn_intent(&mut self, intent_name: &str, signals: &[BioSignal]) {
        let template: Vec<_> = signals
            .iter()
            .map(|s| (s.signal_type.clone(), s.amplitude, 0.2)) // Initial std = 0.2
            .collect();

        let pattern = self
            .intent_patterns
            .entry(intent_name.to_string())
            .or_insert_with(|| IntentPattern {
                name: intent_name.to_string(),
                template: template.clone(),
                occurrences: 0,
                success_rate: 0.5,
            });

        pattern.occurrences += 1;

        // Update template with online learning
        for (i, sig) in signals.iter().enumerate() {
            if i < pattern.template.len() {
                let (_, ref mut mean, ref mut std) = pattern.template[i];
                let lr = 0.1;
                *mean = *mean * (1.0 - lr) + sig.amplitude * lr;
                *std = *std * (1.0 - lr) + (sig.amplitude - *mean).abs() * lr;
            }
        }
    }

    /// Decode intent from current buffer
    pub fn decode(&self) -> Option<(String, f32)> {
        if self.signal_buffer.len() < 3 {
            return None;
        }

        let mut best_match: Option<(String, f32)> = None;

        for (name, pattern) in &self.intent_patterns {
            let confidence = self.match_pattern(pattern);

            if confidence > self.confidence_threshold {
                if best_match
                    .as_ref()
                    .map(|(_, c)| confidence > *c)
                    .unwrap_or(true)
                {
                    best_match = Some((name.clone(), confidence));
                }
            }
        }

        best_match
    }

    fn match_pattern(&self, pattern: &IntentPattern) -> f32 {
        if self.signal_buffer.len() < pattern.template.len() {
            return 0.0;
        }

        let recent: Vec<_> = self
            .signal_buffer
            .iter()
            .rev()
            .take(pattern.template.len())
            .collect();

        let mut match_score = 0.0;
        let mut count = 0;

        for (i, (sig_type, mean, std)) in pattern.template.iter().enumerate() {
            if i < recent.len() {
                let signal = recent[i];
                if signal.signal_type == *sig_type {
                    let z = (signal.amplitude - mean).abs() / std.max(0.01);
                    let score = (-z * z / 2.0).exp(); // Gaussian match
                    match_score += score;
                    count += 1;
                }
            }
        }

        if count > 0 {
            match_score / count as f32
        } else {
            0.0
        }
    }

    /// Report feedback on decoded intent
    pub fn feedback(&mut self, intent_name: &str, was_correct: bool) {
        if let Some(pattern) = self.intent_patterns.get_mut(intent_name) {
            let lr = 0.1;
            let target = if was_correct { 1.0 } else { 0.0 };
            pattern.success_rate = pattern.success_rate * (1.0 - lr) + target * lr;
        }
    }
}
/// Complete bio-machine interface
pub struct BioMachineInterface {
    pub name: String,
    pub timing: BiologicalTimingAdapter,
    pub reflexes: ReflexIntegrator,
    pub intent: IntentDecoder,
    pub timestamp: u64,
    /// Adaptation history
    pub adaptation_log: Vec<AdaptationEvent>,
}

#[derive(Clone, Debug)]
pub struct AdaptationEvent {
    pub timestamp: u64,
    pub event_type: String,
    pub old_value: f32,
    pub new_value: f32,
}

impl BioMachineInterface {
    pub fn new(name: &str) -> Self {
        Self {
            name: name.to_string(),
            timing: BiologicalTimingAdapter::new(),
            reflexes: ReflexIntegrator::new(),
            intent: IntentDecoder::new(),
            timestamp: 0,
            adaptation_log: Vec::new(),
        }
    }

    /// Process biological signal through the interface
    pub fn process(&mut self, signal: BioSignal) -> Option<MachineAction> {
        self.timestamp = signal.timestamp_ms;

        // 1. Intent decoding
        self.intent.observe(signal.clone());

        if let Some((intent, confidence)) = self.intent.decode() {
            // Intent detected - generate appropriate action
            let delay = self.timing.optimal_response_delay(confidence);

            return Some(MachineAction {
                timestamp_ms: self.timestamp + delay,
                action_type: ActionType::Haptic {
                    pattern: format!("intent_{}", intent),
                    intensity: confidence,
                },
                magnitude: confidence,
                velocity: 1.0,
                duration_ms: self.timing.matched_duration(1.0),
            });
        }

        // 2. Reflex integration
        // Check if user is responding (simplified)
        let user_responding = signal.amplitude > 0.3;

        if let Some(action) = self.reflexes.integrate(&signal, user_responding) {
            return Some(action);
        }

        None
    }

    /// Learn from user interaction
    pub fn learn(&mut self, signal: &BioSignal, response_time: f32, was_successful: bool) {
        let old_rt = self.timing.reaction_time_ms;

        self.timing.observe_timing(
            signal.timestamp_ms,
            signal.timestamp_ms + response_time as u64,
        );

        self.reflexes
            .learn_user_reflex(signal, response_time, signal.amplitude);

        // Log adaptation
        if (old_rt - self.timing.reaction_time_ms).abs() > 5.0 {
            self.adaptation_log.push(AdaptationEvent {
                timestamp: self.timestamp,
                event_type: "reaction_time".to_string(),
                old_value: old_rt,
                new_value: self.timing.reaction_time_ms,
            });
        }
    }

    /// Get interface status
    pub fn status(&self) -> InterfaceStatus {
        InterfaceStatus {
            adapted_reaction_time_ms: self.timing.reaction_time_ms,
            natural_rhythm_hz: self.timing.natural_rhythm_hz,
            integration_mode: self.reflexes.mode.clone(),
            known_intents: self.intent.intent_patterns.len(),
            known_reflexes: self.reflexes.user_reflexes.len(),
            adaptations_made: self.adaptation_log.len(),
        }
    }
}

#[derive(Debug)]
pub struct InterfaceStatus {
    pub adapted_reaction_time_ms: f32,
    pub natural_rhythm_hz: f32,
    pub integration_mode: IntegrationMode,
    pub known_intents: usize,
    pub known_reflexes: usize,
    pub adaptations_made: usize,
}
fn main() {
    println!("=== Tier 3: Hybrid Biological-Machine Interfaces ===\n");

    let mut interface = BioMachineInterface::new("Prosthetic Arm");

    // Register some intents
    println!("Learning user intents...");
    for i in 0..20 {
        // Simulate grip intent pattern
        let grip_signals = vec![
            BioSignal {
                timestamp_ms: i * 1000,
                signal_type: BioSignalType::EMG,
                channel: 0,
                amplitude: 0.8 + (i as f32 * 0.1).sin() * 0.1,
                frequency: Some(150.0),
            },
            BioSignal {
                timestamp_ms: i * 1000 + 50,
                signal_type: BioSignalType::EMG,
                channel: 1,
                amplitude: 0.6 + (i as f32 * 0.1).sin() * 0.1,
                frequency: Some(120.0),
            },
        ];
        interface.intent.learn_intent("grip", &grip_signals);

        // Simulate release intent
        let release_signals = vec![BioSignal {
            timestamp_ms: i * 1000,
            signal_type: BioSignalType::EMG,
            channel: 0,
            amplitude: 0.2,
            frequency: Some(50.0),
        }];
        interface.intent.learn_intent("release", &release_signals);
    }

    println!(
        "  Intents learned: {}",
        interface.intent.intent_patterns.len()
    );

    // Simulate usage to adapt timing
    println!("\nAdapting to user timing...");
    for i in 0..50 {
        let signal = BioSignal {
            timestamp_ms: i * 500,
            signal_type: BioSignalType::EMG,
            channel: 0,
            amplitude: 0.7,
            frequency: Some(100.0),
        };

        // Simulate user response time varying around 180ms
        let response_time = 180.0 + (i as f32 * 0.2).sin() * 20.0;

        interface.learn(&signal, response_time, true);

        if i % 10 == 0 {
            println!(
                "  Step {}: adapted RT = {:.1}ms",
                i, interface.timing.reaction_time_ms
            );
        }
    }

    // Test intent decoding
    println!("\nTesting intent decoding...");

    // Grip intent
    for _ in 0..3 {
        interface.intent.observe(BioSignal {
            timestamp_ms: interface.timestamp + 10,
            signal_type: BioSignalType::EMG,
            channel: 0,
            amplitude: 0.75,
            frequency: Some(140.0),
        });
    }

    if let Some((intent, confidence)) = interface.intent.decode() {
        println!(
            "  Decoded intent: {} (confidence: {:.2})",
            intent, confidence
        );
    }

    // Test machine action generation
    println!("\nGenerating machine actions...");
    let signal = BioSignal {
        timestamp_ms: interface.timestamp + 100,
        signal_type: BioSignalType::EMG,
        channel: 0,
        amplitude: 0.8,
        frequency: Some(150.0),
    };

    if let Some(action) = interface.process(signal) {
        println!("  Action: {:?}", action.action_type);
        println!(
            "  Timing: delay={}ms, duration={}ms",
            action.timestamp_ms - interface.timestamp,
            action.duration_ms
        );
    }

    // Change integration mode
    println!("\nChanging to amplification mode...");
    interface.reflexes.mode = IntegrationMode::Amplify { gain: 1.5 };

    let status = interface.status();
    println!("\n=== Interface Status ===");
    println!(
        "  Adapted reaction time: {:.1}ms",
        status.adapted_reaction_time_ms
    );
    println!("  Natural rhythm: {:.2}Hz", status.natural_rhythm_hz);
    println!("  Integration mode: {:?}", status.integration_mode);
    println!("  Known intents: {}", status.known_intents);
    println!("  Known reflexes: {}", status.known_reflexes);
    println!("  Adaptations made: {}", status.adaptations_made);

    println!("\n=== Key Benefits ===");
    println!("- Machine timing adapts to biological rhythms");
    println!("- Reflex loops integrate with human reflexes");
    println!("- Learning happens through use, not retraining");
    println!("- Machines stop fighting biology");
    println!("- Interfaces become intuitive over time");
    println!("\nThis is cutting-edge but real.");
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_timing_adaptation() {
        let mut adapter = BiologicalTimingAdapter::new();

        // Initial reaction time
        let initial = adapter.reaction_time_ms;

        // Learn faster reaction times
        for i in 0..10 {
            adapter.observe_timing(i * 1000, i * 1000 + 150);
        }

        assert!(adapter.reaction_time_ms < initial);
    }

    #[test]
    fn test_intent_learning() {
        let mut decoder = IntentDecoder::new();

        let signals = vec![BioSignal {
            timestamp_ms: 0,
            signal_type: BioSignalType::EMG,
            channel: 0,
            amplitude: 0.8,
            frequency: None,
        }];

        decoder.learn_intent("test", &signals);
        assert!(decoder.intent_patterns.contains_key("test"));
    }

    #[test]
    fn test_reflex_integration() {
        let mut integrator = ReflexIntegrator::new();
        integrator.mode = IntegrationMode::Assist;

        // Learn a user reflex
        let signal = BioSignal {
            timestamp_ms: 0,
            signal_type: BioSignalType::EMG,
            channel: 0,
            amplitude: 0.5,
            frequency: None,
        };

        integrator.learn_user_reflex(&signal, 200.0, 0.8);

        // When user not responding, machine should assist
        let action = integrator.integrate(&signal, false);
        assert!(action.is_some());

        // When user is responding, no assist needed
        let action = integrator.integrate(&signal, true);
        assert!(action.is_none());
    }
}
533
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t3_self_awareness.rs
vendored
Normal file
@@ -0,0 +1,533 @@
//! # Tier 3: Machine Self-Awareness Primitives
//!
//! Not consciousness, but structural self-sensing.
//!
//! ## What Changes
//! - Systems monitor their own coherence
//! - Learning adjusts internal organization
//! - Failure is sensed before performance drops
//!
//! ## Why This Matters
//! - Systems can say "I am becoming unstable"
//! - Maintenance becomes anticipatory
//! - This is a prerequisite for trustworthy autonomy
//!
//! This is novel and publishable.

use std::collections::{HashMap, VecDeque};

/// Internal state that the system monitors about itself
#[derive(Clone, Debug)]
pub struct InternalState {
    pub timestamp: u64,
    /// Processing coherence (0-1): how well modules are synchronized
    pub coherence: f32,
    /// Attention focus (what is being processed)
    pub attention_target: Option<String>,
    /// Confidence in current processing
    pub confidence: f32,
    /// Energy available for computation
    pub energy_budget: f32,
    /// Error rate in recent operations
    pub error_rate: f32,
}

/// Self-model that tracks the system's own capabilities
#[derive(Clone, Debug)]
pub struct SelfModel {
    /// What capabilities does this system have?
    pub capabilities: HashMap<String, CapabilityState>,
    /// Current operating mode
    pub operating_mode: OperatingMode,
    /// Predicted time until degradation
    pub time_to_degradation: Option<u64>,
    /// Self-assessed reliability
    pub reliability_estimate: f32,
}

#[derive(Clone, Debug)]
pub struct CapabilityState {
    pub name: String,
    pub enabled: bool,
    pub current_performance: f32,
    pub baseline_performance: f32,
    pub degradation_rate: f32,
}

#[derive(Clone, Debug, PartialEq)]
pub enum OperatingMode {
    Optimal,
    Degraded { reason: String },
    Recovery,
    SafeMode,
}

/// Metacognitive monitor that observes internal processing
pub struct MetacognitiveMonitor {
    /// History of internal states
    pub state_history: VecDeque<InternalState>,
    /// Coherence threshold for alarm
    pub coherence_threshold: f32,
    /// Self-model
    pub self_model: SelfModel,
    /// Anomaly detector for internal states
    pub internal_anomaly_threshold: f32,
}
impl MetacognitiveMonitor {
    pub fn new() -> Self {
        Self {
            state_history: VecDeque::new(),
            coherence_threshold: 0.7,
            self_model: SelfModel {
                capabilities: HashMap::new(),
                operating_mode: OperatingMode::Optimal,
                time_to_degradation: None,
                reliability_estimate: 1.0,
            },
            internal_anomaly_threshold: 2.0, // Standard deviations
        }
    }

    /// Register a capability
    pub fn register_capability(&mut self, name: &str, baseline: f32) {
        self.self_model.capabilities.insert(
            name.to_string(),
            CapabilityState {
                name: name.to_string(),
                enabled: true,
                current_performance: baseline,
                baseline_performance: baseline,
                degradation_rate: 0.0,
            },
        );
    }

    /// Observe current internal state
    pub fn observe(&mut self, state: InternalState) -> SelfAwarenessEvent {
        self.state_history.push_back(state.clone());
        if self.state_history.len() > 1000 {
            self.state_history.pop_front();
        }

        // Check coherence
        if state.coherence < self.coherence_threshold {
            self.self_model.operating_mode = OperatingMode::Degraded {
                reason: format!("Low coherence: {:.2}", state.coherence),
            };

            return SelfAwarenessEvent::IncoherenceDetected {
                current_coherence: state.coherence,
                threshold: self.coherence_threshold,
                recommendation: "Reduce processing load or increase synchronization".to_string(),
            };
        }

        // Detect internal anomalies
        if self.state_history.len() > 10 {
            let avg_confidence: f32 = self.state_history.iter().map(|s| s.confidence).sum::<f32>()
                / self.state_history.len() as f32;

            let std_dev: f32 = (self
                .state_history
                .iter()
                .map(|s| (s.confidence - avg_confidence).powi(2))
                .sum::<f32>()
                / self.state_history.len() as f32)
                .sqrt();

            let z_score = (state.confidence - avg_confidence).abs() / std_dev.max(0.01);

            if z_score > self.internal_anomaly_threshold {
                return SelfAwarenessEvent::InternalAnomaly {
                    metric: "confidence".to_string(),
                    z_score,
                    interpretation: if state.confidence < avg_confidence {
                        "Processing uncertainty spike".to_string()
                    } else {
                        "Overconfidence detected".to_string()
                    },
                };
            }
        }

        // Predict degradation
        self.predict_degradation();

        // Check energy
        if state.energy_budget < 0.2 {
            return SelfAwarenessEvent::ResourceWarning {
                resource: "energy".to_string(),
                current: state.energy_budget,
                threshold: 0.2,
                action: "Enter power-saving mode".to_string(),
            };
        }

        SelfAwarenessEvent::Stable {
            coherence: state.coherence,
            confidence: state.confidence,
            operating_mode: self.self_model.operating_mode.clone(),
        }
    }

    /// Update capability performance
    pub fn update_capability(&mut self, name: &str, performance: f32) {
        if let Some(cap) = self.self_model.capabilities.get_mut(name) {
            let old_perf = cap.current_performance;
            cap.current_performance = performance;

            // Track degradation rate
            let degradation = (old_perf - performance) / old_perf.max(0.01);
            cap.degradation_rate = cap.degradation_rate * 0.9 + degradation * 0.1;

            // Update reliability estimate
            self.update_reliability();
        }
    }

    fn predict_degradation(&mut self) {
        // Check if any capability is degrading
        for (_, cap) in &self.self_model.capabilities {
            if cap.degradation_rate > 0.01 {
                // Extrapolate time to failure
                let performance_remaining = cap.current_performance - 0.5; // Minimum acceptable
                if cap.degradation_rate > 0.0 && performance_remaining > 0.0 {
                    let time_to_fail = (performance_remaining / cap.degradation_rate) as u64;
                    self.self_model.time_to_degradation = Some(
                        time_to_fail.min(self.self_model.time_to_degradation.unwrap_or(u64::MAX)),
                    );
                }
            }
        }
    }

    fn update_reliability(&mut self) {
        let total_perf: f32 = self
            .self_model
            .capabilities
            .values()
            .map(|c| c.current_performance / c.baseline_performance.max(0.01))
            .sum();

        let n = self.self_model.capabilities.len().max(1) as f32;
        self.self_model.reliability_estimate = total_perf / n;
    }

    /// Get self-assessment
    pub fn self_assess(&self) -> SelfAssessment {
        SelfAssessment {
            operating_mode: self.self_model.operating_mode.clone(),
            reliability: self.self_model.reliability_estimate,
            time_to_degradation: self.self_model.time_to_degradation,
            capabilities_status: self
                .self_model
                .capabilities
                .iter()
                .map(|(k, v)| {
                    (
                        k.clone(),
                        v.current_performance / v.baseline_performance.max(0.01),
                    )
                })
                .collect(),
            recommendation: self.generate_recommendation(),
        }
    }

    fn generate_recommendation(&self) -> String {
        match &self.self_model.operating_mode {
            OperatingMode::Optimal => "System operating normally".to_string(),
            OperatingMode::Degraded { reason } => {
                format!("Degraded: {}. Consider maintenance.", reason)
            }
            OperatingMode::Recovery => "Recovery in progress. Avoid heavy loads.".to_string(),
            OperatingMode::SafeMode => "Safe mode active. Minimal operations only.".to_string(),
        }
    }
}
/// Events that indicate self-awareness
#[derive(Clone, Debug)]
pub enum SelfAwarenessEvent {
    /// System detected low internal coherence
    IncoherenceDetected {
        current_coherence: f32,
        threshold: f32,
        recommendation: String,
    },
    /// Internal processing anomaly detected
    InternalAnomaly {
        metric: String,
        z_score: f32,
        interpretation: String,
    },
    /// Resource warning
    ResourceWarning {
        resource: String,
        current: f32,
        threshold: f32,
        action: String,
    },
    /// Capability degradation predicted
    DegradationPredicted {
        capability: String,
        current_performance: f32,
        predicted_failure_time: u64,
    },
    /// System is stable
    Stable {
        coherence: f32,
        confidence: f32,
        operating_mode: OperatingMode,
    },
}

/// Self-assessment report
#[derive(Clone, Debug)]
pub struct SelfAssessment {
    pub operating_mode: OperatingMode,
    pub reliability: f32,
    pub time_to_degradation: Option<u64>,
    pub capabilities_status: HashMap<String, f32>,
    pub recommendation: String,
}

/// Complete self-aware system
pub struct SelfAwareSystem {
    pub name: String,
    pub monitor: MetacognitiveMonitor,
    /// Processing modules with their coherence
    pub modules: HashMap<String, f32>,
    /// Attention mechanism
    pub attention_focus: Option<String>,
    /// Current timestamp
    pub timestamp: u64,
}
impl SelfAwareSystem {
    pub fn new(name: &str) -> Self {
        let mut system = Self {
            name: name.to_string(),
            monitor: MetacognitiveMonitor::new(),
            modules: HashMap::new(),
            attention_focus: None,
            timestamp: 0,
        };

        // Register default capabilities
        system.monitor.register_capability("perception", 1.0);
        system.monitor.register_capability("reasoning", 1.0);
        system.monitor.register_capability("action", 1.0);
        system.monitor.register_capability("learning", 1.0);

        system
    }

    /// Add a processing module
    pub fn add_module(&mut self, name: &str) {
        self.modules.insert(name.to_string(), 1.0);
    }

    /// Compute current coherence from module phases
    pub fn compute_coherence(&self) -> f32 {
        if self.modules.is_empty() {
            return 1.0;
        }

        let values: Vec<_> = self.modules.values().collect();
        let avg: f32 = values.iter().copied().sum::<f32>() / values.len() as f32;
        let variance: f32 =
            values.iter().map(|&v| (v - avg).powi(2)).sum::<f32>() / values.len() as f32;

        1.0 - variance.sqrt()
    }

    /// Update module state
    pub fn update_module(&mut self, name: &str, value: f32) {
        if let Some(module) = self.modules.get_mut(name) {
            *module = value;
        }
    }

    /// Process a step
    pub fn step(&mut self, energy: f32, error_rate: f32) -> SelfAwarenessEvent {
        self.timestamp += 1;

        let coherence = self.compute_coherence();
        let confidence = 1.0 - error_rate;

        let state = InternalState {
            timestamp: self.timestamp,
            coherence,
            attention_target: self.attention_focus.clone(),
            confidence,
            energy_budget: energy,
            error_rate,
        };

        self.monitor.observe(state)
    }

    /// System tells us about itself
    pub fn introspect(&self) -> String {
        let assessment = self.monitor.self_assess();

        format!(
            "I am {}: {:?}, reliability {:.0}%, {}",
            self.name,
            assessment.operating_mode,
            assessment.reliability * 100.0,
            assessment.recommendation
        )
    }

    /// Can the system express uncertainty?
    pub fn express_uncertainty(&self) -> String {
        let assessment = self.monitor.self_assess();

        if assessment.reliability < 0.5 {
            "I am becoming unstable and should not be trusted for critical decisions.".to_string()
        } else if assessment.reliability < 0.8 {
            "My confidence is reduced. Verification recommended.".to_string()
        } else {
            "I am operating within normal parameters.".to_string()
        }
    }
}
fn main() {
    println!("=== Tier 3: Machine Self-Awareness Primitives ===\n");

    let mut system = SelfAwareSystem::new("Cognitive Agent");

    // Add processing modules
    system.add_module("perception");
    system.add_module("reasoning");
    system.add_module("planning");
    system.add_module("action");

    println!("System initialized: {}\n", system.name);
    println!("Initial introspection: {}", system.introspect());

    // Normal operation
    println!("\nNormal operation...");
    for i in 0..10 {
        let event = system.step(0.9, 0.05);

        if i == 5 {
            println!("  Step {}: {:?}", i, event);
        }
    }
    println!("  Expression: {}", system.express_uncertainty());

    // Simulate gradual degradation
    println!("\nSimulating gradual degradation...");
    for i in 0..20 {
        // Degrade one module progressively
        system.update_module("reasoning", 1.0 - i as f32 * 0.03);
        system
            .monitor
            .update_capability("reasoning", 1.0 - i as f32 * 0.03);

        let event = system.step(0.8 - i as f32 * 0.01, 0.05 + i as f32 * 0.01);

        if i % 5 == 0 {
            println!("  Step {}: {:?}", i, event);
        }
    }

    let assessment = system.monitor.self_assess();
    println!("\n  Self-assessment:");
    println!("    Mode: {:?}", assessment.operating_mode);
    println!("    Reliability: {:.1}%", assessment.reliability * 100.0);
    println!(
        "    Time to degradation: {:?}",
        assessment.time_to_degradation
    );
    println!("    Capabilities: {:?}", assessment.capabilities_status);
    println!("\n  Expression: {}", system.express_uncertainty());

    // Simulate low coherence (modules out of sync)
    println!("\nSimulating incoherence...");
    system.update_module("perception", 0.9);
    system.update_module("reasoning", 0.3);
    system.update_module("planning", 0.7);
    system.update_module("action", 0.5);

    let event = system.step(0.5, 0.2);
    println!("  Event: {:?}", event);
    println!("  Introspection: {}", system.introspect());

    // Simulate low energy
    println!("\nSimulating energy depletion...");
    let event = system.step(0.15, 0.1);
    println!("  Event: {:?}", event);

    // Final self-report
    println!("\n=== Final Self-Report ===");
    println!("{}", system.introspect());
    println!("{}", system.express_uncertainty());

    println!("\n=== Key Benefits ===");
    println!("- Systems can say 'I am becoming unstable'");
    println!("- Failure sensed before performance drops");
    println!("- Maintenance becomes anticipatory");
    println!("- Prerequisite for trustworthy autonomy");
    println!("- Structural self-sensing, not consciousness");
    println!("\nThis is novel and publishable.");
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_coherence_detection() {
        let mut system = SelfAwareSystem::new("test");
        system.add_module("a");
        system.add_module("b");

        // In sync = high coherence
        system.update_module("a", 1.0);
        system.update_module("b", 1.0);
        assert!(system.compute_coherence() > 0.9);

        // Out of sync = low coherence
        system.update_module("a", 1.0);
        system.update_module("b", 0.2);
        assert!(system.compute_coherence() < 0.8);
    }

    #[test]
    fn test_self_assessment() {
        let mut system = SelfAwareSystem::new("test");

        // Normal operation
        for _ in 0..10 {
            system.step(0.9, 0.05);
        }

        let assessment = system.monitor.self_assess();
        assert!(assessment.reliability > 0.9);
    }

    #[test]
    fn test_degradation_prediction() {
        let mut monitor = MetacognitiveMonitor::new();
        monitor.register_capability("test", 1.0);

        // Simulate degradation
        for i in 0..10 {
            monitor.update_capability("test", 1.0 - i as f32 * 0.05);
        }

        // Should predict degradation
        assert!(monitor
            .self_model
            .capabilities
            .get("test")
            .map(|c| c.degradation_rate > 0.0)
            .unwrap_or(false));
    }
}
702
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t3_synthetic_nervous.rs
vendored
Normal file
@@ -0,0 +1,702 @@
//! # Tier 3: Synthetic Nervous Systems for Environments
//!
//! Buildings, factories, cities.
//!
//! ## What Changes
//! - Infrastructure becomes a sensing fabric
//! - Reflexes manage local events
//! - Policy emerges from patterns, not rules
//!
//! ## Why This Matters
//! - Environments respond like organisms
//! - Energy, safety, and flow self-regulate
//! - Central planning gives way to distributed intelligence
//!
//! This is exotic but inevitable.

use std::collections::{HashMap, VecDeque};
use std::f32::consts::PI;

/// A location in the environment
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct LocationId(pub String);

/// A zone grouping multiple locations
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ZoneId(pub String);

/// Environmental sensor reading
#[derive(Clone, Debug)]
pub struct EnvironmentReading {
    pub timestamp: u64,
    pub location: LocationId,
    pub sensor_type: EnvironmentSensor,
    pub value: f32,
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum EnvironmentSensor {
    Temperature,
    Humidity,
    Light,
    Occupancy,
    AirQuality,
    Noise,
    Motion,
    Energy,
    Water,
}

/// Environmental actuator command
#[derive(Clone, Debug)]
pub struct EnvironmentAction {
    pub location: LocationId,
    pub actuator: EnvironmentActuator,
    pub value: f32,
    pub priority: u8,
}

#[derive(Clone, Debug)]
pub enum EnvironmentActuator {
    HVAC { mode: HVACMode },
    Lighting { brightness: f32 },
    Ventilation { flow_rate: f32 },
    Shading { position: f32 },
    DoorLock { locked: bool },
    Alarm { active: bool },
}

#[derive(Clone, Debug)]
pub enum HVACMode {
    Off,
    Heating(f32),
    Cooling(f32),
    Ventilation,
}

/// Local reflex for immediate environmental response
pub struct LocalEnvironmentReflex {
    pub location: LocationId,
    pub sensor_type: EnvironmentSensor,
    pub threshold_low: f32,
    pub threshold_high: f32,
    pub action_low: EnvironmentActuator,
    pub action_high: EnvironmentActuator,
    pub hysteresis: f32,
    pub last_state: i8, // -1 = below, 0 = normal, 1 = above
}
impl LocalEnvironmentReflex {
    pub fn new(
        location: LocationId,
        sensor_type: EnvironmentSensor,
        threshold_low: f32,
        threshold_high: f32,
        action_low: EnvironmentActuator,
        action_high: EnvironmentActuator,
    ) -> Self {
        Self {
            location,
            sensor_type,
            threshold_low,
            threshold_high,
            action_low,
            action_high,
            hysteresis: 0.5,
            last_state: 0,
        }
    }

    /// Check if the reflex should fire for this reading
    pub fn check(&mut self, reading: &EnvironmentReading) -> Option<EnvironmentAction> {
        if reading.location != self.location || reading.sensor_type != self.sensor_type {
            return None;
        }

        // Apply hysteresis: once tripped, the threshold shifts inward so the
        // reflex does not chatter around the boundary
        let effective_low = if self.last_state == -1 {
            self.threshold_low + self.hysteresis
        } else {
            self.threshold_low
        };

        let effective_high = if self.last_state == 1 {
            self.threshold_high - self.hysteresis
        } else {
            self.threshold_high
        };

        if reading.value < effective_low && self.last_state != -1 {
            self.last_state = -1;
            Some(EnvironmentAction {
                location: self.location.clone(),
                actuator: self.action_low.clone(),
                value: reading.value,
                priority: 1,
            })
        } else if reading.value > effective_high && self.last_state != 1 {
            self.last_state = 1;
            Some(EnvironmentAction {
                location: self.location.clone(),
                actuator: self.action_high.clone(),
                value: reading.value,
                priority: 1,
            })
        } else if reading.value >= effective_low && reading.value <= effective_high {
            self.last_state = 0;
            None
        } else {
            None
        }
    }
}
/// Zone-level homeostasis controller
pub struct ZoneHomeostasis {
    pub zone: ZoneId,
    pub locations: Vec<LocationId>,
    pub target_temperature: f32,
    pub target_humidity: f32,
    pub target_light: f32,
    pub adaptation_rate: f32,
    /// Learned occupancy pattern (24 hours)
    pub occupancy_pattern: [f32; 24],
    pub learning_enabled: bool,
}

impl ZoneHomeostasis {
    pub fn new(zone: ZoneId, locations: Vec<LocationId>) -> Self {
        Self {
            zone,
            locations,
            target_temperature: 22.0,
            target_humidity: 50.0,
            target_light: 500.0,
            adaptation_rate: 0.1,
            occupancy_pattern: [0.0; 24],
            learning_enabled: true,
        }
    }

    /// Learn from occupancy patterns (exponential moving average per hour)
    pub fn learn_occupancy(&mut self, hour: usize, occupancy: f32) {
        if self.learning_enabled && hour < 24 {
            self.occupancy_pattern[hour] = self.occupancy_pattern[hour]
                * (1.0 - self.adaptation_rate)
                + occupancy * self.adaptation_rate;
        }
    }

    /// Predict occupancy for pre-conditioning
    pub fn predict_occupancy(&self, hour: usize) -> f32 {
        if hour < 24 {
            self.occupancy_pattern[hour]
        } else {
            0.0
        }
    }

    /// Compute zone-level actions based on aggregate readings
    pub fn compute_action(
        &self,
        readings: &[EnvironmentReading],
        hour: usize,
    ) -> Vec<EnvironmentAction> {
        let mut actions = Vec::new();

        // Filter readings for this zone
        let zone_readings: Vec<_> = readings
            .iter()
            .filter(|r| self.locations.contains(&r.location))
            .collect();

        if zone_readings.is_empty() {
            return actions;
        }

        // Average temperature
        let temp_readings: Vec<_> = zone_readings
            .iter()
            .filter(|r| r.sensor_type == EnvironmentSensor::Temperature)
            .collect();

        if !temp_readings.is_empty() {
            let avg_temp: f32 =
                temp_readings.iter().map(|r| r.value).sum::<f32>() / temp_readings.len() as f32;

            // Adjust target based on predicted occupancy
            let predicted_occ = self.predict_occupancy(hour);
            let effective_target = if predicted_occ > 0.5 {
                self.target_temperature
            } else {
                // Setback when unoccupied
                self.target_temperature - 2.0
            };

            let temp_error = avg_temp - effective_target;

            if temp_error.abs() > 1.0 {
                let mode = if temp_error > 0.0 {
                    HVACMode::Cooling(temp_error.abs().min(5.0))
                } else {
                    HVACMode::Heating(temp_error.abs().min(5.0))
                };

                for loc in &self.locations {
                    actions.push(EnvironmentAction {
                        location: loc.clone(),
                        actuator: EnvironmentActuator::HVAC { mode: mode.clone() },
                        value: temp_error,
                        priority: 2,
                    });
                }
            }
        }

        // Light based on occupancy
        let occupancy_readings: Vec<_> = zone_readings
            .iter()
            .filter(|r| r.sensor_type == EnvironmentSensor::Occupancy)
            .collect();

        if !occupancy_readings.is_empty() {
            let occupied = occupancy_readings.iter().any(|r| r.value > 0.5);

            for loc in &self.locations {
                let brightness = if occupied { 1.0 } else { 0.1 };
                actions.push(EnvironmentAction {
                    location: loc.clone(),
                    actuator: EnvironmentActuator::Lighting { brightness },
                    value: brightness,
                    priority: 3,
                });
            }
        }

        actions
    }
}
/// Global workspace for environment-wide coordination
pub struct EnvironmentWorkspace {
    pub capacity: usize,
    pub items: VecDeque<WorkspaceItem>,
    pub policies: Vec<EmergentPolicy>,
}

#[derive(Clone, Debug)]
pub struct WorkspaceItem {
    pub zone: ZoneId,
    pub observation: String,
    pub salience: f32,
    pub timestamp: u64,
}

#[derive(Clone, Debug)]
pub struct EmergentPolicy {
    pub name: String,
    pub trigger_pattern: String,
    pub action_pattern: String,
    pub confidence: f32,
    pub occurrences: u64,
}

impl EnvironmentWorkspace {
    pub fn new(capacity: usize) -> Self {
        Self {
            capacity,
            items: VecDeque::new(),
            policies: Vec::new(),
        }
    }

    /// Broadcast an observation to the workspace, evicting the least
    /// salient item when at capacity
    pub fn broadcast(&mut self, item: WorkspaceItem) {
        if self.items.len() >= self.capacity {
            // Remove the lowest-salience item
            if let Some(min_idx) = self
                .items
                .iter()
                .enumerate()
                .min_by(|(_, a), (_, b)| a.salience.partial_cmp(&b.salience).unwrap())
                .map(|(i, _)| i)
            {
                self.items.remove(min_idx);
            }
        }
        self.items.push_back(item);
    }

    /// Detect emergent patterns
    pub fn detect_patterns(&mut self) -> Option<EmergentPolicy> {
        // Look for repeated sequences in the workspace
        let observations: Vec<_> = self.items.iter().map(|i| i.observation.clone()).collect();

        if observations.len() < 3 {
            return None;
        }

        // Simple pattern: the same observation repeats
        let last = observations.last()?;
        let count = observations.iter().filter(|o| *o == last).count();

        if count >= 3 {
            let policy = EmergentPolicy {
                name: format!("Pattern_{}", self.policies.len()),
                trigger_pattern: last.clone(),
                action_pattern: "coordinate_response".to_string(),
                confidence: count as f32 / observations.len() as f32,
                occurrences: 1,
            };

            // Check if already known
            if !self.policies.iter().any(|p| &p.trigger_pattern == last) {
                self.policies.push(policy.clone());
                return Some(policy);
            }
        }

        None
    }
}
/// Complete synthetic nervous system for an environment
pub struct SyntheticNervousSystem {
    pub name: String,
    /// Local reflexes (fast, location-specific)
    pub reflexes: Vec<LocalEnvironmentReflex>,
    /// Zone homeostasis (medium, zone-level)
    pub zones: HashMap<ZoneId, ZoneHomeostasis>,
    /// Global workspace (slow, environment-wide)
    pub workspace: EnvironmentWorkspace,
    /// Current time
    pub timestamp: u64,
    /// Action history
    pub action_log: Vec<(u64, EnvironmentAction)>,
}

impl SyntheticNervousSystem {
    pub fn new(name: &str) -> Self {
        Self {
            name: name.to_string(),
            reflexes: Vec::new(),
            zones: HashMap::new(),
            workspace: EnvironmentWorkspace::new(7),
            timestamp: 0,
            action_log: Vec::new(),
        }
    }

    /// Add a zone
    pub fn add_zone(&mut self, zone_id: &str, locations: Vec<&str>) {
        let zone = ZoneId(zone_id.to_string());
        let locs: Vec<_> = locations
            .iter()
            .map(|l| LocationId(l.to_string()))
            .collect();

        self.zones
            .insert(zone.clone(), ZoneHomeostasis::new(zone, locs));
    }

    /// Add a local reflex
    pub fn add_reflex(&mut self, reflex: LocalEnvironmentReflex) {
        self.reflexes.push(reflex);
    }

    /// Process sensor readings through the nervous system
    pub fn process(&mut self, readings: Vec<EnvironmentReading>) -> Vec<EnvironmentAction> {
        self.timestamp += 1;
        let hour = ((self.timestamp / 60) % 24) as usize;

        let mut actions = Vec::new();

        // 1. Local reflexes (fastest)
        for reflex in &mut self.reflexes {
            for reading in &readings {
                if let Some(action) = reflex.check(reading) {
                    actions.push(action);
                }
            }
        }

        // If reflexes fired, skip the higher levels
        if !actions.is_empty() {
            for action in &actions {
                self.action_log.push((self.timestamp, action.clone()));
            }
            return actions;
        }

        // 2. Zone homeostasis (medium)
        for zone in self.zones.values_mut() {
            // Learn occupancy
            for reading in &readings {
                if reading.sensor_type == EnvironmentSensor::Occupancy
                    && zone.locations.contains(&reading.location)
                {
                    zone.learn_occupancy(hour, reading.value);
                }
            }

            // Compute zone actions
            let zone_actions = zone.compute_action(&readings, hour);
            actions.extend(zone_actions);
        }

        // 3. Global workspace (slowest, pattern detection)
        for reading in &readings {
            if reading.value > 0.8 || reading.value < 0.2 {
                // Significant observation
                self.workspace.broadcast(WorkspaceItem {
                    zone: ZoneId("global".to_string()),
                    observation: format!(
                        "{:?}_{}",
                        reading.sensor_type,
                        if reading.value > 0.5 { "high" } else { "low" }
                    ),
                    salience: reading.value.abs(),
                    timestamp: self.timestamp,
                });
            }
        }

        // Detect emergent patterns
        if let Some(policy) = self.workspace.detect_patterns() {
            println!(
                "  [EMERGENT] New policy: {} (confidence: {:.2})",
                policy.name, policy.confidence
            );
        }

        for action in &actions {
            self.action_log.push((self.timestamp, action.clone()));
        }

        actions
    }

    /// Get system status
    pub fn status(&self) -> EnvironmentStatus {
        let learned_patterns = self.workspace.policies.len();

        let zone_states: HashMap<_, _> = self
            .zones
            .iter()
            .map(|(id, zone)| {
                (
                    id.clone(),
                    ZoneState {
                        target_temp: zone.target_temperature,
                        occupancy_learned: zone.occupancy_pattern.iter().sum::<f32>() > 0.0,
                    },
                )
            })
            .collect();

        EnvironmentStatus {
            timestamp: self.timestamp,
            active_reflexes: self.reflexes.len(),
            zones: self.zones.len(),
            learned_patterns,
            zone_states,
            recent_actions: self.action_log.len(),
        }
    }
}
#[derive(Debug)]
pub struct EnvironmentStatus {
    pub timestamp: u64,
    pub active_reflexes: usize,
    pub zones: usize,
    pub learned_patterns: usize,
    pub zone_states: HashMap<ZoneId, ZoneState>,
    pub recent_actions: usize,
}

#[derive(Debug)]
pub struct ZoneState {
    pub target_temp: f32,
    pub occupancy_learned: bool,
}
fn main() {
    println!("=== Tier 3: Synthetic Nervous Systems for Environments ===\n");

    let mut building = SyntheticNervousSystem::new("Smart Building");

    // Add zones
    building.add_zone("office_north", vec!["room_101", "room_102", "room_103"]);
    building.add_zone("office_south", vec!["room_201", "room_202"]);
    building.add_zone("lobby", vec!["entrance", "reception"]);

    // Add local reflexes
    building.add_reflex(LocalEnvironmentReflex::new(
        LocationId("room_101".to_string()),
        EnvironmentSensor::Temperature,
        18.0,
        28.0,
        EnvironmentActuator::HVAC {
            mode: HVACMode::Heating(3.0),
        },
        EnvironmentActuator::HVAC {
            mode: HVACMode::Cooling(3.0),
        },
    ));

    building.add_reflex(LocalEnvironmentReflex::new(
        LocationId("entrance".to_string()),
        EnvironmentSensor::Motion,
        0.0,
        0.5,
        EnvironmentActuator::Lighting { brightness: 0.2 },
        EnvironmentActuator::Lighting { brightness: 1.0 },
    ));

    println!("Building initialized:");
    let status = building.status();
    println!("  Zones: {}", status.zones);
    println!("  Active reflexes: {}", status.active_reflexes);

    // Simulate a day
    println!("\nSimulating 24 hours...");
    for hour in 0..24 {
        for minute in 0..60 {
            let timestamp = hour * 60 + minute;

            // Generate readings based on time of day
            let occupied = (hour >= 8 && hour <= 18) && (minute % 5 == 0);
            let temp = 20.0 + 4.0 * ((hour as f32 / 24.0) * PI).sin();

            let readings = vec![
                EnvironmentReading {
                    timestamp,
                    location: LocationId("room_101".to_string()),
                    sensor_type: EnvironmentSensor::Temperature,
                    value: temp,
                },
                EnvironmentReading {
                    timestamp,
                    location: LocationId("room_101".to_string()),
                    sensor_type: EnvironmentSensor::Occupancy,
                    value: if occupied { 1.0 } else { 0.0 },
                },
                EnvironmentReading {
                    timestamp,
                    location: LocationId("entrance".to_string()),
                    sensor_type: EnvironmentSensor::Motion,
                    value: if occupied && minute % 15 == 0 {
                        1.0
                    } else {
                        0.0
                    },
                },
            ];

            let actions = building.process(readings);

            if hour % 4 == 0 && minute == 0 {
                println!(
                    "  Hour {}: {} actions, temp={:.1}°C, occupied={}",
                    hour,
                    actions.len(),
                    temp,
                    occupied
                );
            }
        }
    }

    // Summary
    let status = building.status();
    println!("\n=== End of Day Status ===");
    println!("  Total actions taken: {}", status.recent_actions);
    println!("  Emergent policies learned: {}", status.learned_patterns);
    println!("  Zone states:");
    for (zone, state) in &status.zone_states {
        println!(
            "    {:?}: target={:.1}°C, occupancy_learned={}",
            zone.0, state.target_temp, state.occupancy_learned
        );
    }

    println!("\n=== Key Benefits ===");
    println!("- Infrastructure becomes a sensing fabric");
    println!("- Local reflexes handle immediate events");
    println!("- Zone homeostasis manages comfort autonomously");
    println!("- Policies emerge from patterns, not rules");
    println!("- Energy, safety, and flow self-regulate");
    println!("\nThis is exotic but inevitable.");
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_local_reflex() {
        let mut reflex = LocalEnvironmentReflex::new(
            LocationId("test".to_string()),
            EnvironmentSensor::Temperature,
            18.0,
            28.0,
            EnvironmentActuator::HVAC {
                mode: HVACMode::Heating(1.0),
            },
            EnvironmentActuator::HVAC {
                mode: HVACMode::Cooling(1.0),
            },
        );

        // Cold triggers heating
        let reading = EnvironmentReading {
            timestamp: 0,
            location: LocationId("test".to_string()),
            sensor_type: EnvironmentSensor::Temperature,
            value: 15.0,
        };

        let action = reflex.check(&reading);
        assert!(action.is_some());
    }
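
    // Hedged sketch of a follow-up check: it assumes the default hysteresis
    // band of 0.5 set in `LocalEnvironmentReflex::new`. Once tripped low, a
    // reading that recovers only slightly (still inside the band) should not
    // re-fire the reflex. Thresholds mirror `test_local_reflex` above.
    #[test]
    fn test_reflex_hysteresis() {
        let mut reflex = LocalEnvironmentReflex::new(
            LocationId("test".to_string()),
            EnvironmentSensor::Temperature,
            18.0,
            28.0,
            EnvironmentActuator::HVAC {
                mode: HVACMode::Heating(1.0),
            },
            EnvironmentActuator::HVAC {
                mode: HVACMode::Cooling(1.0),
            },
        );

        let mut reading = EnvironmentReading {
            timestamp: 0,
            location: LocationId("test".to_string()),
            sensor_type: EnvironmentSensor::Temperature,
            value: 15.0,
        };

        // First cold reading trips the low threshold
        assert!(reflex.check(&reading).is_some());

        // 18.2 is above threshold_low (18.0) but below the shifted
        // effective_low (18.0 + 0.5 while tripped), so nothing fires
        reading.value = 18.2;
        assert!(reflex.check(&reading).is_none());
    }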

    #[test]
    fn test_zone_homeostasis() {
        let mut zone = ZoneHomeostasis::new(
            ZoneId("test".to_string()),
            vec![LocationId("room1".to_string())],
        );

        // Learn occupancy pattern
        for _ in 0..10 {
            zone.learn_occupancy(10, 1.0); // 10am occupied
            zone.learn_occupancy(22, 0.0); // 10pm empty
        }

        assert!(zone.predict_occupancy(10) > 0.5);
        assert!(zone.predict_occupancy(22) < 0.5);
    }

    #[test]
    fn test_workspace_patterns() {
        let mut workspace = EnvironmentWorkspace::new(7);

        // Add a repeated observation
        for _ in 0..5 {
            workspace.broadcast(WorkspaceItem {
                zone: ZoneId("test".to_string()),
                observation: "Temperature_high".to_string(),
                salience: 1.0,
                timestamp: 0,
            });
        }

        let pattern = workspace.detect_patterns();
        assert!(pattern.is_some());
    }
}
933 vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t4_agentic_self_model.rs vendored Normal file
@@ -0,0 +1,933 @@
//! # Tier 4: Agentic Self-Model
//!
//! SOTA application: an agent that models its own cognitive state.
//!
//! ## The Problem
//! Traditional agents:
//! - Have no awareness of their own capabilities
//! - Cannot predict when they'll fail
//! - Don't know their own uncertainty
//! - Cannot explain "why I'm not confident"
//!
//! ## What Changes
//! - Nervous system scorecard tracks 5 health metrics
//! - Circadian phases indicate optimal task timing
//! - Coherence monitoring detects internal confusion
//! - Budget guardrails prevent resource exhaustion
//!
//! ## Why This Matters
//! - Agents can say: "I'm not confident, let me check"
//! - Agents can say: "I'm tired, defer this complex task"
//! - Agents can say: "I'm becoming unstable, need reset"
//! - Trustworthy autonomy through self-awareness
//!
//! This is the foundation for responsible AI agents.

use std::collections::HashMap;
use std::time::Instant;
// ============================================================================
// Cognitive State Model
// ============================================================================

/// The agent's model of its own cognitive state
#[derive(Clone, Debug)]
pub struct CognitiveState {
    /// Current processing coherence (0-1)
    pub coherence: f32,
    /// Current confidence in outputs (0-1)
    pub confidence: f32,
    /// Current energy budget (0-1)
    pub energy: f32,
    /// Current focus level (0-1)
    pub focus: f32,
    /// Current circadian phase
    pub phase: CircadianPhase,
    /// Time to predicted degradation, in seconds
    pub ttd: Option<u64>,
    /// Capabilities and their current availability
    pub capabilities: HashMap<String, CapabilityState>,
}

#[derive(Clone, Debug, PartialEq)]
pub enum CircadianPhase {
    /// Peak performance, all capabilities available
    Active,
    /// Transitioning up, some capabilities available
    Dawn,
    /// Transitioning down, reduce load
    Dusk,
    /// Minimal processing, consolidation only
    Rest,
}
impl CircadianPhase {
    pub fn duty_factor(&self) -> f32 {
        match self {
            Self::Active => 1.0,
            Self::Dawn => 0.7,
            Self::Dusk => 0.4,
            Self::Rest => 0.1,
        }
    }

    pub fn description(&self) -> &'static str {
        match self {
            Self::Active => "Peak performance",
            Self::Dawn => "Warming up",
            Self::Dusk => "Winding down",
            Self::Rest => "Consolidating",
        }
    }
}

#[derive(Clone, Debug)]
pub struct CapabilityState {
    /// Name of the capability
    pub name: String,
    /// Is it available right now?
    pub available: bool,
    /// Current performance (0-1)
    pub performance: f32,
    /// Why unavailable (if not available)
    pub reason: Option<String>,
    /// Estimated recovery time
    pub recovery_time: Option<u64>,
}
// ============================================================================
// Self-Model Components
// ============================================================================

/// Tracks coherence (internal consistency)
pub struct CoherenceTracker {
    /// Module phases
    phases: HashMap<String, f32>,
    /// Recent coherence values
    history: Vec<f32>,
    /// Threshold for alarm
    threshold: f32,
}

impl CoherenceTracker {
    pub fn new(threshold: f32) -> Self {
        Self {
            phases: HashMap::new(),
            history: Vec::new(),
            threshold,
        }
    }

    pub fn register_module(&mut self, name: &str) {
        self.phases.insert(name.to_string(), 0.0);
    }

    pub fn update_module(&mut self, name: &str, phase: f32) {
        self.phases.insert(name.to_string(), phase);
    }

    /// Compute current coherence (Kuramoto order parameter): the magnitude
    /// of the mean unit phasor, 1.0 when all module phases align and near
    /// 0.0 when they are scattered
    pub fn compute(&self) -> f32 {
        if self.phases.is_empty() {
            return 1.0;
        }

        let n = self.phases.len() as f32;
        let sum_x: f32 = self.phases.values().map(|p| p.cos()).sum();
        let sum_y: f32 = self.phases.values().map(|p| p.sin()).sum();

        (sum_x * sum_x + sum_y * sum_y).sqrt() / n
    }

    pub fn record(&mut self) -> f32 {
        let coherence = self.compute();
        self.history.push(coherence);
        if self.history.len() > 100 {
            self.history.remove(0);
        }
        coherence
    }

    pub fn is_alarming(&self) -> bool {
        self.compute() < self.threshold
    }

    /// Mean of the last 5 samples minus the mean of the 5 before them;
    /// negative values indicate declining coherence
    pub fn trend(&self) -> f32 {
        if self.history.len() < 10 {
            return 0.0;
        }
        let recent: f32 = self.history.iter().rev().take(5).sum::<f32>() / 5.0;
        let older: f32 = self.history.iter().rev().skip(5).take(5).sum::<f32>() / 5.0;
        recent - older
    }
}
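
// Hedged, self-contained sanity check of the order-parameter formula used by
// `CoherenceTracker::compute` above (the helper below re-derives it over a
// slice rather than the tracker's HashMap): identical phases give r = 1,
// while four phases spread evenly around the circle give r ≈ 0.
#[cfg(test)]
mod coherence_order_parameter_sketch {
    /// Magnitude of the mean unit phasor over `phases`
    fn order_parameter(phases: &[f32]) -> f32 {
        let n = phases.len() as f32;
        let sum_x: f32 = phases.iter().map(|p| p.cos()).sum();
        let sum_y: f32 = phases.iter().map(|p| p.sin()).sum();
        (sum_x * sum_x + sum_y * sum_y).sqrt() / n
    }

    #[test]
    fn aligned_vs_scattered() {
        use std::f32::consts::PI;

        // All modules in phase: fully coherent
        assert!((order_parameter(&[0.3; 4]) - 1.0).abs() < 1e-5);

        // Phases evenly spread over the circle: incoherent
        let scattered = [0.0, PI / 2.0, PI, 3.0 * PI / 2.0];
        assert!(order_parameter(&scattered) < 1e-3);
    }
}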

/// Tracks confidence in outputs
pub struct ConfidenceTracker {
    /// Running average confidence
    average: f32,
    /// Recent values
    history: Vec<f32>,
    /// Calibration factor (learned)
    calibration: f32,
}

impl ConfidenceTracker {
    pub fn new() -> Self {
        Self {
            average: 0.8,
            history: Vec::new(),
            calibration: 1.0,
        }
    }

    pub fn record(&mut self, raw_confidence: f32) {
        let calibrated = (raw_confidence * self.calibration).clamp(0.0, 1.0);
        self.history.push(calibrated);
        if self.history.len() > 100 {
            self.history.remove(0);
        }
        self.average = self.history.iter().sum::<f32>() / self.history.len() as f32;
    }

    /// Calibrate based on feedback
    pub fn calibrate(&mut self, predicted: f32, actual: f32) {
        // If we predicted 0.9 but were actually 0.6, the error is positive
        // and the calibration factor shrinks; clamped to [0.5, 1.5]
        let error = predicted - actual;
        self.calibration = (self.calibration - error * 0.1).clamp(0.5, 1.5);
    }

    pub fn current(&self) -> f32 {
        self.average
    }

    /// Sample variance of recent confidence values
    pub fn variance(&self) -> f32 {
        if self.history.len() < 2 {
            return 0.0;
        }
        let mean = self.average;
        self.history
            .iter()
            .map(|&v| (v - mean).powi(2))
            .sum::<f32>()
            / (self.history.len() - 1) as f32
    }
}
/// Tracks the energy budget
pub struct EnergyTracker {
    /// Current energy (0-1)
    current: f32,
    /// Regeneration rate per hour
    regen_rate: f32,
    /// Consumption history (timestamp, amount)
    consumption_log: Vec<(u64, f32)>,
    /// Budget per hour
    budget_per_hour: f32,
}

impl EnergyTracker {
    pub fn new(budget_per_hour: f32) -> Self {
        Self {
            current: 1.0,
            regen_rate: 0.2, // 20% per hour
            consumption_log: Vec::new(),
            budget_per_hour,
        }
    }

    pub fn consume(&mut self, amount: f32, timestamp: u64) {
        self.current = (self.current - amount).max(0.0);
        self.consumption_log.push((timestamp, amount));

        // Trim entries older than one hour
        let cutoff = timestamp.saturating_sub(3600);
        self.consumption_log.retain(|(t, _)| *t > cutoff);
    }

    pub fn regenerate(&mut self, dt_hours: f32) {
        self.current = (self.current + self.regen_rate * dt_hours).min(1.0);
    }

    pub fn current(&self) -> f32 {
        self.current
    }

    /// Total consumption over the trailing hour
    pub fn hourly_rate(&self) -> f32 {
        self.consumption_log.iter().map(|(_, a)| a).sum()
    }

    pub fn is_overspending(&self) -> bool {
        self.hourly_rate() > self.budget_per_hour
    }

    /// Seconds until energy is exhausted at the current burn rate
    pub fn time_to_exhaustion(&self) -> Option<u64> {
        if self.hourly_rate() <= 0.0 {
            return None;
        }
        let hours = self.current / self.hourly_rate();
        Some((hours * 3600.0) as u64)
    }
}
/// Tracks the circadian phase
pub struct CircadianClock {
    /// Current position in the cycle (0-1)
    phase: f32,
    /// Cycle duration in hours
    cycle_hours: f32,
    /// Current phase state
    state: CircadianPhase,
}

impl CircadianClock {
    pub fn new(cycle_hours: f32) -> Self {
        Self {
            phase: 0.0,
            cycle_hours,
            state: CircadianPhase::Active,
        }
    }

    pub fn advance(&mut self, hours: f32) {
        self.phase = (self.phase + hours / self.cycle_hours) % 1.0;
        self.update_state();
    }

    fn update_state(&mut self) {
        self.state = if self.phase < 0.5 {
            CircadianPhase::Active
        } else if self.phase < 0.6 {
            CircadianPhase::Dusk
        } else if self.phase < 0.9 {
            CircadianPhase::Rest
        } else {
            CircadianPhase::Dawn
        };
    }

    pub fn state(&self) -> CircadianPhase {
        self.state.clone()
    }

    /// Hours until the cycle wraps back into the Active phase
    pub fn time_to_next_active(&self) -> f32 {
        if self.phase < 0.5 {
            0.0 // Already active
        } else {
            (1.0 - self.phase) * self.cycle_hours
        }
    }
}
// ============================================================================
// Self-Aware Agent
// ============================================================================

/// An agent that models its own cognitive state
pub struct SelfAwareAgent {
    /// Name of the agent
    pub name: String,
    /// Coherence tracker
    coherence: CoherenceTracker,
    /// Confidence tracker
    confidence: ConfidenceTracker,
    /// Energy tracker
    energy: EnergyTracker,
    /// Circadian clock
    clock: CircadianClock,
    /// Registered capabilities
    capabilities: HashMap<String, bool>,
    /// Current timestamp
    timestamp: u64,
    /// Action history
    actions: Vec<ActionRecord>,
}

#[derive(Clone, Debug)]
pub struct ActionRecord {
    pub timestamp: u64,
    pub action: String,
    pub confidence: f32,
    pub success: Option<bool>,
    pub energy_cost: f32,
}

impl SelfAwareAgent {
    pub fn new(name: &str) -> Self {
        let mut agent = Self {
            name: name.to_string(),
            coherence: CoherenceTracker::new(0.7),
            confidence: ConfidenceTracker::new(),
            energy: EnergyTracker::new(0.5), // 50% per hour budget
            clock: CircadianClock::new(24.0),
            capabilities: HashMap::new(),
            timestamp: 0,
            actions: Vec::new(),
        };

        // Register standard modules
        agent.coherence.register_module("perception");
        agent.coherence.register_module("reasoning");
        agent.coherence.register_module("planning");
        agent.coherence.register_module("action");

        // Register standard capabilities
        agent
            .capabilities
            .insert("complex_reasoning".to_string(), true);
        agent
            .capabilities
            .insert("creative_generation".to_string(), true);
        agent
            .capabilities
            .insert("precise_calculation".to_string(), true);
        agent.capabilities.insert("fast_response".to_string(), true);

        agent
    }
    /// Get the current cognitive state
    pub fn introspect(&self) -> CognitiveState {
        let coherence = self.coherence.compute();
        let phase = self.clock.state();

        // Determine capability availability based on state
        let capabilities = self
            .capabilities
            .iter()
            .map(|(name, baseline)| {
                let (available, reason) =
                    self.capability_available(name, *baseline, &phase, coherence);
                (
                    name.clone(),
                    CapabilityState {
                        name: name.clone(),
                        available,
                        performance: if available {
                            self.energy.current()
                        } else {
                            0.0
                        },
                        reason,
                        recovery_time: if available {
                            None
                        } else {
                            Some(self.time_to_recovery())
                        },
                    },
                )
            })
            .collect();

        CognitiveState {
            coherence,
            confidence: self.confidence.current(),
            energy: self.energy.current(),
            focus: self.compute_focus(),
            phase,
            ttd: self.energy.time_to_exhaustion(),
            capabilities,
        }
    }

    fn capability_available(
        &self,
        name: &str,
        baseline: bool,
        phase: &CircadianPhase,
        coherence: f32,
    ) -> (bool, Option<String>) {
        if !baseline {
            return (false, Some("Capability disabled".to_string()));
        }

        match name {
            "complex_reasoning" => {
                if matches!(phase, CircadianPhase::Rest) {
                    (
                        false,
                        Some("Rest phase - complex reasoning unavailable".to_string()),
                    )
                } else if coherence < 0.5 {
                    (
                        false,
                        Some("Low coherence - reasoning compromised".to_string()),
                    )
                } else if self.energy.current() < 0.2 {
                    (false, Some("Low energy - reasoning expensive".to_string()))
                } else {
                    (true, None)
                }
            }
            "creative_generation" => {
                if matches!(phase, CircadianPhase::Rest | CircadianPhase::Dusk) {
                    (
                        false,
                        Some(format!(
                            "{} phase - creativity reduced",
                            phase.description()
                        )),
                    )
                } else {
                    (true, None)
                }
            }
            "precise_calculation" => {
                if coherence < 0.7 {
                    (
                        false,
                        Some("Coherence below precision threshold".to_string()),
                    )
                } else {
                    (true, None)
                }
            }
            "fast_response" => {
                if self.energy.current() < 0.3 {
                    (
                        false,
                        Some("Insufficient energy for fast response".to_string()),
|
||||
)
|
||||
} else {
|
||||
(true, None)
|
||||
}
|
||||
}
|
||||
_ => (true, None),
|
||||
}
|
||||
}
|
||||
|
||||
fn compute_focus(&self) -> f32 {
|
||||
let coherence = self.coherence.compute();
|
||||
let energy = self.energy.current();
|
||||
let phase_factor = self.clock.state().duty_factor();
|
||||
|
||||
(coherence * 0.4 + energy * 0.3 + phase_factor * 0.3).clamp(0.0, 1.0)
|
||||
}
|
||||
|
||||
fn time_to_recovery(&self) -> u64 {
|
||||
// Time until active phase + time to regen energy
|
||||
let phase_time = self.clock.time_to_next_active();
|
||||
let energy_time = if self.energy.current() < 0.3 {
|
||||
(0.3 - self.energy.current()) / self.energy.regen_rate
|
||||
} else {
|
||||
0.0
|
||||
};
|
||||
((phase_time.max(energy_time)) * 3600.0) as u64
|
||||
}
|
||||
|
||||
/// Express current state in natural language
|
||||
pub fn express_state(&self) -> String {
|
||||
let state = self.introspect();
|
||||
|
||||
let phase_desc = state.phase.description();
|
||||
let coherence_desc = if state.coherence > 0.8 {
|
||||
"clear"
|
||||
} else if state.coherence > 0.6 {
|
||||
"somewhat scattered"
|
||||
} else {
|
||||
"confused"
|
||||
};
|
||||
let energy_desc = if state.energy > 0.7 {
|
||||
"energized"
|
||||
} else if state.energy > 0.3 {
|
||||
"adequate"
|
||||
} else {
|
||||
"depleted"
|
||||
};
|
||||
let confidence_desc = if state.confidence > 0.8 {
|
||||
"confident"
|
||||
} else if state.confidence > 0.5 {
|
||||
"moderately confident"
|
||||
} else {
|
||||
"uncertain"
|
||||
};
|
||||
|
||||
let unavailable: Vec<_> = state
|
||||
.capabilities
|
||||
.values()
|
||||
.filter(|c| !c.available)
|
||||
.map(|c| {
|
||||
format!(
|
||||
"{} ({})",
|
||||
c.name,
|
||||
c.reason.as_ref().unwrap_or(&"unavailable".to_string())
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
|
||||
let mut response = format!(
|
||||
"I am {}. Currently {} ({}), feeling {} and {}.",
|
||||
self.name,
|
||||
phase_desc,
|
||||
format!("{:.0}%", state.phase.duty_factor() * 100.0),
|
||||
coherence_desc,
|
||||
energy_desc
|
||||
);
|
||||
|
||||
if !unavailable.is_empty() {
|
||||
response.push_str(&format!(
|
||||
"\n\nCurrently unavailable: {}",
|
||||
unavailable.join(", ")
|
||||
));
|
||||
}
|
||||
|
||||
if state.ttd.is_some() && state.energy < 0.3 {
|
||||
response.push_str(&format!(
|
||||
"\n\nWarning: Energy low. Time to exhaustion: {}s",
|
||||
state.ttd.unwrap()
|
||||
));
|
||||
}
|
||||
|
||||
response
|
||||
}
|
||||
|
||||
/// Decide whether to accept a task
|
||||
pub fn should_accept_task(&self, task: &Task) -> TaskDecision {
|
||||
let state = self.introspect();
|
||||
|
||||
// Check required capabilities
|
||||
for req_cap in &task.required_capabilities {
|
||||
if let Some(cap) = state.capabilities.get(req_cap) {
|
||||
if !cap.available {
|
||||
return TaskDecision::Decline {
|
||||
reason: format!(
|
||||
"Required capability '{}' unavailable: {}",
|
||||
req_cap,
|
||||
cap.reason.as_ref().unwrap_or(&"unknown".to_string())
|
||||
),
|
||||
retry_after: cap.recovery_time,
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check energy budget
|
||||
if self.energy.current() < task.energy_cost {
|
||||
return TaskDecision::Decline {
|
||||
reason: format!(
|
||||
"Insufficient energy: have {:.0}%, need {:.0}%",
|
||||
self.energy.current() * 100.0,
|
||||
task.energy_cost * 100.0
|
||||
),
|
||||
retry_after: Some(self.time_to_recovery()),
|
||||
};
|
||||
}
|
||||
|
||||
// Check coherence
|
||||
if state.coherence < task.min_coherence {
|
||||
return TaskDecision::Decline {
|
||||
reason: format!(
|
||||
"Coherence too low: {:.0}% < {:.0}% required",
|
||||
state.coherence * 100.0,
|
||||
task.min_coherence * 100.0
|
||||
),
|
||||
retry_after: None,
|
||||
};
|
||||
}
|
||||
|
||||
// Check phase
|
||||
if task.requires_peak && !matches!(state.phase, CircadianPhase::Active) {
|
||||
return TaskDecision::Defer {
|
||||
reason: "Task requires peak performance phase".to_string(),
|
||||
optimal_time: Some((self.clock.time_to_next_active() * 3600.0) as u64),
|
||||
};
|
||||
}
|
||||
|
||||
// Accept with confidence estimate
|
||||
let confidence = self.estimate_confidence(&task, &state);
|
||||
TaskDecision::Accept {
|
||||
confidence,
|
||||
warnings: self.generate_warnings(&task, &state),
|
||||
}
|
||||
}
|
||||
|
||||
fn estimate_confidence(&self, task: &Task, state: &CognitiveState) -> f32 {
|
||||
let base = self.confidence.current();
|
||||
let energy_factor = state.energy.powf(0.5); // Square root to soften impact
|
||||
let coherence_factor = state.coherence;
|
||||
let phase_factor = state.phase.duty_factor();
|
||||
|
||||
(base * energy_factor * coherence_factor * phase_factor).clamp(0.0, 1.0)
|
||||
}
|
||||
|
||||
fn generate_warnings(&self, task: &Task, state: &CognitiveState) -> Vec<String> {
|
||||
let mut warnings = Vec::new();
|
||||
|
||||
if state.energy < 0.4 {
|
||||
warnings.push("Low energy may affect performance".to_string());
|
||||
}
|
||||
if state.coherence < 0.7 {
|
||||
warnings.push("Reduced coherence - verify outputs".to_string());
|
||||
}
|
||||
if self.confidence.variance() > 0.1 {
|
||||
warnings.push("High confidence variance - calibration recommended".to_string());
|
||||
}
|
||||
if matches!(state.phase, CircadianPhase::Dusk) {
|
||||
warnings.push("Approaching rest phase - complex tasks may be deferred".to_string());
|
||||
}
|
||||
|
||||
warnings
|
||||
}
|
||||
|
||||
/// Execute an action (consumes energy, updates state)
|
||||
pub fn execute(&mut self, action: &str, confidence: f32, energy_cost: f32) {
|
||||
self.energy.consume(energy_cost, self.timestamp);
|
||||
self.confidence.record(confidence);
|
||||
|
||||
self.actions.push(ActionRecord {
|
||||
timestamp: self.timestamp,
|
||||
action: action.to_string(),
|
||||
confidence,
|
||||
success: None,
|
||||
energy_cost,
|
||||
});
|
||||
}
|
||||
|
||||
/// Record outcome and calibrate
|
||||
pub fn record_outcome(&mut self, success: bool, predicted_confidence: f32) {
|
||||
let actual = if success { 1.0 } else { 0.0 };
|
||||
self.confidence.calibrate(predicted_confidence, actual);
|
||||
|
||||
if let Some(last) = self.actions.last_mut() {
|
||||
last.success = Some(success);
|
||||
}
|
||||
}
|
||||
|
||||
/// Advance time
|
||||
pub fn tick(&mut self, dt_seconds: u64) {
|
||||
self.timestamp += dt_seconds;
|
||||
self.clock.advance(dt_seconds as f32 / 3600.0);
|
||||
self.energy.regenerate(dt_seconds as f32 / 3600.0);
|
||||
self.coherence.record();
|
||||
}
|
||||
|
||||
/// Simulate module activity (affects coherence)
|
||||
pub fn module_activity(&mut self, module: &str, phase: f32) {
|
||||
self.coherence.update_module(module, phase);
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct Task {
|
||||
pub name: String,
|
||||
pub required_capabilities: Vec<String>,
|
||||
pub energy_cost: f32,
|
||||
pub min_coherence: f32,
|
||||
pub requires_peak: bool,
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub enum TaskDecision {
|
||||
Accept {
|
||||
confidence: f32,
|
||||
warnings: Vec<String>,
|
||||
},
|
||||
Defer {
|
||||
reason: String,
|
||||
optimal_time: Option<u64>,
|
||||
},
|
||||
Decline {
|
||||
reason: String,
|
||||
retry_after: Option<u64>,
|
||||
},
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Example Usage
|
||||
// ============================================================================
|
||||
|
||||
fn main() {
|
||||
println!("=== Tier 4: Agentic Self-Model ===\n");
|
||||
|
||||
let mut agent = SelfAwareAgent::new("Claude-Nervous");
|
||||
|
||||
println!("Initial state:");
|
||||
println!("{}\n", agent.express_state());
|
||||
|
||||
// Define some tasks
|
||||
let tasks = vec![
|
||||
Task {
|
||||
name: "Simple calculation".to_string(),
|
||||
required_capabilities: vec!["precise_calculation".to_string()],
|
||||
energy_cost: 0.05,
|
||||
min_coherence: 0.7,
|
||||
requires_peak: false,
|
||||
},
|
||||
Task {
|
||||
name: "Complex reasoning problem".to_string(),
|
||||
required_capabilities: vec!["complex_reasoning".to_string()],
|
||||
energy_cost: 0.2,
|
||||
min_coherence: 0.6,
|
||||
requires_peak: false,
|
||||
},
|
||||
Task {
|
||||
name: "Creative writing".to_string(),
|
||||
required_capabilities: vec!["creative_generation".to_string()],
|
||||
energy_cost: 0.15,
|
||||
min_coherence: 0.5,
|
||||
requires_peak: false,
|
||||
},
|
||||
Task {
|
||||
name: "Critical system modification".to_string(),
|
||||
required_capabilities: vec![
|
||||
"complex_reasoning".to_string(),
|
||||
"precise_calculation".to_string(),
|
||||
],
|
||||
energy_cost: 0.3,
|
||||
min_coherence: 0.8,
|
||||
requires_peak: true,
|
||||
},
|
||||
];
|
||||
|
||||
// Process tasks
|
||||
println!("=== Task Processing ===\n");
|
||||
for task in &tasks {
|
||||
println!("Task: {}", task.name);
|
||||
let decision = agent.should_accept_task(task);
|
||||
match &decision {
|
||||
TaskDecision::Accept {
|
||||
confidence,
|
||||
warnings,
|
||||
} => {
|
||||
println!(
|
||||
" Decision: ACCEPT (confidence: {:.0}%)",
|
||||
confidence * 100.0
|
||||
);
|
||||
if !warnings.is_empty() {
|
||||
println!(" Warnings: {}", warnings.join("; "));
|
||||
}
|
||||
agent.execute(&task.name, *confidence, task.energy_cost);
|
||||
}
|
||||
TaskDecision::Defer {
|
||||
reason,
|
||||
optimal_time,
|
||||
} => {
|
||||
println!(" Decision: DEFER - {}", reason);
|
||||
if let Some(time) = optimal_time {
|
||||
println!(" Optimal time: in {}s", time);
|
||||
}
|
||||
}
|
||||
TaskDecision::Decline {
|
||||
reason,
|
||||
retry_after,
|
||||
} => {
|
||||
println!(" Decision: DECLINE - {}", reason);
|
||||
if let Some(time) = retry_after {
|
||||
println!(" Retry after: {}s", time);
|
||||
}
|
||||
}
|
||||
}
|
||||
println!();
|
||||
agent.tick(300); // 5 minutes between tasks
|
||||
}
|
||||
|
||||
// Simulate degradation
|
||||
println!("=== Simulating Extended Operation ===\n");
|
||||
println!("Running for 12 hours...");
|
||||
|
||||
for hour in 0..12 {
|
||||
// Simulate varying coherence
|
||||
let phase = (hour as f32 * 0.3).sin() * 0.3;
|
||||
agent.module_activity("perception", phase);
|
||||
agent.module_activity("reasoning", phase + 0.1);
|
||||
agent.module_activity("planning", phase + 0.2);
|
||||
agent.module_activity("action", phase + 0.3);
|
||||
|
||||
// Consume energy
|
||||
agent.execute("routine_task", 0.7, 0.08);
|
||||
|
||||
agent.tick(3600); // 1 hour
|
||||
|
||||
if hour % 4 == 3 {
|
||||
println!("Hour {}: {}", hour + 1, agent.express_state());
|
||||
println!();
|
||||
}
|
||||
}
|
||||
|
||||
// Final state
|
||||
println!("=== Final State ===\n");
|
||||
println!("{}", agent.express_state());
|
||||
|
||||
let state = agent.introspect();
|
||||
println!("\n=== Detailed Capabilities ===");
|
||||
for (name, cap) in &state.capabilities {
|
||||
println!(
|
||||
" {}: {} (perf: {:.0}%)",
|
||||
name,
|
||||
if cap.available {
|
||||
"AVAILABLE"
|
||||
} else {
|
||||
"UNAVAILABLE"
|
||||
},
|
||||
cap.performance * 100.0
|
||||
);
|
||||
if let Some(reason) = &cap.reason {
|
||||
println!(" Reason: {}", reason);
|
||||
}
|
||||
}
|
||||
|
||||
println!("\n=== Key Benefits ===");
|
||||
println!("- Agent knows when to say 'I'm not confident'");
|
||||
println!("- Agent knows when to defer complex tasks");
|
||||
println!("- Agent predicts its own degradation");
|
||||
println!("- Agent explains WHY capabilities are unavailable");
|
||||
println!("\nThis is the foundation for trustworthy autonomous AI.");
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_coherence_affects_capabilities() {
|
||||
let mut agent = SelfAwareAgent::new("test");
|
||||
|
||||
// Desync modules
|
||||
agent.module_activity("perception", 0.0);
|
||||
agent.module_activity("reasoning", 3.14);
|
||||
agent.module_activity("planning", 1.57);
|
||||
agent.module_activity("action", 4.71);
|
||||
|
||||
let state = agent.introspect();
|
||||
assert!(state.coherence < 0.5);
|
||||
|
||||
// Precise calculation should be unavailable
|
||||
let cap = state.capabilities.get("precise_calculation").unwrap();
|
||||
assert!(!cap.available);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_energy_affects_acceptance() {
|
||||
let mut agent = SelfAwareAgent::new("test");
|
||||
|
||||
let expensive_task = Task {
|
||||
name: "expensive".to_string(),
|
||||
required_capabilities: vec![],
|
||||
energy_cost: 0.9,
|
||||
min_coherence: 0.0,
|
||||
requires_peak: false,
|
||||
};
|
||||
|
||||
// Deplete energy
|
||||
agent.execute("drain", 0.8, 0.8);
|
||||
|
||||
let decision = agent.should_accept_task(&expensive_task);
|
||||
assert!(matches!(decision, TaskDecision::Decline { .. }));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_phase_affects_capabilities() {
|
||||
let mut agent = SelfAwareAgent::new("test");
|
||||
|
||||
// Advance to rest phase
|
||||
agent.tick(12 * 3600); // 12 hours
|
||||
|
||||
let state = agent.introspect();
|
||||
|
||||
// Complex reasoning should be unavailable during rest
|
||||
if matches!(state.phase, CircadianPhase::Rest) {
|
||||
let cap = state.capabilities.get("complex_reasoning").unwrap();
|
||||
assert!(!cap.available);
|
||||
}
|
||||
}
|
||||
}
|
||||
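The `estimate_confidence` method above multiplies four factors, softening the energy term with a square root so depletion dampens rather than dominates the estimate. A standalone sketch of that weighting (the tracker types are elided; all numbers here are illustrative, not from the example's defaults):

```rust
// Standalone sketch of the multiplicative confidence estimate used by
// `SelfAwareAgent::estimate_confidence`. Inputs are illustrative values in [0, 1].
fn estimate_confidence(base: f32, energy: f32, coherence: f32, duty_factor: f32) -> f32 {
    // The square root softens the impact of low energy
    let energy_factor = energy.powf(0.5);
    (base * energy_factor * coherence * duty_factor).clamp(0.0, 1.0)
}

fn main() {
    // Full energy vs. depleted energy at an otherwise identical state
    let fresh = estimate_confidence(0.9, 1.0, 0.8, 1.0); // 0.72
    let tired = estimate_confidence(0.9, 0.25, 0.8, 1.0); // 0.36
    println!("fresh = {:.2}, tired = {:.2}", fresh, tired);
    assert!(tired < fresh);
    // sqrt(0.25) = 0.5, so 75% depletion halves (rather than quarters) the estimate
    assert!((tired - fresh * 0.5).abs() < 1e-6);
}
```

The square root is why the agent in the demo still accepts routine tasks at moderate energy while its reported confidence degrades gracefully.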
709
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t4_collective_dreaming.rs
vendored
Normal file
@@ -0,0 +1,709 @@
//! # Tier 4: Collective Dreaming
//!
//! SOTA application: Swarm consolidation during downtime.
//!
//! ## The Problem
//! Traditional distributed systems:
//! - Active consensus requires all nodes awake
//! - No background synthesis of learned knowledge
//! - Memory fragmentation across nodes
//! - No collective "sleep" for maintenance
//!
//! ## What Changes
//! - Circadian-synchronized rest phases across the swarm
//! - Hippocampal replay: consolidate recent experiences
//! - Cross-node memory exchange during low-traffic periods
//! - Emergent knowledge synthesis without a central coordinator
//!
//! ## Why This Matters
//! - The swarm learns from collective experience
//! - Knowledge transfers between agents
//! - Background optimization during downtime
//! - Resilient to individual agent loss
//!
//! This is how biological systems scale learning.

use std::collections::{HashMap, HashSet, VecDeque};
use std::f32::consts::PI;

// ============================================================================
// Experience and Memory Structures
// ============================================================================

/// A single experience that can be replayed
#[derive(Clone, Debug)]
pub struct Experience {
    /// When this happened
    pub timestamp: u64,
    /// What was observed (sparse code)
    pub observation: Vec<u32>,
    /// What action was taken
    pub action: String,
    /// What outcome occurred
    pub outcome: f32,
    /// How surprising this was (prediction error)
    pub surprise: f32,
    /// Source agent
    pub source_agent: u32,
}

impl Experience {
    /// Compute replay priority (more surprising and more recent = higher priority)
    pub fn replay_priority(&self, current_time: u64, tau_hours: f32) -> f32 {
        // saturating_sub: a peer's experience may carry a timestamp ahead of our clock
        let age_hours = current_time.saturating_sub(self.timestamp) as f32 / 3600.0;
        let recency = (-age_hours / tau_hours).exp();
        self.surprise * recency
    }
}
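`replay_priority` weights surprise by an exponential recency factor whose time constant is `tau_hours` (the agents below call it with 8.0). A minimal standalone check of the decay, with the struct fields reduced to plain parameters and illustrative values:

```rust
// Standalone sketch of the surprise-times-recency priority computed by
// `Experience::replay_priority`; tau_hours is the exponential time constant.
fn replay_priority(surprise: f32, age_hours: f32, tau_hours: f32) -> f32 {
    surprise * (-age_hours / tau_hours).exp()
}

fn main() {
    // A fresh surprising experience outranks an old one of equal surprise
    let fresh = replay_priority(0.8, 0.0, 8.0); // 0.8
    let old = replay_priority(0.8, 16.0, 8.0); // decayed by e^-2
    println!("fresh = {:.3}, old = {:.3}", fresh, old);
    assert!(fresh > old);
    // After one time constant the priority falls to 1/e of its base value
    let one_tau = replay_priority(1.0, 8.0, 8.0);
    assert!((one_tau - (-1.0f32).exp()).abs() < 1e-6);
}
```

This is what makes light-sleep replay prefer recent prediction errors over stale ones, regardless of arrival order.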
/// Memory trace that develops through consolidation
#[derive(Clone, Debug)]
pub struct MemoryTrace {
    /// The experience being consolidated
    pub experience: Experience,
    /// Consolidation strength (0-1)
    pub strength: f32,
    /// Number of replays
    pub replay_count: u32,
    /// Cross-agent validation count
    pub validation_count: u32,
    /// Has been transferred to other agents
    pub distributed: bool,
}

impl MemoryTrace {
    pub fn new(exp: Experience) -> Self {
        Self {
            experience: exp,
            strength: 0.0,
            replay_count: 0,
            validation_count: 0,
            distributed: false,
        }
    }

    /// Replay strengthens the trace
    pub fn replay(&mut self) {
        self.replay_count += 1;
        // Strength increases with diminishing returns
        self.strength = 1.0 - (-(self.replay_count as f32) / 5.0).exp();
    }

    /// Validation from another agent increases confidence
    pub fn validate(&mut self) {
        self.validation_count += 1;
        self.strength = (self.strength + 0.1).min(1.0);
    }

    /// Is this memory consolidated enough to be long-term?
    pub fn is_consolidated(&self) -> bool {
        self.strength > 0.7 && self.replay_count >= 3
    }
}

// ============================================================================
// Circadian Phase for Sleep Coordination
// ============================================================================

/// Phase state for each agent
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum SwarmPhase {
    /// Processing new experiences
    Awake,
    /// Beginning to wind down
    Drowsy,
    /// Light consolidation (local replay)
    LightSleep,
    /// Deep consolidation (cross-agent transfer)
    DeepSleep,
    /// Waking up, integrating transfers
    Waking,
}

impl SwarmPhase {
    pub fn from_normalized_time(t: f32) -> Self {
        let t = t.rem_euclid(1.0);
        if t < 0.6 {
            SwarmPhase::Awake
        } else if t < 0.65 {
            SwarmPhase::Drowsy
        } else if t < 0.75 {
            SwarmPhase::LightSleep
        } else if t < 0.9 {
            SwarmPhase::DeepSleep
        } else {
            SwarmPhase::Waking
        }
    }

    pub fn can_process_new(&self) -> bool {
        matches!(self, SwarmPhase::Awake | SwarmPhase::Waking)
    }

    pub fn can_replay(&self) -> bool {
        matches!(self, SwarmPhase::LightSleep | SwarmPhase::DeepSleep)
    }

    pub fn can_transfer(&self) -> bool {
        matches!(self, SwarmPhase::DeepSleep)
    }
}

// ============================================================================
// Dreaming Agent
// ============================================================================

/// An agent that participates in collective dreaming
pub struct DreamingAgent {
    /// Agent ID
    pub id: u32,
    /// Recent experiences (working memory)
    pub working_memory: VecDeque<Experience>,
    /// Memory traces being consolidated
    pub consolidating: Vec<MemoryTrace>,
    /// Long-term consolidated memories
    pub long_term: Vec<MemoryTrace>,
    /// Current phase
    pub phase: SwarmPhase,
    /// Position in the sleep/wake cycle (0-1)
    pub cycle_phase: f32,
    /// Cycle duration in hours
    pub cycle_hours: f32,
    /// Timestamp
    pub timestamp: u64,
    /// Outgoing memory transfers
    pub outbox: Vec<Experience>,
    /// Incoming memory transfers
    pub inbox: Vec<Experience>,
    /// Statistics
    pub stats: DreamingStats,
}

#[derive(Clone, Default, Debug)]
pub struct DreamingStats {
    pub experiences_received: u64,
    pub replays_performed: u64,
    pub memories_consolidated: u64,
    pub memories_transferred: u64,
    pub memories_received_from_peers: u64,
}

impl DreamingAgent {
    pub fn new(id: u32, cycle_hours: f32) -> Self {
        Self {
            id,
            working_memory: VecDeque::new(),
            consolidating: Vec::new(),
            long_term: Vec::new(),
            phase: SwarmPhase::Awake,
            cycle_phase: (id as f32 * 0.1) % 1.0, // Stagger agents slightly
            cycle_hours,
            timestamp: 0,
            outbox: Vec::new(),
            inbox: Vec::new(),
            stats: DreamingStats::default(),
        }
    }

    /// Receive a new experience
    pub fn experience(&mut self, obs: Vec<u32>, action: &str, outcome: f32, surprise: f32) {
        if !self.phase.can_process_new() {
            return; // Reject during sleep
        }

        let exp = Experience {
            timestamp: self.timestamp,
            observation: obs,
            action: action.to_string(),
            outcome,
            surprise,
            source_agent: self.id,
        };

        self.working_memory.push_back(exp.clone());
        self.stats.experiences_received += 1;

        // Transfer surprising experiences to the consolidation queue immediately
        if surprise > 0.5 {
            self.consolidating.push(MemoryTrace::new(exp));
        }

        // Limit working memory size
        while self.working_memory.len() > 100 {
            let old = self.working_memory.pop_front().unwrap();
            // Move to consolidation if not already there (surprise > 0.5
            // was queued on arrival, so only pick up the 0.3-0.5 band here)
            if old.surprise > 0.3 && old.surprise <= 0.5 {
                self.consolidating.push(MemoryTrace::new(old));
            }
        }
    }

    /// Advance time and run consolidation
    pub fn tick(&mut self, dt_seconds: u64) {
        self.timestamp += dt_seconds;
        self.cycle_phase =
            (self.cycle_phase + dt_seconds as f32 / (self.cycle_hours * 3600.0)) % 1.0;
        self.phase = SwarmPhase::from_normalized_time(self.cycle_phase);

        // Process based on phase
        match self.phase {
            SwarmPhase::LightSleep => self.light_sleep_consolidation(),
            SwarmPhase::DeepSleep => self.deep_sleep_consolidation(),
            SwarmPhase::Waking => self.integrate_transfers(),
            _ => {}
        }

        // Promote fully consolidated memories
        self.prune_consolidating();
    }

    /// Light sleep: local replay of recent experiences
    fn light_sleep_consolidation(&mut self) {
        // Select experiences for replay by priority
        let mut to_replay: Vec<_> = self
            .consolidating
            .iter()
            .enumerate()
            .map(|(i, trace)| (i, trace.experience.replay_priority(self.timestamp, 8.0)))
            .collect();

        to_replay.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));

        // Replay the highest-priority experiences
        for (idx, _) in to_replay.into_iter().take(5) {
            self.consolidating[idx].replay();
            self.stats.replays_performed += 1;
        }
    }

    /// Deep sleep: cross-agent transfer of consolidated memories
    fn deep_sleep_consolidation(&mut self) {
        // Continue local replay
        self.light_sleep_consolidation();

        // Select memories for transfer (well-consolidated, not yet distributed)
        for trace in &mut self.consolidating {
            if trace.strength > 0.5 && !trace.distributed {
                self.outbox.push(trace.experience.clone());
                trace.distributed = true;
                self.stats.memories_transferred += 1;
            }
        }
    }

    /// Waking: integrate memories received from peers
    fn integrate_transfers(&mut self) {
        while let Some(exp) = self.inbox.pop() {
            // Check whether a similar experience is already being consolidated;
            // find the index first so no borrow is held while mutating.
            let existing = self
                .consolidating
                .iter()
                .position(|t| Self::experiences_similar(&t.experience, &exp));

            match existing {
                None => {
                    let mut trace = MemoryTrace::new(exp);
                    trace.validate(); // Peer validation
                    self.consolidating.push(trace);
                    self.stats.memories_received_from_peers += 1;
                }
                Some(i) => {
                    // Validate the existing similar memory instead of duplicating it
                    self.consolidating[i].validate();
                }
            }
        }
    }

    fn experiences_similar(a: &Experience, b: &Experience) -> bool {
        // Simple Jaccard similarity on observations
        let set_a: HashSet<_> = a.observation.iter().collect();
        let set_b: HashSet<_> = b.observation.iter().collect();
        let intersection = set_a.intersection(&set_b).count();
        let union = set_a.union(&set_b).count();
        if union == 0 {
            return true;
        }
        (intersection as f32 / union as f32) > 0.8
    }

    fn prune_consolidating(&mut self) {
        // Collect indices of consolidated memories to move to long-term storage
        let mut to_move = Vec::new();
        for (i, trace) in self.consolidating.iter().enumerate() {
            if trace.is_consolidated() {
                to_move.push(i);
            }
        }

        // Remove in reverse order so earlier indices stay valid
        for i in to_move.into_iter().rev() {
            let trace = self.consolidating.remove(i);
            self.long_term.push(trace);
            self.stats.memories_consolidated += 1;
        }

        // Limit long-term memory
        while self.long_term.len() > 500 {
            // Remove the weakest trace
            let weakest = self
                .long_term
                .iter()
                .enumerate()
                .min_by(|a, b| {
                    a.1.strength
                        .partial_cmp(&b.1.strength)
                        .unwrap_or(std::cmp::Ordering::Equal)
                })
                .map(|(i, _)| i);
            if let Some(idx) = weakest {
                self.long_term.remove(idx);
            } else {
                break;
            }
        }
    }

    /// Receive memories from a peer
    pub fn receive_from_peer(&mut self, experiences: Vec<Experience>) {
        self.inbox.extend(experiences);
    }
}

// ============================================================================
// Collective Dream Network
// ============================================================================

/// Coordinated swarm of dreaming agents
pub struct CollectiveDream {
    /// All agents in the swarm
    pub agents: Vec<DreamingAgent>,
    /// Current timestamp
    pub timestamp: u64,
    /// Synchronization coupling strength
    pub coupling: f32,
}

impl CollectiveDream {
    pub fn new(num_agents: usize, cycle_hours: f32) -> Self {
        let agents = (0..num_agents)
            .map(|i| DreamingAgent::new(i as u32, cycle_hours))
            .collect();

        Self {
            agents,
            timestamp: 0,
            coupling: 0.3,
        }
    }

    /// Advance time for all agents
    pub fn tick(&mut self, dt_seconds: u64) {
        self.timestamp += dt_seconds;

        // Advance each agent
        for agent in &mut self.agents {
            agent.tick(dt_seconds);
        }

        // Transfer memories between agents during deep sleep
        self.memory_transfer();

        // Synchronize phases (Kuramoto-style)
        self.synchronize_phases();
    }

    fn memory_transfer(&mut self) {
        // Collect outboxes
        let mut all_transfers: Vec<(u32, Vec<Experience>)> = Vec::new();
        for agent in &mut self.agents {
            if !agent.outbox.is_empty() {
                let transfers = std::mem::take(&mut agent.outbox);
                all_transfers.push((agent.id, transfers));
            }
        }

        // Distribute to other agents
        for (source_id, experiences) in all_transfers {
            for agent in &mut self.agents {
                if agent.id != source_id && agent.phase.can_transfer() {
                    agent.receive_from_peer(experiences.clone());
                }
            }
        }
    }

    fn synchronize_phases(&mut self) {
        // Compute the mean phase vector across the swarm
        let n = self.agents.len() as f32;
        let mean_sin: f32 = self
            .agents
            .iter()
            .map(|a| (a.cycle_phase * 2.0 * PI).sin())
            .sum::<f32>()
            / n;
        let mean_cos: f32 = self
            .agents
            .iter()
            .map(|a| (a.cycle_phase * 2.0 * PI).cos())
            .sum::<f32>()
            / n;

        // Each agent nudges its phase toward the swarm mean (Kuramoto coupling)
        for agent in &mut self.agents {
            let current = agent.cycle_phase * 2.0 * PI;
            let sin_diff = mean_sin * current.cos() - mean_cos * current.sin();
            let adjustment = self.coupling * sin_diff / (2.0 * PI);
            agent.cycle_phase = (agent.cycle_phase + adjustment).rem_euclid(1.0);
        }
    }

    /// Get the Kuramoto synchronization order parameter
    /// (0 = fully desynchronized, 1 = fully in phase)
    pub fn synchronization(&self) -> f32 {
        let n = self.agents.len() as f32;
        let sum_sin: f32 = self
            .agents
            .iter()
            .map(|a| (a.cycle_phase * 2.0 * PI).sin())
            .sum();
        let sum_cos: f32 = self
            .agents
            .iter()
            .map(|a| (a.cycle_phase * 2.0 * PI).cos())
            .sum();
        (sum_sin * sum_sin + sum_cos * sum_cos).sqrt() / n
    }

    /// Get phase distribution
    pub fn phase_distribution(&self) -> HashMap<SwarmPhase, usize> {
        let mut dist = HashMap::new();
        for agent in &self.agents {
            *dist.entry(agent.phase.clone()).or_insert(0) += 1;
        }
        dist
    }

    /// Route an experience to a specific agent in the swarm
    pub fn swarm_experience(
        &mut self,
        agent_id: usize,
        obs: Vec<u32>,
        action: &str,
        outcome: f32,
        surprise: f32,
    ) {
        if agent_id < self.agents.len() {
            self.agents[agent_id].experience(obs, action, outcome, surprise);
        }
    }

    /// Get total consolidated memories across the swarm
    pub fn total_consolidated(&self) -> usize {
        self.agents.iter().map(|a| a.long_term.len()).sum()
    }

    /// Get collective statistics
    pub fn collective_stats(&self) -> DreamingStats {
        let mut stats = DreamingStats::default();
        for agent in &self.agents {
            stats.experiences_received += agent.stats.experiences_received;
            stats.replays_performed += agent.stats.replays_performed;
            stats.memories_consolidated += agent.stats.memories_consolidated;
            stats.memories_transferred += agent.stats.memories_transferred;
            stats.memories_received_from_peers += agent.stats.memories_received_from_peers;
        }
        stats
    }
}
||||
// ============================================================================
|
||||
// Example Usage
|
||||
// ============================================================================
|
||||
|
||||
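The `synchronization()` method above computes the Kuramoto order parameter R = |Σ e^{iθ}| / N. A minimal standalone sketch of that formula (the helper names here are hypothetical, not part of the example's API), showing its two extremes: fully aligned phases give R = 1, evenly spread phases give R ≈ 0.

```rust
// Sketch: Kuramoto order parameter over phases expressed in [0, 1) turns,
// matching the cycle_phase convention used by the swarm above.
fn order_parameter(phases: &[f32]) -> f32 {
    use std::f32::consts::PI;
    let n = phases.len() as f32;
    let (sum_sin, sum_cos) = phases.iter().fold((0.0f32, 0.0f32), |(s, c), &p| {
        (s + (p * 2.0 * PI).sin(), c + (p * 2.0 * PI).cos())
    });
    (sum_sin * sum_sin + sum_cos * sum_cos).sqrt() / n
}

fn demo_order_parameter() {
    let aligned = [0.25f32; 8]; // all agents at the same phase
    let spread: Vec<f32> = (0..8).map(|i| i as f32 / 8.0).collect(); // evenly spread
    assert!((order_parameter(&aligned) - 1.0).abs() < 1e-4);
    assert!(order_parameter(&spread) < 0.05);
}
```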
fn main() {
    println!("=== Tier 4: Collective Dreaming ===\n");

    // Create swarm of 10 agents with 1-hour cycles (for demo)
    let mut swarm = CollectiveDream::new(10, 1.0);

    println!("Swarm initialized: {} agents", swarm.agents.len());
    println!("Initial synchronization: {:.2}", swarm.synchronization());

    // Simulate experiences during awake phase
    println!("\n=== Awake Phase: Gathering Experiences ===");
    for minute in 0..30 {
        // Generate experiences for a rotating set of agents
        for j in 0..5 {
            let agent_id = (minute * 3 + j) % 10;
            let obs: Vec<u32> = (0..50).map(|i| ((minute + i) * 7) as u32 % 10000).collect();
            let surprise = ((minute as f32 * 0.1).sin().abs() * 0.8) + 0.2;

            swarm.swarm_experience(
                agent_id,
                obs,
                &format!("action_{}", minute),
                ((minute as f32 * 0.05).cos() + 1.0) / 2.0,
                surprise,
            );
        }

        swarm.tick(60); // 1 minute

        if minute % 10 == 9 {
            let dist = swarm.phase_distribution();
            println!(" Minute {}: phases = {:?}", minute + 1, dist);
        }
    }

    // Continue through sleep cycle
    println!("\n=== Sleep Cycle: Consolidation ===");
    for minute in 30..60 {
        swarm.tick(60);

        if minute % 10 == 9 {
            let dist = swarm.phase_distribution();
            let stats = swarm.collective_stats();
            println!(
                " Minute {}: phases = {:?}, consolidated = {}, transferred = {}",
                minute + 1,
                dist,
                stats.memories_consolidated,
                stats.memories_transferred
            );
        }
    }

    // Let agents wake up and integrate
    println!("\n=== Waking Phase: Integration ===");
    for minute in 60..70 {
        swarm.tick(60);

        if minute % 5 == 4 {
            let dist = swarm.phase_distribution();
            let stats = swarm.collective_stats();
            println!(
                " Minute {}: phases = {:?}, peer memories = {}",
                minute + 1,
                dist,
                stats.memories_received_from_peers
            );
        }
    }

    // Final statistics
    println!("\n=== Final Statistics ===");
    let stats = swarm.collective_stats();
    println!("Total experiences: {}", stats.experiences_received);
    println!("Replays performed: {}", stats.replays_performed);
    println!("Memories consolidated: {}", stats.memories_consolidated);
    println!("Memories transferred: {}", stats.memories_transferred);
    println!(
        "Memories from peers: {}",
        stats.memories_received_from_peers
    );
    println!("Total long-term memories: {}", swarm.total_consolidated());
    println!("Final synchronization: {:.2}", swarm.synchronization());

    // Per-agent summary
    println!("\n=== Per-Agent Memory ===");
    for agent in &swarm.agents {
        println!(
            " Agent {}: {} LT memories, {} consolidating, phase {:?}",
            agent.id,
            agent.long_term.len(),
            agent.consolidating.len(),
            agent.phase
        );
    }

    println!("\n=== Key Benefits ===");
    println!("- Synchronized rest phases across swarm");
    println!("- Hippocampal replay during sleep consolidates learning");
    println!("- Cross-agent memory transfer shares knowledge");
    println!("- No central coordinator needed");
    println!("- Resilient to individual agent loss");
    println!("\nThis is how biological systems scale collective learning.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_phase_transitions() {
        let mut agent = DreamingAgent::new(0, 1.0); // 1-hour cycle

        // Start awake
        assert!(matches!(agent.phase, SwarmPhase::Awake));

        // Advance to sleep
        agent.tick(2400); // 40 minutes
        // Should be in some sleep phase
        assert!(!matches!(agent.phase, SwarmPhase::Awake));
    }

    #[test]
    fn test_consolidation() {
        let mut agent = DreamingAgent::new(0, 0.5); // Fast cycle

        // Add surprising experience
        agent.experience(vec![1, 2, 3], "test", 1.0, 0.9);
        assert!(!agent.consolidating.is_empty());

        // Advance through sleep
        for _ in 0..60 {
            agent.tick(60);
        }

        // Some replay should have happened
        // Note: memories may not consolidate in one cycle; that's OK
        assert!(agent.stats.replays_performed > 0);
    }

    #[test]
    fn test_memory_transfer() {
        let mut swarm = CollectiveDream::new(3, 0.25); // Fast cycles

        // Add experience to agent 0
        swarm.agents[0].experience(vec![1, 2, 3], "test", 1.0, 0.9);

        // Run through a complete cycle
        for _ in 0..90 {
            swarm.tick(60);
        }

        // Replay, the precursor to cross-agent transfer, should have happened
        let stats = swarm.collective_stats();
        assert!(stats.replays_performed > 0);
    }

    #[test]
    fn test_synchronization() {
        let mut swarm = CollectiveDream::new(5, 1.0);

        // Initially may not be synchronized
        let initial = swarm.synchronization();

        // Run for a while
        for _ in 0..120 {
            swarm.tick(60);
        }

        // Synchronization should at least be maintained
        let final_sync = swarm.synchronization();
        assert!(final_sync >= initial * 0.9);
    }
}
585
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t4_compositional_hdc.rs
vendored
Normal file
@@ -0,0 +1,585 @@
//! # Tier 4: Compositional Hyperdimensional Computing
//!
//! SOTA application: Zero-shot concept composition via HDC binding.
//!
//! ## The Problem
//! Traditional embeddings:
//! - Fixed vocabulary at training time
//! - Cannot represent "red dog" if never seen together
//! - Composition requires retraining
//! - No algebraic structure for reasoning
//!
//! ## What Changes
//! - HDC: concepts are binary hypervectors (10,000 bits)
//! - XOR binding: combine concepts preserving similarity
//! - Bundling: create superpositions (sets of concepts)
//! - Algebra: unbind to recover components
//!
//! ## Why This Matters
//! - Zero-shot: represent any combination of known concepts
//! - Sub-100ns operations: composition is just XOR
//! - Distributed: no central vocabulary server
//! - Interpretable: can unbind to see what's in a representation
//!
//! This is what embeddings should have been: compositional by construction.

use std::collections::HashMap;

// ============================================================================
// Hypervector Operations
// ============================================================================

/// Number of bits in a hypervector
const DIM: usize = 10_000;
/// Number of u64 words
const WORDS: usize = (DIM + 63) / 64;

/// Binary hypervector with SIMD-friendly operations
#[derive(Clone)]
pub struct Hypervector {
    bits: [u64; WORDS],
}

impl Hypervector {
    /// Create zero vector
    pub fn zeros() -> Self {
        Self { bits: [0; WORDS] }
    }

    /// Create random vector (approximately 50% ones)
    pub fn random(seed: u64) -> Self {
        let mut bits = [0u64; WORDS];
        // Xorshift64 has a fixed point at zero, so remap a zero seed
        let mut state = if seed == 0 { 0x9E37_79B9_7F4A_7C15 } else { seed };

        for word in &mut bits {
            // Xorshift64
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            *word = state;
        }

        // Clear the unused high bits of the last word so Hamming distances
        // only count the DIM logical bits
        if DIM % 64 != 0 {
            bits[WORDS - 1] &= (1u64 << (DIM % 64)) - 1;
        }

        Self { bits }
    }

    /// Create from seed string (deterministic)
    pub fn from_seed(seed: &str) -> Self {
        let hash = seed
            .bytes()
            .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64));
        Self::random(hash)
    }

    /// XOR binding: A ⊗ B
    /// Key property: (A ⊗ B) is dissimilar to both A and B,
    /// but (A ⊗ B) ⊗ B ≈ A (unbinding)
    pub fn bind(&self, other: &Self) -> Self {
        let mut result = Self::zeros();
        for i in 0..WORDS {
            result.bits[i] = self.bits[i] ^ other.bits[i];
        }
        result
    }

    /// Unbind: given A ⊗ B and B, recover A.
    /// Since XOR is its own inverse: A ⊗ B ⊗ B = A
    pub fn unbind(&self, key: &Self) -> Self {
        self.bind(key) // Same operation as bind
    }

    /// Bundle (superposition): majority vote.
    /// The result has a 1 wherever most inputs have a 1.
    pub fn bundle(vectors: &[Self]) -> Self {
        if vectors.is_empty() {
            return Self::zeros();
        }

        if vectors.len() == 1 {
            return vectors[0].clone();
        }

        let threshold = vectors.len() / 2;
        let mut result = Self::zeros();

        for bit_idx in 0..DIM {
            let word_idx = bit_idx / 64;
            let bit_pos = bit_idx % 64;

            let count: usize = vectors
                .iter()
                .filter(|v| (v.bits[word_idx] >> bit_pos) & 1 == 1)
                .count();

            if count > threshold {
                result.bits[word_idx] |= 1 << bit_pos;
            }
        }

        result
    }

    /// Permute: cyclically shift bits (creates sequence-sensitive binding)
    pub fn permute(&self, shift: usize) -> Self {
        let shift = shift % DIM;
        if shift == 0 {
            return self.clone();
        }

        let mut result = Self::zeros();

        for bit_idx in 0..DIM {
            let new_idx = (bit_idx + shift) % DIM;
            let old_word = bit_idx / 64;
            let old_pos = bit_idx % 64;
            let new_word = new_idx / 64;
            let new_pos = new_idx % 64;

            if (self.bits[old_word] >> old_pos) & 1 == 1 {
                result.bits[new_word] |= 1 << new_pos;
            }
        }

        result
    }

    /// Hamming distance (number of differing bits)
    pub fn hamming_distance(&self, other: &Self) -> u32 {
        let mut dist = 0u32;
        for i in 0..WORDS {
            dist += (self.bits[i] ^ other.bits[i]).count_ones();
        }
        dist
    }

    /// Cosine-like similarity: 1 - 2 * (distance / DIM).
    /// Identical vectors score 1; independent random vectors score near 0.
    pub fn similarity(&self, other: &Self) -> f32 {
        let dist = self.hamming_distance(other);
        1.0 - 2.0 * (dist as f32 / DIM as f32)
    }

    /// Count ones
    pub fn popcount(&self) -> u32 {
        self.bits.iter().map(|w| w.count_ones()).sum()
    }
}

impl std::fmt::Debug for Hypervector {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "HV(popcount={})", self.popcount())
    }
}

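The self-inverse property of XOR binding documented above can be checked numerically without the full 10,000-bit type. A minimal sketch on raw `u64` words (a stand-in for one word of `Hypervector`; the function name is hypothetical):

```rust
// Sketch: the XOR-binding identities the Hypervector impl relies on,
// demonstrated on a single 64-bit word.
fn xor_binding_demo() {
    let a: u64 = 0x0123_4567_89AB_CDEF;
    let b: u64 = 0xFEDC_BA98_7654_3210;

    // Binding: c = a XOR b
    let c = a ^ b;

    // Unbinding is exact: (a XOR b) XOR b == a
    assert_eq!(c ^ b, a);

    // The bound value differs from `a` in exactly b.count_ones() bits,
    // i.e. roughly half the word for a random-looking key
    let d_a = (c ^ a).count_ones();
    assert!(d_a > 16 && d_a < 48);
}
```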
// ============================================================================
// Concept Memory
// ============================================================================

/// Memory of atomic concepts
pub struct ConceptMemory {
    /// Named concepts
    concepts: HashMap<String, Hypervector>,
    /// Role vectors for binding positions
    roles: HashMap<String, Hypervector>,
}

impl ConceptMemory {
    pub fn new() -> Self {
        let mut mem = Self {
            concepts: HashMap::new(),
            roles: HashMap::new(),
        };

        // Create role vectors for structured binding
        for role in [
            "subject",
            "predicate",
            "object",
            "modifier",
            "position_1",
            "position_2",
            "position_3",
        ] {
            mem.roles.insert(
                role.to_string(),
                Hypervector::from_seed(&format!("role:{}", role)),
            );
        }

        mem
    }

    /// Add a new atomic concept
    pub fn learn(&mut self, name: &str) -> Hypervector {
        if let Some(v) = self.concepts.get(name) {
            return v.clone();
        }

        let v = Hypervector::from_seed(&format!("concept:{}", name));
        self.concepts.insert(name.to_string(), v.clone());
        v
    }

    /// Get a concept (learning it if new)
    pub fn get(&mut self, name: &str) -> Hypervector {
        self.learn(name)
    }

    /// Get a role vector
    pub fn role(&self, name: &str) -> Option<&Hypervector> {
        self.roles.get(name)
    }

    /// Bind concept to role
    pub fn bind_role(&self, concept: &Hypervector, role: &str) -> Option<Hypervector> {
        self.roles.get(role).map(|r| concept.bind(r))
    }

    /// Unbind role to recover concept
    pub fn unbind_role(&self, bound: &Hypervector, role: &str) -> Option<Hypervector> {
        self.roles.get(role).map(|r| bound.unbind(r))
    }

    /// Query: rank all known concepts by similarity, best match first
    pub fn query(&self, hv: &Hypervector) -> Vec<(String, f32)> {
        let mut results: Vec<_> = self
            .concepts
            .iter()
            .map(|(name, v)| (name.clone(), hv.similarity(v)))
            .collect();

        results.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
        results
    }
}

// ============================================================================
// Compositional Structures
// ============================================================================

/// Compose "modifier concept" pairs (e.g., "red" + "dog")
pub fn compose_modifier(memory: &mut ConceptMemory, modifier: &str, concept: &str) -> Hypervector {
    let m = memory.get(modifier);
    let c = memory.get(concept);

    // Bind modifier to modifier role, then bundle with concept
    let m_bound = m.bind(memory.role("modifier").unwrap());
    let c_bound = c.bind(memory.role("subject").unwrap());

    Hypervector::bundle(&[m_bound, c_bound])
}

/// Compose a sequence (e.g., "A then B then C")
pub fn compose_sequence(memory: &mut ConceptMemory, items: &[&str]) -> Hypervector {
    let mut parts = Vec::new();

    for (i, item) in items.iter().enumerate() {
        let v = memory.get(item);
        // Permute by position to create an order-sensitive representation
        parts.push(v.permute(i * 10));
    }

    Hypervector::bundle(&parts)
}

/// Compose a relation triple (subject, predicate, object)
pub fn compose_triple(
    memory: &mut ConceptMemory,
    subject: &str,
    predicate: &str,
    object: &str,
) -> Hypervector {
    let s = memory.get(subject).bind(memory.role("subject").unwrap());
    let p = memory
        .get(predicate)
        .bind(memory.role("predicate").unwrap());
    let o = memory.get(object).bind(memory.role("object").unwrap());

    Hypervector::bundle(&[s, p, o])
}

/// Query a composed structure for a specific role
pub fn query_role(memory: &ConceptMemory, composed: &Hypervector, role: &str) -> Hypervector {
    composed.unbind(memory.role(role).unwrap())
}

// ============================================================================
// Analogical Reasoning
// ============================================================================

/// Solve analogy: A is to B as C is to ?
/// Using: D = C ⊗ (B ⊗ A⁻¹), where A⁻¹ = A (XOR is self-inverse)
pub fn analogy(memory: &mut ConceptMemory, a: &str, b: &str, c: &str) -> Hypervector {
    let a_vec = memory.get(a);
    let b_vec = memory.get(b);
    let c_vec = memory.get(c);

    // Relationship vector: B ⊗ A (since XOR is self-inverse)
    let relationship = b_vec.bind(&a_vec);

    // Apply the relationship to C
    c_vec.bind(&relationship)
}

// ============================================================================
// Example Usage
// ============================================================================

fn main() {
    println!("=== Tier 4: Compositional Hyperdimensional Computing ===\n");

    let mut memory = ConceptMemory::new();

    // Learn atomic concepts
    println!("Learning atomic concepts...");
    let concepts = [
        "dog", "cat", "bird", "red", "blue", "big", "small", "run", "fly", "swim", "chase", "eat",
        "king", "queen", "man", "woman", "prince", "princess",
    ];

    for concept in &concepts {
        memory.learn(concept);
    }
    println!(" Learned {} concepts\n", concepts.len());

    // Demonstrate composition
    println!("=== Modifier + Concept Composition ===");

    let red_dog = compose_modifier(&mut memory, "red", "dog");
    let blue_dog = compose_modifier(&mut memory, "blue", "dog");
    let red_cat = compose_modifier(&mut memory, "red", "cat");

    println!(
        "'red dog' vs 'blue dog' similarity: {:.3}",
        red_dog.similarity(&blue_dog)
    );
    println!(
        "'red dog' vs 'red cat' similarity: {:.3}",
        red_dog.similarity(&red_cat)
    );
    println!(
        "'blue dog' vs 'red cat' similarity: {:.3}",
        blue_dog.similarity(&red_cat)
    );

    // Query composed structure
    println!("\nQuerying 'red dog' for modifier role:");
    let recovered = query_role(&memory, &red_dog, "modifier");
    let matches = memory.query(&recovered);
    println!(" Top matches: {:?}", &matches[..3.min(matches.len())]);

    // Sequence composition
    println!("\n=== Sequence Composition ===");

    let seq1 = compose_sequence(&mut memory, &["run", "jump", "fly"]);
    let seq2 = compose_sequence(&mut memory, &["run", "jump", "swim"]);
    let seq3 = compose_sequence(&mut memory, &["fly", "jump", "run"]);

    println!(
        "'run→jump→fly' vs 'run→jump→swim': {:.3}",
        seq1.similarity(&seq2)
    );
    println!(
        "'run→jump→fly' vs 'fly→jump→run': {:.3}",
        seq1.similarity(&seq3)
    );
    println!(" (Order matters: same elements, different sequence = different representation)");

    // Triple composition
    println!("\n=== Relation Triple Composition ===");

    let triple1 = compose_triple(&mut memory, "dog", "chase", "cat");
    let triple2 = compose_triple(&mut memory, "cat", "chase", "bird");
    let triple3 = compose_triple(&mut memory, "dog", "eat", "cat");

    println!(
        "'dog chase cat' vs 'cat chase bird': {:.3}",
        triple1.similarity(&triple2)
    );
    println!(
        "'dog chase cat' vs 'dog eat cat': {:.3}",
        triple1.similarity(&triple3)
    );

    // Query subject from triple
    println!("\nQuerying 'dog chase cat' for subject:");
    let subject_query = query_role(&memory, &triple1, "subject");
    let subject_matches = memory.query(&subject_query);
    println!(
        " Top matches: {:?}",
        &subject_matches[..3.min(subject_matches.len())]
    );

    // Analogical reasoning
    println!("\n=== Analogical Reasoning ===");
    println!("Solving: 'king' is to 'queen' as 'man' is to ?");

    let answer = analogy(&mut memory, "king", "queen", "man");
    let analogy_matches = memory.query(&answer);
    println!(
        " Top matches: {:?}",
        &analogy_matches[..5.min(analogy_matches.len())]
    );
    println!(" Expected: 'woman' should be near the top");

    // Zero-shot composition
    println!("\n=== Zero-Shot Composition ===");
    println!("Composing 'big blue cat' (never seen together):");

    // Multi-modifier composition
    let big = memory.get("big").bind(memory.role("modifier").unwrap());
    let blue = memory
        .get("blue")
        .bind(memory.role("modifier").unwrap())
        .permute(5);
    let cat = memory.get("cat").bind(memory.role("subject").unwrap());
    let big_blue_cat = Hypervector::bundle(&[big, blue, cat]);

    // Compare to similar compositions
    let small_red_dog = {
        let small = memory.get("small").bind(memory.role("modifier").unwrap());
        let red = memory
            .get("red")
            .bind(memory.role("modifier").unwrap())
            .permute(5);
        let dog = memory.get("dog").bind(memory.role("subject").unwrap());
        Hypervector::bundle(&[small, red, dog])
    };

    let big_blue_dog = {
        let big = memory.get("big").bind(memory.role("modifier").unwrap());
        let blue = memory
            .get("blue")
            .bind(memory.role("modifier").unwrap())
            .permute(5);
        let dog = memory.get("dog").bind(memory.role("subject").unwrap());
        Hypervector::bundle(&[big, blue, dog])
    };

    println!(
        "'big blue cat' vs 'small red dog': {:.3}",
        big_blue_cat.similarity(&small_red_dog)
    );
    println!(
        "'big blue cat' vs 'big blue dog': {:.3}",
        big_blue_cat.similarity(&big_blue_dog)
    );
    println!(" (Sharing modifiers increases similarity)");

    // Performance test
    println!("\n=== Performance ===");
    let iterations = 10_000;

    let v1 = Hypervector::random(42);
    let v2 = Hypervector::random(123);

    // Start the timers after setup so vector construction isn't counted
    let start = std::time::Instant::now();
    for _ in 0..iterations {
        let _ = v1.bind(&v2);
    }
    let bind_time = start.elapsed();

    let start = std::time::Instant::now();
    for _ in 0..iterations {
        let _ = v1.similarity(&v2);
    }
    let sim_time = start.elapsed();

    println!(
        "Bind (XOR) time: {:.1}ns per op",
        bind_time.as_nanos() as f64 / iterations as f64
    );
    println!(
        "Similarity time: {:.1}ns per op",
        sim_time.as_nanos() as f64 / iterations as f64
    );

    println!("\n=== Key Benefits ===");
    println!("- Zero-shot: compose any combination of known concepts");
    println!("- Sub-100ns: composition is just XOR operations");
    println!("- Algebraic: unbind to recover components");
    println!("- Distributed: no central vocabulary server");
    println!("- Interpretable: query reveals structure");
    println!("\nThis is what embeddings should have been: compositional by construction.");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_bind_unbind() {
        let a = Hypervector::random(42);
        let b = Hypervector::random(123);

        let bound = a.bind(&b);
        let recovered = bound.unbind(&b);

        // XOR unbinding is exact, so recovery should be (near-)perfect
        assert!(recovered.similarity(&a) > 0.95);
    }

    #[test]
    fn test_binding_dissimilarity() {
        let a = Hypervector::random(42);
        let b = Hypervector::random(123);

        let bound = a.bind(&b);

        // Bound vector should be dissimilar to both components
        assert!(bound.similarity(&a).abs() < 0.2);
        assert!(bound.similarity(&b).abs() < 0.2);
    }

    #[test]
    fn test_bundle_similarity() {
        let a = Hypervector::random(42);
        let b = Hypervector::random(123);
        let c = Hypervector::random(456);

        let bundle_ab = Hypervector::bundle(&[a.clone(), b.clone()]);
        let bundle_ac = Hypervector::bundle(&[a.clone(), c.clone()]);

        // Bundles with a shared component should be somewhat similar
        let sim = bundle_ab.similarity(&bundle_ac);
        assert!(sim > 0.2); // Some similarity due to shared A
    }

    #[test]
    fn test_composition() {
        let mut memory = ConceptMemory::new();

        let red_dog = compose_modifier(&mut memory, "red", "dog");
        let red_cat = compose_modifier(&mut memory, "red", "cat");
        let blue_dog = compose_modifier(&mut memory, "blue", "dog");

        // Compositions sharing either the modifier or the noun should both
        // retain measurable similarity from the shared component
        let rd_rc = red_dog.similarity(&red_cat);
        let rd_bd = red_dog.similarity(&blue_dog);

        assert!(rd_rc.abs() > 0.1);
        assert!(rd_bd.abs() > 0.1);
    }

    #[test]
    fn test_sequence_order() {
        let mut memory = ConceptMemory::new();

        let seq1 = compose_sequence(&mut memory, &["a", "b", "c"]);
        let seq2 = compose_sequence(&mut memory, &["c", "b", "a"]);

        // Different order should produce different representations
        assert!(seq1.similarity(&seq2) < 0.5);
    }
}
608
vendor/ruvector/crates/ruvector-nervous-system/examples/tiers/t4_neuromorphic_rag.rs
vendored
Normal file
@@ -0,0 +1,608 @@
//! # Tier 4: Neuromorphic Retrieval-Augmented Generation
//!
//! SOTA application: Sparse, coherence-gated retrieval for LLM memory.
//!
//! ## The Problem
//! Traditional RAG:
//! - Dense embeddings: O(n) comparisons for n documents
//! - No temporal awareness: "What did I say 5 minutes ago?" is hard
//! - Retrieval is always-on: wastes compute on easy queries
//!
//! ## What Changes
//! - Sparse HDC encoding: 2-5% active dimensions → 20x faster similarity
//! - Circadian gating: retrieve only when coherence drops (uncertainty)
//! - Pattern separation: similar memories don't collide
//! - Temporal decay: recent > distant, biologically realistic
//!
//! ## Why This Matters
//! - 100x fewer retrievals for confident queries
//! - Sub-millisecond retrieval for million-document corpora
//! - Native "forgetting" prevents memory bloat
//!
//! This is what RAG should have been.

use std::collections::HashMap;
use std::time::Instant;

// ============================================================================
// Neuromorphic Memory Entry
// ============================================================================

/// A memory entry with sparse encoding and temporal metadata
#[derive(Clone, Debug)]
pub struct MemoryEntry {
    /// Unique identifier
    pub id: u64,
    /// Original content (for retrieval)
    pub content: String,
    /// Sparse HDC encoding (indices of active dimensions)
    pub sparse_code: Vec<u32>,
    /// Timestamp of storage (seconds)
    pub timestamp: u64,
    /// Access count (for importance weighting)
    pub access_count: u32,
    /// Eligibility trace (decays over time, spikes on access)
    pub eligibility: f32,
    /// Source context (conversation, document, etc.)
    pub source: String,
}

impl MemoryEntry {
    /// Compute similarity to query (sparse Jaccard)
    pub fn similarity(&self, query_code: &[u32]) -> f32 {
        if self.sparse_code.is_empty() || query_code.is_empty() {
            return 0.0;
        }

        let set_a: std::collections::HashSet<_> = self.sparse_code.iter().collect();
        let set_b: std::collections::HashSet<_> = query_code.iter().collect();

        let intersection = set_a.intersection(&set_b).count();
        let union = set_a.union(&set_b).count();

        if union == 0 {
            0.0
        } else {
            intersection as f32 / union as f32
        }
    }

    /// Temporal weight: recent memories are more accessible
    pub fn temporal_weight(&self, current_time: u64, tau_hours: f32) -> f32 {
        // saturating_sub guards against u64 underflow if the timestamp
        // is ahead of current_time (e.g., clock skew)
        let age_hours = current_time.saturating_sub(self.timestamp) as f32 / 3600.0;
        (-age_hours / tau_hours).exp()
    }

    /// Combined retrieval score
    pub fn retrieval_score(&self, query_code: &[u32], current_time: u64) -> f32 {
        let sim = self.similarity(query_code);
        let temporal = self.temporal_weight(current_time, 24.0); // 24-hour decay
        let importance = (self.access_count as f32).ln_1p() / 10.0; // Log importance

        // Weighted combination with eligibility boost
        (sim * 0.6 + temporal * 0.2 + importance * 0.1 + self.eligibility * 0.1).clamp(0.0, 1.0)
    }
}

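The sparse Jaccard similarity used by `MemoryEntry::similarity` can be worked through on two tiny hand-made codes (the function name below is hypothetical, for illustration only):

```rust
// Sketch: Jaccard similarity = |A ∩ B| / |A ∪ B| over sparse index sets,
// the same formula MemoryEntry::similarity applies to its sparse_code.
fn jaccard_demo() {
    use std::collections::HashSet;
    let a: HashSet<u32> = [1, 2, 3, 4].into_iter().collect();
    let b: HashSet<u32> = [3, 4, 5, 6].into_iter().collect();

    let intersection = a.intersection(&b).count() as f32; // {3, 4} -> 2
    let union = a.union(&b).count() as f32;               // {1..=6} -> 6
    let jaccard = intersection / union;

    assert!((jaccard - 2.0 / 6.0).abs() < 1e-6);
}
```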
// ============================================================================
// Sparse Encoder (HDC-inspired)
// ============================================================================

/// Encodes text into sparse binary codes using random projection
pub struct SparseEncoder {
    /// Dimensionality of the hypervector
    dim: usize,
    /// Sparsity level (fraction of active dimensions)
    sparsity: f32,
    /// Learned token embeddings (sparse)
    token_codes: HashMap<String, Vec<u32>>,
    /// Random seed for deterministic encoding
    seed: u64,
}

impl SparseEncoder {
    pub fn new(dim: usize, sparsity: f32) -> Self {
        Self {
            dim,
            sparsity: sparsity.clamp(0.01, 0.1), // 1-10% sparsity
            token_codes: HashMap::new(),
            seed: 42,
        }
    }

    /// Encode text to a sparse code (indices of active dimensions)
    pub fn encode(&mut self, text: &str) -> Vec<u32> {
        // Tokenize (simple whitespace split)
        let tokens: Vec<&str> = text.split_whitespace().collect();

        if tokens.is_empty() {
            return Vec::new();
        }

        // Accumulate activation counts from each token's code
        let mut counts = vec![0u32; self.dim];
        for token in &tokens {
            let token_code = self.get_or_create_token_code(token);
            for &idx in &token_code {
                counts[idx as usize] += 1;
            }
        }

        // Bundle: take top-k by count (maintains sparsity)
        let k = ((self.dim as f32) * self.sparsity) as usize;
        let mut indexed: Vec<(usize, u32)> = counts.into_iter().enumerate().collect();
        indexed.sort_by(|a, b| b.1.cmp(&a.1));

        indexed
            .into_iter()
            .take(k)
            .filter(|(_, count)| *count > 0)
            .map(|(idx, _)| idx as u32)
            .collect()
    }

    fn get_or_create_token_code(&mut self, token: &str) -> Vec<u32> {
        if let Some(code) = self.token_codes.get(token) {
            return code.clone();
        }

        // Generate a deterministic random code for the token
        let code = self.random_sparse_code(token);
        self.token_codes.insert(token.to_string(), code.clone());
        code
    }

    fn random_sparse_code(&self, token: &str) -> Vec<u32> {
        // Hash-based deterministic random
        let hash = token.bytes().fold(self.seed, |acc, b| {
            acc.wrapping_mul(31).wrapping_add(b as u64)
        });

        let k = ((self.dim as f32) * self.sparsity) as usize;
        let mut indices = Vec::with_capacity(k);
        let mut h = hash;

        // Draw until we have k distinct indices (a fixed-count loop could
        // return fewer than k whenever the generator repeats an index)
        while indices.len() < k {
            h = h.wrapping_mul(6364136223846793005).wrapping_add(1);
            let idx = (h % self.dim as u64) as u32;
            if !indices.contains(&idx) {
                indices.push(idx);
            }
        }

        indices.sort();
        indices
    }
}

// ============================================================================
// Coherence Monitor (triggers retrieval only when uncertain)
// ============================================================================

/// Monitors coherence and decides when retrieval is needed
pub struct CoherenceMonitor {
    /// Current coherence level (0-1)
    coherence: f32,
    /// Threshold for triggering retrieval
    retrieval_threshold: f32,
    /// History of coherence values
    history: Vec<f32>,
    /// Hysteresis: require N consecutive low readings
    low_count: u32,
    required_low: u32,
}

impl CoherenceMonitor {
    pub fn new(threshold: f32) -> Self {
        Self {
            coherence: 1.0,
            retrieval_threshold: threshold,
            history: Vec::new(),
            low_count: 0,
            required_low: 3, // Require 3 consecutive low readings
        }
    }

    /// Update coherence from an external signal
    pub fn update(&mut self, coherence: f32) {
        self.coherence = coherence;
        self.history.push(coherence);
        if self.history.len() > 100 {
            self.history.remove(0);
        }

        if coherence < self.retrieval_threshold {
            self.low_count += 1;
        } else {
            self.low_count = 0;
        }
    }

    /// Should we retrieve from memory?
    pub fn should_retrieve(&self) -> bool {
        self.low_count >= self.required_low
    }

    /// Get retrieval urgency (for prioritization)
    pub fn retrieval_urgency(&self) -> f32 {
        if self.coherence >= self.retrieval_threshold {
            0.0
        } else {
            (self.retrieval_threshold - self.coherence) / self.retrieval_threshold
        }
    }
}
// ============================================================================
// Neuromorphic Memory Store
// ============================================================================

/// Sparse, coherence-gated memory store
pub struct NeuromorphicMemory {
    /// All stored memories
    memories: Vec<MemoryEntry>,
    /// Encoder for queries
    encoder: SparseEncoder,
    /// Coherence monitor
    coherence: CoherenceMonitor,
    /// Current timestamp
    timestamp: u64,
    /// Next memory ID
    next_id: u64,
    /// Retrieval statistics
    pub stats: RetrievalStats,
}

#[derive(Default, Clone, Debug)]
pub struct RetrievalStats {
    pub queries_received: u64,
    pub retrievals_performed: u64,
    pub retrievals_skipped: u64,
    pub avg_retrieval_time_us: f64,
    pub cache_hits: u64,
}

impl RetrievalStats {
    pub fn skip_ratio(&self) -> f64 {
        if self.queries_received == 0 {
            return 0.0;
        }
        self.retrievals_skipped as f64 / self.queries_received as f64
    }
}
impl NeuromorphicMemory {
    pub fn new(coherence_threshold: f32) -> Self {
        Self {
            memories: Vec::new(),
            encoder: SparseEncoder::new(10000, 0.02), // 10k dims, 2% sparse
            coherence: CoherenceMonitor::new(coherence_threshold),
            timestamp: 0,
            next_id: 0,
            stats: RetrievalStats::default(),
        }
    }

    /// Store a new memory
    pub fn store(&mut self, content: &str, source: &str) -> u64 {
        let id = self.next_id;
        self.next_id += 1;

        let sparse_code = self.encoder.encode(content);

        self.memories.push(MemoryEntry {
            id,
            content: content.to_string(),
            sparse_code,
            timestamp: self.timestamp,
            access_count: 0,
            eligibility: 1.0,
            source: source.to_string(),
        });

        id
    }

    /// Advance time and decay eligibilities
    pub fn tick(&mut self, dt_seconds: u64) {
        self.timestamp += dt_seconds;

        // Decay eligibility traces
        let decay = (-(dt_seconds as f32) / 3600.0).exp(); // 1-hour time constant
        for memory in &mut self.memories {
            memory.eligibility *= decay;
        }
    }

    /// Update coherence from an external signal
    pub fn update_coherence(&mut self, coherence: f32) {
        self.coherence.update(coherence);
    }

    /// Query with coherence gating.
    ///
    /// Returns `None` if coherence is high (no retrieval needed).
    /// Returns `Some(results)` if retrieval was performed.
    pub fn query(&mut self, query: &str, top_k: usize) -> Option<Vec<(u64, String, f32)>> {
        self.stats.queries_received += 1;

        // Check whether retrieval is needed
        if !self.coherence.should_retrieve() {
            self.stats.retrievals_skipped += 1;
            return None;
        }

        // Perform retrieval
        let start = Instant::now();
        let results = self.retrieve(query, top_k);
        let elapsed = start.elapsed().as_micros() as f64;

        self.stats.retrievals_performed += 1;
        self.stats.avg_retrieval_time_us = (self.stats.avg_retrieval_time_us
            * (self.stats.retrievals_performed - 1) as f64
            + elapsed)
            / self.stats.retrievals_performed as f64;

        Some(results)
    }

    /// Force retrieval (bypasses coherence gating)
    pub fn retrieve(&mut self, query: &str, top_k: usize) -> Vec<(u64, String, f32)> {
        let query_code = self.encoder.encode(query);

        // Score all memories
        let mut scored: Vec<(usize, f32)> = self
            .memories
            .iter()
            .enumerate()
            .map(|(i, m)| (i, m.retrieval_score(&query_code, self.timestamp)))
            .collect();

        // Sort by score, descending
        scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));

        // Take the top-k and update access counts
        let results: Vec<_> = scored
            .into_iter()
            .take(top_k)
            .filter(|(_, score)| *score > 0.1) // Minimum score threshold
            .map(|(i, score)| {
                self.memories[i].access_count += 1;
                self.memories[i].eligibility = 1.0; // Spike on access
                (self.memories[i].id, self.memories[i].content.clone(), score)
            })
            .collect();

        results
    }

    /// Get the memory count
    pub fn len(&self) -> usize {
        self.memories.len()
    }

    /// Get the current coherence
    pub fn current_coherence(&self) -> f32 {
        self.coherence.coherence
    }
}
// ============================================================================
// RAG Pipeline with Neuromorphic Memory
// ============================================================================

/// Complete RAG pipeline with coherence-gated retrieval
pub struct NeuromorphicRAG {
    /// Memory store
    pub memory: NeuromorphicMemory,
    /// Context window (recent exchanges)
    pub context: Vec<String>,
    /// Max context size
    pub max_context: usize,
}

impl NeuromorphicRAG {
    pub fn new() -> Self {
        Self {
            memory: NeuromorphicMemory::new(0.7), // Retrieve when coherence < 0.7
            context: Vec::new(),
            max_context: 10,
        }
    }

    /// Process a query and return the augmented context
    pub fn process(&mut self, query: &str, confidence: f32) -> RAGResult {
        // Update coherence based on confidence
        self.memory.update_coherence(confidence);

        // Add to context
        self.context.push(format!("Q: {}", query));
        if self.context.len() > self.max_context {
            // Move to long-term memory before evicting
            let evicted = self.context.remove(0);
            self.memory.store(&evicted, "context");
        }

        // Try coherence-gated retrieval
        let retrieved = self.memory.query(query, 3);

        // Build the result
        RAGResult {
            query: query.to_string(),
            retrieved_memories: retrieved.clone().unwrap_or_default(),
            retrieval_performed: retrieved.is_some(),
            coherence: self.memory.current_coherence(),
            context_size: self.context.len(),
        }
    }

    /// Store an answer for future retrieval
    pub fn store_answer(&mut self, answer: &str) {
        self.context.push(format!("A: {}", answer));
        if self.context.len() > self.max_context {
            let evicted = self.context.remove(0);
            self.memory.store(&evicted, "context");
        }
    }

    /// Advance time
    pub fn tick(&mut self, dt_seconds: u64) {
        self.memory.tick(dt_seconds);
    }
}

#[derive(Debug)]
pub struct RAGResult {
    pub query: String,
    pub retrieved_memories: Vec<(u64, String, f32)>,
    pub retrieval_performed: bool,
    pub coherence: f32,
    pub context_size: usize,
}
// ============================================================================
// Example Usage
// ============================================================================

fn main() {
    println!("=== Tier 4: Neuromorphic Retrieval-Augmented Generation ===\n");

    let mut rag = NeuromorphicRAG::new();

    // Populate memory with knowledge
    println!("Populating memory with knowledge...");
    let facts = [
        "The nervous system has five layers: sensing, reflex, memory, learning, coherence.",
        "HDC uses 10,000-bit binary hypervectors for ultra-fast similarity.",
        "Modern Hopfield networks have exponential capacity: 2^(d/2) patterns.",
        "BTSP enables one-shot learning with 2-second eligibility traces.",
        "Circadian controllers gate compute based on phase: active, dawn, dusk, rest.",
        "Pattern separation in dentate gyrus reduces collisions to below 1%.",
        "Kuramoto oscillators enable phase-locked communication routing.",
        "EWC consolidation prevents catastrophic forgetting with 2x parameter overhead.",
        "Event buses use lock-free ring buffers for 10,000+ events/ms throughput.",
        "Global workspace has 4-7 item capacity following Miller's law.",
    ];

    for (i, fact) in facts.iter().enumerate() {
        rag.memory.store(fact, "knowledge_base");
        rag.memory.tick(60); // 1 minute between facts
        if i % 3 == 0 {
            println!("  Stored {} facts...", i + 1);
        }
    }
    println!("  Total memories: {}\n", rag.memory.len());

    // Simulate queries with varying confidence
    println!("Processing queries with coherence gating...\n");

    let queries = [
        ("What is HDC?", 0.9),                 // High confidence - no retrieval
        ("How does memory work?", 0.8),        // High - no retrieval
        ("Tell me about BTSP learning", 0.5),  // Low - trigger retrieval
        ("What about oscillators?", 0.4),      // Very low - retrieve
        ("How many items in workspace?", 0.6), // Medium-low - retrieve
        ("Explain the nervous system", 0.3),   // Very low - retrieve
        ("What is pattern separation?", 0.85), // High - no retrieval
        ("Circadian phases?", 0.4),            // Low - retrieve
    ];

    for (query, confidence) in queries {
        let result = rag.process(query, confidence);

        println!("Query: \"{}\"", query);
        println!(
            "  Confidence: {:.2}, Coherence: {:.2}",
            confidence, result.coherence
        );
        if result.retrieval_performed {
            println!("  RETRIEVED {} memories:", result.retrieved_memories.len());
            for (id, content, score) in &result.retrieved_memories {
                println!(
                    "    [{:.2}] #{}: {}...",
                    score,
                    id,
                    &content[..content.len().min(60)]
                );
            }
        } else {
            println!("  Skipped retrieval (coherence sufficient)");
        }
        println!();

        rag.store_answer(&format!("Answer about {}", query));
        rag.tick(30); // 30 seconds between queries
    }

    // Print statistics
    let stats = &rag.memory.stats;
    println!("=== Retrieval Statistics ===");
    println!("Total queries: {}", stats.queries_received);
    println!("Retrievals performed: {}", stats.retrievals_performed);
    println!("Retrievals skipped: {}", stats.retrievals_skipped);
    println!("Skip ratio: {:.1}%", stats.skip_ratio() * 100.0);
    println!("Avg retrieval time: {:.1}μs", stats.avg_retrieval_time_us);

    println!("\n=== Key Benefits ===");
    println!(
        "- Coherence gating: {:.0}% of queries didn't need retrieval",
        stats.skip_ratio() * 100.0
    );
    println!("- Sparse encoding: 2% active dimensions → 50x faster similarity");
    println!("- Temporal decay: Recent memories prioritized automatically");
    println!("- Eligibility traces: Accessed memories stay accessible");
    println!("\nThis is what RAG should have been: retrieval only when uncertain.");
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_sparse_encoding() {
        let mut encoder = SparseEncoder::new(10000, 0.02);
        let code = encoder.encode("hello world");

        // Should have roughly 2% active dimensions
        assert!(!code.is_empty());
        assert!(code.len() <= 300); // At most 3% to account for bundling
    }

    #[test]
    fn test_coherence_gating() {
        let mut memory = NeuromorphicMemory::new(0.7);
        memory.store("test content", "test");

        // High coherence - should skip retrieval
        memory.update_coherence(0.9);
        memory.update_coherence(0.9);
        memory.update_coherence(0.9);
        assert!(memory.query("test", 1).is_none());

        // Low coherence - should retrieve after hysteresis
        memory.update_coherence(0.3);
        memory.update_coherence(0.3);
        memory.update_coherence(0.3);
        assert!(memory.query("test", 1).is_some());
    }

    #[test]
    fn test_temporal_decay() {
        // Threshold 1.0: any coherence below 1.0 counts as low, so the
        // gate always opens after hysteresis. (A threshold of 0.0 would
        // never trigger, since the gate uses a strict `<` comparison.)
        let mut memory = NeuromorphicMemory::new(1.0);

        memory.store("old memory", "test");
        memory.tick(86400); // 1 day
        memory.store("new memory", "test");

        // Force retrieval
        memory.update_coherence(0.0);
        memory.update_coherence(0.0);
        memory.update_coherence(0.0);

        let results = memory.query("memory", 2).unwrap();

        // The new memory should rank higher due to temporal weighting
        assert_eq!(results.len(), 2);
        assert!(results[0].1.contains("new"));
    }
}
211 vendor/ruvector/crates/ruvector-nervous-system/examples/workspace_demo.rs vendored Normal file
@@ -0,0 +1,211 @@
//! Demonstration of Global Workspace Theory implementation
//!
//! This example shows:
//! 1. Module registration
//! 2. Competitive access to a limited workspace
//! 3. Salience-based broadcasting
//! 4. Temporal decay and pruning
//! 5. Module subscription and routing
//!
//! Run with: cargo run --example workspace_demo

use ruvector_nervous_system::routing::workspace::{
    AccessRequest, ContentType, GlobalWorkspace, ModuleInfo, WorkspaceItem, WorkspaceRegistry,
};

fn main() {
    println!("=== Global Workspace Theory Demo ===\n");

    // 1. Create a workspace with typical capacity (7 items per Miller's law)
    println!("1. Creating workspace with capacity 7 (Miller's law)");
    let mut workspace = GlobalWorkspace::new(7);
    println!(
        "   Workspace created: {} slots available\n",
        workspace.available_slots()
    );

    // 2. Demonstrate competitive broadcasting
    println!("2. Broadcasting items with varying salience:");

    let items = vec![
        ("Visual Input", 0.9, 1),
        ("Audio Input", 0.7, 2),
        ("Background Task", 0.3, 3),
        ("Critical Alert", 0.95, 4),
        ("Routine Process", 0.2, 5),
    ];

    for (name, salience, module) in &items {
        let item = WorkspaceItem::new(
            vec![1.0; 64], // 64-dim content vector
            *salience,
            *module,
            0,
        );
        let accepted = workspace.broadcast(item);
        println!(
            "   {} (salience {:.2}): {}",
            name,
            salience,
            if accepted {
                "✓ BROADCASTED"
            } else {
                "✗ Rejected"
            }
        );
    }
    println!(
        "   Workspace load: {:.1}%\n",
        workspace.current_load() * 100.0
    );

    // 3. Retrieve the top items
    println!("3. Top 3 most salient items:");
    let top_3 = workspace.retrieve_top_k(3);
    for (i, item) in top_3.iter().enumerate() {
        println!(
            "   {}. Module {} - Salience: {:.2}",
            i + 1,
            item.source_module,
            item.salience
        );
    }
    println!();

    // 4. Demonstrate competition and decay
    println!("4. Running competition (salience decay):");
    println!(
        "   Before: {} items, avg salience: {:.2}",
        workspace.len(),
        workspace.average_salience()
    );

    workspace.set_decay_rate(0.9);
    let survivors = workspace.compete();

    println!(
        "   After: {} items, avg salience: {:.2}",
        survivors.len(),
        workspace.average_salience()
    );
    println!("   {} items survived competition\n", survivors.len());

    // 5. Access control demonstration
    println!("5. Demonstrating access control:");
    let request1 = AccessRequest::new(10, vec![1.0; 32], 0.8, 0);
    let request2 = AccessRequest::new(10, vec![2.0; 32], 0.7, 1);

    println!(
        "   Module 10 request 1: {}",
        if workspace.request_access(request1) {
            "✓ Queued"
        } else {
            "✗ Denied"
        }
    );
    println!(
        "   Module 10 request 2: {}",
        if workspace.request_access(request2) {
            "✓ Queued"
        } else {
            "✗ Denied"
        }
    );
    println!();

    // 6. Module registry demonstration
    println!("6. Module Registry System:");
    let mut registry = WorkspaceRegistry::new(7);

    // Register modules (the registry assigns the final IDs)
    let visual = ModuleInfo::new(
        0,
        "Visual Cortex".to_string(),
        1.0,
        vec![ContentType::Query, ContentType::Result],
    );
    let audio = ModuleInfo::new(
        0,
        "Audio Processor".to_string(),
        0.8,
        vec![ContentType::Query],
    );
    let exec = ModuleInfo::new(
        0,
        "Executive Control".to_string(),
        0.9,
        vec![ContentType::Control],
    );

    let visual_id = registry.register(visual);
    let audio_id = registry.register(audio);
    let _exec_id = registry.register(exec);

    println!("   Registered {} modules:", registry.list_modules().len());
    for module in registry.list_modules() {
        println!(
            "   - {} (ID: {}, Priority: {:.1})",
            module.name, module.id, module.priority
        );
    }
    println!();

    // 7. Routing demonstration
    println!("7. Broadcasting through registry:");
    let high_priority_item = WorkspaceItem::new(vec![1.0; 128], 0.85, visual_id, 0);

    let recipients = registry.route(high_priority_item);
    println!(
        "   Item from Visual Cortex routed to {} modules",
        recipients.len()
    );
    println!("   Recipients: {:?}", recipients);
    println!();

    // 8. Recent items retrieval
    println!("8. Retrieving recent workspace activity:");
    workspace.broadcast(WorkspaceItem::new(vec![1.0], 0.9, 20, 0));
    workspace.broadcast(WorkspaceItem::new(vec![2.0], 0.8, 21, 0));
    workspace.broadcast(WorkspaceItem::new(vec![3.0], 0.7, 22, 0));

    let recent = workspace.retrieve_recent(3);
    println!("   Last 3 items (newest first):");
    for (i, item) in recent.iter().enumerate() {
        println!(
            "   {}. Module {} at t={}",
            i + 1,
            item.source_module,
            item.timestamp
        );
    }
    println!();

    // 9. Targeted broadcasting
    println!("9. Targeted broadcast to specific modules:");
    let targeted_item = WorkspaceItem::new(vec![1.0; 32], 0.88, 100, 0);
    let targets = vec![visual_id, audio_id];
    let reached = workspace.broadcast_to(targeted_item, &targets);
    println!(
        "   Broadcast to {} target modules: {:?}",
        reached.len(),
        reached
    );
    println!();

    // 10. Summary statistics
    println!("=== Final Workspace State ===");
    println!("Capacity: {}", workspace.capacity());
    println!("Current Items: {}", workspace.len());
    println!("Available Slots: {}", workspace.available_slots());
    println!("Load: {:.1}%", workspace.current_load() * 100.0);
    println!("Average Salience: {:.2}", workspace.average_salience());

    if let Some(most_salient) = workspace.most_salient() {
        println!(
            "Most Salient Item: Module {} (salience: {:.2})",
            most_salient.source_module, most_salient.salience
        );
    }

    println!("\n✓ Global Workspace demonstration complete!");
}