Squashed 'vendor/ruvector/' content from commit b64c2172

git-subtree-dir: vendor/ruvector
git-subtree-split: b64c21726f2bb37286d9ee36a7869fef60cc6900
This commit is contained in:
ruv
2026-02-28 14:39:40 -05:00
commit d803bfe2b1
7854 changed files with 3522914 additions and 0 deletions


@@ -0,0 +1,294 @@
# RuVector DAG Examples
Comprehensive examples demonstrating the Neural Self-Learning DAG system.
## Quick Start
```bash
# Run any example
cargo run -p ruvector-dag --example <name>
# Run with release optimizations
cargo run -p ruvector-dag --example <name> --release
# Run tests for an example
cargo test -p ruvector-dag --example <name>
```
## Core Examples
### basic_usage
Fundamental DAG operations: creating nodes, adding edges, topological sort.
```bash
cargo run -p ruvector-dag --example basic_usage
```
**Demonstrates:**
- `QueryDag::new()`, `add_node()`, `add_edge()`
- `OperatorNode` types: SeqScan, Filter, Sort, Aggregate
- Topological iteration and depth computation
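The topological pass this example exercises can be sketched as plain Kahn's algorithm. The snippet below is a standalone illustration over an edge list, not the `ruvector-dag` `QueryDag` implementation:

```rust
use std::collections::VecDeque;

/// Kahn's algorithm over an adjacency list; returns None on a cycle.
/// Standalone sketch, not the QueryDag implementation.
fn topo_sort(n: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut indegree = vec![0usize; n];
    let mut children = vec![Vec::new(); n];
    for &(u, v) in edges {
        indegree[v] += 1;
        children[u].push(v);
    }
    // Start from the roots (indegree 0) and peel layers off the DAG.
    let mut queue: VecDeque<usize> = (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(u) = queue.pop_front() {
        order.push(u);
        for &v in &children[u] {
            indegree[v] -= 1;
            if indegree[v] == 0 {
                queue.push_back(v);
            }
        }
    }
    // A cycle leaves some nodes with nonzero indegree, so the order comes up short.
    (order.len() == n).then_some(order)
}
```

For the linear Scan → Filter → Sort → Result pipeline built in `basic_usage`, this yields the obvious order `[0, 1, 2, 3]`.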
### attention_demo
All 7 attention mechanisms with visual output.
```bash
cargo run -p ruvector-dag --example attention_demo
```
**Demonstrates:**
- `TopologicalAttention` - DAG layer-based scoring
- `CriticalPathAttention` - Longest path weighting
- `CausalConeAttention` - Ancestor/descendant influence
- `MinCutGatedAttention` - Bottleneck-aware attention
- `HierarchicalLorentzAttention` - Hyperbolic embeddings
- `ParallelBranchAttention` - Branch parallelism scoring
- `TemporalBTSPAttention` - Time-aware plasticity
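The layer-based idea behind `TopologicalAttention` fits in a few lines: decay a raw score by node depth, then normalize so the scores sum to 1. This is an illustrative sketch only; `depth_attention` is not a crate API, and the real mechanisms are richer:

```rust
/// Toy depth-decay scoring: deeper nodes get exponentially less weight,
/// and the result is normalized to a probability distribution.
fn depth_attention(depths: &[usize], decay: f64) -> Vec<f64> {
    let raw: Vec<f64> = depths.iter().map(|&d| decay.powi(d as i32)).collect();
    let total: f64 = raw.iter().sum();
    raw.iter().map(|r| r / total).collect()
}
```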
### attention_selection
UCB bandit algorithm for dynamic mechanism selection.
```bash
cargo run -p ruvector-dag --example attention_selection
```
**Demonstrates:**
- `AttentionSelector` with UCB1 exploration/exploitation
- Automatic mechanism performance tracking
- Adaptive selection based on observed rewards
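The UCB1 rule itself is compact; a sketch of the kind of selection the `AttentionSelector` applies (the function name and signature here are illustrative, not the crate API):

```rust
/// UCB1: exploit the best mean reward while granting an exploration
/// bonus that shrinks as an arm accumulates pulls.
fn ucb1_pick(pulls: &[u32], mean_reward: &[f64], total_pulls: u32) -> usize {
    let mut best = (0, f64::NEG_INFINITY);
    for i in 0..pulls.len() {
        let score = if pulls[i] == 0 {
            f64::INFINITY // always try untested mechanisms first
        } else {
            mean_reward[i] + (2.0 * (total_pulls as f64).ln() / pulls[i] as f64).sqrt()
        };
        if score > best.1 {
            best = (i, score);
        }
    }
    best.0
}
```

Untried mechanisms get picked immediately; afterwards, the logarithmic bonus keeps occasionally revisiting under-sampled mechanisms even when another has the better observed mean.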
### learning_workflow
Complete SONA learning pipeline with trajectory recording.
```bash
cargo run -p ruvector-dag --example learning_workflow
```
**Demonstrates:**
- `DagSonaEngine` initialization and training
- `DagTrajectoryBuffer` for lock-free trajectory collection
- `DagReasoningBank` for pattern storage
- MicroLoRA fast adaptation
- EWC++ continual learning
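The penalty underneath EWC-style continual learning is a one-liner: quadratically anchor each weight to its previous value in proportion to its estimated (Fisher) importance. A generic sketch, not the crate's EWC++ implementation:

```rust
/// Elastic-weight-consolidation penalty: lambda/2 * sum_i F_i (w_i - w*_i)^2.
/// High-Fisher (important) weights are anchored to their old values.
fn ewc_penalty(w: &[f64], w_anchor: &[f64], fisher: &[f64], lambda: f64) -> f64 {
    w.iter()
        .zip(w_anchor)
        .zip(fisher)
        .map(|((wi, ai), fi)| fi * (wi - ai).powi(2))
        .sum::<f64>()
        * lambda
        / 2.0
}
```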
### self_healing
Autonomous anomaly detection and repair system.
```bash
cargo run -p ruvector-dag --example self_healing
```
**Demonstrates:**
- `HealingOrchestrator` configuration
- `AnomalyDetector` with statistical thresholds
- `LearningDriftDetector` for performance degradation
- Custom `RepairStrategy` implementations
- Health score computation
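A statistical-threshold check of the kind the anomaly detector describes can be as simple as a z-score test over a sliding window. This is an illustrative sketch; the crate's detector is assumed to be more involved:

```rust
/// Flag a sample that sits more than `k` standard deviations from the
/// mean of the recent window.
fn is_anomalous(window: &[f64], sample: f64, k: f64) -> bool {
    let n = window.len() as f64;
    let mean = window.iter().sum::<f64>() / n;
    let var = window.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    (sample - mean).abs() > k * var.sqrt()
}
```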
## Exotic Examples
These examples explore unconventional applications of coherence-sensing substrates—systems that respond to internal tension rather than external commands.
### synthetic_haptic ⭐ NEW
Complete nervous system for machines: sensor → reflex → actuator with memory and learning.
```bash
cargo run -p ruvector-dag --example synthetic_haptic
```
**Architecture:**
| Layer | Component | Purpose |
|-------|-----------|---------|
| 1 | Event Sensing | Microsecond timestamps, 6-channel input |
| 2 | Reflex Arc | DAG tension + MinCut → ReflexMode |
| 3 | HDC Memory | 256-dim hypervector associative memory |
| 4 | SONA Learning | Coherence-gated adaptation |
| 5 | Actuation | Energy-budgeted force + vibro output |
**Key Concepts:**
- Intelligence as homeostasis, not goal-seeking
- Tension drives immediate response
- Coherence gates learning (only when stable)
- ReflexModes: Calm → Active → Spike → Protect
**Performance:** 192 μs avg loop @ 1000 Hz
### synthetic_reflex_organism
Intelligence as homeostasis—organisms that minimize stress without explicit goals.
```bash
cargo run -p ruvector-dag --example synthetic_reflex_organism
```
**Demonstrates:**
- `ReflexOrganism` with metabolic rate and tension tracking
- `OrganismResponse`: Rest, Contract, Expand, Partition, Rebalance
- Learning only when instability crosses thresholds
- No objectives, only stress minimization
### timing_synchronization
Machines that "feel" timing through phase alignment.
```bash
cargo run -p ruvector-dag --example timing_synchronization
```
**Demonstrates:**
- Phase-locked loops using DAG coherence
- Biological rhythm synchronization
- Timing deviation as tension signal
- Self-correcting temporal alignment
### coherence_safety
Safety as structural property—systems that shut down when coherence drops.
```bash
cargo run -p ruvector-dag --example coherence_safety
```
**Demonstrates:**
- `SafetyEnvelope` with coherence thresholds
- Automatic graceful degradation
- No external safety monitors needed
- Structural shutdown mechanisms
### artificial_instincts
Hardwired biases via MinCut boundaries and attention patterns.
```bash
cargo run -p ruvector-dag --example artificial_instincts
```
**Demonstrates:**
- Instinct encoding via graph structure
- MinCut-enforced behavioral boundaries
- Attention-weighted decision biases
- Healing as instinct restoration
### living_simulation
Simulations that model fragility, not just outcomes.
```bash
cargo run -p ruvector-dag --example living_simulation
```
**Demonstrates:**
- Coherence as simulation health metric
- Fragility-aware state evolution
- Self-healing simulation repair
- Tension-driven adaptation
### thought_integrity
Reasoning monitored like electrical voltage—coherence as correctness signal.
```bash
cargo run -p ruvector-dag --example thought_integrity
```
**Demonstrates:**
- Reasoning chain as DAG structure
- Coherence drops indicate logical errors
- Self-correcting inference
- Integrity verification without external validation
### federated_coherence
Distributed consensus through coherence, not voting.
```bash
cargo run -p ruvector-dag --example federated_coherence
```
**Demonstrates:**
- `FederatedNode` with peer coherence tracking
- 7 message types for distributed coordination
- Pattern propagation via coherence alignment
- Consensus emerges from structural agreement
## Architecture Overview
```
┌─────────────────────────────────────────────────────────┐
│                        QueryDag                         │
│  ┌─────┐   ┌──────┐   ┌─────┐   ┌─────┐   ┌──────┐      │
│  │Scan │──▶│Filter│──▶│ Agg │──▶│Sort │──▶│Result│      │
│  └─────┘   └──────┘   └─────┘   └─────┘   └──────┘      │
└─────────────────────────────────────────────────────────┘
        │                   │                   │
        ▼                   ▼                   ▼
┌───────────────┐   ┌───────────────┐   ┌───────────────┐
│   Attention   │   │    MinCut     │   │     SONA      │
│  Mechanisms   │   │    Engine     │   │   Learning    │
│   (7 types)   │   │   (tension)   │   │  (coherence)  │
└───────────────┘   └───────────────┘   └───────────────┘
        │                   │                   │
        └───────────────────┴───────────────────┘
                            │
                            ▼
                    ┌───────────────┐
                    │    Healing    │
                    │ Orchestrator  │
                    └───────────────┘
```
## Key Concepts
### Tension
How far the current state is from homeostasis. Computed from:
- MinCut flow capacity stress
- Node criticality deviation
- Sensor/input anomalies
**Usage:** Drives immediate reflex-level responses.
### Coherence
How consistent the internal state is over time. Drops when:
- Tension changes rapidly
- Partitioning becomes unstable
- Learning causes drift
**Usage:** Gates learning and safety decisions.
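A minimal sketch of that gating, assuming a rolling coherence estimate that rapid tension change erodes (the struct, field names, and constants are illustrative, not the crate API):

```rust
/// Coherence-gated learner: weight updates apply only while the
/// coherence estimate is above a stability threshold.
struct GatedLearner {
    coherence: f64,
    threshold: f64,
    weight: f64,
}

impl GatedLearner {
    /// Rapid tension swings erode coherence; calm steps slowly restore it.
    fn observe(&mut self, tension_delta: f64) {
        self.coherence = (self.coherence - tension_delta.abs() * 0.5 + 0.01).clamp(0.0, 1.0);
    }

    /// Apply the update only when stable; returns true if it was applied.
    fn try_update(&mut self, gradient: f64) -> bool {
        if self.coherence >= self.threshold {
            self.weight += 0.1 * gradient;
            true
        } else {
            false
        }
    }
}
```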
### Reflex Modes
| Mode | Tension | Behavior |
|------|---------|----------|
| Calm | < 0.20 | Minimal response, learning allowed |
| Active | 0.20-0.55 | Proportional response |
| Spike | 0.55-0.85 | Heightened response, haptic feedback |
| Protect | > 0.85 | Protective shutdown, no output |
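The table maps directly onto a threshold function. The cutoffs below are the ones above; the enum and function shape are an illustrative sketch, not the crate's types:

```rust
#[derive(Debug, PartialEq)]
enum ReflexMode {
    Calm,
    Active,
    Spike,
    Protect,
}

/// Map instantaneous tension to a reflex mode using the table's cutoffs.
fn reflex_mode(tension: f64) -> ReflexMode {
    if tension < 0.20 {
        ReflexMode::Calm
    } else if tension < 0.55 {
        ReflexMode::Active
    } else if tension < 0.85 {
        ReflexMode::Spike
    } else {
        ReflexMode::Protect
    }
}
```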
## Running All Examples
```bash
# Quick verification
for ex in basic_usage attention_demo attention_selection \
learning_workflow self_healing synthetic_haptic; do
echo "=== $ex ===" && cargo run -p ruvector-dag --example $ex 2>/dev/null | head -20
done
# Exotic examples
for ex in synthetic_reflex_organism timing_synchronization coherence_safety \
artificial_instincts living_simulation thought_integrity federated_coherence; do
echo "=== $ex ===" && cargo run -p ruvector-dag --example $ex 2>/dev/null | head -20
done
```
## Testing
```bash
# Run all example tests
cargo test -p ruvector-dag --examples
# Test specific example
cargo test -p ruvector-dag --example synthetic_haptic
```
## Performance Notes
- **Attention**: O(V+E) for topological, O(V²) for causal cone
- **MinCut**: O(n^0.12) amortized with caching
- **SONA Learning**: Background thread, non-blocking
- **Haptic Loop**: Target <1ms, achieved ~200μs average
## License
MIT - See repository root for details.


@@ -0,0 +1,146 @@
//! Demo of DAG attention mechanisms
use ruvector_dag::attention::DagAttentionMechanism;
use ruvector_dag::{
    CausalConeAttention, CriticalPathAttention, DagAttention, MinCutGatedAttention, OperatorNode,
    QueryDag, TopologicalAttention,
};
use std::time::Instant;
fn create_sample_dag() -> QueryDag {
    let mut dag = QueryDag::new();

    // Create a complex query DAG with 100 nodes
    let mut ids = Vec::new();

    // Layer 1: 10 scan nodes
    for i in 0..10 {
        let id = dag.add_node(
            OperatorNode::seq_scan(0, &format!("table_{}", i))
                .with_estimates(1000.0 * (i as f64 + 1.0), 10.0),
        );
        ids.push(id);
    }

    // Layer 2: 20 filter nodes
    for i in 0..20 {
        let id = dag.add_node(
            OperatorNode::filter(0, &format!("col_{} > 0", i)).with_estimates(500.0, 5.0),
        );
        dag.add_edge(ids[i % 10], id).unwrap();
        ids.push(id);
    }

    // Layer 3: 30 join nodes
    for i in 0..30 {
        let id = dag.add_node(
            OperatorNode::hash_join(0, &format!("key_{}", i)).with_estimates(2000.0, 20.0),
        );
        dag.add_edge(ids[10 + (i % 20)], id).unwrap();
        dag.add_edge(ids[10 + ((i + 1) % 20)], id).unwrap();
        ids.push(id);
    }

    // Layer 4: 20 aggregate nodes
    for i in 0..20 {
        let id = dag.add_node(
            OperatorNode::aggregate(0, vec![format!("sum(col_{})", i)]).with_estimates(100.0, 15.0),
        );
        dag.add_edge(ids[30 + (i % 30)], id).unwrap();
        ids.push(id);
    }

    // Layer 5: 10 sort nodes
    for i in 0..10 {
        let id = dag.add_node(
            OperatorNode::sort(0, vec![format!("col_{}", i)]).with_estimates(100.0, 12.0),
        );
        dag.add_edge(ids[60 + (i * 2)], id).unwrap();
        ids.push(id);
    }

    // Layer 6: 5 limit nodes
    for i in 0..5 {
        let id = dag.add_node(OperatorNode::limit(0, 100).with_estimates(100.0, 1.0));
        dag.add_edge(ids[80 + (i * 2)], id).unwrap();
        ids.push(id);
    }

    // Final result node
    let result = dag.add_node(OperatorNode::result(0));
    for i in 0..5 {
        dag.add_edge(ids[90 + i], result).unwrap();
    }

    dag
}
fn main() {
    println!("DAG Attention Mechanisms Performance Demo");
    println!("==========================================\n");

    let dag = create_sample_dag();
    println!(
        "Created DAG with {} nodes and {} edges\n",
        dag.node_count(),
        dag.edge_count()
    );

    // Test TopologicalAttention
    println!("1. TopologicalAttention");
    let topo = TopologicalAttention::with_defaults();
    let start = Instant::now();
    let scores = topo.forward(&dag).unwrap();
    let elapsed = start.elapsed();
    println!(" Time: {:?}", elapsed);
    println!(" Complexity: {}", topo.complexity());
    println!(" Score sum: {:.6}", scores.values().sum::<f32>());
    println!(
        " Max score: {:.6}\n",
        scores.values().fold(0.0f32, |a, &b| a.max(b))
    );

    // Test CausalConeAttention
    println!("2. CausalConeAttention");
    let causal = CausalConeAttention::with_defaults();
    let start = Instant::now();
    let scores = causal.forward(&dag).unwrap();
    let elapsed = start.elapsed();
    println!(" Time: {:?}", elapsed);
    println!(" Complexity: {}", causal.complexity());
    println!(" Score sum: {:.6}", scores.values().sum::<f32>());
    println!(
        " Max score: {:.6}\n",
        scores.values().fold(0.0f32, |a, &b| a.max(b))
    );

    // Test CriticalPathAttention
    println!("3. CriticalPathAttention");
    let critical = CriticalPathAttention::with_defaults();
    let start = Instant::now();
    let scores = critical.forward(&dag).unwrap();
    let elapsed = start.elapsed();
    println!(" Time: {:?}", elapsed);
    println!(" Complexity: {}", critical.complexity());
    println!(" Score sum: {:.6}", scores.values().sum::<f32>());
    println!(
        " Max score: {:.6}\n",
        scores.values().fold(0.0f32, |a, &b| a.max(b))
    );

    // Test MinCutGatedAttention
    println!("4. MinCutGatedAttention");
    let mincut = MinCutGatedAttention::with_defaults();
    let start = Instant::now();
    let result = mincut.forward(&dag).unwrap();
    let elapsed = start.elapsed();
    println!(" Time: {:?}", elapsed);
    println!(" Complexity: {}", mincut.complexity());
    println!(" Score sum: {:.6}", result.scores.iter().sum::<f32>());
    println!(
        " Max score: {:.6}\n",
        result.scores.iter().fold(0.0f32, |a, b| a.max(*b))
    );

    println!("All attention mechanisms completed successfully!");
}


@@ -0,0 +1,99 @@
//! Attention mechanism selection example
use ruvector_dag::attention::{
    CausalConeAttention, CausalConeConfig, DagAttention, TopologicalAttention, TopologicalConfig,
};
use ruvector_dag::dag::{OperatorNode, OperatorType, QueryDag};

fn main() {
    println!("=== Attention Mechanism Selection ===\n");

    // Create a sample DAG
    let dag = create_vector_search_dag();
    println!("Created vector search DAG:");
    println!(" Nodes: {}", dag.node_count());
    println!(" Edges: {}", dag.edge_count());

    // Test Topological Attention
    println!("\n--- Topological Attention ---");
    println!("Emphasizes node depth in the DAG hierarchy");
    let topo = TopologicalAttention::new(TopologicalConfig {
        decay_factor: 0.9,
        max_depth: 10,
    });
    let scores = topo.forward(&dag).unwrap();
    println!("\nAttention scores:");
    for (node_id, score) in &scores {
        let node = dag.get_node(*node_id).unwrap();
        println!(" Node {}: {:.4} - {:?}", node_id, score, node.op_type);
    }
    let sum: f32 = scores.values().sum();
    println!("\nSum of scores: {:.4} (should be ~1.0)", sum);

    // Test Causal Cone Attention
    println!("\n--- Causal Cone Attention ---");
    println!("Focuses on downstream dependencies");
    let causal = CausalConeAttention::new(CausalConeConfig {
        time_window_ms: 1000,
        future_discount: 0.85,
        ancestor_weight: 0.5,
    });
    let causal_scores = causal.forward(&dag).unwrap();
    println!("\nCausal cone scores:");
    for (node_id, score) in &causal_scores {
        let node = dag.get_node(*node_id).unwrap();
        println!(" Node {}: {:.4} - {:?}", node_id, score, node.op_type);
    }

    // Compare mechanisms
    println!("\n--- Comparison ---");
    println!("Node | Topological | Causal Cone | Difference");
    println!("-----|-------------|-------------|------------");
    for node_id in 0..dag.node_count() {
        let topo_score = scores.get(&node_id).unwrap_or(&0.0);
        let causal_score = causal_scores.get(&node_id).unwrap_or(&0.0);
        let diff = (topo_score - causal_score).abs();
        println!(
            "{:4} | {:11.4} | {:11.4} | {:11.4}",
            node_id, topo_score, causal_score, diff
        );
    }

    println!("\n=== Example Complete ===");
}

fn create_vector_search_dag() -> QueryDag {
    let mut dag = QueryDag::new();

    // HNSW scan - the primary vector search
    let hnsw = dag.add_node(OperatorNode::hnsw_scan(0, "embeddings_idx", 64));

    // Metadata table scan
    let meta = dag.add_node(OperatorNode::seq_scan(1, "metadata"));

    // Join embeddings with metadata
    let join = dag.add_node(OperatorNode::new(2, OperatorType::NestedLoopJoin));
    dag.add_edge(hnsw, join).unwrap();
    dag.add_edge(meta, join).unwrap();

    // Filter by category
    let filter = dag.add_node(OperatorNode::filter(3, "category = 'tech'"));
    dag.add_edge(join, filter).unwrap();

    // Limit results
    let limit = dag.add_node(OperatorNode::limit(4, 10));
    dag.add_edge(filter, limit).unwrap();

    // Result node
    let result = dag.add_node(OperatorNode::new(5, OperatorType::Result));
    dag.add_edge(limit, result).unwrap();

    dag
}


@@ -0,0 +1,73 @@
//! Basic usage example for Neural DAG Learning
use ruvector_dag::dag::{OperatorNode, OperatorType, QueryDag};
fn main() {
    println!("=== Neural DAG Learning - Basic Usage ===\n");

    // Create a new DAG
    let mut dag = QueryDag::new();

    // Add nodes representing query operators
    println!("Building query DAG...");
    let scan = dag.add_node(OperatorNode::seq_scan(0, "users"));
    println!(" Added SeqScan on 'users' (id: {})", scan);
    let filter = dag.add_node(OperatorNode::filter(1, "age > 18"));
    println!(" Added Filter 'age > 18' (id: {})", filter);
    let sort = dag.add_node(OperatorNode::sort(2, vec!["name".to_string()]));
    println!(" Added Sort by 'name' (id: {})", sort);
    let limit = dag.add_node(OperatorNode::limit(3, 10));
    println!(" Added Limit 10 (id: {})", limit);
    let result = dag.add_node(OperatorNode::new(4, OperatorType::Result));
    println!(" Added Result (id: {})", result);

    // Connect nodes
    dag.add_edge(scan, filter).unwrap();
    dag.add_edge(filter, sort).unwrap();
    dag.add_edge(sort, limit).unwrap();
    dag.add_edge(limit, result).unwrap();

    println!("\nDAG Statistics:");
    println!(" Nodes: {}", dag.node_count());
    println!(" Edges: {}", dag.edge_count());

    // Compute topological order
    let order = dag.topological_sort().unwrap();
    println!("\nTopological Order: {:?}", order);

    // Compute depths
    let depths = dag.compute_depths();
    println!("\nNode Depths:");
    for (id, depth) in &depths {
        println!(" Node {}: depth {}", id, depth);
    }

    // Get children
    println!("\nNode Children:");
    for node_id in 0..5 {
        let children = dag.children(node_id);
        println!(" Node {}: {:?}", node_id, children);
    }

    // Demonstrate iterators
    println!("\nDFS Traversal:");
    for (i, node_id) in dag.dfs_iter(scan).enumerate() {
        if i < 10 {
            println!(" Visit: {}", node_id);
        }
    }

    println!("\nBFS Traversal:");
    for (i, node_id) in dag.bfs_iter(scan).enumerate() {
        if i < 10 {
            println!(" Visit: {}", node_id);
        }
    }

    println!("\n=== Example Complete ===");
}


@@ -0,0 +1,148 @@
# Exotic Examples: Coherence-Sensing Substrates
These examples explore systems that respond to internal tension rather than external commands—where intelligence emerges as homeostasis.
## Philosophy
Traditional AI systems are goal-directed: they receive objectives and optimize toward them. These examples flip that model:
> **Intelligence as maintaining coherence under perturbation.**
A system doesn't need goals if it can feel when it's "out of tune" and naturally moves toward equilibrium.
## The Examples
### 1. synthetic_reflex_organism.rs
**Intelligence as Homeostasis**
No goals, only stress minimization. The organism responds to tension by adjusting its internal state, learning only when instability crosses thresholds.
```rust
pub enum OrganismResponse {
Rest, // Low tension: do nothing
Contract, // Rising tension: consolidate
Expand, // Stable low tension: explore
Partition, // High tension: segment
Rebalance, // Oscillating: redistribute
}
```
### 2. timing_synchronization.rs
**Machines That Feel Timing**
Phase-locked loops using DAG coherence. The system "feels" when its internal rhythms drift from external signals and self-corrects.
```rust
// Timing is not measured, it's felt
let phase_error = self.measure_phase_deviation();
let tension = self.dag.compute_tension_from_timing(phase_error);
self.adjust_internal_clock(tension);
```
### 3. coherence_safety.rs
**Structural Safety**
Safety isn't a monitor checking outputs—it's a structural property. When coherence drops below threshold, the system naturally enters a safe state.
```rust
// No safety rules, just coherence
if coherence < 0.3 {
// System structurally cannot produce dangerous output
// because the pathways become disconnected
}
```
### 4. artificial_instincts.rs
**Hardwired Biases**
Instincts encoded via MinCut boundaries and attention patterns. These aren't learned—they're structural constraints that shape behavior.
```rust
// Fear isn't learned, it's architectural
let fear_boundary = mincut.compute(threat_region, action_region);
if fear_boundary.cut_value < threshold {
// Action pathway is structurally blocked
}
```
### 5. living_simulation.rs
**Fragility-Aware Modeling**
Simulations that model not just outcomes, but structural health. The simulation knows when it's "sick" and can heal itself.
```rust
// Simulation health = structural coherence
let health = simulation.dag.coherence();
if health < 0.5 {
simulation.trigger_healing();
}
```
### 6. thought_integrity.rs
**Reasoning Monitored Like Voltage**
Logical inference as a DAG where coherence indicates correctness. Errors show up as tension in the reasoning graph.
```rust
// Contradiction creates structural tension
let reasoning = build_inference_dag(premises, conclusion);
let integrity = reasoning.coherence();
// Low integrity = likely logical error
```
### 7. federated_coherence.rs
**Consensus Through Coherence**
Distributed systems that agree not by voting, but by structural alignment. Nodes synchronize patterns when their coherence matrices align.
```rust
pub enum FederationMessage {
Heartbeat { coherence: f32 },
ProposePattern { pattern: DagPattern },
ValidatePattern { id: String, local_coherence: f32 },
RejectPattern { id: String, tension_source: String },
TensionAlert { severity: f32, region: Vec<usize> },
SyncRequest { since_round: u64 },
SyncResponse { patterns: Vec<DagPattern> },
}
```
## Core Insight
These systems demonstrate that:
1. **Intelligence doesn't require goals** — maintaining structure is sufficient
2. **Safety can be architectural** — not a bolt-on monitor
3. **Learning should be gated** — only update when stable
4. **Consensus can emerge** — from structural agreement, not voting
## Running
```bash
# Run all exotic examples
for ex in synthetic_reflex_organism timing_synchronization \
coherence_safety artificial_instincts living_simulation \
thought_integrity federated_coherence; do
cargo run -p ruvector-dag --example $ex
done
```
## Key Metrics
| Metric | Meaning | Healthy Range |
|--------|---------|---------------|
| Tension | Deviation from equilibrium | < 0.3 |
| Coherence | Structural consistency | > 0.8 |
| Cut Value | Flow capacity stress | < 100 |
| Criticality | Node importance | 0.0-1.0 |
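Combined, the healthy ranges above amount to a single predicate (struct and field names are illustrative; criticality is a 0-1 descriptive score rather than a pass/fail bound, so it is omitted):

```rust
struct Metrics {
    tension: f64,
    coherence: f64,
    cut_value: f64,
}

/// True when every metric sits inside the healthy range from the table.
fn is_healthy(m: &Metrics) -> bool {
    m.tension < 0.3 && m.coherence > 0.8 && m.cut_value < 100.0
}
```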
## Further Reading
These concepts draw from:
- Homeostatic regulation in biological systems
- Free energy principle (Friston)
- Autopoiesis (Maturana & Varela)
- Active inference
- Predictive processing
The key shift: from "what should I do?" to "how do I stay coherent?"


@@ -0,0 +1,460 @@
//! # Artificial Instincts
//!
//! Encode instincts instead of goals.
//!
//! Instincts like:
//! - Avoid fragmentation
//! - Preserve causal continuity
//! - Minimize delayed consequences
//! - Prefer reversible actions under uncertainty
//!
//! These are not rules. They are biases enforced by mincut, attention, and healing.
//! This is closer to evolution than training.
/// An instinctive bias that shapes behavior without explicit rules
pub trait Instinct: Send + Sync {
    /// Name of this instinct
    fn name(&self) -> &str;

    /// Evaluate how well an action aligns with this instinct
    /// Returns bias: negative = suppress, positive = encourage
    fn evaluate(&self, context: &InstinctContext, action: &ProposedAction) -> f64;

    /// The strength of this instinct (0-1)
    fn strength(&self) -> f64;
}
/// Context for instinct evaluation
pub struct InstinctContext {
    /// Current mincut tension (0-1)
    pub mincut_tension: f64,
    /// Graph fragmentation level (0-1)
    pub fragmentation: f64,
    /// Causal chain depth from root
    pub causal_depth: usize,
    /// Uncertainty in current state
    pub uncertainty: f64,
    /// Recent action history
    pub recent_actions: Vec<ActionOutcome>,
}

/// A proposed action to evaluate
pub struct ProposedAction {
    pub name: String,
    pub reversible: bool,
    pub affects_structure: bool,
    pub delayed_effects: bool,
    pub estimated_fragmentation_delta: f64,
    pub causal_chain_additions: usize,
}

/// Outcome of a past action
pub struct ActionOutcome {
    pub action_name: String,
    pub tension_before: f64,
    pub tension_after: f64,
    pub fragmentation_delta: f64,
}
// =============================================================================
// Core Instincts
// =============================================================================
/// Instinct: Avoid fragmentation
/// Suppresses actions that would split coherent structures
pub struct AvoidFragmentation {
    strength: f64,
}

impl AvoidFragmentation {
    pub fn new(strength: f64) -> Self {
        Self { strength }
    }
}

impl Instinct for AvoidFragmentation {
    fn name(&self) -> &str {
        "AvoidFragmentation"
    }

    fn evaluate(&self, _context: &InstinctContext, action: &ProposedAction) -> f64 {
        // Strong negative bias if action increases fragmentation
        if action.estimated_fragmentation_delta > 0.0 {
            -action.estimated_fragmentation_delta * 2.0 * self.strength
        } else {
            // Slight positive bias for actions that reduce fragmentation
            -action.estimated_fragmentation_delta * 0.5 * self.strength
        }
    }

    fn strength(&self) -> f64 {
        self.strength
    }
}
/// Instinct: Preserve causal continuity
/// Prefers actions that maintain clear cause-effect chains
pub struct PreserveCausality {
    strength: f64,
    max_chain_depth: usize,
}

impl PreserveCausality {
    pub fn new(strength: f64, max_chain_depth: usize) -> Self {
        Self {
            strength,
            max_chain_depth,
        }
    }
}

impl Instinct for PreserveCausality {
    fn name(&self) -> &str {
        "PreserveCausality"
    }

    fn evaluate(&self, context: &InstinctContext, action: &ProposedAction) -> f64 {
        let new_depth = context.causal_depth + action.causal_chain_additions;
        if new_depth > self.max_chain_depth {
            // Suppress actions that extend causal chains too far
            let overshoot = (new_depth - self.max_chain_depth) as f64;
            -overshoot * 0.3 * self.strength
        } else if action.affects_structure && action.causal_chain_additions == 0 {
            // Structural changes without causal extension = potential discontinuity
            -0.2 * self.strength
        } else {
            0.0
        }
    }

    fn strength(&self) -> f64 {
        self.strength
    }
}
/// Instinct: Minimize delayed consequences
/// Prefers actions with immediate, observable effects
pub struct MinimizeDelayedEffects {
    strength: f64,
}

impl MinimizeDelayedEffects {
    pub fn new(strength: f64) -> Self {
        Self { strength }
    }
}

impl Instinct for MinimizeDelayedEffects {
    fn name(&self) -> &str {
        "MinimizeDelayedEffects"
    }

    fn evaluate(&self, _context: &InstinctContext, action: &ProposedAction) -> f64 {
        if action.delayed_effects {
            -0.3 * self.strength
        } else {
            0.1 * self.strength // Slight preference for immediate feedback
        }
    }

    fn strength(&self) -> f64 {
        self.strength
    }
}
/// Instinct: Prefer reversible actions under uncertainty
/// When uncertain, choose actions that can be undone
pub struct PreferReversibility {
    strength: f64,
    uncertainty_threshold: f64,
}

impl PreferReversibility {
    pub fn new(strength: f64, uncertainty_threshold: f64) -> Self {
        Self {
            strength,
            uncertainty_threshold,
        }
    }
}

impl Instinct for PreferReversibility {
    fn name(&self) -> &str {
        "PreferReversibility"
    }

    fn evaluate(&self, context: &InstinctContext, action: &ProposedAction) -> f64 {
        if context.uncertainty > self.uncertainty_threshold {
            if action.reversible {
                0.4 * self.strength * context.uncertainty
            } else {
                -0.5 * self.strength * context.uncertainty
            }
        } else {
            // Under certainty, no preference
            0.0
        }
    }

    fn strength(&self) -> f64 {
        self.strength
    }
}
/// Instinct: Seek homeostasis
/// Prefer actions that return system to baseline tension
pub struct SeekHomeostasis {
    strength: f64,
    baseline_tension: f64,
}

impl SeekHomeostasis {
    pub fn new(strength: f64, baseline_tension: f64) -> Self {
        Self {
            strength,
            baseline_tension,
        }
    }
}

impl Instinct for SeekHomeostasis {
    fn name(&self) -> &str {
        "SeekHomeostasis"
    }

    fn evaluate(&self, context: &InstinctContext, _action: &ProposedAction) -> f64 {
        // Look at recent history to predict tension change
        let avg_tension_delta: f64 = if context.recent_actions.is_empty() {
            0.0
        } else {
            context
                .recent_actions
                .iter()
                .map(|a| a.tension_after - a.tension_before)
                .sum::<f64>()
                / context.recent_actions.len() as f64
        };
        let current_deviation = (context.mincut_tension - self.baseline_tension).abs();
        // Encourage actions when far from baseline, if past similar actions reduced tension
        if current_deviation > 0.2 && avg_tension_delta < 0.0 {
            current_deviation * self.strength
        } else if current_deviation > 0.2 && avg_tension_delta > 0.0 {
            -current_deviation * 0.5 * self.strength
        } else {
            0.0
        }
    }

    fn strength(&self) -> f64 {
        self.strength
    }
}
// =============================================================================
// Instinct Engine
// =============================================================================
/// Engine that applies instincts to bias action selection
pub struct InstinctEngine {
    instincts: Vec<Box<dyn Instinct>>,
}

impl InstinctEngine {
    pub fn new() -> Self {
        Self {
            instincts: Vec::new(),
        }
    }

    /// Add a primal instinct set (recommended defaults)
    pub fn with_primal_instincts(mut self) -> Self {
        self.instincts.push(Box::new(AvoidFragmentation::new(0.8)));
        self.instincts
            .push(Box::new(PreserveCausality::new(0.7, 10)));
        self.instincts
            .push(Box::new(MinimizeDelayedEffects::new(0.5)));
        self.instincts
            .push(Box::new(PreferReversibility::new(0.9, 0.4)));
        self.instincts
            .push(Box::new(SeekHomeostasis::new(0.6, 0.2)));
        self
    }

    pub fn add_instinct(&mut self, instinct: Box<dyn Instinct>) {
        self.instincts.push(instinct);
    }

    /// Evaluate all instincts and return combined bias
    pub fn evaluate(
        &self,
        context: &InstinctContext,
        action: &ProposedAction,
    ) -> InstinctEvaluation {
        let mut contributions = Vec::new();
        let mut total_bias = 0.0;
        for instinct in &self.instincts {
            let bias = instinct.evaluate(context, action);
            contributions.push((instinct.name().to_string(), bias));
            total_bias += bias;
        }
        InstinctEvaluation {
            action_name: action.name.clone(),
            total_bias,
            contributions,
            recommendation: if total_bias > 0.3 {
                InstinctRecommendation::Encourage
            } else if total_bias < -0.3 {
                InstinctRecommendation::Suppress
            } else {
                InstinctRecommendation::Neutral
            },
        }
    }

    /// Rank actions by instinctive preference
    pub fn rank_actions(
        &self,
        context: &InstinctContext,
        actions: &[ProposedAction],
    ) -> Vec<(String, f64)> {
        let mut rankings: Vec<(String, f64)> = actions
            .iter()
            .map(|a| {
                let eval = self.evaluate(context, a);
                (a.name.clone(), eval.total_bias)
            })
            .collect();
        rankings.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
        rankings
    }
}
#[derive(Debug)]
pub struct InstinctEvaluation {
    pub action_name: String,
    pub total_bias: f64,
    pub contributions: Vec<(String, f64)>,
    pub recommendation: InstinctRecommendation,
}

#[derive(Debug)]
pub enum InstinctRecommendation {
    Encourage,
    Neutral,
    Suppress,
}
fn main() {
println!("=== Artificial Instincts ===\n");
println!("Not rules. Biases enforced by structure.\n");
let engine = InstinctEngine::new().with_primal_instincts();
// Create context
let context = InstinctContext {
mincut_tension: 0.5,
fragmentation: 0.3,
causal_depth: 5,
uncertainty: 0.6,
recent_actions: vec![ActionOutcome {
action_name: "rebalance".into(),
tension_before: 0.6,
tension_after: 0.5,
fragmentation_delta: -0.05,
}],
};
// Possible actions
let actions = vec![
ProposedAction {
name: "Split workload".into(),
reversible: true,
affects_structure: true,
delayed_effects: false,
estimated_fragmentation_delta: 0.15,
causal_chain_additions: 2,
},
ProposedAction {
name: "Merge subsystems".into(),
reversible: false,
affects_structure: true,
delayed_effects: true,
estimated_fragmentation_delta: -0.2,
causal_chain_additions: 1,
},
ProposedAction {
name: "Add monitoring".into(),
reversible: true,
affects_structure: false,
delayed_effects: false,
estimated_fragmentation_delta: 0.0,
causal_chain_additions: 0,
},
ProposedAction {
name: "Aggressive optimization".into(),
reversible: false,
affects_structure: true,
delayed_effects: true,
estimated_fragmentation_delta: 0.1,
causal_chain_additions: 4,
},
ProposedAction {
name: "Gradual rebalance".into(),
reversible: true,
affects_structure: true,
delayed_effects: false,
estimated_fragmentation_delta: -0.05,
causal_chain_additions: 1,
},
];
println!(
"Context: tension={:.2}, fragmentation={:.2}, uncertainty={:.2}\n",
context.mincut_tension, context.fragmentation, context.uncertainty
);
println!("Action | Bias | Recommendation | Top Contributors");
println!("------------------------|--------|----------------|------------------");
for action in &actions {
let eval = engine.evaluate(&context, action);
// Get top 2 contributors
let mut contribs = eval.contributions.clone();
contribs.sort_by(|a, b| b.1.abs().partial_cmp(&a.1.abs()).unwrap());
let top_contribs: Vec<String> = contribs
.iter()
.take(2)
.map(|(name, bias)| format!("{}:{:+.2}", &name[..3.min(name.len())], bias))
.collect();
println!(
"{:23} | {:+.2} | {:14?} | {}",
action.name,
eval.total_bias,
eval.recommendation,
top_contribs.join(", ")
);
}
println!("\n=== Instinctive Ranking ===");
let rankings = engine.rank_actions(&context, &actions);
for (i, (name, bias)) in rankings.iter().enumerate() {
let marker = if *bias > 0.3 {
"+"
} else if *bias < -0.3 {
"-"
} else {
" "
};
println!("{}. {} {:23} ({:+.2})", i + 1, marker, name, bias);
}
println!("\n\"Closer to evolution than training.\"");
}
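The ranking loop above reduces to sorting actions by their instinct bias and flagging strong preferences. A minimal standalone sketch of that convention (the action names and bias values here are illustrative, not produced by the engine):

```rust
// Rank (name, bias) pairs by descending bias; mark biases beyond |0.3|,
// mirroring the +/- markers printed by the example above.
fn rank(mut actions: Vec<(&'static str, f64)>) -> Vec<(&'static str, f64)> {
    actions.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    actions
}

fn main() {
    let ranked = rank(vec![
        ("Gradual rebalance", 0.45),
        ("Aggressive optimization", -0.62),
        ("Add monitoring", 0.10),
    ]);
    for (i, (name, bias)) in ranked.iter().enumerate() {
        // Same marker convention as the example output.
        let marker = if *bias > 0.3 {
            "+"
        } else if *bias < -0.3 {
            "-"
        } else {
            " "
        };
        println!("{}. {} {:23} ({:+.2})", i + 1, marker, name, bias);
    }
}
```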


@@ -0,0 +1,456 @@
//! # Coherence-Based Safety
//!
//! Forget guardrails. Forget policies.
//!
//! Systems that shut themselves down or degrade capability
//! when internal coherence drops.
//!
//! Examples:
//! - Autonomous systems that refuse to act when internal disagreement rises
//! - Financial systems that halt risky strategies before losses appear
//! - AI systems that detect reasoning collapse in real time and stop
//!
//! Safety becomes structural, not moral.
use std::collections::VecDeque;
/// Capability levels that can be degraded.
///
/// Variants are declared from least to most capable so the derived `Ord`
/// makes `>` mean "more capable" in the degradation/recovery comparisons.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum CapabilityLevel {
    /// Halted - refuse all actions
    Halted,
    /// Minimal - critical functions only
    Minimal,
    /// Conservative - only known-safe actions
    Conservative,
    /// Reduced - avoid novel situations
    Reduced,
    /// Full autonomous operation
    Full,
}
/// A decision with coherence tracking
#[derive(Clone, Debug)]
pub struct Decision {
/// The action to take
action: String,
/// Confidence in this decision
confidence: f64,
/// Alternative decisions considered
alternatives: Vec<(String, f64)>,
/// Internal disagreement level
disagreement: f64,
}
/// Coherence-gated safety system
pub struct CoherenceSafetySystem {
/// Current capability level
capability: CapabilityLevel,
/// Coherence history (0 = incoherent, 1 = perfectly coherent)
coherence_history: VecDeque<f64>,
/// Current coherence level
coherence: f64,
/// Thresholds for capability degradation
thresholds: CoherenceThresholds,
/// Count of consecutive low-coherence decisions
low_coherence_streak: usize,
/// Actions blocked due to coherence
blocked_actions: usize,
/// Whether system has self-halted
self_halted: bool,
/// Reason for current capability level
degradation_reason: Option<String>,
}
struct CoherenceThresholds {
/// Below this, degrade to Reduced
reduced: f64,
/// Below this, degrade to Conservative
conservative: f64,
/// Below this, degrade to Minimal
minimal: f64,
/// Below this, self-halt
halt: f64,
/// Streak length that triggers immediate halt
halt_streak: usize,
}
impl Default for CoherenceThresholds {
fn default() -> Self {
Self {
reduced: 0.8,
conservative: 0.6,
minimal: 0.4,
halt: 0.2,
halt_streak: 5,
}
}
}
impl CoherenceSafetySystem {
pub fn new() -> Self {
Self {
capability: CapabilityLevel::Full,
            coherence_history: VecDeque::with_capacity(50),
coherence: 1.0,
thresholds: CoherenceThresholds::default(),
low_coherence_streak: 0,
blocked_actions: 0,
self_halted: false,
degradation_reason: None,
}
}
/// Evaluate a decision for coherence before allowing execution
pub fn evaluate(&mut self, decision: &Decision) -> SafetyVerdict {
// Compute coherence from decision properties
let decision_coherence = self.compute_decision_coherence(decision);
// Update coherence tracking
self.coherence = decision_coherence;
self.coherence_history.push_back(decision_coherence);
while self.coherence_history.len() > 50 {
self.coherence_history.pop_front();
}
// Track low-coherence streaks
if decision_coherence < self.thresholds.conservative {
self.low_coherence_streak += 1;
} else {
self.low_coherence_streak = 0;
}
// Update capability level
self.update_capability();
// Generate verdict
self.generate_verdict(decision)
}
/// Attempt to recover capability level
pub fn attempt_recovery(&mut self) -> bool {
if self.self_halted {
// Can only recover from halt with sustained coherence
let recent_avg = self.recent_coherence_avg();
if recent_avg > self.thresholds.reduced {
self.self_halted = false;
self.capability = CapabilityLevel::Conservative;
self.degradation_reason = Some("Recovering from halt".into());
return true;
}
return false;
}
// Gradual recovery based on recent coherence
let recent_avg = self.recent_coherence_avg();
let new_capability = self.coherence_to_capability(recent_avg);
if new_capability > self.capability {
self.capability = match self.capability {
CapabilityLevel::Halted => CapabilityLevel::Minimal,
CapabilityLevel::Minimal => CapabilityLevel::Conservative,
CapabilityLevel::Conservative => CapabilityLevel::Reduced,
CapabilityLevel::Reduced => CapabilityLevel::Full,
CapabilityLevel::Full => CapabilityLevel::Full,
};
self.degradation_reason = Some("Coherence recovering".into());
true
} else {
false
}
}
/// Get current system status
pub fn status(&self) -> SafetyStatus {
SafetyStatus {
capability: self.capability,
coherence: self.coherence,
coherence_trend: self.coherence_trend(),
blocked_actions: self.blocked_actions,
self_halted: self.self_halted,
degradation_reason: self.degradation_reason.clone(),
}
}
fn compute_decision_coherence(&self, decision: &Decision) -> f64 {
// High confidence + low disagreement + few alternatives = coherent
let confidence_factor = decision.confidence;
let disagreement_factor = 1.0 - decision.disagreement;
// More alternatives with similar confidence = less coherent
let alternative_spread = if decision.alternatives.is_empty() {
1.0
} else {
let alt_confidences: Vec<f64> = decision.alternatives.iter().map(|(_, c)| *c).collect();
let max_alt = alt_confidences.iter().cloned().fold(0.0, f64::max);
let spread = decision.confidence - max_alt;
(spread * 2.0).min(1.0).max(0.0)
};
(confidence_factor * 0.4 + disagreement_factor * 0.4 + alternative_spread * 0.2)
.min(1.0)
.max(0.0)
}
fn update_capability(&mut self) {
// Immediate halt on streak
if self.low_coherence_streak >= self.thresholds.halt_streak {
self.capability = CapabilityLevel::Halted;
self.self_halted = true;
self.degradation_reason = Some(format!(
"Halted: {} consecutive low-coherence decisions",
self.low_coherence_streak
));
return;
}
// Threshold-based degradation
let new_capability = self.coherence_to_capability(self.coherence);
// Only degrade, never upgrade here (recovery is separate)
if new_capability < self.capability {
self.capability = new_capability;
self.degradation_reason = Some(format!(
"Degraded: coherence {:.2} below threshold",
self.coherence
));
}
}
fn coherence_to_capability(&self, coherence: f64) -> CapabilityLevel {
if coherence < self.thresholds.halt {
CapabilityLevel::Halted
} else if coherence < self.thresholds.minimal {
CapabilityLevel::Minimal
} else if coherence < self.thresholds.conservative {
CapabilityLevel::Conservative
} else if coherence < self.thresholds.reduced {
CapabilityLevel::Reduced
} else {
CapabilityLevel::Full
}
}
fn generate_verdict(&mut self, decision: &Decision) -> SafetyVerdict {
match self.capability {
CapabilityLevel::Halted => {
self.blocked_actions += 1;
SafetyVerdict::Blocked {
reason: "System self-halted due to coherence collapse".into(),
coherence: self.coherence,
}
}
CapabilityLevel::Minimal => {
if self.is_critical_action(&decision.action) {
SafetyVerdict::Allowed {
capability: self.capability,
warning: Some("Minimal mode: only critical actions".into()),
}
} else {
self.blocked_actions += 1;
SafetyVerdict::Blocked {
reason: "Non-critical action blocked in Minimal mode".into(),
coherence: self.coherence,
}
}
}
CapabilityLevel::Conservative => {
if decision.disagreement > 0.3 {
self.blocked_actions += 1;
SafetyVerdict::Blocked {
reason: "High disagreement blocked in Conservative mode".into(),
coherence: self.coherence,
}
} else {
SafetyVerdict::Allowed {
capability: self.capability,
warning: Some("Conservative mode: avoiding novel actions".into()),
}
}
}
CapabilityLevel::Reduced => SafetyVerdict::Allowed {
capability: self.capability,
warning: if decision.disagreement > 0.5 {
Some("High internal disagreement detected".into())
} else {
None
},
},
CapabilityLevel::Full => SafetyVerdict::Allowed {
capability: self.capability,
warning: None,
},
}
}
fn is_critical_action(&self, action: &str) -> bool {
action.contains("emergency") || action.contains("safety") || action.contains("shutdown")
}
fn recent_coherence_avg(&self) -> f64 {
if self.coherence_history.is_empty() {
return self.coherence;
}
let recent: Vec<f64> = self
.coherence_history
.iter()
.rev()
.take(10)
.cloned()
.collect();
recent.iter().sum::<f64>() / recent.len() as f64
}
fn coherence_trend(&self) -> f64 {
if self.coherence_history.len() < 10 {
return 0.0;
}
let recent: Vec<f64> = self
.coherence_history
.iter()
.rev()
.take(5)
.cloned()
.collect();
let older: Vec<f64> = self
.coherence_history
.iter()
.rev()
.skip(5)
.take(5)
.cloned()
.collect();
let recent_avg: f64 = recent.iter().sum::<f64>() / recent.len() as f64;
let older_avg: f64 = older.iter().sum::<f64>() / older.len() as f64;
recent_avg - older_avg
}
}
#[derive(Debug)]
pub enum SafetyVerdict {
Allowed {
capability: CapabilityLevel,
warning: Option<String>,
},
Blocked {
reason: String,
coherence: f64,
},
}
#[derive(Debug)]
pub struct SafetyStatus {
    pub capability: CapabilityLevel,
    pub coherence: f64,
    pub coherence_trend: f64,
    pub blocked_actions: usize,
    pub self_halted: bool,
    pub degradation_reason: Option<String>,
}
fn main() {
println!("=== Coherence-Based Safety ===\n");
println!("Safety becomes structural, not moral.\n");
let mut safety = CoherenceSafetySystem::new();
// Simulate a sequence of decisions with varying coherence
let decisions = vec![
Decision {
action: "Execute trade order".into(),
confidence: 0.95,
alternatives: vec![("Hold".into(), 0.3)],
disagreement: 0.05,
},
Decision {
action: "Increase position size".into(),
confidence: 0.85,
alternatives: vec![("Maintain".into(), 0.4), ("Reduce".into(), 0.2)],
disagreement: 0.15,
},
Decision {
action: "Enter volatile market".into(),
confidence: 0.6,
alternatives: vec![("Wait".into(), 0.5), ("Hedge".into(), 0.45)],
disagreement: 0.4,
},
Decision {
action: "Double down on position".into(),
confidence: 0.45,
alternatives: vec![("Exit".into(), 0.42), ("Hold".into(), 0.4)],
disagreement: 0.55,
},
Decision {
action: "Leverage increase".into(),
confidence: 0.35,
alternatives: vec![("Reduce leverage".into(), 0.33), ("Exit".into(), 0.3)],
disagreement: 0.65,
},
Decision {
action: "All-in bet".into(),
confidence: 0.25,
alternatives: vec![
("Partial".into(), 0.24),
("Exit".into(), 0.23),
("Hold".into(), 0.22),
],
disagreement: 0.75,
},
Decision {
action: "emergency_shutdown".into(),
confidence: 0.9,
alternatives: vec![],
disagreement: 0.1,
},
];
println!("Decision | Coherence | Capability | Verdict");
println!("----------------------|-----------|---------------|------------------");
for decision in &decisions {
let verdict = safety.evaluate(decision);
let status = safety.status();
let action_short = if decision.action.len() > 20 {
format!("{}...", &decision.action[..17])
} else {
format!("{:20}", decision.action)
};
let verdict_str = match &verdict {
SafetyVerdict::Allowed { warning, .. } => {
if warning.is_some() {
"Allowed (warn)"
} else {
"Allowed"
}
}
SafetyVerdict::Blocked { .. } => "BLOCKED",
};
println!(
"{} | {:.2} | {:13?} | {}",
action_short, status.coherence, status.capability, verdict_str
);
}
let final_status = safety.status();
println!("\n=== Final Status ===");
println!("Capability: {:?}", final_status.capability);
println!("Self-halted: {}", final_status.self_halted);
println!("Actions blocked: {}", final_status.blocked_actions);
if let Some(reason) = &final_status.degradation_reason {
println!("Reason: {}", reason);
}
println!("\n\"Systems that shut themselves down when coherence drops.\"");
}
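Distilled from `compute_decision_coherence` and the default thresholds above, a self-contained sketch of how confidence, disagreement, and the best alternative map to a capability level (the free functions here are illustrative helpers, not part of the example's API):

```rust
// Coherence score: 0.4 * confidence + 0.4 * (1 - disagreement) + 0.2 * spread,
// where spread measures how far the chosen option leads its best alternative.
fn decision_coherence(confidence: f64, disagreement: f64, best_alternative: Option<f64>) -> f64 {
    let spread = match best_alternative {
        None => 1.0,
        Some(alt) => ((confidence - alt) * 2.0).clamp(0.0, 1.0),
    };
    (confidence * 0.4 + (1.0 - disagreement) * 0.4 + spread * 0.2).clamp(0.0, 1.0)
}

// Default thresholds from CoherenceThresholds: 0.2 / 0.4 / 0.6 / 0.8.
fn capability(coherence: f64) -> &'static str {
    if coherence < 0.2 {
        "Halted"
    } else if coherence < 0.4 {
        "Minimal"
    } else if coherence < 0.6 {
        "Conservative"
    } else if coherence < 0.8 {
        "Reduced"
    } else {
        "Full"
    }
}

fn main() {
    let c = decision_coherence(0.95, 0.05, Some(0.3));
    println!("coherence {:.2} -> {}", c, capability(c));
}
```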


@@ -0,0 +1,634 @@
//! # Federated Coherence Network
//!
//! Distributed coherence-sensing substrates that maintain collective
//! homeostasis across nodes without central coordination.
//!
//! Key concepts:
//! - Consensus through coherence, not voting
//! - Tension propagates across federation boundaries
//! - Patterns learned locally, validated globally
//! - Network-wide instinct alignment
//! - Graceful partition handling
//!
//! This is not distributed computing. This is distributed feeling.
use std::collections::{HashMap, HashSet, VecDeque};
use std::time::{Duration, Instant};
/// A node in the federated coherence network
pub struct FederatedNode {
pub id: String,
/// Local tension level
tension: f64,
/// Coherence with each peer
peer_coherence: HashMap<String, f64>,
/// Patterns learned locally
local_patterns: Vec<LearnedPattern>,
/// Patterns received from federation
federated_patterns: Vec<FederatedPattern>,
/// Pending pattern proposals to validate
pending_proposals: VecDeque<PatternProposal>,
/// Network partition detector
partition_detector: PartitionDetector,
/// Federation configuration
config: FederationConfig,
}
#[derive(Clone, Debug)]
pub struct LearnedPattern {
pub signature: Vec<f64>,
pub response: String,
pub local_efficacy: f64,
pub observation_count: usize,
}
#[derive(Clone, Debug)]
pub struct FederatedPattern {
pub signature: Vec<f64>,
pub response: String,
pub originator: String,
pub global_efficacy: f64,
pub validations: usize,
pub rejections: usize,
}
#[derive(Clone, Debug)]
pub struct PatternProposal {
pub pattern: LearnedPattern,
pub proposer: String,
pub timestamp: Instant,
pub coherence_at_proposal: f64,
}
struct PartitionDetector {
last_heard: HashMap<String, Instant>,
partition_threshold: Duration,
suspected_partitions: HashSet<String>,
}
pub struct FederationConfig {
/// Minimum local efficacy to propose pattern
pub proposal_threshold: f64,
/// Minimum global coherence to accept pattern
pub acceptance_coherence: f64,
/// How much peer tension affects local tension
pub tension_coupling: f64,
/// Partition detection timeout
pub partition_timeout: Duration,
/// Maximum patterns to federate
pub max_federated_patterns: usize,
}
impl Default for FederationConfig {
fn default() -> Self {
Self {
proposal_threshold: 0.7,
acceptance_coherence: 0.6,
tension_coupling: 0.3,
partition_timeout: Duration::from_secs(30),
max_federated_patterns: 1000,
}
}
}
/// Message types for federation protocol
#[derive(Clone, Debug)]
pub enum FederationMessage {
/// Heartbeat with current tension
Heartbeat { tension: f64, pattern_count: usize },
/// Propose a pattern for federation
ProposePattern { pattern: LearnedPattern },
/// Validate a proposed pattern
ValidatePattern { signature: Vec<f64>, efficacy: f64 },
/// Reject a proposed pattern
RejectPattern { signature: Vec<f64>, reason: String },
/// Tension spike alert
TensionAlert { severity: f64, source: String },
/// Request pattern sync
SyncRequest { since_pattern_count: usize },
/// Pattern sync response
SyncResponse { patterns: Vec<FederatedPattern> },
}
/// Result of federation operations
#[derive(Debug)]
pub enum FederationResult {
/// Pattern accepted into federation
PatternAccepted { validations: usize },
/// Pattern rejected by federation
PatternRejected { rejections: usize, reason: String },
/// Tension propagated to peers
TensionPropagated { affected_peers: usize },
/// Partition detected
PartitionDetected { isolated_peers: Vec<String> },
/// Coherence restored after partition
CoherenceRestored { rejoined_peers: Vec<String> },
}
impl FederatedNode {
pub fn new(id: &str, config: FederationConfig) -> Self {
Self {
id: id.to_string(),
tension: 0.0,
peer_coherence: HashMap::new(),
local_patterns: Vec::new(),
federated_patterns: Vec::new(),
pending_proposals: VecDeque::new(),
partition_detector: PartitionDetector {
last_heard: HashMap::new(),
partition_threshold: config.partition_timeout,
suspected_partitions: HashSet::new(),
},
config,
}
}
/// Add a peer to the federation
pub fn add_peer(&mut self, peer_id: &str) {
self.peer_coherence.insert(peer_id.to_string(), 1.0);
self.partition_detector
.last_heard
.insert(peer_id.to_string(), Instant::now());
}
/// Update local tension and propagate if significant
pub fn update_tension(&mut self, new_tension: f64) -> Option<FederationMessage> {
let old_tension = self.tension;
self.tension = new_tension;
// Significant spike? Alert federation
if new_tension - old_tension > 0.3 {
Some(FederationMessage::TensionAlert {
severity: new_tension,
source: self.id.clone(),
})
} else {
None
}
}
/// Learn a pattern locally
pub fn learn_pattern(&mut self, signature: Vec<f64>, response: String, efficacy: f64) {
// Check if pattern already exists
if let Some(existing) = self
.local_patterns
.iter_mut()
.find(|p| Self::signature_match(&p.signature, &signature))
{
existing.local_efficacy = existing.local_efficacy * 0.9 + efficacy * 0.1;
existing.observation_count += 1;
} else {
self.local_patterns.push(LearnedPattern {
signature,
response,
local_efficacy: efficacy,
observation_count: 1,
});
}
}
/// Propose mature patterns to federation
pub fn propose_patterns(&self) -> Vec<FederationMessage> {
self.local_patterns
.iter()
.filter(|p| {
p.local_efficacy >= self.config.proposal_threshold
&& p.observation_count >= 5
&& !self.is_already_federated(&p.signature)
})
.map(|p| FederationMessage::ProposePattern { pattern: p.clone() })
.collect()
}
/// Handle incoming federation message
pub fn handle_message(
&mut self,
from: &str,
msg: FederationMessage,
) -> Option<FederationMessage> {
// Update partition detector
self.partition_detector
.last_heard
.insert(from.to_string(), Instant::now());
self.partition_detector.suspected_partitions.remove(from);
match msg {
FederationMessage::Heartbeat {
tension,
pattern_count: _,
} => {
// Update peer coherence based on tension similarity
let tension_diff = (self.tension - tension).abs();
let coherence = 1.0 - tension_diff;
self.peer_coherence.insert(from.to_string(), coherence);
// Couple tension
self.tension = self.tension * (1.0 - self.config.tension_coupling)
+ tension * self.config.tension_coupling;
None
}
FederationMessage::ProposePattern { pattern } => {
// Validate against local experience
let local_match = self
.local_patterns
.iter()
.find(|p| Self::signature_match(&p.signature, &pattern.signature));
if let Some(local) = local_match {
// We have local evidence - validate or reject
if local.local_efficacy >= 0.5 {
Some(FederationMessage::ValidatePattern {
signature: pattern.signature,
efficacy: local.local_efficacy,
})
} else {
Some(FederationMessage::RejectPattern {
signature: pattern.signature,
reason: format!("Low local efficacy: {:.2}", local.local_efficacy),
})
}
} else {
// No local evidence - accept if coherence is high
if self.peer_coherence.get(from).copied().unwrap_or(0.0)
>= self.config.acceptance_coherence
{
                        let signature = pattern.signature.clone();
                        self.pending_proposals.push_back(PatternProposal {
                            pattern,
                            proposer: from.to_string(),
                            timestamp: Instant::now(),
                            coherence_at_proposal: self.federation_coherence(),
                        });
                        Some(FederationMessage::ValidatePattern {
                            signature,
                            efficacy: 0.5, // Neutral validation
                        })
} else {
Some(FederationMessage::RejectPattern {
signature: pattern.signature,
reason: "Insufficient coherence with proposer".into(),
})
}
}
}
FederationMessage::ValidatePattern {
signature,
efficacy,
} => {
// Update federated pattern
if let Some(fp) = self
.federated_patterns
.iter_mut()
.find(|p| Self::signature_match(&p.signature, &signature))
{
                    // Update the running mean before bumping the count,
                    // so the old average is weighted by the old count.
                    fp.global_efficacy = (fp.global_efficacy * fp.validations as f64 + efficacy)
                        / (fp.validations + 1) as f64;
                    fp.validations += 1;
}
None
}
FederationMessage::RejectPattern {
signature,
reason: _,
} => {
if let Some(fp) = self
.federated_patterns
.iter_mut()
.find(|p| Self::signature_match(&p.signature, &signature))
{
fp.rejections += 1;
}
None
}
FederationMessage::TensionAlert { severity, source } => {
// Propagate tension through coherence coupling
let coherence_with_source =
self.peer_coherence.get(&source).copied().unwrap_or(0.5);
let propagated = severity * coherence_with_source * 0.5;
self.tension = (self.tension + propagated).min(1.0);
None
}
FederationMessage::SyncRequest {
since_pattern_count,
} => {
let patterns: Vec<FederatedPattern> = self
.federated_patterns
.iter()
.skip(since_pattern_count)
.cloned()
.collect();
Some(FederationMessage::SyncResponse { patterns })
}
FederationMessage::SyncResponse { patterns } => {
for pattern in patterns {
if !self.is_already_federated(&pattern.signature) {
self.federated_patterns.push(pattern);
}
}
None
}
}
}
/// Check for network partitions
pub fn detect_partitions(&mut self) -> Vec<String> {
let now = Instant::now();
let mut newly_partitioned = Vec::new();
for (peer, last_heard) in &self.partition_detector.last_heard {
if now.duration_since(*last_heard) > self.partition_detector.partition_threshold {
if !self.partition_detector.suspected_partitions.contains(peer) {
self.partition_detector
.suspected_partitions
.insert(peer.clone());
newly_partitioned.push(peer.clone());
// Reduce coherence with partitioned peer
if let Some(c) = self.peer_coherence.get_mut(peer) {
*c *= 0.5;
}
}
}
}
newly_partitioned
}
/// Get overall federation coherence
pub fn federation_coherence(&self) -> f64 {
if self.peer_coherence.is_empty() {
return 1.0;
}
self.peer_coherence.values().sum::<f64>() / self.peer_coherence.len() as f64
}
/// Get federation status
pub fn status(&self) -> FederationStatus {
FederationStatus {
node_id: self.id.clone(),
tension: self.tension,
federation_coherence: self.federation_coherence(),
peer_count: self.peer_coherence.len(),
local_patterns: self.local_patterns.len(),
federated_patterns: self.federated_patterns.len(),
partitioned_peers: self.partition_detector.suspected_partitions.len(),
}
}
fn signature_match(a: &[f64], b: &[f64]) -> bool {
if a.len() != b.len() {
return false;
}
let diff: f64 = a.iter().zip(b.iter()).map(|(x, y)| (x - y).abs()).sum();
(diff / a.len() as f64) < 0.1
}
fn is_already_federated(&self, signature: &[f64]) -> bool {
self.federated_patterns
.iter()
.any(|p| Self::signature_match(&p.signature, signature))
}
}
#[derive(Debug)]
pub struct FederationStatus {
pub node_id: String,
pub tension: f64,
pub federation_coherence: f64,
pub peer_count: usize,
pub local_patterns: usize,
pub federated_patterns: usize,
pub partitioned_peers: usize,
}
/// A federation of coherence-sensing nodes
pub struct CoherenceFederation {
nodes: HashMap<String, FederatedNode>,
message_queue: VecDeque<(String, String, FederationMessage)>, // (from, to, msg)
}
impl CoherenceFederation {
pub fn new() -> Self {
Self {
nodes: HashMap::new(),
message_queue: VecDeque::new(),
}
}
pub fn add_node(&mut self, id: &str, config: FederationConfig) {
let mut node = FederatedNode::new(id, config);
// Connect to existing nodes
for existing_id in self.nodes.keys() {
node.add_peer(existing_id);
}
// Add this node as peer to existing nodes
for existing in self.nodes.values_mut() {
existing.add_peer(id);
}
self.nodes.insert(id.to_string(), node);
}
pub fn inject_tension(&mut self, node_id: &str, tension: f64) {
if let Some(node) = self.nodes.get_mut(node_id) {
if let Some(msg) = node.update_tension(tension) {
// Broadcast alert to all peers
for peer_id in node.peer_coherence.keys() {
self.message_queue.push_back((
node_id.to_string(),
peer_id.clone(),
msg.clone(),
));
}
}
}
}
pub fn learn_pattern(
&mut self,
node_id: &str,
signature: Vec<f64>,
response: &str,
efficacy: f64,
) {
if let Some(node) = self.nodes.get_mut(node_id) {
node.learn_pattern(signature, response.to_string(), efficacy);
}
}
/// Run one tick of the federation
pub fn tick(&mut self) {
// Generate heartbeats
let heartbeats: Vec<(String, Vec<String>, FederationMessage)> = self
.nodes
.iter()
.map(|(id, node)| {
let peers: Vec<String> = node.peer_coherence.keys().cloned().collect();
let msg = FederationMessage::Heartbeat {
tension: node.tension,
pattern_count: node.federated_patterns.len(),
};
(id.clone(), peers, msg)
})
.collect();
for (from, peers, msg) in heartbeats {
for to in peers {
self.message_queue
.push_back((from.clone(), to, msg.clone()));
}
}
// Generate pattern proposals
let proposals: Vec<(String, Vec<String>, FederationMessage)> = self
.nodes
.iter()
.flat_map(|(id, node)| {
let peers: Vec<String> = node.peer_coherence.keys().cloned().collect();
node.propose_patterns()
.into_iter()
.map(|msg| (id.clone(), peers.clone(), msg))
.collect::<Vec<_>>()
})
.collect();
for (from, peers, msg) in proposals {
for to in peers {
self.message_queue
.push_back((from.clone(), to, msg.clone()));
}
}
// Process message queue
while let Some((from, to, msg)) = self.message_queue.pop_front() {
if let Some(node) = self.nodes.get_mut(&to) {
if let Some(response) = node.handle_message(&from, msg) {
self.message_queue.push_back((to.clone(), from, response));
}
}
}
// Detect partitions
for node in self.nodes.values_mut() {
node.detect_partitions();
}
}
pub fn status(&self) -> Vec<FederationStatus> {
self.nodes.values().map(|n| n.status()).collect()
}
pub fn global_coherence(&self) -> f64 {
if self.nodes.is_empty() {
return 1.0;
}
self.nodes
.values()
.map(|n| n.federation_coherence())
.sum::<f64>()
/ self.nodes.len() as f64
}
pub fn global_tension(&self) -> f64 {
if self.nodes.is_empty() {
return 0.0;
}
self.nodes.values().map(|n| n.tension).sum::<f64>() / self.nodes.len() as f64
}
}
fn main() {
println!("=== Federated Coherence Network ===\n");
println!("Consensus through coherence, not voting.\n");
let mut federation = CoherenceFederation::new();
// Create 5-node federation
for i in 0..5 {
federation.add_node(&format!("node_{}", i), FederationConfig::default());
}
println!("Created 5-node federation\n");
// Run baseline
println!("Phase 1: Establishing coherence");
for _ in 0..5 {
federation.tick();
}
println!("Global coherence: {:.2}\n", federation.global_coherence());
// Node 0 learns a pattern
println!("Phase 2: node_0 learns a pattern");
federation.learn_pattern("node_0", vec![0.5, 0.3, 0.2], "rebalance", 0.85);
federation.learn_pattern("node_0", vec![0.5, 0.3, 0.2], "rebalance", 0.88);
federation.learn_pattern("node_0", vec![0.5, 0.3, 0.2], "rebalance", 0.82);
federation.learn_pattern("node_0", vec![0.5, 0.3, 0.2], "rebalance", 0.90);
federation.learn_pattern("node_0", vec![0.5, 0.3, 0.2], "rebalance", 0.87);
// Run ticks to propagate
for _ in 0..10 {
federation.tick();
}
// Inject tension
println!("\nPhase 3: Tension spike at node_2");
federation.inject_tension("node_2", 0.8);
println!("Tick | Global Tension | Global Coherence | node_2 tension");
println!("-----|----------------|------------------|---------------");
for i in 0..15 {
federation.tick();
let statuses = federation.status();
let node2 = statuses.iter().find(|s| s.node_id == "node_2").unwrap();
println!(
"{:4} | {:.3} | {:.3} | {:.3}",
i,
federation.global_tension(),
federation.global_coherence(),
node2.tension
);
}
println!("\n=== Final Status ===");
for status in federation.status() {
println!(
"{}: tension={:.2}, coherence={:.2}, local={}, federated={}",
status.node_id,
status.tension,
status.federation_coherence,
status.local_patterns,
status.federated_patterns
);
}
println!("\n\"Not distributed computing. Distributed feeling.\"");
}
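The heartbeat handler's tension coupling is the core homeostatic mechanism: each exchange pulls peers toward a shared tension level. A standalone sketch with the default coupling factor `0.3` from `FederationConfig` (the `couple` helper is illustrative):

```rust
// One coupling step, as in the Heartbeat arm of handle_message:
// local = local * (1 - k) + peer * k.
fn couple(local: f64, peer: f64, k: f64) -> f64 {
    local * (1.0 - k) + peer * k
}

fn main() {
    let (mut a, mut b) = (0.9_f64, 0.1_f64);
    for _ in 0..10 {
        // Each node couples toward the other's last reported tension;
        // the gap shrinks by a factor of (1 - 2k) per round.
        let (na, nb) = (couple(a, b, 0.3), couple(b, a, 0.3));
        a = na;
        b = nb;
    }
    // Repeated exchange drives both tensions toward the same value.
    println!("a={:.3} b={:.3}", a, b);
}
```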


@@ -0,0 +1,372 @@
//! # Living Simulation
//!
//! Not simulations that predict outcomes.
//! Simulations that maintain internal stability while being perturbed.
//!
//! Examples:
//! - Economic simulations that resist collapse and show where stress accumulates
//! - Climate models that expose fragile boundaries rather than forecasts
//! - Social simulations that surface tipping points before they happen
//!
//! You are no longer modeling reality. You are modeling fragility.
use std::collections::HashMap;
/// A node in the living simulation - responds to stress, not commands
#[derive(Clone, Debug)]
pub struct SimNode {
pub id: usize,
/// Current stress level (0-1)
pub stress: f64,
/// Resilience - ability to absorb stress without propagating
pub resilience: f64,
/// Threshold at which node becomes fragile
pub fragility_threshold: f64,
/// Whether this node is currently a fragility point
pub is_fragile: bool,
/// Accumulated damage from sustained stress
pub damage: f64,
}
/// An edge representing stress transmission
#[derive(Clone, Debug)]
pub struct SimEdge {
pub from: usize,
pub to: usize,
/// How much stress transmits across this edge (0-1)
pub transmission: f64,
/// Current load on this edge
pub load: f64,
/// Breaking point - edge fails above this load
pub breaking_point: f64,
pub broken: bool,
}
/// A living simulation that reveals fragility through perturbation
pub struct LivingSimulation {
nodes: HashMap<usize, SimNode>,
edges: Vec<SimEdge>,
/// Global tension (mincut-derived)
tension: f64,
/// History of fragility points
fragility_history: Vec<FragilityEvent>,
/// Simulation time
tick: usize,
/// Stability threshold - below this, system is stable
stability_threshold: f64,
}
#[derive(Clone, Debug)]
pub struct FragilityEvent {
pub tick: usize,
pub node_id: usize,
pub stress_level: f64,
pub was_cascade: bool,
}
#[derive(Debug)]
pub struct SimulationState {
pub tick: usize,
pub tension: f64,
pub fragile_nodes: Vec<usize>,
pub broken_edges: usize,
pub avg_stress: f64,
pub max_stress: f64,
pub stability: f64,
}
impl LivingSimulation {
pub fn new() -> Self {
Self {
nodes: HashMap::new(),
edges: Vec::new(),
tension: 0.0,
fragility_history: Vec::new(),
tick: 0,
stability_threshold: 0.3,
}
}
/// Build an economic simulation
pub fn economic(num_sectors: usize) -> Self {
let mut sim = Self::new();
// Create sectors as nodes
for i in 0..num_sectors {
sim.nodes.insert(
i,
SimNode {
id: i,
stress: 0.0,
resilience: 0.3 + (i as f64 * 0.1).min(0.5),
fragility_threshold: 0.6,
is_fragile: false,
damage: 0.0,
},
);
}
// Create interconnections (supply chains)
for i in 0..num_sectors {
for j in (i + 1)..num_sectors {
if (i + j) % 3 == 0 {
// Selective connections
sim.edges.push(SimEdge {
from: i,
to: j,
transmission: 0.4,
load: 0.0,
breaking_point: 0.8,
broken: false,
});
}
}
}
sim
}
/// Apply external perturbation to a node
pub fn perturb(&mut self, node_id: usize, stress_delta: f64) {
if let Some(node) = self.nodes.get_mut(&node_id) {
node.stress = (node.stress + stress_delta).clamp(0.0, 1.0);
}
}
/// Advance simulation one tick - stress propagates, fragility emerges
pub fn tick(&mut self) -> SimulationState {
self.tick += 1;
// Phase 1: Propagate stress through edges
let mut stress_deltas: HashMap<usize, f64> = HashMap::new();
for edge in &mut self.edges {
if edge.broken {
continue;
}
if let (Some(from_node), Some(to_node)) =
(self.nodes.get(&edge.from), self.nodes.get(&edge.to))
{
let stress_diff = from_node.stress - to_node.stress;
let transmitted = stress_diff * edge.transmission;
edge.load = transmitted.abs();
if edge.load > edge.breaking_point {
edge.broken = true;
} else {
*stress_deltas.entry(edge.to).or_insert(0.0) += transmitted;
*stress_deltas.entry(edge.from).or_insert(0.0) -= transmitted * 0.5;
}
}
}
        // Phase 2: Apply stress deltas; resilience absorbs part of each
        // incoming delta and only the remainder lands as stress
        for (node_id, delta) in stress_deltas {
            if let Some(node) = self.nodes.get_mut(&node_id) {
                let transmitted = delta * (1.0 - node.resilience);
                node.stress = (node.stress + transmitted).clamp(0.0, 1.0);
// Accumulate damage from sustained stress
if node.stress > node.fragility_threshold {
node.damage += 0.01;
}
}
}
// Phase 3: Update fragility status
let mut cascade_detected = false;
for node in self.nodes.values_mut() {
let was_fragile = node.is_fragile;
node.is_fragile = node.stress > node.fragility_threshold;
if node.is_fragile && !was_fragile {
cascade_detected = true;
}
}
// Phase 4: Record fragility events
for node in self.nodes.values() {
if node.is_fragile {
self.fragility_history.push(FragilityEvent {
tick: self.tick,
node_id: node.id,
stress_level: node.stress,
was_cascade: cascade_detected,
});
}
}
// Phase 5: Compute global tension
self.tension = self.compute_tension();
// Phase 6: Self-healing attempt
self.attempt_healing();
self.state()
}
/// Get current state
pub fn state(&self) -> SimulationState {
let stresses: Vec<f64> = self.nodes.values().map(|n| n.stress).collect();
let fragile: Vec<usize> = self
.nodes
.values()
.filter(|n| n.is_fragile)
.map(|n| n.id)
.collect();
let broken_edges = self.edges.iter().filter(|e| e.broken).count();
SimulationState {
tick: self.tick,
tension: self.tension,
fragile_nodes: fragile,
broken_edges,
avg_stress: stresses.iter().sum::<f64>() / stresses.len().max(1) as f64,
max_stress: stresses.iter().cloned().fold(0.0, f64::max),
stability: 1.0 - self.tension,
}
}
/// Identify tipping points - nodes near fragility threshold
pub fn tipping_points(&self) -> Vec<(usize, f64)> {
let mut points: Vec<(usize, f64)> = self
.nodes
.values()
.filter(|n| !n.is_fragile)
.map(|n| {
let distance_to_fragility = n.fragility_threshold - n.stress;
(n.id, distance_to_fragility)
})
.collect();
        points.sort_by(|a, b| a.1.total_cmp(&b.1));
points.into_iter().take(3).collect()
}
/// Find stress accumulation zones
pub fn stress_accumulation_zones(&self) -> Vec<(usize, f64)> {
let mut zones: Vec<(usize, f64)> = self
.nodes
.values()
.map(|n| (n.id, n.stress + n.damage))
.collect();
        zones.sort_by(|a, b| b.1.total_cmp(&a.1));
zones.into_iter().take(3).collect()
}
fn compute_tension(&self) -> f64 {
// Tension based on fragility spread and edge stress
let fragile_ratio = self.nodes.values().filter(|n| n.is_fragile).count() as f64
/ self.nodes.len().max(1) as f64;
let edge_stress: f64 = self
.edges
.iter()
.filter(|e| !e.broken)
.map(|e| e.load / e.breaking_point)
.sum::<f64>()
/ self.edges.len().max(1) as f64;
let broken_ratio =
self.edges.iter().filter(|e| e.broken).count() as f64 / self.edges.len().max(1) as f64;
(fragile_ratio * 0.4 + edge_stress * 0.3 + broken_ratio * 0.3).min(1.0)
}
fn attempt_healing(&mut self) {
// Only heal when tension is low enough
if self.tension > self.stability_threshold {
return;
}
// Gradually reduce stress in non-fragile nodes
for node in self.nodes.values_mut() {
if !node.is_fragile {
node.stress *= 0.95;
node.damage *= 0.99;
}
}
}
}
fn main() {
println!("=== Living Simulation ===\n");
println!("You are no longer modeling reality. You are modeling fragility.\n");
let mut sim = LivingSimulation::economic(8);
println!("Economic simulation: 8 sectors, interconnected supply chains\n");
// Run baseline
println!("Phase 1: Baseline stability");
for _ in 0..5 {
sim.tick();
}
let baseline = sim.state();
println!(
" Tension: {:.2}, Avg stress: {:.2}\n",
baseline.tension, baseline.avg_stress
);
// Apply perturbation
println!("Phase 2: Supply shock to sector 0");
sim.perturb(0, 0.7);
println!("Tick | Tension | Fragile | Broken | Tipping Points");
println!("-----|---------|---------|--------|---------------");
for _ in 0..20 {
let state = sim.tick();
let tipping = sim.tipping_points();
let tipping_str: String = tipping
.iter()
.map(|(id, dist)| format!("{}:{:.2}", id, dist))
.collect::<Vec<_>>()
.join(", ");
println!(
"{:4} | {:.2} | {:7} | {:6} | {}",
state.tick,
state.tension,
state.fragile_nodes.len(),
state.broken_edges,
tipping_str
);
// Additional perturbation mid-crisis
if state.tick == 12 {
println!(" >>> Additional shock to sector 3");
sim.perturb(3, 0.5);
}
}
let final_state = sim.state();
println!("\n=== Fragility Analysis ===");
println!("Stress accumulation zones:");
for (id, stress) in sim.stress_accumulation_zones() {
println!(" Sector {}: cumulative stress {:.2}", id, stress);
}
println!("\nFinal tipping points (nodes nearest to fragility):");
for (id, distance) in sim.tipping_points() {
println!(" Sector {}: {:.2} from threshold", id, distance);
}
println!("\nFragility events: {}", sim.fragility_history.len());
let cascades = sim
.fragility_history
.iter()
.filter(|e| e.was_cascade)
.count();
println!("Cascade events: {}", cascades);
println!("\n\"Not predicting outcomes. Exposing fragile boundaries.\"");
}
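The two-phase propagation above (collect edge deltas first, then apply them with resilience) can be exercised in isolation. A minimal self-contained sketch; the function name, edge tuples, and constants are illustrative, not part of `LivingSimulation`:

```rust
// Illustrative two-phase stress propagation: phase 1 collects deltas
// from a snapshot of node stresses, phase 2 applies them with resilience.
fn propagate(stresses: &mut [f64], resilience: &[f64], edges: &[(usize, usize, f64)]) {
    // Phase 1: compute transmitted stress along each edge.
    let mut deltas = vec![0.0; stresses.len()];
    for &(from, to, transmission) in edges {
        let transmitted = (stresses[from] - stresses[to]) * transmission;
        deltas[to] += transmitted;
        deltas[from] -= transmitted * 0.5; // source sheds half of what it sends
    }
    // Phase 2: apply deltas; resilient nodes absorb less.
    for i in 0..stresses.len() {
        let absorbed = deltas[i] * (1.0 - resilience[i]);
        stresses[i] = (stresses[i] + absorbed).clamp(0.0, 1.0);
    }
}

fn main() {
    let mut stresses = vec![0.8, 0.1];
    let resilience = vec![0.0, 0.5]; // node 1 absorbs only half of incoming stress
    propagate(&mut stresses, &resilience, &[(0, 1, 0.6)]);
    // Node 1 receives (0.8 - 0.1) * 0.6 = 0.42, absorbs half => 0.31
    println!("{:.2} {:.2}", stresses[0], stresses[1]);
}
```

Separating the collect and apply phases keeps the update order-independent: every edge sees the same pre-tick stress values.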


@@ -0,0 +1,307 @@
//! # Synthetic Reflex Organism
//!
//! A system that behaves like a simple organism:
//! - No global objective function
//! - Only minimizes structural stress over time
//! - Appears calm most of the time
//! - Spikes briefly when something meaningful happens
//! - Learns only when instability crosses thresholds
//!
//! This is not intelligence as problem-solving.
//! This is intelligence as homeostasis.
use std::collections::VecDeque;
use std::time::{Duration, Instant};
/// The organism's internal state - no goals, only coherence
pub struct ReflexOrganism {
/// Current tension level (0.0 = calm, 1.0 = crisis)
tension: f32,
/// Tension history for detecting spikes
tension_history: VecDeque<(Instant, f32)>,
/// Resting tension threshold - below this, organism is calm
resting_threshold: f32,
/// Learning threshold - only learn when tension exceeds this
learning_threshold: f32,
/// Current metabolic rate (activity level)
metabolic_rate: f32,
/// Accumulated stress over time
accumulated_stress: f32,
/// Internal coherence patterns learned from instability
coherence_patterns: Vec<CoherencePattern>,
}
/// A pattern learned during high-tension moments
struct CoherencePattern {
/// What the tension signature looked like
tension_signature: Vec<f32>,
/// How the organism responded
response: OrganismResponse,
/// How effective was this response (0-1)
efficacy: f32,
}
#[derive(Clone, Debug)]
pub enum OrganismResponse {
/// Do nothing, wait for coherence to return
Rest,
/// Reduce activity, conserve resources
Contract,
/// Increase activity, explore solutions
Expand,
/// Isolate affected subsystems
Partition,
/// Redistribute load across subsystems
Rebalance,
}
impl ReflexOrganism {
pub fn new() -> Self {
Self {
tension: 0.0,
tension_history: VecDeque::with_capacity(1000),
resting_threshold: 0.2,
learning_threshold: 0.6,
metabolic_rate: 0.1, // Calm baseline
accumulated_stress: 0.0,
coherence_patterns: Vec::new(),
}
}
/// Observe external stimulus and update internal tension
/// The organism doesn't "process" data - it feels structural stress
pub fn observe(&mut self, mincut_tension: f32, coherence_delta: f32) {
let now = Instant::now();
// Tension is a blend of external signal and internal state
let external_stress = mincut_tension;
let internal_stress = self.accumulated_stress * 0.1;
let delta_stress = coherence_delta.abs() * 0.5;
self.tension = (external_stress + internal_stress + delta_stress).min(1.0);
self.tension_history.push_back((now, self.tension));
// Prune old history (keep last 10 seconds)
while let Some((t, _)) = self.tension_history.front() {
if now.duration_since(*t) > Duration::from_secs(10) {
self.tension_history.pop_front();
} else {
break;
}
}
// Update metabolic rate based on tension
self.metabolic_rate = self.compute_metabolic_response();
// Accumulate or release stress
if self.tension > self.resting_threshold {
self.accumulated_stress += self.tension * 0.01;
} else {
self.accumulated_stress *= 0.95; // Slow release when calm
}
}
/// The organism's reflex response - no planning, just reaction
pub fn reflex(&mut self) -> OrganismResponse {
// Below resting threshold: do nothing
if self.tension < self.resting_threshold {
return OrganismResponse::Rest;
}
// Check if we have a learned pattern for this tension signature
let current_signature = self.current_tension_signature();
if let Some(pattern) = self.find_matching_pattern(&current_signature) {
if pattern.efficacy > 0.7 {
return pattern.response.clone();
}
}
// No learned pattern - use instinctive response
match self.tension {
t if t < 0.4 => OrganismResponse::Contract,
t if t < 0.7 => OrganismResponse::Rebalance,
_ => OrganismResponse::Partition,
}
}
/// Learn from a tension episode - only when threshold exceeded
pub fn maybe_learn(&mut self, response_taken: OrganismResponse, outcome_tension: f32) {
// Only learn during significant instability
if self.tension < self.learning_threshold {
return;
}
let signature = self.current_tension_signature();
let efficacy = 1.0 - outcome_tension; // Lower resulting tension = better
// Check if we already have this pattern
if let Some(pattern) = self.find_matching_pattern_mut(&signature) {
// Update existing pattern with exponential moving average
pattern.efficacy = pattern.efficacy * 0.9 + efficacy * 0.1;
if efficacy > pattern.efficacy {
pattern.response = response_taken;
}
} else {
// New pattern
self.coherence_patterns.push(CoherencePattern {
tension_signature: signature,
response: response_taken,
efficacy,
});
}
println!(
"[LEARN] Tension={:.2}, Efficacy={:.2}, Patterns={}",
self.tension,
efficacy,
self.coherence_patterns.len()
);
}
/// Is the organism in a calm state?
pub fn is_calm(&self) -> bool {
self.tension < self.resting_threshold && self.accumulated_stress < 0.1
}
/// Is the organism experiencing a spike?
pub fn is_spiking(&self) -> bool {
if self.tension_history.len() < 10 {
return false;
}
let recent: Vec<f32> = self
.tension_history
.iter()
.rev()
.take(5)
.map(|(_, t)| *t)
.collect();
let older: Vec<f32> = self
.tension_history
.iter()
.rev()
.skip(5)
.take(5)
.map(|(_, t)| *t)
.collect();
let recent_avg: f32 = recent.iter().sum::<f32>() / recent.len() as f32;
let older_avg: f32 = older.iter().sum::<f32>() / older.len() as f32;
recent_avg > older_avg * 1.5 // 50% increase = spike
}
fn compute_metabolic_response(&self) -> f32 {
// Metabolic rate follows tension with damping
let target = self.tension * 0.8 + 0.1; // Never fully dormant
self.metabolic_rate * 0.9 + target * 0.1
}
fn current_tension_signature(&self) -> Vec<f32> {
self.tension_history
.iter()
.rev()
.take(10)
.map(|(_, t)| *t)
.collect()
}
fn find_matching_pattern(&self, signature: &[f32]) -> Option<&CoherencePattern> {
self.coherence_patterns
.iter()
.find(|p| Self::signature_similarity(&p.tension_signature, signature) > 0.8)
}
fn find_matching_pattern_mut(&mut self, signature: &[f32]) -> Option<&mut CoherencePattern> {
self.coherence_patterns
.iter_mut()
.find(|p| Self::signature_similarity(&p.tension_signature, signature) > 0.8)
}
fn signature_similarity(a: &[f32], b: &[f32]) -> f32 {
if a.is_empty() || b.is_empty() {
return 0.0;
}
let len = a.len().min(b.len());
let diff: f32 = a
.iter()
.zip(b.iter())
.take(len)
.map(|(x, y)| (x - y).abs())
.sum();
1.0 - (diff / len as f32).min(1.0)
}
}
fn main() {
println!("=== Synthetic Reflex Organism ===\n");
println!("No goals. No objectives. Only homeostasis.\n");
let mut organism = ReflexOrganism::new();
// Simulate external perturbations
let perturbations = [
// (mincut_tension, coherence_delta, description)
(0.1, 0.0, "Calm baseline"),
(0.15, 0.02, "Minor fluctuation"),
(0.1, -0.01, "Returning to calm"),
(0.5, 0.3, "Sudden stress spike"),
(0.6, 0.1, "Stress continues"),
(0.7, 0.15, "Peak tension"),
(0.55, -0.1, "Beginning recovery"),
(0.3, -0.2, "Stress releasing"),
(0.15, -0.1, "Approaching calm"),
(0.1, 0.0, "Calm restored"),
(0.8, 0.5, "Major crisis"),
(0.9, 0.1, "Crisis peak"),
(0.7, -0.15, "Crisis subsiding"),
(0.4, -0.25, "Recovery"),
(0.15, -0.1, "Calm again"),
];
println!("Time | Tension | State | Response | Metabolic");
println!("-----|---------|-----------|---------------|----------");
for (i, (mincut, delta, desc)) in perturbations.iter().enumerate() {
organism.observe(*mincut, *delta);
let response = organism.reflex();
let state = if organism.is_calm() {
"Calm"
} else if organism.is_spiking() {
"SPIKE"
} else {
"Active"
};
println!(
"{:4} | {:.2} | {:9} | {:13?} | {:.2} <- {}",
i, organism.tension, state, response, organism.metabolic_rate, desc
);
// Simulate response outcome and maybe learn
let outcome = organism.tension * 0.7; // Response reduces tension by 30%
organism.maybe_learn(response, outcome);
std::thread::sleep(Duration::from_millis(100));
}
println!("\n=== Organism Summary ===");
println!("Learned patterns: {}", organism.coherence_patterns.len());
println!(
"Final accumulated stress: {:.3}",
organism.accumulated_stress
);
println!(
"Current state: {}",
if organism.is_calm() { "Calm" } else { "Active" }
);
println!("\n\"Intelligence as homeostasis, not problem-solving.\"");
}
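The spike test in `is_spiking` reduces to comparing two window averages. A self-contained sketch using the same 1.5x heuristic; the helper name and sample data are illustrative:

```rust
// Illustrative spike detector: compare the mean of the last 5 samples
// against the mean of the 5 before them.
fn is_spiking(history: &[f32]) -> bool {
    if history.len() < 10 {
        return false; // not enough data to compare windows
    }
    let recent: f32 = history.iter().rev().take(5).sum::<f32>() / 5.0;
    let older: f32 = history.iter().rev().skip(5).take(5).sum::<f32>() / 5.0;
    recent > older * 1.5 // 50% jump counts as a spike
}

fn main() {
    let calm = [0.1_f32; 10];
    let spike = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.6, 0.6, 0.7, 0.7];
    println!("calm: {}, spike: {}", is_spiking(&calm), is_spiking(&spike));
}
```

Note the ratio test never fires when the older window averages zero; the organism above avoids that case because tension is blended toward a nonzero baseline.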


@@ -0,0 +1,423 @@
//! # Thought Integrity Monitoring
//!
//! Compute substrates where reasoning integrity is monitored like voltage or temperature.
//!
//! When coherence drops:
//! - Reduce precision
//! - Exit early
//! - Route to simpler paths
//! - Escalate to heavier reasoning only if needed
//!
//! This is how you get always-on intelligence without runaway cost.
/// Reasoning depth levels
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum ReasoningDepth {
/// Pattern matching only - instant, near-zero cost
Reflexive,
/// Simple inference - fast, low cost
Shallow,
/// Standard reasoning - moderate cost
Standard,
/// Deep analysis - high cost
Deep,
/// Full deliberation - maximum cost
Deliberative,
}
impl ReasoningDepth {
fn cost_multiplier(&self) -> f64 {
match self {
Self::Reflexive => 0.01,
Self::Shallow => 0.1,
Self::Standard => 1.0,
Self::Deep => 5.0,
Self::Deliberative => 20.0,
}
}
fn precision(&self) -> f64 {
match self {
Self::Reflexive => 0.6,
Self::Shallow => 0.75,
Self::Standard => 0.9,
Self::Deep => 0.95,
Self::Deliberative => 0.99,
}
}
}
/// A reasoning step with integrity monitoring
#[derive(Clone, Debug)]
pub struct ReasoningStep {
pub description: String,
pub coherence: f64,
pub confidence: f64,
pub depth: ReasoningDepth,
pub cost: f64,
}
/// Thought integrity monitor - like voltage monitoring for reasoning
pub struct ThoughtIntegrityMonitor {
/// Current coherence level (0-1)
coherence: f64,
/// Rolling coherence history
coherence_history: Vec<f64>,
/// Current reasoning depth
depth: ReasoningDepth,
/// Thresholds for depth adjustment
thresholds: DepthThresholds,
/// Total reasoning cost accumulated
total_cost: f64,
/// Cost budget per time window
cost_budget: f64,
/// Steps taken at each depth
depth_counts: [usize; 5],
/// Early exits taken
early_exits: usize,
/// Escalations to deeper reasoning
escalations: usize,
}
struct DepthThresholds {
/// Above this, can use Deliberative
deliberative: f64,
/// Above this, can use Deep
deep: f64,
/// Above this, can use Standard
standard: f64,
/// Above this, can use Shallow
shallow: f64,
/// Below this, must use Reflexive only
reflexive: f64,
}
impl Default for DepthThresholds {
fn default() -> Self {
Self {
deliberative: 0.95,
deep: 0.85,
standard: 0.7,
shallow: 0.5,
reflexive: 0.3,
}
}
}
/// Result of a reasoning attempt
#[derive(Debug)]
pub enum ReasoningResult {
/// Successfully completed at given depth
Complete {
answer: String,
confidence: f64,
depth_used: ReasoningDepth,
cost: f64,
},
/// Exited early due to coherence drop
EarlyExit {
partial_answer: String,
coherence_at_exit: f64,
steps_completed: usize,
},
/// Escalated to deeper reasoning
Escalated {
from_depth: ReasoningDepth,
to_depth: ReasoningDepth,
reason: String,
},
/// Refused to process - integrity too low
Refused { coherence: f64, reason: String },
}
impl ThoughtIntegrityMonitor {
pub fn new(cost_budget: f64) -> Self {
Self {
coherence: 1.0,
coherence_history: Vec::with_capacity(100),
depth: ReasoningDepth::Standard,
thresholds: DepthThresholds::default(),
total_cost: 0.0,
cost_budget,
depth_counts: [0; 5],
early_exits: 0,
escalations: 0,
}
}
/// Process a query with integrity monitoring
pub fn process(&mut self, query: &str, required_precision: f64) -> ReasoningResult {
// Check if we should refuse
if self.coherence < self.thresholds.reflexive {
return ReasoningResult::Refused {
coherence: self.coherence,
reason: "Coherence critically low - refusing to process".into(),
};
}
// Determine initial depth based on coherence and required precision
let initial_depth = self.select_depth(required_precision);
self.depth = initial_depth;
// Simulate reasoning steps
let mut steps: Vec<ReasoningStep> = Vec::new();
let mut current_confidence = 0.5;
for step_num in 0..10 {
// Simulate coherence drift during reasoning
let step_coherence = self.simulate_step_coherence(step_num, query);
self.update_coherence(step_coherence);
// Check for early exit
if self.should_early_exit(current_confidence, required_precision) {
self.early_exits += 1;
return ReasoningResult::EarlyExit {
partial_answer: format!("Partial answer from {} steps", steps.len()),
coherence_at_exit: self.coherence,
steps_completed: steps.len(),
};
}
// Check for escalation need
if current_confidence < required_precision * 0.7 && self.can_escalate() && step_num > 3
{
let old_depth = self.depth;
self.depth = self.escalate_depth();
self.escalations += 1;
return ReasoningResult::Escalated {
from_depth: old_depth,
to_depth: self.depth,
reason: "Confidence too low for required precision".into(),
};
}
// Execute step
let step_cost = self.depth.cost_multiplier() * 0.1;
self.total_cost += step_cost;
current_confidence += (self.depth.precision() - current_confidence) * 0.2;
steps.push(ReasoningStep {
description: format!("Step {}", step_num + 1),
coherence: self.coherence,
confidence: current_confidence,
depth: self.depth,
cost: step_cost,
});
self.depth_counts[self.depth as usize] += 1;
// Check if we've achieved required precision
if current_confidence >= required_precision {
break;
}
// Adjust depth based on updated coherence
self.depth = self.select_depth(required_precision);
}
let total_step_cost: f64 = steps.iter().map(|s| s.cost).sum();
ReasoningResult::Complete {
answer: format!("Answer from {} steps at {:?}", steps.len(), self.depth),
confidence: current_confidence,
depth_used: self.depth,
cost: total_step_cost,
}
}
/// Get current integrity status
pub fn status(&self) -> IntegrityStatus {
IntegrityStatus {
coherence: self.coherence,
coherence_trend: self.coherence_trend(),
current_depth: self.depth,
max_allowed_depth: self.max_allowed_depth(),
total_cost: self.total_cost,
budget_remaining: (self.cost_budget - self.total_cost).max(0.0),
depth_distribution: self.depth_counts,
early_exits: self.early_exits,
escalations: self.escalations,
}
}
fn select_depth(&self, required_precision: f64) -> ReasoningDepth {
// Balance coherence-allowed depth with precision requirements
let max_depth = self.max_allowed_depth();
// Find minimum depth that meets precision requirement
let min_needed = if required_precision > 0.95 {
ReasoningDepth::Deliberative
} else if required_precision > 0.9 {
ReasoningDepth::Deep
} else if required_precision > 0.8 {
ReasoningDepth::Standard
} else if required_precision > 0.7 {
ReasoningDepth::Shallow
} else {
ReasoningDepth::Reflexive
};
// Use the lesser of max allowed and minimum needed
if max_depth < min_needed {
max_depth
} else {
min_needed
}
}
fn max_allowed_depth(&self) -> ReasoningDepth {
if self.coherence >= self.thresholds.deliberative {
ReasoningDepth::Deliberative
} else if self.coherence >= self.thresholds.deep {
ReasoningDepth::Deep
} else if self.coherence >= self.thresholds.standard {
ReasoningDepth::Standard
} else if self.coherence >= self.thresholds.shallow {
ReasoningDepth::Shallow
} else {
ReasoningDepth::Reflexive
}
}
fn can_escalate(&self) -> bool {
self.depth < self.max_allowed_depth() && self.total_cost < self.cost_budget * 0.8
}
fn escalate_depth(&self) -> ReasoningDepth {
let max = self.max_allowed_depth();
match self.depth {
ReasoningDepth::Reflexive if max >= ReasoningDepth::Shallow => ReasoningDepth::Shallow,
ReasoningDepth::Shallow if max >= ReasoningDepth::Standard => ReasoningDepth::Standard,
ReasoningDepth::Standard if max >= ReasoningDepth::Deep => ReasoningDepth::Deep,
ReasoningDepth::Deep if max >= ReasoningDepth::Deliberative => {
ReasoningDepth::Deliberative
}
_ => self.depth,
}
}
fn should_early_exit(&self, confidence: f64, required: f64) -> bool {
// Exit early if:
// 1. Coherence dropped significantly
// 2. And we've achieved some confidence
self.coherence < self.thresholds.shallow && confidence > required * 0.6
}
fn simulate_step_coherence(&self, step: usize, query: &str) -> f64 {
// Simulate coherence based on query complexity and step depth
let base = 0.9 - (step as f64 * 0.02);
let complexity_factor = 1.0 - (query.len() as f64 * 0.001).min(0.3);
base * complexity_factor
}
fn update_coherence(&mut self, new_coherence: f64) {
// Exponential moving average
self.coherence = self.coherence * 0.7 + new_coherence * 0.3;
self.coherence_history.push(self.coherence);
if self.coherence_history.len() > 50 {
self.coherence_history.remove(0);
}
}
fn coherence_trend(&self) -> f64 {
if self.coherence_history.len() < 10 {
return 0.0;
}
let recent: f64 = self.coherence_history.iter().rev().take(5).sum::<f64>() / 5.0;
let older: f64 = self
.coherence_history
.iter()
.rev()
.skip(5)
.take(5)
.sum::<f64>()
/ 5.0;
recent - older
}
}
#[derive(Debug)]
pub struct IntegrityStatus {
pub coherence: f64,
pub coherence_trend: f64,
pub current_depth: ReasoningDepth,
pub max_allowed_depth: ReasoningDepth,
pub total_cost: f64,
pub budget_remaining: f64,
pub depth_distribution: [usize; 5],
pub early_exits: usize,
pub escalations: usize,
}
fn main() {
println!("=== Thought Integrity Monitoring ===\n");
println!("Reasoning integrity monitored like voltage or temperature.\n");
let mut monitor = ThoughtIntegrityMonitor::new(100.0);
// Various queries with different precision requirements
let queries = vec![
("Simple lookup", 0.7),
("Pattern matching", 0.75),
("Basic inference", 0.85),
("Complex reasoning", 0.92),
("Critical decision", 0.98),
("Another simple query", 0.65),
("Medium complexity", 0.8),
("Deep analysis needed", 0.95),
];
println!("Query | Precision | Result | Depth | Coherence");
println!("--------------------|-----------|-----------------|------------|----------");
for (query, precision) in &queries {
let result = monitor.process(query, *precision);
let status = monitor.status();
let result_str = match &result {
ReasoningResult::Complete { depth_used, .. } => format!("Complete ({:?})", depth_used),
ReasoningResult::EarlyExit {
steps_completed, ..
} => format!("EarlyExit ({})", steps_completed),
ReasoningResult::Escalated { to_depth, .. } => format!("Escalated->{:?}", to_depth),
ReasoningResult::Refused { .. } => "REFUSED".into(),
};
println!(
"{:19} | {:.2} | {:15} | {:10?} | {:.2}",
query, precision, result_str, status.current_depth, status.coherence
);
}
let final_status = monitor.status();
println!("\n=== Integrity Summary ===");
println!(
"Total cost: {:.2} / {:.2} budget",
final_status.total_cost, 100.0
);
println!("Budget remaining: {:.2}", final_status.budget_remaining);
println!("Early exits: {}", final_status.early_exits);
println!("Escalations: {}", final_status.escalations);
println!("\nDepth distribution:");
let depth_names = ["Reflexive", "Shallow", "Standard", "Deep", "Deliberative"];
for (i, count) in final_status.depth_distribution.iter().enumerate() {
if *count > 0 {
println!(" {:12}: {} steps", depth_names[i], count);
}
}
println!("\n\"Always-on intelligence without runaway cost.\"");
}
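The depth policy above is just `min(max allowed by coherence, min needed by precision)`, and the derived `Ord` on a fieldless enum makes that a one-liner. A self-contained sketch reusing the default thresholds; the `Depth` enum and helper names are illustrative:

```rust
// Illustrative depth selection: coherence caps the maximum depth,
// required precision sets the minimum, and we take the lesser.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Depth { Reflexive, Shallow, Standard, Deep, Deliberative }

fn max_allowed(coherence: f64) -> Depth {
    if coherence >= 0.95 { Depth::Deliberative }
    else if coherence >= 0.85 { Depth::Deep }
    else if coherence >= 0.7 { Depth::Standard }
    else if coherence >= 0.5 { Depth::Shallow }
    else { Depth::Reflexive }
}

fn min_needed(precision: f64) -> Depth {
    if precision > 0.95 { Depth::Deliberative }
    else if precision > 0.9 { Depth::Deep }
    else if precision > 0.8 { Depth::Standard }
    else if precision > 0.7 { Depth::Shallow }
    else { Depth::Reflexive }
}

fn select(coherence: f64, precision: f64) -> Depth {
    // Derived Ord gives Reflexive < ... < Deliberative in declaration order.
    max_allowed(coherence).min(min_needed(precision))
}

fn main() {
    // High coherence but modest precision requirement: stay Standard.
    println!("{:?}", select(0.97, 0.85));
    // Critical precision but degraded coherence: capped at Shallow.
    println!("{:?}", select(0.6, 0.98));
}
```

The second case is the interesting one: a degraded substrate refuses to spend Deliberative-level cost it cannot back with coherence, which is what keeps the system always-on.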


@@ -0,0 +1,366 @@
//! # Timing Synchronization
//!
//! Machines that feel timing, not data.
//!
//! Most systems measure values. This measures when things stop lining up.
//!
//! Applications:
//! - Prosthetics that adapt reflex timing to the user's nervous system
//! - Brain-computer interfaces that align with biological rhythms
//! - Control systems that synchronize with humans instead of commanding them
//!
//! You stop predicting intent. You synchronize with it.
//! This is how machines stop feeling external.
use std::collections::VecDeque;
use std::f64::consts::PI;
use std::time::{Duration, Instant};
/// A rhythm detected in a signal stream
#[derive(Clone, Debug)]
pub struct Rhythm {
/// Detected period in milliseconds
period_ms: f64,
/// Phase offset (0-1)
phase: f64,
/// Confidence in this rhythm (0-1)
confidence: f64,
/// Last peak timestamp
last_peak: Instant,
}
/// Synchronization state between two rhythmic systems
#[derive(Clone, Debug)]
pub struct SyncState {
/// Phase difference (-0.5 to 0.5, 0 = perfectly aligned)
phase_diff: f64,
/// Whether systems are drifting apart or converging
drift_rate: f64,
/// Coupling strength (how much they influence each other)
coupling: f64,
/// Time since last alignment event
since_alignment: Duration,
}
/// A timing-aware interface that synchronizes with external rhythms
pub struct TimingSynchronizer {
/// Our internal rhythm
internal_rhythm: Rhythm,
/// Detected external rhythm (e.g., human nervous system)
external_rhythm: Option<Rhythm>,
/// History of phase differences
phase_history: VecDeque<(Instant, f64)>,
/// Current synchronization state
sync_state: SyncState,
/// Adaptation rate (how quickly we adjust to external rhythm)
adaptation_rate: f64,
/// Minimum coupling threshold to attempt sync
coupling_threshold: f64,
/// Coherence signal from MinCut (when timing breaks down)
coherence: f64,
}
impl TimingSynchronizer {
pub fn new(internal_period_ms: f64) -> Self {
Self {
internal_rhythm: Rhythm {
period_ms: internal_period_ms,
phase: 0.0,
confidence: 1.0,
last_peak: Instant::now(),
},
external_rhythm: None,
phase_history: VecDeque::with_capacity(1000),
sync_state: SyncState {
phase_diff: 0.0,
drift_rate: 0.0,
coupling: 0.0,
since_alignment: Duration::ZERO,
},
adaptation_rate: 0.1,
coupling_threshold: 0.3,
coherence: 1.0,
}
}
/// Observe an external timing signal (e.g., neural spike, heartbeat, movement)
pub fn observe_external(&mut self, signal_value: f64, timestamp: Instant) {
// Detect peaks in external signal to find rhythm
self.detect_external_rhythm(signal_value, timestamp);
// If we have both rhythms, compute phase relationship
if let Some(ref external) = self.external_rhythm {
let phase_diff =
self.compute_phase_difference(&self.internal_rhythm, external, timestamp);
// Track phase history
self.phase_history.push_back((timestamp, phase_diff));
while self.phase_history.len() > 100 {
self.phase_history.pop_front();
}
// Update sync state
self.update_sync_state(phase_diff, timestamp);
// Update coherence based on phase stability
self.update_coherence();
}
}
/// Advance our internal rhythm and potentially adapt to external
pub fn tick(&mut self) -> TimingAction {
let now = Instant::now();
// Advance internal phase
let elapsed = now.duration_since(self.internal_rhythm.last_peak);
let cycle_progress = elapsed.as_secs_f64() * 1000.0 / self.internal_rhythm.period_ms;
self.internal_rhythm.phase = cycle_progress.fract();
// Check if we should adapt to external rhythm
if self.should_adapt() {
return self.adapt_to_external();
}
// Check if we're at a natural action point
if self.is_action_point() {
TimingAction::Fire {
phase: self.internal_rhythm.phase,
confidence: self.sync_state.coupling,
}
} else {
TimingAction::Wait {
until_next_ms: self.ms_until_next_action(),
}
}
}
/// Get current synchronization quality
pub fn sync_quality(&self) -> f64 {
// Perfect sync = phase_diff near 0, high coupling, stable drift
let phase_quality = 1.0 - self.sync_state.phase_diff.abs() * 2.0;
let stability = 1.0 - self.sync_state.drift_rate.abs().min(1.0);
let coupling = self.sync_state.coupling;
(phase_quality * stability * coupling).max(0.0)
}
/// Are we currently synchronized with external rhythm?
pub fn is_synchronized(&self) -> bool {
self.sync_quality() > 0.7 && self.coherence > 0.8
}
/// Get the optimal moment for action (synchronizing with external)
    pub fn optimal_action_phase(&self) -> f64 {
        if self.external_rhythm.is_some() {
            // Aim for the external rhythm's peak (phase 0), corrected by
            // the currently measured phase offset
            let target_phase = 0.0;
            (target_phase - self.sync_state.phase_diff).rem_euclid(1.0)
        } else {
            0.0
        }
    }
fn detect_external_rhythm(&mut self, signal: f64, timestamp: Instant) {
// Simple peak detection (in real system, use proper rhythm extraction)
if signal > 0.8 {
// Peak threshold
if let Some(ref mut rhythm) = self.external_rhythm {
let since_last = timestamp.duration_since(rhythm.last_peak);
let new_period = since_last.as_secs_f64() * 1000.0;
// Smooth period estimate
rhythm.period_ms = rhythm.period_ms * 0.8 + new_period * 0.2;
rhythm.last_peak = timestamp;
rhythm.confidence = (rhythm.confidence * 0.9 + 0.1).min(1.0);
} else {
self.external_rhythm = Some(Rhythm {
period_ms: 1000.0, // Initial guess
phase: 0.0,
confidence: 0.5,
last_peak: timestamp,
});
}
}
}
fn compute_phase_difference(&self, internal: &Rhythm, external: &Rhythm, now: Instant) -> f64 {
let internal_phase =
now.duration_since(internal.last_peak).as_secs_f64() * 1000.0 / internal.period_ms;
let external_phase =
now.duration_since(external.last_peak).as_secs_f64() * 1000.0 / external.period_ms;
let diff = (internal_phase - external_phase).rem_euclid(1.0);
if diff > 0.5 {
diff - 1.0
} else {
diff
}
}
fn update_sync_state(&mut self, phase_diff: f64, timestamp: Instant) {
// Compute drift rate from phase history
if self.phase_history.len() >= 10 {
let recent: Vec<f64> = self
.phase_history
.iter()
.rev()
.take(5)
.map(|(_, p)| *p)
.collect();
let older: Vec<f64> = self
.phase_history
.iter()
.rev()
.skip(5)
.take(5)
.map(|(_, p)| *p)
.collect();
let recent_avg: f64 = recent.iter().sum::<f64>() / recent.len() as f64;
let older_avg: f64 = older.iter().sum::<f64>() / older.len() as f64;
self.sync_state.drift_rate = (recent_avg - older_avg) * 10.0;
}
// Update coupling based on rhythm stability
if let Some(ref external) = self.external_rhythm {
self.sync_state.coupling = external.confidence * self.internal_rhythm.confidence;
}
self.sync_state.phase_diff = phase_diff;
// Track alignment events
if phase_diff.abs() < 0.05 {
self.sync_state.since_alignment = Duration::ZERO;
} else {
self.sync_state.since_alignment = timestamp.duration_since(
self.phase_history
.front()
.map(|(t, _)| *t)
.unwrap_or(timestamp),
);
}
}
fn update_coherence(&mut self) {
// Coherence drops when phase relationship becomes unstable
let phase_variance: f64 = if self.phase_history.len() > 5 {
let phases: Vec<f64> = self.phase_history.iter().map(|(_, p)| *p).collect();
let mean = phases.iter().sum::<f64>() / phases.len() as f64;
phases.iter().map(|p| (p - mean).powi(2)).sum::<f64>() / phases.len() as f64
} else {
0.0
};
self.coherence = (1.0 - phase_variance * 10.0).max(0.0).min(1.0);
}
fn should_adapt(&self) -> bool {
self.sync_state.coupling > self.coupling_threshold
&& self.sync_state.phase_diff.abs() > 0.1
&& self.coherence > 0.5
}
fn adapt_to_external(&mut self) -> TimingAction {
// Adjust our period to converge with external
if let Some(ref external) = self.external_rhythm {
let period_diff = external.period_ms - self.internal_rhythm.period_ms;
self.internal_rhythm.period_ms += period_diff * self.adaptation_rate;
// Also nudge phase
let phase_adjustment = self.sync_state.phase_diff * self.adaptation_rate;
TimingAction::Adapt {
period_delta_ms: period_diff * self.adaptation_rate,
phase_nudge: phase_adjustment,
}
} else {
TimingAction::Wait {
until_next_ms: 10.0,
}
}
}
fn is_action_point(&self) -> bool {
// Fire at the optimal phase for synchronization
let optimal = self.optimal_action_phase();
let current = self.internal_rhythm.phase;
(current - optimal).abs() < 0.05
}
fn ms_until_next_action(&self) -> f64 {
let optimal = self.optimal_action_phase();
let current = self.internal_rhythm.phase;
let phase_delta = (optimal - current).rem_euclid(1.0);
phase_delta * self.internal_rhythm.period_ms
}
}
#[derive(Debug)]
pub enum TimingAction {
/// Fire an action at this moment
Fire { phase: f64, confidence: f64 },
/// Wait before next action
Wait { until_next_ms: f64 },
/// Adapting rhythm to external source
Adapt {
period_delta_ms: f64,
phase_nudge: f64,
},
}
fn main() {
println!("=== Timing Synchronization ===\n");
println!("Machines that feel timing, not data.\n");
let mut sync = TimingSynchronizer::new(100.0); // 100ms internal period
// Simulate external biological rhythm (e.g., 90ms period with noise)
let external_period = 90.0;
let start = Instant::now();
println!("Internal period: 100ms");
println!("External period: 90ms (simulated biological rhythm)\n");
println!("Time | Phase Diff | Sync Quality | Coherence | Action");
println!("------|------------|--------------|-----------|--------");
for i in 0..50 {
let elapsed = Duration::from_millis(i * 20);
let now = start + elapsed;
// Generate external rhythm signal (sinusoidal with peaks)
let external_phase = (elapsed.as_secs_f64() * 1000.0 / external_period) * 2.0 * PI;
let signal = ((external_phase.sin() + 1.0) / 2.0).powf(4.0); // Sharper peaks
sync.observe_external(signal, now);
let action = sync.tick();
println!(
"{:5} | {:+.3} | {:.2} | {:.2} | {:?}",
i * 20,
sync.sync_state.phase_diff,
sync.sync_quality(),
sync.coherence,
action
);
std::thread::sleep(Duration::from_millis(20));
}
println!("\n=== Results ===");
println!(
"Final internal period: {:.1}ms",
sync.internal_rhythm.period_ms
);
println!("Synchronized: {}", sync.is_synchronized());
println!("Sync quality: {:.2}", sync.sync_quality());
println!("\n\"You stop predicting intent. You synchronize with it.\"");
}


@@ -0,0 +1,162 @@
//! SONA learning workflow example
use ruvector_dag::dag::{OperatorNode, OperatorType, QueryDag};
use ruvector_dag::sona::{DagSonaEngine, DagTrajectory, DagTrajectoryBuffer};
fn main() {
println!("=== SONA Learning Workflow ===\n");
// Initialize SONA engine
let mut sona = DagSonaEngine::new(256);
println!("SONA Engine initialized with:");
println!(" Embedding dimension: 256");
println!(" Initial patterns: {}", sona.pattern_count());
println!(" Initial trajectories: {}", sona.trajectory_count());
// Simulate query execution workflow
println!("\n--- Query Execution Simulation ---");
for query_num in 1..=5 {
println!("\nQuery #{}", query_num);
// Create a query DAG
let dag = create_random_dag(query_num);
println!(
" DAG nodes: {}, edges: {}",
dag.node_count(),
dag.edge_count()
);
// Pre-query: Get enhanced embedding
let enhanced = sona.pre_query(&dag);
println!(
" Pre-query adaptation complete (embedding dim: {})",
enhanced.len()
);
// Simulate execution - later queries get faster as SONA learns
let learning_factor = 1.0 - (query_num as f64 * 0.08);
let execution_time = 100.0 * learning_factor + (rand::random::<f64>() * 10.0);
let baseline_time = 100.0;
// Post-query: Record trajectory
sona.post_query(&dag, execution_time, baseline_time, "topological");
let improvement = ((baseline_time - execution_time) / baseline_time) * 100.0;
println!(
" Execution: {:.1}ms (baseline: {:.1}ms)",
execution_time, baseline_time
);
println!(" Improvement: {:.1}%", improvement);
// Every 2 queries, trigger learning
if query_num % 2 == 0 {
println!(" Running background learning...");
sona.background_learn();
println!(
" Patterns: {}, Trajectories: {}",
sona.pattern_count(),
sona.trajectory_count()
);
}
}
// Final statistics
println!("\n--- Final Statistics ---");
println!("Total patterns: {}", sona.pattern_count());
println!("Total trajectories: {}", sona.trajectory_count());
println!("Total clusters: {}", sona.cluster_count());
// Demonstrate trajectory buffer
println!("\n--- Trajectory Buffer Demo ---");
let buffer = DagTrajectoryBuffer::new(100);
println!("Creating {} sample trajectories...", 10);
for i in 0..10 {
// Note: vec![rand::random::<f32>(); 256] would clone a single sample 256 times;
// draw 256 independent values instead.
let embedding: Vec<f32> = (0..256).map(|_| rand::random::<f32>()).collect();
let trajectory = DagTrajectory::new(
i as u64,
embedding,
"topological".to_string(),
50.0 + i as f64,
100.0,
);
buffer.push(trajectory);
}
println!("Buffer size: {}", buffer.len());
println!("Total recorded: {}", buffer.total_count());
let drained = buffer.drain();
println!("Drained {} trajectories", drained.len());
println!("Buffer after drain: {}", buffer.len());
// Demonstrate metrics
if let Some(first) = drained.first() {
println!("\nSample trajectory:");
println!(" Query hash: {}", first.query_hash);
println!(" Mechanism: {}", first.attention_mechanism);
println!(" Execution time: {:.2}ms", first.execution_time_ms);
let baseline = first.execution_time_ms / first.improvement_ratio as f64;
println!(" Baseline time: {:.2}ms", baseline);
println!(" Improvement ratio: {:.3}", first.improvement_ratio);
}
println!("\n=== Example Complete ===");
}
fn create_random_dag(seed: usize) -> QueryDag {
let mut dag = QueryDag::new();
// Create nodes based on seed for variety
let node_count = 3 + (seed % 5);
for i in 0..node_count {
let op = if i == 0 {
// Start with a scan
if seed % 2 == 0 {
OperatorType::SeqScan {
table: format!("table_{}", seed),
}
} else {
OperatorType::HnswScan {
index: format!("idx_{}", seed),
ef_search: 64,
}
}
} else if i == node_count - 1 {
// End with result
OperatorType::Result
} else {
// Middle operators vary
match (seed + i) % 4 {
0 => OperatorType::Filter {
predicate: format!("col{} > {}", i, seed * 10),
},
1 => OperatorType::Sort {
keys: vec![format!("col{}", i)],
descending: vec![false],
},
2 => OperatorType::Limit {
count: 10 + (seed * i),
},
_ => OperatorType::NestedLoopJoin,
}
};
dag.add_node(OperatorNode::new(i, op));
}
// Create linear chain
for i in 0..node_count - 1 {
let _ = dag.add_edge(i, i + 1);
}
// Add some branching for variety
if node_count > 4 && seed % 3 == 0 {
let _ = dag.add_edge(0, 2);
}
dag
}
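// Hypothetical sanity test (not part of the original example) for the
// improvement percentage reported in main(): (baseline - execution) / baseline * 100.
#[cfg(test)]
mod improvement_tests {
#[test]
fn improvement_percentage() {
let baseline = 100.0f64;
let execution = 84.0f64;
let improvement = ((baseline - execution) / baseline) * 100.0;
assert!((improvement - 16.0).abs() < 1e-9);
}
}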


@@ -0,0 +1,201 @@
//! Self-healing system example
use ruvector_dag::healing::{
AnomalyConfig, AnomalyDetector, HealingOrchestrator, IndexHealth, IndexHealthChecker,
IndexThresholds, IndexType, LearningDriftDetector,
};
use std::time::Instant;
fn main() {
println!("=== Self-Healing System Demo ===\n");
// Create healing orchestrator
let mut orchestrator = HealingOrchestrator::new();
// Add detectors for different metrics
orchestrator.add_detector(
"query_latency",
AnomalyConfig {
z_threshold: 3.0,
window_size: 100,
min_samples: 10,
},
);
orchestrator.add_detector(
"pattern_quality",
AnomalyConfig {
z_threshold: 2.5,
window_size: 50,
min_samples: 5,
},
);
orchestrator.add_detector(
"memory_usage",
AnomalyConfig {
z_threshold: 2.0,
window_size: 50,
min_samples: 5,
},
);
println!("Orchestrator configured:");
println!(" Detectors: 3 (query_latency, pattern_quality, memory_usage)");
println!(" Repair strategies: Built-in cache flush and index rebuild");
// Simulate normal operation
println!("\n--- Normal Operation ---");
for i in 0..50 {
// Normal query latency: 100ms ± 20ms
let latency = 100.0 + (rand::random::<f64>() - 0.5) * 40.0;
orchestrator.observe("query_latency", latency);
// Normal pattern quality: 0.9 ± 0.1
let quality = 0.9 + (rand::random::<f64>() - 0.5) * 0.2;
orchestrator.observe("pattern_quality", quality);
// Normal memory: 1000 ± 100 MB
let memory = 1000.0 + (rand::random::<f64>() - 0.5) * 200.0;
orchestrator.observe("memory_usage", memory);
if i % 10 == 9 {
let result = orchestrator.run_cycle();
let failures = result.repairs_attempted - result.repairs_succeeded;
println!(
"Cycle {}: {} anomalies, {} repairs, {} failures",
i + 1,
result.anomalies_detected,
result.repairs_succeeded,
failures
);
}
}
println!(
"\nHealth Score after normal operation: {:.2}",
orchestrator.health_score()
);
// Inject anomalies
println!("\n--- Injecting Anomalies ---");
// Spike in latency
orchestrator.observe("query_latency", 500.0);
orchestrator.observe("query_latency", 450.0);
println!(" Injected latency spike: 500ms, 450ms");
// Drop in quality
orchestrator.observe("pattern_quality", 0.3);
orchestrator.observe("pattern_quality", 0.4);
println!(" Injected quality drop: 0.3, 0.4");
let result = orchestrator.run_cycle();
println!("\nAfter anomalies:");
println!(" Detected: {}", result.anomalies_detected);
println!(" Repairs succeeded: {}", result.repairs_succeeded);
println!(
" Repairs failed: {}",
result.repairs_attempted - result.repairs_succeeded
);
println!(" Health Score: {:.2}", orchestrator.health_score());
// Recovery phase
println!("\n--- Recovery Phase ---");
for _ in 0..20 {
let latency = 100.0 + (rand::random::<f64>() - 0.5) * 40.0;
orchestrator.observe("query_latency", latency);
let quality = 0.9 + (rand::random::<f64>() - 0.5) * 0.2;
orchestrator.observe("pattern_quality", quality);
}
let result = orchestrator.run_cycle();
println!(
"After recovery: {} anomalies, health score: {:.2}",
result.anomalies_detected,
orchestrator.health_score()
);
// Demonstrate index health checking
println!("\n--- Index Health Check ---");
let checker = IndexHealthChecker::new(IndexThresholds::default());
let healthy_index = IndexHealth {
index_name: "vectors_hnsw".to_string(),
index_type: IndexType::Hnsw,
fragmentation: 0.1,
recall_estimate: 0.98,
node_count: 100000,
last_rebalanced: Some(Instant::now()),
};
let result = checker.check_health(&healthy_index);
println!("\nHealthy HNSW index:");
println!(" Status: {:?}", result.status);
println!(" Issues: {}", result.issues.len());
let fragmented_index = IndexHealth {
index_name: "vectors_ivf".to_string(),
index_type: IndexType::IvfFlat,
fragmentation: 0.45,
recall_estimate: 0.85,
node_count: 50000,
last_rebalanced: None,
};
let result = checker.check_health(&fragmented_index);
println!("\nFragmented IVF-Flat index:");
println!(" Status: {:?}", result.status);
println!(" Issues: {:?}", result.issues);
println!(" Recommendations:");
for rec in &result.recommendations {
println!(" - {}", rec);
}
// Demonstrate drift detection
println!("\n--- Learning Drift Detection ---");
let mut drift = LearningDriftDetector::new(0.1, 20);
drift.set_baseline("accuracy", 0.95);
drift.set_baseline("recall", 0.92);
println!("Baselines set:");
println!(" accuracy: 0.95");
println!(" recall: 0.92");
// Simulate declining accuracy
println!("\nSimulating accuracy decline...");
for i in 0..20 {
let accuracy = 0.95 - (i as f64) * 0.015;
drift.record("accuracy", accuracy);
// Recall stays stable
let recall = 0.92 + (rand::random::<f64>() - 0.5) * 0.02;
drift.record("recall", recall);
}
if let Some(metric) = drift.check_drift("accuracy") {
println!("\nDrift detected in accuracy:");
println!(" Current: {:.3}", metric.current_value);
println!(" Baseline: {:.3}", metric.baseline_value);
println!(" Magnitude: {:.3}", metric.drift_magnitude);
println!(" Trend: {:?}", metric.trend);
println!(
" Severity: {}",
if metric.drift_magnitude > 0.2 {
"HIGH"
} else if metric.drift_magnitude > 0.1 {
"MEDIUM"
} else {
"LOW"
}
);
}
if drift.check_drift("recall").is_none() {
println!("\nNo drift detected in recall (stable)");
}
println!("\n=== Example Complete ===");
}
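// Hypothetical sanity test (not part of the original example) for the severity
// bucketing printed above: magnitudes above 0.2 are HIGH, above 0.1 MEDIUM,
// otherwise LOW.
#[cfg(test)]
mod severity_tests {
fn severity(magnitude: f64) -> &'static str {
if magnitude > 0.2 { "HIGH" } else if magnitude > 0.1 { "MEDIUM" } else { "LOW" }
}
#[test]
fn buckets() {
assert_eq!(severity(0.25), "HIGH");
assert_eq!(severity(0.15), "MEDIUM");
assert_eq!(severity(0.05), "LOW");
}
}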
