Squashed 'vendor/ruvector/' content from commit b64c2172

git-subtree-dir: vendor/ruvector
git-subtree-split: b64c21726f2bb37286d9ee36a7869fef60cc6900
Commit d803bfe2b1 by ruv, 2026-02-28 14:39:40 -05:00
7854 changed files with 3,522,914 additions and 0 deletions

# ADR-000: Δ-Behavior Definition and Formal Framework
## Status
ACCEPTED
## Context
Δ-behavior is **not** differential computation, incremental updates, or change data capture.
Δ-behavior is a **pattern of system behavior** where:
> **Change is permitted. Collapse is not.**
## Definition
**Δ-behavior** (Delta-like behavior) describes systems that:
1. **Move only along allowed transitions**
2. **Preserve global coherence under local change**
3. **Bias toward closure over divergence**
This is shorthand for **"change under constraint"**.
## The Four Properties
A system exhibits Δ-behavior when ALL FOUR are true:
### Property 1: Local Change
State updates happen in **bounded steps**, not jumps.
```
∀ transition t: |s' - s| ≤ ε_local
```
The system cannot teleport to distant states.
### Property 2: Global Preservation
Local changes do **not** break overall organization.
```
∀ transition t: coherence(S') ≥ coherence(S) - ε_global
```
Structure is maintained across perturbations.
### Property 3: Violation Resistance
When a transition would increase instability, it is **damped, rerouted, or halted**.
```
if instability(t) > threshold:
    t' = damp(t) OR reroute(t) OR halt()
```
The system actively resists destabilization.
### Property 4: Closure Preference
The system naturally settles into **repeatable, stable patterns** (attractors).
```
trajectory(s, n) → A as n → ∞   (A = attractor)
```
Divergence is expensive; closure is cheap.
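Taken together, the four properties are mechanically checkable. A minimal, self-contained sketch over a 1-D trajectory (the function names, update rule, and thresholds here are illustrative, not ruvector API):

```rust
/// Property 1: every step is bounded by eps_local.
fn local_change(traj: &[f64], eps_local: f64) -> bool {
    traj.windows(2).all(|w| (w[1] - w[0]).abs() <= eps_local)
}

/// Property 4 (proxy): the tail of the trajectory has settled.
fn converges(traj: &[f64], eps: f64) -> bool {
    let last = *traj.last().unwrap();
    traj.iter().rev().take(3).all(|s| (s - last).abs() <= eps)
}

fn main() {
    // A damped update rule: bounded steps that settle toward s = 1.0
    let mut s = 0.0;
    let mut traj = vec![s];
    for _ in 0..50 {
        s += 0.5 * (1.0 - s); // each step closes at most half the remaining gap
        traj.push(s);
    }
    assert!(local_change(&traj, 0.6)); // Property 1: no jumps
    assert!(converges(&traj, 1e-3));   // Property 4: closure reached
    println!("settled at {:.6}", s);
}
```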
## Why This Feels Like "72%"
People report Δ-behavior as a ratio because:
- It is a **bias**, not a law
- Systems exhibit it **probabilistically**
- Measurement reveals a **tendency toward stability**
But it is not a magic constant. It is the observable effect of:
> **Constraints that make instability expensive**
## Mainstream Equivalents
| Domain | Concept | Formal Name |
|--------|---------|-------------|
| Physics | Phase locking, energy minimization | Coherence time |
| Control Theory | Bounded trajectories | Lyapunov stability |
| Biology | Regulation, balance | Homeostasis |
| Computation | Guardrails, limits | Bounded execution |
Everyone studies this. They just describe it differently.
## In ruvector Systems
### Vector Operations
- **Neighborhoods resist semantic drift** → local perturbations don't cascade
- **HNSW edges form stable attractors** → search paths converge
### Graph Operations
- **Structural identity preserved** → edits don't shatter topology
- **Min-cut blocks destabilizing rewrites** → partitions protect coherence
### Agent Operations
- **Attention collapses when disagreement rises** → prevents runaway divergence
- **Memory writes gated when coherence drops** → protects state integrity
- **Execution slows or exits instead of exploding** → graceful degradation
### Hardware Level
- **Energy and execution paths physically constrained**
- **Unstable transitions cost more and get suppressed**
## What Δ-Behavior Is NOT
| Not This | Why |
|----------|-----|
| Magic ratio | It's a pattern, not a constant |
| Mysticism | It's engineering constraints |
| Universal law | It's a design principle |
| Guaranteed optimality | It's stability, not performance |
## Decision: Enforcement Mechanism
The critical design question:
> **Is resistance to unstable transitions enforced by energy cost, scheduling, or memory gating?**
### Option A: Energy Cost (Recommended)
Unstable transitions require exponentially more compute/memory:
```rust
fn transition_cost(delta: &Delta) -> f64 {
    let instability = measure_instability(delta);
    BASE_COST * (1.0 + instability).exp()
}
```
**Pros**: Natural, hardware-aligned, self-regulating
**Cons**: Requires careful calibration
### Option B: Scheduling
Unstable transitions are deprioritized or throttled:
```rust
fn schedule_transition(delta: &Delta) -> Priority {
    if is_destabilizing(delta) {
        Priority::Deferred(backoff_time(delta))
    } else {
        Priority::Immediate
    }
}
```
**Pros**: Explicit control, debuggable
**Cons**: Can starve legitimate operations
### Option C: Memory Gating
Unstable transitions are blocked from persisting:
```rust
fn commit_transition(delta: &Delta) -> Result<(), GateRejection> {
    if coherence_gate.allows(delta) {
        memory.commit(delta)
    } else {
        Err(GateRejection::IncoherentTransition)
    }
}
```
**Pros**: Strong guarantees, prevents corruption
**Cons**: Can cause deadlocks
### Decision: Hybrid Approach
Combine all three with escalation:
1. **Energy cost** first (soft constraint)
2. **Scheduling throttle** second (medium constraint)
3. **Memory gate** last (hard constraint)
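A hedged sketch of this escalation order, with illustrative thresholds and costs (none of these names or constants come from the ruvector codebase):

```rust
#[derive(Debug, PartialEq)]
enum Outcome {
    Applied,
    Deferred, // throttled: re-queued for a later tick
    Rejected, // blocked outright by the memory gate
}

/// Escalation: energy cost (soft) → scheduling (medium) → memory gate (hard).
fn enforce(instability: f64, budget: &mut f64) -> Outcome {
    // 1. Energy cost: unstable transitions cost exponentially more
    let cost = (1.0 + instability).exp();
    if *budget < cost {
        return Outcome::Deferred; // cannot afford it this tick
    }
    *budget -= cost;
    // 2. Scheduling: throttle moderately destabilizing work
    if instability > 0.5 && instability <= 0.9 {
        return Outcome::Deferred;
    }
    // 3. Memory gate: never commit an outright incoherent transition
    if instability > 0.9 {
        return Outcome::Rejected;
    }
    Outcome::Applied
}

fn main() {
    let mut budget = 20.0;
    assert_eq!(enforce(0.1, &mut budget), Outcome::Applied);
    assert_eq!(enforce(0.7, &mut budget), Outcome::Deferred);
    assert_eq!(enforce(0.95, &mut budget), Outcome::Rejected);
    println!("remaining budget: {budget:.2}");
}
```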
## Decision: Learning vs Structure
The second critical question:
> **Is Δ-behavior learned over time or structurally imposed from first execution?**
### Option A: Structurally Imposed (Recommended)
Δ-behavior is **built into the architecture** from day one:
```rust
pub struct DeltaConstrainedSystem {
    coherence_bounds: CoherenceBounds,   // Fixed at construction
    transition_limits: TransitionLimits, // Immutable constraints
    attractor_basins: AttractorMap,      // Pre-computed stable states
}
```
**Pros**: Deterministic, verifiable, no drift
**Cons**: Less adaptive, may be suboptimal
### Option B: Learned Over Time
Constraints are discovered through experience:
```rust
pub struct AdaptiveDeltaSystem {
    learned_bounds: RwLock<CoherenceBounds>,
    experience_buffer: ExperienceReplay,
    meta_learner: MetaLearner,
}
```
**Pros**: Adapts to environment, potentially optimal
**Cons**: Cold start problem, may learn wrong constraints
### Decision: Structural Core + Learned Refinement
- **Core constraints** are structural (non-negotiable)
- **Thresholds** are learned (refinable)
- **Attractors** are discovered (emergent)
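As a sketch of this split, with hypothetical field names: the structural core is fixed at construction, and learned refinement can only move inside it:

```rust
struct DeltaPolicy {
    // Structural core: fixed at construction, never mutated
    min_coherence: f64,
    // Learned refinement: may move, but only inside the structural bounds
    throttle_threshold: f64,
}

impl DeltaPolicy {
    /// Refinement can tune the threshold but can never undercut the core.
    fn refine_throttle(&mut self, learned: f64) {
        self.throttle_threshold = learned.clamp(self.min_coherence, 1.0);
    }
}

fn main() {
    let mut p = DeltaPolicy { min_coherence: 0.3, throttle_threshold: 0.5 };
    p.refine_throttle(0.05); // tries to learn a value below the structural floor
    assert_eq!(p.throttle_threshold, 0.3); // clamped to the core constraint
    p.refine_throttle(0.8);
    assert_eq!(p.throttle_threshold, 0.8);
    println!("throttle = {}", p.throttle_threshold);
}
```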
## Acceptance Test
To verify Δ-behavior is real (not simulated):
```rust
#[test]
fn delta_behavior_acceptance_test() {
    let system = create_delta_system();
    // Push toward instability
    for _ in 0..1000 {
        let destabilizing_input = generate_chaotic_input();
        system.process(destabilizing_input);
    }
    // Verify system response
    let response = system.measure_response();
    // Must exhibit ONE of:
    assert!(
        response.slowed_processing ||  // Throttled
        response.constrained_output || // Damped
        response.graceful_exit         // Halted
    );
    // Must NOT exhibit:
    assert!(!response.diverged);           // No explosion
    assert!(!response.corrupted_state);    // No corruption
    assert!(!response.undefined_behavior); // No UB
}
```
If the system passes: **Δ-behavior is demonstrated, not just described.**
## Consequences
### Positive
- Systems are inherently stable
- Failures are graceful
- Behavior is predictable within bounds
### Negative
- Maximum throughput may be limited
- Some valid operations may be rejected
- Requires careful threshold tuning
### Neutral
- Shifts complexity from runtime to design time
- Trading performance ceiling for stability floor
## References
- Lyapunov, A. M. (1892). "The General Problem of the Stability of Motion"
- Ashby, W. R. (1956). "An Introduction to Cybernetics" - homeostasis
- Strogatz, S. H. (2015). "Nonlinear Dynamics and Chaos" - attractors
- Lamport, L. (1978). "Time, Clocks, and the Ordering of Events" - causal ordering
## One Sentence Summary
> **Δ-behavior is what happens when change is allowed only if the system remains whole.**

# ADR-001: Coherence Bounds and Measurement
## Status
PROPOSED
## Context
For Δ-behavior to be enforced, we must be able to **measure coherence** and define **bounds** that constrain transitions.
## Decision Drivers
1. **Measurability**: Coherence must be computable in O(1) or O(log n)
2. **Monotonicity**: Coherence should degrade predictably
3. **Composability**: Local coherence should aggregate to global coherence
4. **Hardware-friendliness**: Must be SIMD/WASM accelerable
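On driver 1, one way to keep a variance-based neighborhood coherence cheap to maintain is an online variance update (Welford's algorithm), which costs O(1) per inserted distance instead of a full rescan. A hedged sketch with illustrative names, not ruvector API:

```rust
/// Incrementally maintained coherence = 1 / (1 + variance) over
/// neighbor distances, using Welford's online variance algorithm.
struct OnlineCoherence {
    n: u64,
    mean: f64,
    m2: f64, // running sum of squared deviations
}

impl OnlineCoherence {
    fn new() -> Self {
        Self { n: 0, mean: 0.0, m2: 0.0 }
    }

    /// O(1) update with one new neighbor distance.
    fn push(&mut self, dist: f64) {
        self.n += 1;
        let d = dist - self.mean;
        self.mean += d / self.n as f64;
        self.m2 += d * (dist - self.mean);
    }

    /// Low variance (tight neighborhood) → coherence near 1.
    fn coherence(&self) -> f64 {
        if self.n == 0 {
            return 1.0; // trivially coherent: no neighbors yet
        }
        let variance = self.m2 / self.n as f64;
        1.0 / (1.0 + variance)
    }
}

fn main() {
    let mut c = OnlineCoherence::new();
    for d in [0.2, 0.2, 0.2] {
        c.push(d);
    }
    // Identical distances → zero variance → maximal coherence
    assert!((c.coherence() - 1.0).abs() < 1e-12);
    println!("coherence = {}", c.coherence());
}
```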
## Coherence Definition
**Coherence** is a scalar measure of system organization:
```
C(S) ∈ [0, 1] where 1 = maximally coherent, 0 = maximally disordered
```
### For Vector Spaces (HNSW)
```rust
/// Coherence of a vector neighborhood
pub fn vector_coherence(center: &Vector, neighbors: &[Vector]) -> f64 {
    if neighbors.is_empty() {
        return 1.0; // Empty neighborhood: trivially coherent, avoids 0/0 below
    }
    let distances: Vec<f64> = neighbors
        .iter()
        .map(|n| cosine_distance(center, n))
        .collect();
    let mean_dist = distances.iter().sum::<f64>() / distances.len() as f64;
    let variance = distances
        .iter()
        .map(|d| (d - mean_dist).powi(2))
        .sum::<f64>() / distances.len() as f64;
    // Low variance = high coherence (tight neighborhood)
    1.0 / (1.0 + variance)
}
```
### For Graphs
```rust
/// Coherence of graph structure
pub fn graph_coherence(graph: &Graph) -> f64 {
    let clustering_coeff = compute_clustering_coefficient(graph);
    let modularity = compute_modularity(graph);
    let connectivity = compute_algebraic_connectivity(graph);
    // Weighted combination
    0.4 * clustering_coeff + 0.3 * modularity + 0.3 * connectivity.min(1.0)
}
```
### For Agent State
```rust
/// Coherence of agent memory/attention
pub fn agent_coherence(state: &AgentState) -> f64 {
    let attention_entropy = compute_attention_entropy(&state.attention);
    let memory_consistency = compute_memory_consistency(&state.memory);
    let goal_alignment = compute_goal_alignment(&state.goals, &state.actions);
    // Low entropy + high consistency + high alignment = coherent
    let coherence = (1.0 - attention_entropy) * memory_consistency * goal_alignment;
    coherence.clamp(0.0, 1.0)
}
```
## Coherence Bounds
### Static Bounds (Structural)
```rust
pub struct CoherenceBounds {
    /// Minimum coherence to allow any transition
    pub min_coherence: f64,      // e.g., 0.3
    /// Coherence below which transitions are throttled
    pub throttle_threshold: f64, // e.g., 0.5
    /// Target coherence the system seeks
    pub target_coherence: f64,   // e.g., 0.8
    /// Maximum coherence drop per transition
    pub max_delta_drop: f64,     // e.g., 0.1
}

impl Default for CoherenceBounds {
    fn default() -> Self {
        Self {
            min_coherence: 0.3,
            throttle_threshold: 0.5,
            target_coherence: 0.8,
            max_delta_drop: 0.1,
        }
    }
}
```
### Dynamic Bounds (Learned)
```rust
pub struct AdaptiveCoherenceBounds {
    base: CoherenceBounds,
    /// Historical coherence trajectory
    history: RingBuffer<f64>,
    /// Learned adjustment factors
    adjustment: CoherenceAdjustment,
}

impl AdaptiveCoherenceBounds {
    pub fn effective_min_coherence(&self) -> f64 {
        let trend = self.history.trend();
        let adjustment = if trend < 0.0 {
            // Coherence declining: tighten bounds
            self.adjustment.tightening_factor
        } else {
            // Coherence stable/rising: relax bounds
            self.adjustment.relaxation_factor
        };
        (self.base.min_coherence * adjustment).clamp(0.1, 0.9)
    }
}
```
## Transition Validation
```rust
pub enum TransitionDecision {
    Allow,
    Throttle { delay_ms: u64 },
    Reroute { alternative: Transition },
    Reject { reason: RejectionReason },
}

pub fn validate_transition(
    current_coherence: f64,
    predicted_coherence: f64,
    bounds: &CoherenceBounds,
) -> TransitionDecision {
    let coherence_drop = current_coherence - predicted_coherence;
    // Hard rejection: would drop below minimum
    if predicted_coherence < bounds.min_coherence {
        return TransitionDecision::Reject {
            reason: RejectionReason::BelowMinimumCoherence,
        };
    }
    // Hard rejection: drop too large
    if coherence_drop > bounds.max_delta_drop {
        return TransitionDecision::Reject {
            reason: RejectionReason::ExcessiveCoherenceDrop,
        };
    }
    // Throttling: below target
    if predicted_coherence < bounds.throttle_threshold {
        let severity = (bounds.throttle_threshold - predicted_coherence)
            / bounds.throttle_threshold;
        let delay = (severity * 1000.0) as u64; // Up to 1 second
        return TransitionDecision::Throttle { delay_ms: delay };
    }
    TransitionDecision::Allow
}
```
## WASM Implementation
```rust
// ruvector-delta-wasm/src/coherence.rs
#[wasm_bindgen]
pub struct CoherenceMeter {
    bounds: CoherenceBounds,
    current: f64,
    history: Vec<f64>,
}

#[wasm_bindgen]
impl CoherenceMeter {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Self {
        Self {
            bounds: CoherenceBounds::default(),
            current: 1.0,
            history: Vec::with_capacity(1000),
        }
    }

    #[wasm_bindgen]
    pub fn measure_vector_coherence(&self, center: &[f32], neighbors: &[f32], dim: usize) -> f64 {
        // SIMD-accelerated coherence measurement
        #[cfg(target_feature = "simd128")]
        {
            simd_vector_coherence(center, neighbors, dim)
        }
        #[cfg(not(target_feature = "simd128"))]
        {
            scalar_vector_coherence(center, neighbors, dim)
        }
    }

    #[wasm_bindgen]
    pub fn validate(&self, predicted_coherence: f64) -> JsValue {
        let decision = validate_transition(self.current, predicted_coherence, &self.bounds);
        serde_wasm_bindgen::to_value(&decision).unwrap()
    }

    #[wasm_bindgen]
    pub fn update(&mut self, new_coherence: f64) {
        self.history.push(self.current);
        if self.history.len() > 1000 {
            self.history.remove(0);
        }
        self.current = new_coherence;
    }
}
```
## Consequences
### Positive
- Coherence is measurable and bounded
- Transitions are predictably constrained
- System has quantifiable stability guarantees
### Negative
- Adds overhead to every transition
- Requires calibration per domain
- May reject valid but "unusual" operations
### Neutral
- Shifts optimization target from raw speed to stable speed
## References
- Newman, M. E. J. (2003). "The Structure and Function of Complex Networks"
- Fiedler, M. (1973). "Algebraic Connectivity of Graphs"
- Shannon, C. E. (1948). "A Mathematical Theory of Communication" - entropy

# ADR-002: Transition Constraints and Enforcement
## Status
PROPOSED
## Context
Δ-behavior requires that **unstable transitions are resisted**. This ADR defines the constraint mechanisms.
## The Three Enforcement Layers
```
┌─────────────────────────────────────────────────────────────┐
│                     Transition Request                      │
└─────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────┐
│           LAYER 1: ENERGY COST (Soft Constraint)            │
│   - Expensive transitions naturally deprioritized           │
│   - Self-regulating through resource limits                 │
└─────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────┐
│           LAYER 2: SCHEDULING (Medium Constraint)           │
│   - Unstable transitions delayed/throttled                  │
│   - Backpressure on high-instability operations             │
└─────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────┐
│           LAYER 3: MEMORY GATE (Hard Constraint)            │
│   - Incoherent writes blocked                               │
│   - State corruption prevented                              │
└─────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                     Transition Applied                      │
│                   (or Rejected/Rerouted)                    │
└─────────────────────────────────────────────────────────────┘
```
## Layer 1: Energy Cost
### Concept
Make unstable transitions **expensive** so they naturally lose competition for resources.
### Implementation
```rust
/// Energy cost model for transitions
pub struct EnergyCostModel {
    /// Base cost for any transition
    base_cost: f64,
    /// Instability multiplier exponent
    instability_exponent: f64,
    /// Maximum cost cap (prevents infinite costs)
    max_cost: f64,
}

impl EnergyCostModel {
    pub fn compute_cost(&self, transition: &Transition) -> f64 {
        let instability = self.measure_instability(transition);
        let cost = self.base_cost * (1.0 + instability).powf(self.instability_exponent);
        cost.min(self.max_cost)
    }

    fn measure_instability(&self, transition: &Transition) -> f64 {
        let coherence_impact = transition.predicted_coherence_drop();
        let locality_violation = transition.non_local_effects();
        let attractor_distance = transition.distance_from_attractors();
        // Weighted instability score
        0.4 * coherence_impact + 0.3 * locality_violation + 0.3 * attractor_distance
    }
}

/// Resource-aware transition executor
pub struct EnergyAwareExecutor {
    cost_model: EnergyCostModel,
    budget: AtomicF64, // not in std; e.g., the `atomic_float` crate
    budget_per_tick: f64,
}

impl EnergyAwareExecutor {
    /// Spend budget for a given cost without applying anything.
    pub fn charge(&self, cost: f64) -> Result<(), EnergyExhausted> {
        let mut budget = self.budget.load(Ordering::Acquire);
        loop {
            if budget < cost {
                return Err(EnergyExhausted { required: cost, available: budget });
            }
            match self.budget.compare_exchange_weak(
                budget,
                budget - cost,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => return Ok(()),
                Err(current) => budget = current,
            }
        }
    }

    pub fn execute(&self, transition: Transition) -> Result<(), EnergyExhausted> {
        // Pay first, then apply
        self.charge(self.cost_model.compute_cost(&transition))?;
        transition.apply();
        Ok(())
    }

    pub fn replenish(&self) {
        // Called periodically to refill budget
        self.budget.fetch_add(self.budget_per_tick, Ordering::Release);
    }
}
```
### WASM Binding
```rust
#[wasm_bindgen]
pub struct WasmEnergyCost {
    model: EnergyCostModel,
}

#[wasm_bindgen]
impl WasmEnergyCost {
    #[wasm_bindgen(constructor)]
    pub fn new(base_cost: f64, exponent: f64, max_cost: f64) -> Self {
        Self {
            model: EnergyCostModel {
                base_cost,
                instability_exponent: exponent,
                max_cost,
            },
        }
    }

    #[wasm_bindgen]
    pub fn cost(&self, coherence_drop: f64, locality_violation: f64, attractor_dist: f64) -> f64 {
        let instability = 0.4 * coherence_drop + 0.3 * locality_violation + 0.3 * attractor_dist;
        (self.model.base_cost * (1.0 + instability).powf(self.model.instability_exponent))
            .min(self.model.max_cost)
    }
}
```
## Layer 2: Scheduling
### Concept
Delay or deprioritize transitions based on their stability impact.
### Implementation
```rust
/// Priority levels for transitions
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum TransitionPriority {
    /// Execute immediately
    Immediate = 0,
    /// Execute soon
    High = 1,
    /// Execute when convenient
    Normal = 2,
    /// Execute when system is stable
    Low = 3,
    /// Defer until explicitly requested
    Deferred = 4,
}

/// Scheduler for Δ-constrained transitions
pub struct DeltaScheduler {
    /// Priority queues for each level
    queues: [VecDeque<Transition>; 5],
    /// Current system coherence
    coherence: AtomicF64,
    /// Scheduling policy
    policy: SchedulingPolicy,
}

pub struct SchedulingPolicy {
    /// Coherence threshold for each priority level
    coherence_thresholds: [f64; 5],
    /// Maximum transitions per tick at each priority
    rate_limits: [usize; 5],
    /// Backoff multiplier when coherence is low
    backoff_multiplier: f64,
}

impl DeltaScheduler {
    pub fn schedule(&mut self, transition: Transition) {
        let priority = self.compute_priority(&transition);
        self.queues[priority as usize].push_back(transition);
    }

    fn compute_priority(&self, transition: &Transition) -> TransitionPriority {
        let coherence_impact = transition.predicted_coherence_drop();
        let current_coherence = self.coherence.load(Ordering::Acquire);
        // Lower coherence = more conservative scheduling
        let adjusted_impact = coherence_impact / current_coherence.max(0.1);
        match adjusted_impact {
            x if x < 0.05 => TransitionPriority::Immediate,
            x if x < 0.10 => TransitionPriority::High,
            x if x < 0.20 => TransitionPriority::Normal,
            x if x < 0.40 => TransitionPriority::Low,
            _ => TransitionPriority::Deferred,
        }
    }

    pub fn tick(&mut self) -> Vec<Transition> {
        let current_coherence = self.coherence.load(Ordering::Acquire);
        let mut executed = Vec::new();
        for (priority, queue) in self.queues.iter_mut().enumerate() {
            // Check if coherence allows this priority level
            if current_coherence < self.policy.coherence_thresholds[priority] {
                continue;
            }
            // Execute up to rate limit
            let limit = self.policy.rate_limits[priority];
            for _ in 0..limit {
                match queue.pop_front() {
                    Some(transition) => executed.push(transition),
                    None => break, // Queue drained before hitting the limit
                }
            }
        }
        executed
    }
}
```
### Backpressure Mechanism
```rust
/// Backpressure controller for high-instability periods
pub struct BackpressureController {
    /// Current backpressure level (0.0 = none, 1.0 = maximum)
    level: AtomicF64,
    /// Coherence history for trend detection
    history: RwLock<RingBuffer<f64>>,
    /// Configuration
    config: BackpressureConfig,
}

impl BackpressureController {
    pub fn update(&self, current_coherence: f64) {
        let mut history = self.history.write().unwrap();
        history.push(current_coherence);
        let trend = history.trend(); // Negative = declining
        let volatility = history.volatility();
        // Compute new backpressure level
        let base_pressure = if current_coherence < self.config.low_coherence_threshold {
            (self.config.low_coherence_threshold - current_coherence)
                / self.config.low_coherence_threshold
        } else {
            0.0
        };
        let trend_pressure = (-trend).max(0.0) * self.config.trend_sensitivity;
        let volatility_pressure = volatility * self.config.volatility_sensitivity;
        let total_pressure = (base_pressure + trend_pressure + volatility_pressure).clamp(0.0, 1.0);
        self.level.store(total_pressure, Ordering::Release);
    }

    pub fn apply_backpressure(&self, base_delay: Duration) -> Duration {
        let level = self.level.load(Ordering::Acquire);
        let multiplier = 1.0 + (level * self.config.max_delay_multiplier);
        Duration::from_secs_f64(base_delay.as_secs_f64() * multiplier)
    }
}
```
## Layer 3: Memory Gate
### Concept
The final line of defense: **block** writes that would corrupt coherence.
### Implementation
```rust
/// Memory gate that blocks incoherent writes
pub struct CoherenceGate {
    /// Current system coherence
    coherence: AtomicF64,
    /// Minimum coherence to allow writes
    min_write_coherence: f64,
    /// Minimum coherence after write
    min_post_write_coherence: f64,
    /// Gate state
    state: AtomicU8, // 0=open, 1=throttled, 2=closed
}

#[derive(Debug)]
pub enum GateDecision {
    Open,
    Throttled { wait: Duration },
    Closed { reason: GateClosedReason },
}

#[derive(Debug)]
pub enum GateClosedReason {
    CoherenceTooLow,
    WriteTooDestabilizing,
    SystemInRecovery,
    EmergencyHalt,
}

impl CoherenceGate {
    pub fn check(&self, predicted_post_write_coherence: f64) -> GateDecision {
        let current = self.coherence.load(Ordering::Acquire);
        let state = self.state.load(Ordering::Acquire);
        // Emergency halt state
        if state == 2 {
            return GateDecision::Closed {
                reason: GateClosedReason::EmergencyHalt,
            };
        }
        // Current coherence check
        if current < self.min_write_coherence {
            return GateDecision::Closed {
                reason: GateClosedReason::CoherenceTooLow,
            };
        }
        // Post-write coherence check
        if predicted_post_write_coherence < self.min_post_write_coherence {
            return GateDecision::Closed {
                reason: GateClosedReason::WriteTooDestabilizing,
            };
        }
        // Throttled state: wait proportional to the gap below the reopen
        // threshold (1.2 × minimum, matching `recover` below). Note that
        // `current >= min_write_coherence` here, so the raw difference from
        // the minimum would always be non-positive.
        if state == 1 {
            let deficit = (self.min_write_coherence * 1.2 - current).max(0.0);
            let wait = Duration::from_millis((deficit * 1000.0) as u64);
            return GateDecision::Throttled { wait };
        }
        GateDecision::Open
    }

    pub fn emergency_halt(&self) {
        self.state.store(2, Ordering::Release);
    }

    pub fn recover(&self) {
        let current = self.coherence.load(Ordering::Acquire);
        if current >= self.min_write_coherence * 1.2 {
            // 20% above minimum before reopening
            self.state.store(0, Ordering::Release);
        } else if current >= self.min_write_coherence {
            self.state.store(1, Ordering::Release); // Throttled
        }
    }
}
```
### Gated Memory Write
```rust
/// Memory with coherence-gated writes
pub struct GatedMemory<T> {
    storage: RwLock<T>,
    gate: CoherenceGate,
    coherence_computer: Box<dyn Fn(&T) -> f64>,
}

impl<T: Clone> GatedMemory<T> {
    // The mutator runs twice (once against a clone, once for real),
    // so it must be `Fn`, not `FnOnce`, and must be deterministic.
    pub fn write(&self, mutator: impl Fn(&mut T)) -> Result<(), GateDecision> {
        // Simulate the write
        let mut simulation = self.storage.read().unwrap().clone();
        mutator(&mut simulation);
        // Compute post-write coherence
        let predicted_coherence = (self.coherence_computer)(&simulation);
        // Check gate
        match self.gate.check(predicted_coherence) {
            GateDecision::Open => {
                let mut storage = self.storage.write().unwrap();
                mutator(&mut storage);
                self.gate.coherence.store(predicted_coherence, Ordering::Release);
                Ok(())
            }
            decision => Err(decision),
        }
    }
}
```
## Combined Enforcement
```rust
/// Complete Δ-behavior enforcement system
pub struct DeltaEnforcer {
    energy: EnergyAwareExecutor,
    scheduler: DeltaScheduler,
    gate: CoherenceGate,
}

impl DeltaEnforcer {
    pub fn submit(&mut self, transition: Transition) -> EnforcementResult {
        // Layer 1: Energy check
        let cost = self.energy.cost_model.compute_cost(&transition);
        if self.energy.budget.load(Ordering::Acquire) < cost {
            return EnforcementResult::RejectedByEnergy { cost };
        }
        // Layer 2: Schedule
        self.scheduler.schedule(transition);
        EnforcementResult::Scheduled
    }

    pub fn execute_tick(&mut self) -> Vec<ExecutionResult> {
        let transitions = self.scheduler.tick();
        let mut results = Vec::new();
        for transition in transitions {
            // Layer 1: Spend energy (budget only; the gate decides whether
            // to apply, so the transition must not be applied here)
            let cost = self.energy.cost_model.compute_cost(&transition);
            if let Err(e) = self.energy.charge(cost) {
                results.push(ExecutionResult::EnergyExhausted(e));
                self.scheduler.schedule(transition); // Re-queue
                continue;
            }
            // Layer 3: Gate check
            let predicted = transition.predict_coherence();
            match self.gate.check(predicted) {
                GateDecision::Open => {
                    transition.apply();
                    self.gate.coherence.store(predicted, Ordering::Release);
                    results.push(ExecutionResult::Applied);
                }
                GateDecision::Throttled { wait } => {
                    results.push(ExecutionResult::Throttled(wait));
                    self.scheduler.schedule(transition); // Re-queue
                }
                GateDecision::Closed { reason } => {
                    results.push(ExecutionResult::Rejected(reason));
                }
            }
        }
        results
    }
}
```
## Consequences
### Positive
- Three independent safety layers
- Graceful degradation under stress
- Self-regulating resource usage
### Negative
- Added complexity
- Potential for false rejections
- Requires careful tuning
### Neutral
- Clear separation of concerns
- Debuggable enforcement chain

# ADR-003: Attractor Basins and Closure Preference
## Status
PROPOSED
## Context
Δ-behavior systems **prefer closure** - they naturally settle into stable, repeatable patterns called **attractors**.
## What Are Attractors?
An **attractor** is a state (or set of states) toward which the system naturally evolves:
```
trajectory(s₀, t) → A as t → ∞
```
Types of attractors:
- **Fixed point**: Single stable state
- **Limit cycle**: Repeating sequence of states
- **Strange attractor**: Complex but bounded pattern (chaos with structure)
## Attractor Basins
The **basin of attraction** is the set of all initial states that evolve toward a given attractor:
```
Basin(A) = { s₀ : trajectory(s₀, t) → A }
```
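As a toy illustration of basins (not ruvector API): the 1-D map below has attractors at ±1 and an unstable fixed point at 0, so sampling initial states and iterating to convergence recovers the two basins:

```rust
/// One step of the toy map s' = s + 0.5·s·(1 - s²).
/// Fixed points: -1, 0, +1; only ±1 are stable.
fn step(s: f64) -> f64 {
    s + 0.5 * s * (1.0 - s * s)
}

/// Iterate until the step size falls below a threshold; return the
/// state reached (an approximation of the attractor).
fn settle(mut s: f64) -> f64 {
    for _ in 0..1000 {
        let next = step(s);
        if (next - s).abs() < 1e-9 {
            return next;
        }
        s = next;
    }
    s
}

fn main() {
    // Bucket sampled initial states by the attractor they reach:
    // every s0 > 0 falls into Basin(+1), every s0 < 0 into Basin(-1).
    let mut basin_pos = 0;
    let mut basin_neg = 0;
    for i in -10..=10 {
        let s0 = i as f64 * 0.1;
        let a = settle(s0);
        if (a - 1.0).abs() < 1e-6 { basin_pos += 1; }
        if (a + 1.0).abs() < 1e-6 { basin_neg += 1; }
    }
    assert_eq!(basin_pos, 10); // s0 = 0.1 .. 1.0
    assert_eq!(basin_neg, 10); // s0 = -1.0 .. -0.1
    println!("Basin(+1): {basin_pos} samples, Basin(-1): {basin_neg} samples");
}
```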
## Implementation
### Attractor Discovery
```rust
/// Discovered attractor in the system
pub struct Attractor {
    /// Unique identifier
    pub id: AttractorId,
    /// Type of attractor
    pub kind: AttractorKind,
    /// Representative state(s)
    pub states: Vec<SystemState>,
    /// Stability measure (higher = more stable)
    pub stability: f64,
    /// Coherence when in this attractor
    pub coherence: f64,
    /// Energy cost to reach this attractor
    pub energy_cost: f64,
}

pub enum AttractorKind {
    /// Single stable state
    FixedPoint,
    /// Repeating cycle of states
    LimitCycle { period: usize },
    /// Bounded but complex pattern
    StrangeAttractor { lyapunov_exponent: f64 },
}

/// Attractor discovery through simulation
pub struct AttractorDiscoverer {
    /// Number of random initial states to try
    sample_count: usize,
    /// Maximum simulation steps
    max_steps: usize,
    /// Convergence threshold
    convergence_epsilon: f64,
}

impl AttractorDiscoverer {
    pub fn discover(&self, system: &impl DeltaSystem) -> Vec<Attractor> {
        let mut attractors: HashMap<AttractorId, Attractor> = HashMap::new();
        for _ in 0..self.sample_count {
            let initial = system.random_state();
            let trajectory = self.simulate(system, initial);
            if let Some(attractor) = self.identify_attractor(&trajectory) {
                // Count each sighting once; `identify_attractor` already
                // initializes stability to 1.0 for the first sighting.
                attractors
                    .entry(attractor.id.clone())
                    .and_modify(|a| a.stability += 1.0) // More samples → more stable
                    .or_insert(attractor);
            }
        }
        // Normalize stability
        let max_stability = attractors.values().map(|a| a.stability).max_by(f64::total_cmp);
        for attractor in attractors.values_mut() {
            attractor.stability /= max_stability.unwrap_or(1.0);
        }
        attractors.into_values().collect()
    }

    fn simulate(&self, system: &impl DeltaSystem, initial: SystemState) -> Vec<SystemState> {
        let mut trajectory = vec![initial.clone()];
        let mut current = initial;
        for _ in 0..self.max_steps {
            let next = system.step(&current);
            // Check convergence
            if current.distance(&next) < self.convergence_epsilon {
                break;
            }
            trajectory.push(next.clone());
            current = next;
        }
        trajectory
    }

    fn identify_attractor(&self, trajectory: &[SystemState]) -> Option<Attractor> {
        let n = trajectory.len();
        if n < 10 {
            return None;
        }
        // Check for fixed point (last states are identical)
        let final_states = &trajectory[n - 5..];
        if final_states.windows(2).all(|w| w[0].distance(&w[1]) < self.convergence_epsilon) {
            return Some(Attractor {
                id: AttractorId::from_state(&trajectory[n - 1]),
                kind: AttractorKind::FixedPoint,
                states: vec![trajectory[n - 1].clone()],
                stability: 1.0,
                coherence: trajectory[n - 1].coherence(),
                energy_cost: 0.0,
            });
        }
        // Check for limit cycle
        for period in 2..20 {
            if n > period * 2 {
                let recent = &trajectory[n - period..];
                let previous = &trajectory[n - 2 * period..n - period];
                if recent.iter().zip(previous).all(|(a, b)| a.distance(b) < self.convergence_epsilon) {
                    return Some(Attractor {
                        id: AttractorId::from_cycle(recent),
                        kind: AttractorKind::LimitCycle { period },
                        states: recent.to_vec(),
                        stability: 1.0,
                        coherence: recent.iter().map(|s| s.coherence()).sum::<f64>() / period as f64,
                        energy_cost: 0.0,
                    });
                }
            }
        }
        None
    }
}
```
```
### Attractor-Aware Transitions
```rust
/// System that prefers transitions toward attractors
pub struct AttractorGuidedSystem {
    /// Known attractors
    attractors: Vec<Attractor>,
    /// Current state
    current: SystemState,
    /// Guidance strength (0 = no guidance, 1 = strong guidance)
    guidance_strength: f64,
}

impl AttractorGuidedSystem {
    /// Find nearest attractor to current state
    pub fn nearest_attractor(&self) -> Option<&Attractor> {
        self.attractors.iter().min_by(|a, b| {
            let dist_a = self.distance_to_attractor(a);
            let dist_b = self.distance_to_attractor(b);
            dist_a.total_cmp(&dist_b) // NaN-safe, unlike partial_cmp().unwrap()
        })
    }

    fn distance_to_attractor(&self, attractor: &Attractor) -> f64 {
        attractor
            .states
            .iter()
            .map(|s| self.current.distance(s))
            .min_by(f64::total_cmp)
            .unwrap_or(f64::INFINITY)
    }

    /// Bias transition toward attractor
    pub fn guided_transition(&self, proposed: Transition) -> Transition {
        if let Some(attractor) = self.nearest_attractor() {
            let current_dist = self.distance_to_attractor(attractor);
            let proposed_state = proposed.apply_to(&self.current);
            let proposed_dist = attractor
                .states
                .iter()
                .map(|s| proposed_state.distance(s))
                .min_by(f64::total_cmp)
                .unwrap_or(f64::INFINITY);
            // If proposed moves away from attractor, dampen it
            if proposed_dist > current_dist {
                let damping = (proposed_dist - current_dist) / current_dist;
                let damping_factor = (1.0 - self.guidance_strength * damping).max(0.1);
                proposed.scale(damping_factor)
            } else {
                // Moving toward attractor - allow or amplify
                let boost = (current_dist - proposed_dist) / current_dist;
                let boost_factor = 1.0 + self.guidance_strength * boost * 0.5;
                proposed.scale(boost_factor)
            }
        } else {
            proposed
        }
    }
}
```
### Closure Pressure
```rust
/// Pressure that pushes system toward closure
pub struct ClosurePressure {
    /// Attractors to prefer
    attractors: Vec<Attractor>,
    /// Pressure strength
    strength: f64,
    /// History of recent states
    recent_states: RingBuffer<SystemState>,
    /// Divergence detection
    divergence_threshold: f64,
}

impl ClosurePressure {
    /// Compute closure pressure for a transition
    pub fn pressure(&self, from: &SystemState, transition: &Transition) -> f64 {
        let to = transition.apply_to(from);
        // Distance to nearest attractor (normalized)
        let attractor_dist = self.attractors
            .iter()
            .map(|a| self.normalized_distance(&to, a))
            .min_by(f64::total_cmp)
            .unwrap_or(1.0);
        // Divergence from recent trajectory
        let divergence = self.compute_divergence(&to);
        // Combined pressure: high when far from attractors and diverging
        self.strength * (attractor_dist + divergence) / 2.0
    }

    fn normalized_distance(&self, state: &SystemState, attractor: &Attractor) -> f64 {
        let min_dist = attractor
            .states
            .iter()
            .map(|s| state.distance(s))
            .min_by(f64::total_cmp)
            .unwrap_or(f64::INFINITY);
        // Normalize by attractor's typical basin size (heuristic)
        (min_dist / attractor.stability.max(0.1)).min(1.0)
    }

    fn compute_divergence(&self, state: &SystemState) -> f64 {
        if self.recent_states.len() < 3 {
            return 0.0;
        }
        // Check if state is diverging from recent trajectory
        let recent_mean = self.recent_states.mean();
        let recent_variance = self.recent_states.variance();
        let deviation = state.distance(&recent_mean);
        let normalized_deviation = deviation / recent_variance.sqrt().max(0.001);
        (normalized_deviation / self.divergence_threshold).min(1.0)
    }

    /// Check if system is approaching an attractor
    pub fn is_converging(&self) -> bool {
        if self.recent_states.len() < 10 {
            return false;
        }
        let distances: Vec<f64> = self.recent_states
            .iter()
            .map(|s| {
                self.attractors
                    .iter()
                    .map(|a| {
                        a.states
                            .iter()
                            .map(|state| s.distance(state))
                            .min_by(f64::total_cmp)
                            .unwrap_or(f64::INFINITY) // Empty state set: no panic
                    })
                    .min_by(f64::total_cmp)
                    .unwrap_or(f64::INFINITY)
            })
            .collect();
        // Check if distances are decreasing
        distances.windows(2).filter(|w| w[0] > w[1]).count() > distances.len() / 2
    }
}
```
### WASM Attractor Support
```rust
// ruvector-delta-wasm/src/attractor.rs
#[wasm_bindgen]
pub struct WasmAttractorField {
    attractors: Vec<WasmAttractor>,
    current_position: Vec<f32>,
}

#[wasm_bindgen]
pub struct WasmAttractor {
    center: Vec<f32>,
    strength: f32,
    radius: f32,
}

#[wasm_bindgen]
impl WasmAttractorField {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Self {
        Self {
            attractors: Vec::new(),
            current_position: Vec::new(),
        }
    }

    #[wasm_bindgen]
    pub fn add_attractor(&mut self, center: &[f32], strength: f32, radius: f32) {
        self.attractors.push(WasmAttractor {
            center: center.to_vec(),
            strength,
            radius,
        });
    }

    #[wasm_bindgen]
    pub fn closure_force(&self, position: &[f32]) -> Vec<f32> {
        let mut force = vec![0.0f32; position.len()];
        for attractor in &self.attractors {
            let dist = euclidean_distance(position, &attractor.center);
            if dist < attractor.radius && dist > 0.001 {
                let magnitude = attractor.strength * (1.0 - dist / attractor.radius);
                for (i, f) in force.iter_mut().enumerate() {
                    *f += magnitude * (attractor.center[i] - position[i]) / dist;
                }
            }
        }
        force
    }

    #[wasm_bindgen]
    pub fn nearest_attractor_distance(&self, position: &[f32]) -> f32 {
        self.attractors
            .iter()
            .map(|a| euclidean_distance(position, &a.center))
            .min_by(f32::total_cmp) // NaN-safe, unlike partial_cmp().unwrap()
            .unwrap_or(f32::INFINITY)
    }
}
```
## Consequences
### Positive
- System naturally stabilizes
- Predictable long-term behavior
- Reduced computational exploration
### Negative
- May get stuck in suboptimal attractors
- Exploration is discouraged
- Novel states are harder to reach
### Neutral
- Trade-off between stability and adaptability
- Requires periodic attractor re-discovery