Merge commit 'd803bfe2b1fe7f5e219e50ac20d6801a0a58ac75' as 'vendor/ruvector'

This commit is contained in:
ruv
2026-02-28 14:39:40 -05:00
7854 changed files with 3522914 additions and 0 deletions


@@ -0,0 +1,232 @@
# Strange Loop Self-Organizing Swarms
## What is a Strange Loop?
A **strange loop** is a phenomenon first described by Douglas Hofstadter in his book "Gödel, Escher, Bach". It occurs when a hierarchical system has a level that refers back to itself, creating a self-referential cycle.
Think of an Escher drawing where stairs keep going up but somehow end where they started. Or think of a camera filming itself in a mirror - what it sees affects what appears in the mirror, which affects what it sees...
## The Strange Loop in This Example
This example demonstrates a computational strange loop where:
```
┌──────────────────────────────────────────┐
│ Swarm observes its own structure │
│ ↓ │
│ Swarm finds weaknesses │
│ ↓ │
│ Swarm reorganizes itself │
│ ↓ │
│ Swarm observes its NEW structure │
│ ↓ │
│ (loop back to start) │
└──────────────────────────────────────────┘
```
### The Key Insight
The swarm is simultaneously:
- The **observer** (analyzing connectivity)
- The **observed** (being analyzed)
- The **actor** (reorganizing based on analysis)
This creates a feedback cycle that leads to **emergent self-organization** - behavior that wasn't explicitly programmed but emerges from the loop itself.
## How It Works
### 1. Self-Observation (`observe_self()`)
The swarm uses **approximate min-cut analysis** to examine its own structure:
```rust
// The swarm "looks at itself"
let current_mincut = self.graph.approx_mincut();
let weak_vertices = self.graph.find_weak_vertices();
```
It discovers:
- What is its minimum cut value? (How fragile is the connectivity?)
- Which edges are critical? (Where are the weak points?)
- How stable is the current configuration?
### 2. Self-Modeling (`update_self_model()`)
The swarm builds an internal model of itself:
```rust
// Predictions about own future state
predicted_mincut: f64,
predicted_weak: Vec<u64>,
confidence: f64,
```
This is **meta-cognition** - thinking about thinking. The swarm predicts how it will behave.
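The error-driven confidence update can be sketched in isolation (a minimal sketch mirroring the example's `SelfModel`; the 0.5 error threshold and the ±0.1 confidence steps come from the example code, while the two-call usage below is illustrative):

```rust
/// Minimal self-model: predict the next min-cut, track how wrong we were.
struct SelfModel {
    predicted_mincut: f64,
    confidence: f64,
}

impl SelfModel {
    /// Compare prediction to observation; reward accuracy, punish error.
    /// Returns the absolute prediction error.
    fn update(&mut self, actual_mincut: f64) -> f64 {
        let error = (self.predicted_mincut - actual_mincut).abs();
        if error < 0.5 {
            self.confidence = (self.confidence + 0.1).min(1.0);
        } else {
            self.confidence = (self.confidence - 0.1).max(0.1);
        }
        // Naive forecast: expect the same value next time
        self.predicted_mincut = actual_mincut;
        error
    }
}

fn main() {
    let mut model = SelfModel { predicted_mincut: 0.0, confidence: 0.5 };
    let e1 = model.update(2.0); // large error: confidence drops to 0.4
    let e2 = model.update(2.0); // perfect prediction: confidence back to 0.5
    println!("errors: {:.1}, {:.1}; confidence: {:.1}", e1, e2, model.confidence);
}
```

Because the forecast is just "same as last time", confidence rises exactly when the swarm's structure stops changing, which is what ties the self-model to convergence.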
### 3. Self-Modification (`apply_reorganization()`)
Based on what it observes, the swarm changes itself:
```rust
Action::Strengthen(weak_vertices)
// The swarm makes itself stronger where it's weak
```
### 4. The Loop Closes
After reorganizing, the swarm observes its **new self**, and the cycle continues. Each iteration:
- Improves the structure
- Increases stability
- Builds more confidence in predictions
## Why This Matters
### Emergent Intelligence
The swarm exhibits behavior that seems "intelligent":
- It recognizes its own weaknesses
- It learns from experience (past observations)
- It adapts and improves over time
- It achieves a stable state through self-organization
**None of this intelligence was explicitly programmed** - it emerged from the strange loop!
### Self-Reference Creates Complexity
Just as human consciousness arises from neurons observing and affecting other neurons (including themselves), this computational system develops emergent properties through self-reference.
### Applications
This pattern appears in many systems:
- **Neural networks** learning from their own predictions
- **Evolutionary algorithms** adapting based on fitness
- **Distributed systems** self-healing based on health checks
- **AI agents** improving through self-critique
## Running the Example
```bash
cd examples/mincut/strange_loop
cargo run
```
You'll see:
1. Initial weak swarm configuration
2. Each iteration of the strange loop:
- Self-observation
- Self-model update
- Decision making
- Reorganization
3. Convergence to stable state
4. Journey summary showing emergent improvement
## Key Observations
### What You'll Notice
1. **Learning Curve**: Early iterations make dramatic changes; later ones are subtle
2. **Confidence Growth**: The self-model becomes more confident over time
3. **Emergent Stability**: The swarm finds a stable configuration without being told what "stable" means
4. **Self-Awareness**: The system tracks its own history and uses it for predictions
### The "Aha!" Moment
Watch for when the swarm:
- Identifies a weakness (low min-cut)
- Strengthens critical edges
- Observes the improvement
- Continues until satisfied with its own robustness
This is **computational self-improvement** through strange loops!
## Philosophical Implications
### Hofstadter's Vision
Hofstadter proposed that consciousness itself is a strange loop - our sense of "I" emerges from the brain observing and modeling itself at increasingly abstract levels.
This example is a tiny computational echo of that idea:
- The swarm has a "self" (its graph structure)
- The swarm observes that self (min-cut analysis)
- The swarm models that self (predictions)
- The swarm modifies that self (reorganization)
The loop creates something greater than the sum of its parts.
### From Simple Rules to Complex Behavior
The fascinating thing is that the complex, seemingly "intelligent" behavior emerges from:
- Simple min-cut analysis
- Basic reorganization rules
- The feedback loop structure
This demonstrates how **complexity can emerge from simplicity** when systems can reference themselves.
## Technical Details
### Min-Cut as Self-Observation
We use min-cut analysis because it reveals:
- **Global vulnerability**: The weakest point in connectivity
- **Critical structure**: Which edges matter most
- **Robustness metric**: Quantitative measure of stability
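The example does not run an exact min-cut algorithm on every iteration; it uses the minimum weighted degree as a cheap upper bound on the min-cut. A self-contained sketch of that observation step (the adjacency-map shape matches the example's `Graph`; the four-vertex graph is illustrative):

```rust
use std::collections::HashMap;

/// Approximate min-cut as the minimum weighted degree over all vertices.
/// The least-connected agent is the cheapest one to cut off from the swarm,
/// so its total edge weight upper-bounds the true min-cut.
fn approx_mincut(adjacency: &HashMap<u64, Vec<(u64, f64)>>) -> f64 {
    adjacency
        .values()
        .map(|nbrs| nbrs.iter().map(|(_, w)| w).sum::<f64>())
        .fold(f64::INFINITY, f64::min)
}

fn main() {
    // Triangle 0-1-2 plus a pendant vertex 3 hanging off vertex 0
    let mut adj: HashMap<u64, Vec<(u64, f64)>> = HashMap::new();
    adj.insert(0, vec![(1, 1.0), (2, 1.0), (3, 1.0)]);
    adj.insert(1, vec![(0, 1.0), (2, 1.0)]);
    adj.insert(2, vec![(0, 1.0), (1, 1.0)]);
    adj.insert(3, vec![(0, 1.0)]);
    // Vertex 3 has weighted degree 1.0 -- the swarm's weakest point
    println!("approx min-cut = {}", approx_mincut(&adj));
}
```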
### The Feedback Mechanism
Each iteration:
```
State_n → Observe(State_n) → Decide(observation) → Modify(State_n) → State_{n+1}
```
The key is that `State_{n+1}` becomes the input to the next iteration, closing the loop.
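In code, closing the loop is nothing more than an iteration whose output state feeds back in as the next input. A schematic sketch (`observe`, `decide`, and `modify` are stand-ins for the example's methods; the integer state and the threshold of 5 are illustrative):

```rust
/// Schematic strange loop: the state produced by one pass is the
/// state observed by the next.
fn observe(state: i64) -> i64 {
    state // measure the structure (identity here, for simplicity)
}
fn decide(observation: i64) -> i64 {
    if observation < 5 { 1 } else { 0 } // request a change while "weak"
}
fn modify(state: i64, action: i64) -> i64 {
    state + action // apply the requested change
}

fn main() {
    let mut state = 0;
    for i in 0..8 {
        let obs = observe(state);      // State_n → Observe(State_n)
        let action = decide(obs);      // → Decide(observation)
        state = modify(state, action); // → Modify(State_n) = State_{n+1}
        println!("iteration {}: state = {}", i, state);
    }
    // The loop stabilizes once decide() stops requesting changes
    assert_eq!(state, 5);
}
```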
### Convergence
The swarm reaches stability when:
- Min-cut value is high enough
- Critical edges are few
- Recent observations show consistent stability
- Self-model predictions match reality
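The "consistent stability" criterion can be sketched as a variance test over the most recent observations (a minimal sketch; the window of three readings and the variance threshold mirror the example's `check_convergence`, minus the confidence condition):

```rust
/// Converged when the last few min-cut readings barely vary.
fn is_stable(recent_mincuts: &[f64], threshold: f64) -> bool {
    if recent_mincuts.len() < 3 {
        return false; // not enough history to judge
    }
    let n = recent_mincuts.len() as f64;
    let mean = recent_mincuts.iter().sum::<f64>() / n;
    let variance = recent_mincuts
        .iter()
        .map(|x| (x - mean).powi(2))
        .sum::<f64>()
        / n;
    variance < threshold
}

fn main() {
    println!("{}", is_stable(&[1.0, 3.0, 5.0], 0.1)); // still changing
    println!("{}", is_stable(&[4.0, 4.0, 4.0], 0.1)); // settled
}
```

Notice that nothing in this test mentions what "stable" means structurally; stability is defined only as the loop ceasing to change itself, which is why the README calls the result emergent.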
## Further Exploration
### Modify the Example
Try changing:
- `stability_threshold`: Make convergence harder/easier
- Initial graph structure: Start with different weaknesses
- Reorganization strategies: Add new actions
- Number of nodes: Scale up the swarm
### Research Questions
- What happens with 100 nodes?
- Can multiple swarms observe each other? (mutual strange loops)
- What if the swarm has conflicting goals?
- Can the swarm evolve its own reorganization strategies?
## References
- **"Gödel, Escher, Bach"** by Douglas Hofstadter - The original exploration of strange loops
- **"I Am a Strange Loop"** by Douglas Hofstadter - A more accessible treatment
- **Min-Cut Algorithms** - Used here as the self-observation mechanism
- **Self-Organizing Systems** - Broader field of emergent complexity
## The Big Picture
This example shows that when a system can:
1. Observe itself
2. Model itself
3. Modify itself
4. Loop back to step 1
Something magical happens - **emergent self-organization** that looks like intelligence.
The strange loop is the key. It's not just feedback - it's **self-referential feedback at multiple levels of abstraction**.
And that, Hofstadter argues, is the essence of consciousness itself.
---
*"In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference."* - Douglas Hofstadter


@@ -0,0 +1,425 @@
//! # Strange Loop Self-Organizing Swarms
//!
//! This example demonstrates Hofstadter's "strange loops" - where a system's
//! self-observation creates emergent self-organization and intelligence.
//!
//! The MetaSwarm observes its own connectivity using min-cut analysis, then
//! reorganizes itself based on what it discovers. This creates a feedback loop:
//! "I am weak here" → "I will strengthen here" → "Now I am strong"
//!
//! Run: `cargo run --example strange_loop`
use std::collections::HashMap;
// ============================================================================
// SIMPLE GRAPH IMPLEMENTATION
// ============================================================================
/// A simple undirected weighted graph
#[derive(Debug, Clone)]
struct Graph {
vertices: Vec<u64>,
edges: HashMap<(u64, u64), f64>,
adjacency: HashMap<u64, Vec<(u64, f64)>>,
}
impl Graph {
fn new() -> Self {
Self {
vertices: Vec::new(),
edges: HashMap::new(),
adjacency: HashMap::new(),
}
}
fn add_vertex(&mut self, v: u64) {
if !self.vertices.contains(&v) {
self.vertices.push(v);
self.adjacency.insert(v, Vec::new());
}
}
    fn add_edge(&mut self, u: u64, v: u64, weight: f64) {
        self.add_vertex(u);
        self.add_vertex(v);
        let key = if u < v { (u, v) } else { (v, u) };
        // Ignore duplicate edges so adjacency lists and degrees stay
        // consistent with the edge map
        if self.edges.contains_key(&key) {
            return;
        }
        self.edges.insert(key, weight);
        self.adjacency.get_mut(&u).unwrap().push((v, weight));
        self.adjacency.get_mut(&v).unwrap().push((u, weight));
    }
fn degree(&self, v: u64) -> usize {
self.adjacency.get(&v).map(|a| a.len()).unwrap_or(0)
}
fn weighted_degree(&self, v: u64) -> f64 {
self.adjacency
.get(&v)
.map(|adj| adj.iter().map(|(_, w)| w).sum())
.unwrap_or(0.0)
}
/// Approximate min-cut using minimum weighted degree
fn approx_mincut(&self) -> f64 {
self.vertices
.iter()
.map(|&v| self.weighted_degree(v))
.min_by(|a, b| a.partial_cmp(b).unwrap())
.unwrap_or(0.0)
}
/// Find vertices with lowest connectivity (critical points)
fn find_weak_vertices(&self) -> Vec<u64> {
let min_degree = self
.vertices
.iter()
.map(|&v| self.degree(v))
.min()
.unwrap_or(0);
self.vertices
.iter()
.filter(|&&v| self.degree(v) == min_degree)
.copied()
.collect()
}
fn vertex_count(&self) -> usize {
self.vertices.len()
}
fn edge_count(&self) -> usize {
self.edges.len()
}
}
// ============================================================================
// STRANGE LOOP SWARM
// ============================================================================
/// Self-model: predictions about own behavior
#[derive(Debug, Clone)]
struct SelfModel {
/// Predicted min-cut value
predicted_mincut: f64,
/// Predicted weak vertices
predicted_weak: Vec<u64>,
/// Confidence in predictions (0.0 - 1.0)
confidence: f64,
/// History of prediction errors
errors: Vec<f64>,
}
impl SelfModel {
fn new() -> Self {
Self {
predicted_mincut: 0.0,
predicted_weak: Vec::new(),
confidence: 0.5,
errors: Vec::new(),
}
}
/// Update model based on observation
fn update(&mut self, actual_mincut: f64, actual_weak: &[u64]) {
// Calculate prediction error
let error = (self.predicted_mincut - actual_mincut).abs();
self.errors.push(error);
// Update confidence based on error
if error < 0.5 {
self.confidence = (self.confidence + 0.1).min(1.0);
} else {
self.confidence = (self.confidence - 0.1).max(0.1);
}
// Simple prediction: expect similar values next time
self.predicted_mincut = actual_mincut;
self.predicted_weak = actual_weak.to_vec();
}
}
/// Observation record
#[derive(Debug, Clone)]
struct Observation {
iteration: usize,
mincut: f64,
weak_vertices: Vec<u64>,
action_taken: String,
}
/// Action the swarm can take on itself
#[derive(Debug, Clone)]
enum Action {
Strengthen(Vec<u64>), // Add edges to these vertices
Redistribute, // Balance connectivity
Stabilize, // Do nothing - optimal state
}
/// A swarm that observes and reorganizes itself through strange loops
struct MetaSwarm {
graph: Graph,
self_model: SelfModel,
observations: Vec<Observation>,
iteration: usize,
stability_threshold: f64,
}
impl MetaSwarm {
fn new(num_agents: usize) -> Self {
let mut graph = Graph::new();
// Create initial ring topology
for i in 0..num_agents as u64 {
graph.add_edge(i, (i + 1) % num_agents as u64, 1.0);
}
Self {
graph,
self_model: SelfModel::new(),
observations: Vec::new(),
iteration: 0,
stability_threshold: 0.1,
}
}
/// The main strange loop: observe → model → decide → act
fn think(&mut self) -> bool {
self.iteration += 1;
println!("\n╔══════════════════════════════════════════════════════════╗");
println!(
"║ ITERATION {} - STRANGE LOOP CYCLE ",
self.iteration
);
println!("╚══════════════════════════════════════════════════════════╝");
// STEP 1: OBSERVE SELF
println!("\n📡 Step 1: Self-Observation");
let current_mincut = self.graph.approx_mincut();
let weak_vertices = self.graph.find_weak_vertices();
println!(" Min-cut value: {:.2}", current_mincut);
println!(" Weak vertices: {:?}", weak_vertices);
println!(
" Graph: {} vertices, {} edges",
self.graph.vertex_count(),
self.graph.edge_count()
);
// STEP 2: UPDATE SELF-MODEL
println!("\n🧠 Step 2: Update Self-Model");
let predicted = self.self_model.predicted_mincut;
let error = (predicted - current_mincut).abs();
self.self_model.update(current_mincut, &weak_vertices);
println!(" Predicted min-cut: {:.2}", predicted);
println!(" Actual min-cut: {:.2}", current_mincut);
println!(" Prediction error: {:.2}", error);
println!(
" Model confidence: {:.1}%",
self.self_model.confidence * 100.0
);
// STEP 3: DECIDE REORGANIZATION
println!("\n🤔 Step 3: Decide Reorganization");
let action = self.decide();
let action_str = match &action {
Action::Strengthen(v) => format!("Strengthen {:?}", v),
Action::Redistribute => "Redistribute".to_string(),
Action::Stabilize => "Stabilize (optimal)".to_string(),
};
println!(" Decision: {}", action_str);
// STEP 4: APPLY REORGANIZATION
println!("\n⚡ Step 4: Apply Reorganization");
let changed = self.apply_action(&action);
if changed {
let new_mincut = self.graph.approx_mincut();
println!(
" New min-cut: {:.2} (Δ = {:.2})",
new_mincut,
new_mincut - current_mincut
);
} else {
println!(" No changes applied (stable state)");
}
// Record observation
self.observations.push(Observation {
iteration: self.iteration,
mincut: current_mincut,
weak_vertices: weak_vertices.clone(),
action_taken: action_str,
});
// Check for convergence
let converged = self.check_convergence();
if converged {
println!("\n✨ STRANGE LOOP CONVERGED!");
println!(" The swarm has reached self-organized stability.");
}
converged
}
/// Decide what action to take based on self-observation
fn decide(&self) -> Action {
let mincut = self.graph.approx_mincut();
let weak = self.graph.find_weak_vertices();
// Decision logic based on self-knowledge
if mincut < 2.0 {
// Very weak - strengthen urgently
Action::Strengthen(weak)
} else if mincut < 4.0 && !weak.is_empty() {
// Somewhat weak - strengthen weak points
Action::Strengthen(weak)
} else if self.self_model.confidence > 0.8 && mincut > 3.0 {
// High confidence, good connectivity - stable
Action::Stabilize
} else {
// Redistribute for better balance
Action::Redistribute
}
}
/// Apply the chosen action to reorganize
fn apply_action(&mut self, action: &Action) -> bool {
match action {
            Action::Strengthen(vertices) => {
                let n = self.graph.vertex_count() as u64;
                let mut changed = false;
                for &v in vertices {
                    // Connect to a vertex on the far side of the ring
                    let target = (v + n / 2) % n;
                    if self.graph.degree(v) < 4 {
                        self.graph.add_edge(v, target, 1.0);
                        println!("   Added edge: {} -- {}", v, target);
                        changed = true;
                    }
                }
                changed
            }
Action::Redistribute => {
// Find most connected and least connected
let max_v = self
.graph
.vertices
.iter()
.max_by_key(|&&v| self.graph.degree(v))
.copied();
let min_v = self
.graph
.vertices
.iter()
.min_by_key(|&&v| self.graph.degree(v))
.copied();
if let (Some(max), Some(min)) = (max_v, min_v) {
if self.graph.degree(max) > self.graph.degree(min) + 1 {
self.graph.add_edge(min, max, 0.5);
println!(" Redistributed: {} -- {}", min, max);
return true;
}
}
false
}
Action::Stabilize => false,
}
}
/// Check if the strange loop has converged
fn check_convergence(&self) -> bool {
if self.observations.len() < 3 {
return false;
}
// Check if min-cut has stabilized
let recent: Vec<f64> = self
.observations
.iter()
.rev()
.take(3)
.map(|o| o.mincut)
.collect();
let variance: f64 = {
let mean = recent.iter().sum::<f64>() / recent.len() as f64;
recent.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / recent.len() as f64
};
variance < self.stability_threshold && self.self_model.confidence > 0.7
}
/// Print the journey summary
fn print_summary(&self) {
println!("\n{:═^60}", " STRANGE LOOP JOURNEY ");
println!("\nIteration | Min-Cut | Action");
println!("{}", "-".repeat(60));
for obs in &self.observations {
println!(
"{:^9} | {:^7.2} | {}",
obs.iteration, obs.mincut, obs.action_taken
);
}
if let (Some(first), Some(last)) = (self.observations.first(), self.observations.last()) {
println!("\n📊 Summary:");
println!(" Starting min-cut: {:.2}", first.mincut);
println!(" Final min-cut: {:.2}", last.mincut);
println!(" Improvement: {:.2}", last.mincut - first.mincut);
println!(" Iterations: {}", self.iteration);
println!(
" Final confidence: {:.1}%",
self.self_model.confidence * 100.0
);
}
}
}
// ============================================================================
// MAIN
// ============================================================================
fn main() {
println!("╔════════════════════════════════════════════════════════════╗");
println!("║ STRANGE LOOP SELF-ORGANIZING SWARMS ║");
println!("║ Hofstadter's Self-Reference in Action ║");
println!("╚════════════════════════════════════════════════════════════╝");
println!("\n📖 Concept: A swarm that observes itself and reorganizes");
println!(" based on what it discovers about its own structure.\n");
println!(" This creates emergent intelligence through self-reference.");
// Create a swarm of 10 agents
let mut swarm = MetaSwarm::new(10);
// Run the strange loop until convergence or max iterations
let max_iterations = 15;
let mut converged = false;
for _ in 0..max_iterations {
if swarm.think() {
converged = true;
break;
}
}
// Print summary
swarm.print_summary();
if converged {
println!("\n✅ The swarm achieved self-organized stability!");
println!(" Through self-observation and self-modification,");
println!(" it evolved into a robust configuration.");
} else {
println!("\n⚠️ Max iterations reached.");
println!(" The swarm is still evolving.");
}
println!("\n🔮 Key Insight: The strange loop creates intelligence");
println!(" not from complex rules, but from simple self-reference.");
println!(" 'I observe myself' → 'I change' → 'I observe the change'");
}