# Research Notes: State-of-the-Art Dynamic Minimum Cut Algorithms
**Research Date**: December 21, 2025
**Focus**: Subpolynomial-time algorithms for fully dynamic minimum cut
---
## Executive Summary
The field of dynamic minimum cut has seen remarkable breakthroughs in 2024-2025, achieving **subpolynomial n^{o(1)} update times** that were long considered out of reach. The progression from Thorup's 2007 Õ(√n) bound to recent algorithms running in n^{o(1)} time per update represents a fundamental advance in dynamic graph algorithms.
**Key Breakthroughs:**
- **2024**: First exact subpolynomial algorithm for superconstant cuts
- **2025**: First approximate (1+o(1)) subpolynomial algorithm for general cuts
- **Dec 2025**: Deterministic exact algorithm improving previous bounds
---
## 1. Historical Foundation: Thorup's 2007 Work
### Paper Details
- **Title**: "Fully-Dynamic Min-Cut"
- **Author**: Mikkel Thorup
- **Publication**: Combinatorica, Vol 27, pp. 91-127 (Feb 2007)
- **Preliminary**: STOC 2001
### Key Results
**Main Theorem**: Maintains the exact minimum cut, provided the edge connectivity is polylogarithmic, in **Õ(√n) worst-case time** per edge insertion or deletion.
**Technical Approach**:
- Based on greedy tree packings using **top trees**
- Maintains concrete min-cut as pointer to tree + cut edge listing
- Cut edges retrievable in O(log n) time per edge
- Provides (1+o(1))-approximation for general connectivity via sampling
**Complexity Analysis**:
```
Update Time: Õ(√n) per edge insertion/deletion
Query Time: O(log n) per cut edge
Space: O(n log n)
Cut Size Range: Up to polylog(n) edge connectivity
```
**Significance**:
- First o(n) bound for edge connectivity > 3
- Previous best for 3-connectivity was O(n^{2/3}) from FOCS'92
- Matches the bound for 1-edge connectivity to within logarithmic factors
### Limitations
- √n barrier seemed fundamental for 15+ years
- Only efficient for polylogarithmic connectivity
- Randomized approximation for general cuts
**Reference**: [Thorup, Combinatorica 2007](https://link.springer.com/content/pdf/10.1007/s00493-007-0045-2.pdf)
---
## 2. Recent Breakthroughs: Subpolynomial Algorithms (2024-2025)
### 2.1 SODA 2024: Exact Small Cuts (Jin, Sun, Thorup)
**Paper**: "Fully Dynamic Min-Cut of Superconstant Size in Subpolynomial Time"
**Authors**: Wenyu Jin, Xiaorui Sun, Mikkel Thorup
**Publication**: SODA 2024
**arXiv**: [2401.09700](https://arxiv.org/abs/2401.09700)
**Main Result**: First **deterministic** fully dynamic algorithm with **subpolynomial worst-case time** that outputs exact minimum cut when cut size ≤ c for c = (log n)^{o(1)}.
**Complexity**:
```
Update Time: n^{o(1)} deterministic
Cut Size Range: c ≤ (log n)^{o(1)}
Previous Best: Õ(√n) for c > 2, c = O(log n)
```
**Technical Contributions**:
- Breaks the √n barrier for exact algorithms
- Handles superconstant (but subpolylogarithmic) cuts
- Fully deterministic (no randomization)
- Extends beyond Thorup's polylog limitation
**Significance**: First subpolynomial exact dynamic min-cut algorithm, settling a major open problem.
---
### 2.2 SODA 2025: Approximate General Cuts (El-Hayek, Henzinger, Li)
**Paper**: "Fully Dynamic Approximate Minimum Cut in Subpolynomial Time per Operation"
**Authors**: Antoine El-Hayek, Monika Henzinger, Tianyi Li
**Publication**: SODA 2025 (Jan 12-15, New Orleans)
**arXiv**: [2412.15069](https://arxiv.org/html/2412.15069)
**Main Result**: First fully dynamic **(1+o(1))-approximate** minimum cut with **n^{o(1)} update time** for **general cuts** (no size restriction).
**Complexity**:
```
Update Time: n^{o(1)}
Approximation: (1 + o(1))
Cut Size Range: Arbitrary (no restriction)
Previous Best: Õ(√n) for (1+o(1))-approximation
```
**Technical Innovations**:
- **Randomized LocalKCut procedure** for finding local cuts
- Combines sparsification with local cut detection
- No conditional lower bounds known for this problem
- Achieves subpolynomial time for first time
**Trade-offs**:
- Approximation algorithm (not exact)
- Randomized (uses Monte Carlo techniques)
- Better suited for large arbitrary cuts
**Reference**: [SODA 2025 Proceedings](https://epubs.siam.org/doi/10.1137/1.9781611978322.22)
---
### 2.3 December 2025: Deterministic Exact Algorithm
**Paper**: "Deterministic and Exact Fully-dynamic Minimum Cut of Superpolylogarithmic Size in Subpolynomial Time"
**Publication**: arXiv preprint (December 2025)
**arXiv**: [2512.13105](https://arxiv.org/html/2512.13105v1)
**Main Result**: **Deterministic** exact fully-dynamic min-cut in **n^{o(1)} time** when cut size ≤ 2^{Θ(log^{3/4-c} n)} for any c>0.
**Improvements over SODA 2024**:
- Larger cut size range: 2^{Θ(log^{3/4-c} n)} vs (log n)^{o(1)}
- Still deterministic and exact
- Still subpolynomial update time
**Key Technical Contribution**:
- **Deterministic local minimum cut algorithm**
- Replaces randomized LocalKCut from El-Hayek et al.
- Enables exact guarantees without randomization
**Significance**: State-of-the-art for exact deterministic dynamic min-cut.
---
## 3. Core Data Structures
### 3.1 Link-Cut Trees (Sleator-Tarjan 1983)
**Original Paper**: "A Data Structure for Dynamic Trees" (STOC'81)
**Authors**: Daniel Sleator, Robert Tarjan
**Also Known As**: Dynamic trees, ST-trees
**Functionality**:
```rust
// Core operations (all O(log n) amortized)
trait LinkCutTree {
    fn link(&mut self, v: NodeId, w: NodeId);     // connect v to w as a child
    fn cut(&mut self, v: NodeId);                 // disconnect v from its parent
    fn find_root(&mut self, v: NodeId) -> NodeId; // root of the tree containing v
    fn path(&mut self, v: NodeId) -> Aggregate;   // path-aggregate query
    fn evert(&mut self, v: NodeId);               // re-root the tree at v
}
```
**Time Complexity**:
```
All operations: O(log n) amortized per operation
Worst-case: O(log n) for individual operations
Space: O(n)
```
**Internal Structure**:
- Forest partitioned into **vertex-disjoint preferred paths**
- Each path represented as **splay tree** (binary search tree)
- Nodes in symmetric order by depth
- Splay operations maintain balance
**Applications**:
- Dynamic connectivity in acyclic graphs
- Network flow algorithms (max flow in O(nm log n))
- Nearest common ancestors (O(log n) per query)
- Blocking flows (O(m log n))
- Essential for path-aggregate queries
**Rust Implementation Considerations**:
- Parent pointers required (complicates ownership)
- Splay trees need careful memory management
- Consider `Rc<RefCell<>>` or arena allocation
- Existing impl: [github.com/Amritha16/Link-Cut-Tree](https://github.com/Amritha16/Link-Cut-Tree) (C++)
**References**:
- [Original Paper (CMU)](https://www.cs.cmu.edu/~sleator/papers/dynamic-trees.pdf)
- [Wikipedia](https://en.wikipedia.org/wiki/Link/cut_tree)
- [MIT Lecture Notes](https://courses.csail.mit.edu/6.851/spring12/scribe/L19.pdf)
---
### 3.2 Euler Tour Trees (Henzinger-King)
**Origin**: Henzinger and King's dynamic connectivity work
**Key Idea**: Store Euler tour of tree in balanced BST
**Core Concept**:
```
Euler Tour: Path traversing each edge exactly twice
- Once entering subtree
- Once leaving subtree
- Forms circular sequence around tree
```
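As a concrete illustration of the tour construction, here is a minimal DFS sketch (function and variable names are illustrative, not from any library):

```rust
// DFS construction of an Euler tour: push a vertex on entry and again
// each time the traversal returns to it, so every tree edge appears
// exactly twice and the tour has length 2n - 1.
fn euler_tour(adj: &[Vec<usize>], root: usize) -> Vec<usize> {
    fn dfs(v: usize, parent: usize, adj: &[Vec<usize>], tour: &mut Vec<usize>) {
        tour.push(v); // entering v
        for &w in &adj[v] {
            if w != parent {
                dfs(w, v, adj, tour);
                tour.push(v); // back at v after finishing w's subtree
            }
        }
    }
    let mut tour = Vec::new();
    dfs(root, usize::MAX, adj, &mut tour);
    tour
}
```

For the tree with edges {0-1, 0-2, 1-3} rooted at 0, the tour is `[0, 1, 3, 1, 0, 2, 0]`.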
**Operations**:
```rust
// All O(log n) per operation
trait EulerTourTree {
    fn link(&mut self, u: NodeId, v: NodeId);        // join two trees
    fn cut(&mut self, e: EdgeId);                    // remove edge e, splitting the tree
    fn reroot(&mut self, v: NodeId);                 // make v the new root
    fn subtree_query(&self, v: NodeId) -> Aggregate; // aggregate over v's subtree
}
```
**Advantages over Link-Cut Trees**:
- **Subtree aggregates**: Contiguous range in sequence
- **Simpler implementation**: Just BST operations
- **Better for**: Subtree queries, tree modifications
- **Parallel-friendly**: Can be parallelized efficiently
**Disadvantages**:
- Less efficient for path queries
- Not ideal for network flows
- More complex for LCA queries
**Time Complexity**:
```
Update: O(log n) per Link/Cut
Query: O(log n) for subtree aggregates
Space: O(n) for tour + BST
```
**Implementation with Treaps**:
```rust
// Pseudocode for an ETT node stored in a treap
struct ETTNode {
    value: usize,
    priority: u64,
    subtree_aggregate: Aggregate,
    left: Option<Box<ETTNode>>,
    right: Option<Box<ETTNode>>,
}

// Split the tour at a position; merge two tours back together
fn split(root: ETTNode, pos: usize) -> (ETTNode, ETTNode);
fn merge(left: ETTNode, right: ETTNode) -> ETTNode;
```
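The `split`/`merge` signatures above can be fleshed out into a small runnable treap. This is an illustrative sketch only: deterministic hash-based priorities stand in for random ones, and the node layout is simplified relative to a real ETT.

```rust
// Minimal treap over a sequence, supporting split-by-position and merge.
struct Node {
    value: usize,
    priority: u64,
    size: usize, // subtree size, used to split by position
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

fn size(n: &Option<Box<Node>>) -> usize { n.as_ref().map_or(0, |x| x.size) }

fn update(n: &mut Box<Node>) { n.size = 1 + size(&n.left) + size(&n.right); }

// Concatenate two treaps (all of `a` precedes all of `b` in the sequence).
fn merge(a: Option<Box<Node>>, b: Option<Box<Node>>) -> Option<Box<Node>> {
    match (a, b) {
        (None, b) => b,
        (a, None) => a,
        (Some(mut a), Some(mut b)) => {
            if a.priority > b.priority {
                a.right = merge(a.right.take(), Some(b));
                update(&mut a);
                Some(a)
            } else {
                b.left = merge(Some(a), b.left.take());
                update(&mut b);
                Some(b)
            }
        }
    }
}

// Split into (first `pos` elements, the rest).
fn split(n: Option<Box<Node>>, pos: usize) -> (Option<Box<Node>>, Option<Box<Node>>) {
    match n {
        None => (None, None),
        Some(mut n) => {
            if size(&n.left) >= pos {
                let (l, r) = split(n.left.take(), pos);
                n.left = r;
                update(&mut n);
                (l, Some(n))
            } else {
                let (l, r) = split(n.right.take(), pos - size(&n.left) - 1);
                n.right = l;
                update(&mut n);
                (Some(n), r)
            }
        }
    }
}

fn leaf(value: usize, priority: u64) -> Option<Box<Node>> {
    Some(Box::new(Node { value, priority, size: 1, left: None, right: None }))
}

fn inorder(n: &Option<Box<Node>>, out: &mut Vec<usize>) {
    if let Some(n) = n {
        inorder(&n.left, out);
        out.push(n.value);
        inorder(&n.right, out);
    }
}
```

Building the sequence 0..5 by merging leaves, splitting at position 2, and merging back restores the original in-order sequence.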
**Parallel Extensions**:
- **Batch-Parallel ETT** (Tseng, Dhulipala, Blelloch 2019)
- Work-efficient parallel algorithms
- Good for GPU/multi-core implementations
**Rust Implementation Notes**:
- Treaps easiest to implement (randomized BST)
- Need parent pointers for some variants
- Range query support via augmentation
- Consider `petgraph` integration
**References**:
- [Codeforces Tutorial](https://codeforces.com/blog/entry/102087)
- [MIT Lecture Notes](https://courses.csail.mit.edu/6.851/spring07/scribe/lec05.pdf)
- [Batch-Parallel ETT Paper](https://www.cs.cmu.edu/~guyb/paralg/papers/TsengDhulipalaBelloch19.pdf)
---
### 3.3 Tree Decomposition for Cuts
**Purpose**: Hierarchical representation of graph connectivity
**Levels**: Dynamic connectivity uses **log n levels** of spanning forests:
- Level 0: Densest forest
- Level log n: Sparsest forest
- Edge level only decreases over time
**Key Properties**:
```
Invariant: Level-i edges form spanning forest
Edge levels: Decrease monotonically
Query: Two nodes connected iff in same tree at some level
```
**Integration with ETT/Link-Cut**:
- Each level's forest stored in ETT/Link-Cut structure
- Edge updates may move edges across levels (monotonically, per the invariant)
- Edge deletions require a replacement-edge search
**Complexity Impact**:
```
Total data structure size: O(m log n)
Update amortized cost: O(log² n) for connectivity
Space overhead: Logarithmic in edges
```
---
## 4. Sparsification Techniques
### 4.1 Nagamochi-Ibaraki Sparsification (1992)
**Core Papers**:
1. "Computing edge-connectivity in multigraphs and capacitated graphs" (SIAM J. Discrete Math)
2. "A linear-time algorithm for finding a sparse k-connected spanning subgraph" (Algorithmica)
**Main Theorem**: Given unweighted graph G and parameter k, compute subgraph with **O(nk) edges** preserving all cuts of value ≤ k.
**Algorithm Overview**:
```
1. Modified BFS traversal from arbitrary vertex
2. At each step, visit vertex most strongly connected to visited set
3. Identify contractible edges (won't increase min-cut)
4. Contract until single node remains
```
**Time Complexity**:
```
Original: O(mn + n² log n)
With optimization: O(m + n² log n) for simple graphs
Space: O(n + m)
```
**Key Innovation**:
- **No flow computations required**
- Uses maximum spanning forest instead
- Provides graph partitioning into forests
- Each edge gets "NI index" indicating connectivity level
**Applications to Dynamic Min-Cut**:
- Reduce graph size before running expensive algorithms
- Maintain sparse certificate during updates
- O(nk) sparsifier updated in O(k log n) time per edge
**Extensions**:
- **Weighted graphs**: Use MSF indices instead of NI indices
- **Hypergraphs**: Certificate with sum of degrees O(kn)
- **Stoer-Wagner variant**: Simpler implementation, same complexity
**Rust Implementation**:
- [github.com/Rajan100994/Nagamochi_Ibaraki](https://github.com/Rajan100994/Nagamochi_Ibaraki)
- Key: Use union-find for contraction
- BFS with priority queue (most connected vertex first)
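One simple way to realize the O(nk) certificate is to peel off k edge-disjoint spanning forests: their union preserves every cut of value ≤ k. This sketch uses repeated union-find passes for clarity; a real NI scan computes the same certificate in a single pass.

```rust
// Sparse k-certificate sketch: the union of k edge-disjoint spanning
// forests F1..Fk has at most k(n-1) edges and preserves every cut of
// value <= k (Nagamochi-Ibaraki). Each forest is peeled off with a
// fresh union-find pass over the edges not yet taken.
fn sparse_certificate(n: usize, edges: &[(usize, usize)], k: usize) -> Vec<(usize, usize)> {
    fn find(parent: &mut Vec<usize>, mut x: usize) -> usize {
        while parent[x] != x {
            parent[x] = parent[parent[x]]; // path halving
            x = parent[x];
        }
        x
    }
    let mut remaining: Vec<(usize, usize)> = edges.to_vec();
    let mut certificate = Vec::new();
    for _ in 0..k {
        let mut parent: Vec<usize> = (0..n).collect();
        let mut deferred = Vec::new();
        for &(u, v) in &remaining {
            let (ru, rv) = (find(&mut parent, u), find(&mut parent, v));
            if ru != rv {
                parent[ru] = rv;
                certificate.push((u, v)); // forest edge: keep it
            } else {
                deferred.push((u, v)); // try again in a later forest
            }
        }
        remaining = deferred;
    }
    certificate
}
```

For example, five parallel edges between two vertices reduce to two edges for k = 2, preserving the min-cut value up to 2.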
**References**:
- [Practical Minimum Cut Algorithms](https://eprints.cs.univie.ac.at/5324/1/paper.pdf)
- [Recent Sparsification Work](https://arxiv.org/abs/2110.15891)
---
### 4.2 Modern Sparsification (Cut Sparsifiers)
**Recent Advances**:
- **Friendly Cut Sparsifiers** (2021): Faster Gomory-Hu trees
- **Weighted Graph Sparsification** (ICALP 2022): Improved bounds
- Works in fully dynamic setting with updates
**Cut Sparsifier Definition**:
- H is a (1+ε)-cut sparsifier of G if, for every cut S:
  (1-ε)·cut_G(S) ≤ cut_H(S) ≤ (1+ε)·cut_G(S)
- Size: O(n log n / ε²) edges (Benczúr-Karger)
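On small graphs the definition can be verified directly by enumerating all vertex subsets; a brute-force helper useful for testing sparsifier code (illustrative only, exponential in n):

```rust
// Brute-force check of the cut-sparsifier condition on a tiny graph:
// every nonempty proper vertex subset S (encoded as a bitmask) must
// satisfy (1-eps)*cut_G(S) <= cut_H(S) <= (1+eps)*cut_G(S).
fn cut_weight(edges: &[(usize, usize, f64)], mask: u32) -> f64 {
    edges.iter()
        .filter(|&&(u, v, _)| ((mask >> u) & 1) != ((mask >> v) & 1))
        .map(|&(_, _, w)| w)
        .sum()
}

fn is_cut_sparsifier(n: usize, g: &[(usize, usize, f64)], h: &[(usize, usize, f64)], eps: f64) -> bool {
    (1..(1u32 << n) - 1).all(|mask| {
        let (cg, ch) = (cut_weight(g, mask), cut_weight(h, mask));
        (1.0 - eps) * cg <= ch && ch <= (1.0 + eps) * cg
    })
}
```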
**Dynamic Maintenance**:
- Update time: O(polylog n) for (1+ε)-sparsifiers
- Space: O(n polylog n)
- Quality: Maintains approximation during updates
**References**:
- [Faster Cut Sparsification](https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.61)
---
## 5. Deterministic Derandomization
### 5.1 Network Decomposition Breakthrough
**Paper**: "Polylogarithmic-Time Deterministic Network Decomposition and Distributed Derandomization"
**Impact**: Settles decades-old open problems in distributed algorithms
**arXiv**: [1907.10937](https://arxiv.org/abs/1907.10937)
**Main Theorem**: **P-RLOCAL = P-LOCAL**
Any polylogarithmic-time randomized local algorithm can be derandomized to polylogarithmic-time deterministic algorithm.
**Breakthrough**:
- Improves Panconesi-Srinivasan's 2^{O(√log n)} time (STOC'92)
- First polylog-time deterministic network decomposition
- Resolves Linial's question about MIS complexity
**Applications**:
- Maximal Independent Set: O(log² Δ · log n) deterministic
- Dynamic matching: O(1) update time deterministic
- Graph coloring, dominating sets, etc.
---
### 5.2 Expander Decomposition
**Recent Work**: Deterministic distributed expander decomposition (2020)
**arXiv**: [2007.14898](https://arxiv.org/abs/2007.14898)
**Key Results**:
- (ε,φ)-expander decomposition computable in poly(ε^{-1}) n^{o(1)} rounds
- Deterministic expander routing algorithms
- Applications to CONGEST model algorithms
**Relevance to Min-Cut**:
- Expanders have large min-cuts (good expansion)
- Decompose graph into high-conductance components
- Find cuts between components efficiently
---
### 5.3 Local Rounding Methods
**Technique**: Derandomize via pairwise independence + local rounding
**Advantage**: Works without network decompositions
**Recent**: ACM Transactions on Algorithms (2024)
**How it Works**:
1. Analyze randomized algorithm with pairwise independence
2. Apply local rounding deterministically
3. Maintain probabilistic guarantees deterministically
**Applications**:
- MIS, matching, set cover, and beyond
- Local distributed algorithms
- Dynamic graph problems
**Reference**: [ACM ToA 2024](https://dl.acm.org/doi/10.1145/3742476)
---
## 6. Algorithm Complexity Analysis
### Comparison Table
| Algorithm | Update Time | Approximation | Cut Size | Deterministic | Year |
|-----------|-------------|---------------|----------|---------------|------|
| Thorup | Õ(√n) | Exact (polylog) | ≤ polylog(n) | ✓ | 2007 |
| Thorup | Õ(√n) | (1+o(1)) | Arbitrary | ✗ (sampling) | 2007 |
| Jin-Sun-Thorup | n^{o(1)} | Exact | ≤ (log n)^{o(1)} | ✓ | 2024 |
| El-Hayek et al. | n^{o(1)} | (1+o(1)) | Arbitrary | ✗ | 2025 |
| Latest (Dec 2025) | n^{o(1)} | Exact | ≤ 2^{Θ(log^{3/4-c} n)} | ✓ | 2025 |
### Asymptotic Analysis
**Update Time Progression**:
```
1985: O(√m) Frederickson
1992: O(n^{2/3}) 3-edge connectivity
2007: Õ(√n) Thorup (up to polylog connectivity)
2024: n^{o(1)} Jin et al. (small cuts, exact)
2025: n^{o(1)} El-Hayek et al. (all cuts, approximate)
```
**n^{o(1)} Notation**:
- Subpolynomial: grows slower than any polynomial
- Examples: n^{1/log log n}, 2^{O(√log n)}
- Grows faster than polylog(n) but slower than n^ε for any fixed ε > 0
- Much better than √n for large n
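A quick numeric sanity check of the gap, taking n^{1/ln ln n} as a stand-in for a concrete subpolynomial bound (illustrative only; the actual exponents in the algorithms differ):

```rust
// Numeric illustration of the gap between a subpolynomial bound and
// sqrt(n); n^{1/ln ln n} stands in for a concrete n^{o(1)} function.
fn subpoly(n: f64) -> f64 {
    n.powf(1.0 / n.ln().ln())
}

fn sqrt_n(n: f64) -> f64 {
    n.sqrt()
}
```

For n = 10^9 the subpolynomial expression is on the order of 10^3, versus roughly 3×10^4 for √n, and the ratio keeps shrinking as n grows.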
**Space Complexity**:
- Thorup: O(n log n)
- Recent algorithms: O(m log n) due to hierarchical levels
- Sparsification reduces to O(n polylog n)
**Query Time**:
- Min-cut value: O(1) after updates
- Listing cut edges: O(k + log n) for k edges
- Connectivity queries: O(log n)
---
## 7. Implementation Insights
### 7.1 Data Structure Selection
**For Path Queries** (network flows):
- **Link-Cut Trees**
- Operations: Path aggregates, LCA, flow updates
- Complexity: O(log n) amortized
**For Subtree Queries** (dynamic connectivity):
- **Euler Tour Trees**
- Operations: Subtree sums, connected components
- Complexity: O(log n) worst-case
- Simpler to implement than Link-Cut
**For General Dynamic Min-Cut**:
- **Hybrid Approach**
- ETT for connectivity levels
- Sparsification for large graphs
- Local cut algorithms for small cuts
---
### 7.2 Rust-Specific Considerations
**Memory Management**:
```rust
// Option 1: Reference counting (simplest)
type NodeRef = Rc<RefCell<Node>>;

// Option 2: Arena allocation (faster)
use typed_arena::Arena;

struct Forest<'a> {
    arena: &'a Arena<Node>,
}

// Option 3: Plain indices into a Vec (most efficient, no unsafe needed)
struct Forest {
    nodes: Vec<Node>,
}
// Access via indices instead of pointers
```
**Parallelization**:
```rust
// Use rayon for parallel operations
use rayon::prelude::*;

// Batch updates can be parallelized
updates.par_iter().for_each(|update| {
    process_update(update);
});
// Euler tour construction parallelizes well
```
**Type Safety**:
```rust
// Use phantom types (with const generics) for compile-time guarantees
use std::marker::PhantomData;

struct Level<const L: usize>;

struct Forest<const L: usize> {
    trees: Vec<ETT>,
    _marker: PhantomData<Level<L>>,
}
// Prevent mixing levels at compile time
```
**Existing Rust Libraries**:
- `petgraph`: General graph algorithms (no dynamic min-cut)
- `rustworkx`: Stoer-Wagner static min-cut available
- Need to implement dynamic structures from scratch
**Performance Tips**:
1. Use `SmallVec` for small edge lists
2. Pre-allocate capacity for dynamic arrays
3. Profile with `cargo flamegraph`
4. Consider SIMD for batch operations
5. Use `#[inline]` for small hot functions
---
### 7.3 Practical Implementation Strategy
**Phase 1: Static Algorithms** (baseline)
Implement proven static algorithms first:
1. Karger-Stein (randomized, O(n² log³ n))
2. Stoer-Wagner (deterministic, O(mn + n² log n))
3. Nagamochi-Ibaraki sparsification
**Phase 2: Basic Dynamic** (foundation)
Build core data structures:
1. Euler Tour Trees with treaps
2. Union-Find for connectivity
3. Simple dynamic connectivity (no min-cut yet)
**Phase 3: Thorup's Algorithm** (proven approach)
Implement the Õ(√n) algorithm:
1. Top trees or a simplified variant
2. Polylog connectivity maintenance
3. Validation against the test suite
**Phase 4: Modern Algorithms** (research frontier)
Attempt subpolynomial algorithms:
1. Local cut procedures
2. Hierarchical decomposition
3. Careful derandomization
---
## 8. Trade-offs Analysis
### 8.1 Exact vs. Approximate
**Exact Algorithms**:
**Advantages**:
- Guaranteed correctness for all queries
- Suitable for correctness-critical applications
- Better for small to medium cuts
**Disadvantages**:
- More complex implementation
- Higher constant factors
- Limited to smaller cut sizes for subpolynomial time
**Approximate Algorithms**:
**Advantages**:
- Work for arbitrary cut sizes
- Often simpler and faster in practice
- (1+o(1)) is nearly exact
- Better scaling for large graphs
**Disadvantages**:
- May miss exact minimum in rare cases
- Randomization adds complexity
- Need to tune approximation parameter
**Recommendation**: Start with (1+ε)-approximation, provide exact mode for small cuts.
---
### 8.2 Deterministic vs. Randomized
**Deterministic**:
**Advantages**:
- Predictable, reproducible behavior
- Worst-case guarantees
- Easier to reason about correctness
- No need for entropy source
**Disadvantages**:
- Often more complex
- Larger constant factors
- Limited by derandomization overhead
**Randomized**:
**Advantages**:
- Often simpler to implement
- Better constants in practice
- Easier to prove expected bounds
- Enables powerful techniques (sampling, etc.)
**Disadvantages**:
- Non-deterministic output
- Need high-quality randomness
- Expected vs. worst-case gap
- Harder to debug
**Recommendation**: Implement randomized first (faster to prototype), add deterministic option later.
---
### 8.3 Theoretical vs. Practical Performance
**Theoretical Optimality** (n^{o(1)} algorithms):
**Advantages**:
- Asymptotically optimal
- Scales to massive graphs
- Research frontier
**Disadvantages**:
- Huge constant factors
- Complex implementation (months of work)
- May not beat simpler algorithms on real graphs
- Limited reference implementations
**Practical Algorithms** (Õ(√n) or simpler):
**Advantages**:
- Well-understood and documented
- Reasonable constants
- Easier to implement and debug
- Proven in production systems
**Disadvantages**:
- Worse asymptotic complexity
- May not scale to extreme sizes
**Recommendation**:
- **v1.0**: Thorup's Õ(√n) algorithm (proven, practical)
- **v2.0**: Jin et al.'s n^{o(1)} for small cuts (research feature)
- **v3.0**: Full subpolynomial with El-Hayek et al. (ambitious)
---
### 8.4 Memory vs. Time Trade-offs
**High Memory Approach**:
- Store all log n levels explicitly
- Pre-compute sparsifiers
- Cache partial results
- **Trade-off**: 2-4x memory for 2-3x speed
**Low Memory Approach**:
- Lazy level construction
- On-demand sparsification
- Minimal caching
- **Trade-off**: 50% memory savings, 1.5-2x slower
**Recommendation**: Make it configurable:
```rust
pub struct DynamicMinCut {
    config: Config,
}

pub struct Config {
    memory_mode: MemoryMode, // HighMem, Balanced, LowMem
    cache_size: usize,
    max_levels: usize,
}
```
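A self-contained version of this sketch, with a `MemoryMode` enum and purely illustrative (untuned) defaults:

```rust
// Self-contained version of the configurable trade-off sketched above.
// The numeric defaults are illustrative assumptions, not tuned values.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum MemoryMode {
    HighMem,  // store all levels, pre-computed sparsifiers
    Balanced,
    LowMem,   // lazy levels, on-demand sparsification
}

pub struct Config {
    pub memory_mode: MemoryMode,
    pub cache_size: usize,
    pub max_levels: usize,
}

impl Config {
    // Derive a level budget from the mode: LowMem keeps only half of the
    // ~log2(n) levels resident and would build the rest lazily.
    pub fn for_nodes(mode: MemoryMode, n: usize) -> Config {
        let bits = (usize::BITS - n.leading_zeros()) as usize; // bit length of n
        Config {
            memory_mode: mode,
            cache_size: if mode == MemoryMode::HighMem { 1 << 20 } else { 1 << 16 },
            max_levels: if mode == MemoryMode::LowMem { bits / 2 } else { bits },
        }
    }
}
```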
---
## 9. Recommended Rust Implementation Approach
### Architecture Overview
```rust
// High-level architecture
pub mod dynamic_mincut {
    pub mod data_structures {
        pub mod euler_tour_tree;
        pub mod link_cut_tree;
        pub mod union_find;
        pub mod treap;
    }
    pub mod algorithms {
        pub mod static_mincut {
            pub mod karger_stein;
            pub mod stoer_wagner;
            pub mod nagamochi_ibaraki;
        }
        pub mod dynamic {
            pub mod thorup;         // Õ(√n) algorithm
            pub mod jin_sun_thorup; // n^{o(1)} small cuts
            pub mod el_hayek;       // n^{o(1)} approximate
        }
        pub mod sparsification;
        pub mod local_cuts;
    }
    pub mod core {
        pub mod graph;
        pub mod cut;
        pub mod connectivity;
    }
}
```
---
### Phase 1: Foundation (Weeks 1-2)
**Core Data Structures**:
```rust
// 1. Graph representation
pub struct DynamicGraph {
    nodes: Vec<NodeId>,
    edges: HashMap<EdgeId, Edge>,
    adjacency: HashMap<NodeId, Vec<EdgeId>>,
}

// 2. Euler Tour Tree (with treaps)
pub struct EulerTourTree {
    root: Option<NodeRef>,
    node_map: HashMap<NodeId, NodeRef>,
}

// 3. Union-Find with path compression
pub struct UnionFind {
    parent: Vec<usize>,
    rank: Vec<usize>,
}
```
**Priority**: Get basic dynamic connectivity working first.
---
### Phase 2: Static Baselines (Weeks 3-4)
**Implement Proven Algorithms**:
```rust
// 1. Karger-Stein: O(n² log³ n) randomized; use for correctness testing
pub fn karger_stein(graph: &Graph) -> Cut {
    todo!()
}

// 2. Nagamochi-Ibaraki: O(m + n² log n); reduce to O(nk) edges
pub fn nagamochi_ibaraki_sparsify(graph: &Graph, k: usize) -> Graph {
    todo!()
}

// 3. Stoer-Wagner: O(mn + n² log n) deterministic reference implementation
pub fn stoer_wagner(graph: &Graph) -> Cut {
    todo!()
}
```
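For reference, the Stoer-Wagner baseline fits in roughly 40 lines over an adjacency matrix. A minimal sketch (O(n³), unweighted-multigraph/weighted via `u64` weights, suitable only as the testing baseline described above):

```rust
// Minimal Stoer-Wagner global min-cut over a weighted adjacency matrix.
// Returns the minimum cut weight. Each phase computes a maximum-adjacency
// order, records the "cut of the phase", and merges the last two vertices.
fn stoer_wagner(mut w: Vec<Vec<u64>>) -> u64 {
    let n = w.len();
    let mut active: Vec<usize> = (0..n).collect(); // surviving super-vertices
    let mut best = u64::MAX;
    while active.len() > 1 {
        let m = active.len();
        let mut in_a = vec![false; m];
        let mut weight_to_a = vec![0u64; m];
        let (mut s, mut t) = (0, 0);
        for _ in 0..m {
            // Select the vertex most strongly connected to the grown set A.
            let mut sel = usize::MAX;
            for i in 0..m {
                if !in_a[i] && (sel == usize::MAX || weight_to_a[i] > weight_to_a[sel]) {
                    sel = i;
                }
            }
            in_a[sel] = true;
            s = t;
            t = sel;
            for i in 0..m {
                if !in_a[i] {
                    weight_to_a[i] += w[active[sel]][active[i]];
                }
            }
        }
        best = best.min(weight_to_a[t]); // cut of the phase
        // Merge t into s and drop t.
        let (vs, vt) = (active[s], active[t]);
        for i in 0..m {
            let vi = active[i];
            let add = w[vt][vi];
            w[vs][vi] += add;
            let sym = w[vs][vi];
            w[vi][vs] = sym;
        }
        active.remove(t);
    }
    best
}
```

On a 4-cycle with unit weights this returns 2; on a path with edge weights 3 and 1 it returns 1.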
**Testing Strategy**:
- Generate random graphs
- Compare all implementations
- Build test suite with known min-cuts
---
### Phase 3: Thorup's Algorithm (Weeks 5-8)
**Implementation Plan**:
```rust
pub struct ThorupDynamicMinCut {
    // Connectivity levels (0 to log n)
    levels: Vec<ForestLevel>,
    // Current minimum cut
    min_cut: Option<Cut>,
    // Configuration
    config: Config,
}

impl ThorupDynamicMinCut {
    // O(√n) amortized:
    // 1. Add to level 0
    // 2. Update connectivity
    // 3. Check if min-cut affected
    pub fn insert_edge(&mut self, e: Edge) {
        todo!()
    }

    // O(√n) amortized:
    // 1. Find edge level
    // 2. Remove from that level
    // 3. Search for replacement edge
    // 4. Update min-cut if needed
    pub fn delete_edge(&mut self, e: EdgeId) {
        todo!()
    }

    // O(1) after updates
    pub fn query_mincut(&self) -> &Cut {
        todo!()
    }
}
```
**Key Challenges**:
- Top trees are complex → consider simplified variant
- Maintaining polylog connectivity efficiently
- Handling edge level promotions/demotions
**Simplified Alternative**: Use ET-trees with hierarchical levels instead of full top trees.
---
### Phase 4: Optimizations (Weeks 9-10)
**Performance Improvements**:
```rust
// 1. Sparsification integration
pub struct SparseMinCut {
    sparsifier: NagamochiIbarakiSparsifier,
    dynamic_core: ThorupDynamicMinCut,
}

impl SparseMinCut {
    // 2. Batch updates: process multiple edges together to
    // amortize recomputation costs
    pub fn batch_insert(&mut self, edges: &[Edge]) {
        todo!()
    }

    // 3. Parallel query support (via rayon)
    pub fn par_query_cuts(&self, pairs: &[(NodeId, NodeId)]) -> Vec<Cut> {
        todo!()
    }
}
```
---
### Phase 5: Advanced Features (Weeks 11-12+)
**Subpolynomial Algorithms** (research implementation):
```rust
// Jin-Sun-Thorup for small cuts
pub struct SubpolySmallCut {
    max_cut_size: usize, // (log n)^{o(1)}
    local_cut_oracle: LocalCutOracle,
}

// El-Hayek et al. for approximate general cuts
pub struct SubpolyApproxCut {
    epsilon: f64, // Approximation factor
    local_k_cut: LocalKCut,
}
```
**Warning**: These are complex research algorithms. Budget 4-8 weeks for each, expect challenges.
---
### Testing & Benchmarking
```rust
#[cfg(test)]
mod tests {
    // Unit tests
    #[test]
    fn test_small_graphs() { /* ... */ }

    #[test]
    fn test_dynamic_updates() { /* ... */ }

    // Property-based testing
    #[quickcheck]
    fn prop_min_cut_correct(graph: Graph) {
        // Compare with Karger-Stein
    }

    // Benchmarks
    #[bench]
    fn bench_insert_edge(b: &mut Bencher) { /* ... */ }
}
```
**Benchmark Suite**:
1. Random graphs (Erdős–Rényi)
2. Scale-free networks (Barabási–Albert)
3. Grid graphs
4. Real-world graphs (SNAP datasets)
---
## 10. Key Takeaways for Implementation
### Do's ✅
1. **Start simple**: Implement Stoer-Wagner and Nagamochi-Ibaraki first
2. **Use Euler Tour Trees**: Simpler than Link-Cut for connectivity
3. **Extensive testing**: Compare against multiple baselines
4. **Benchmark early**: Profile before optimizing
5. **Thorup's algorithm**: Target this for v1.0 (well-understood, practical)
6. **Document heavily**: Algorithms are complex, future you will thank you
7. **Modular design**: Separate data structures, algorithms, graph representation
### Don'ts ❌
1. **Don't start with n^{o(1)} algorithms**: Too complex, unclear benefit for most use cases
2. **Don't optimize prematurely**: Get correctness first
3. **Don't skip sparsification**: Essential for large graphs
4. **Don't forget edge cases**: Empty graphs, disconnected components, etc.
5. **Don't ignore constants**: Asymptotic complexity ≠ practical performance
6. **Don't implement top trees unless necessary**: Very complex, consider alternatives
---
## 11. Open Questions & Research Directions
### Theoretical
1. **Conditional lower bounds**: No known lower bound for dynamic min-cut
2. **Deterministic subpolynomial**: Can El-Hayek et al. be fully derandomized?
3. **Exact arbitrary cuts**: n^{o(1)} time for all cut sizes (open problem)
### Practical
1. **Parallel algorithms**: Can we exploit multi-core effectively?
2. **External memory**: Algorithms for graphs larger than RAM
3. **Distributed setting**: Dynamic min-cut in distributed graphs
4. **Streaming**: One-pass or few-pass dynamic min-cut
### Implementation
1. **GPU acceleration**: Which parts parallelize well?
2. **Incremental-only**: Faster algorithms for insert-only (no deletes)
3. **Batch updates**: Better than processing one-by-one?
4. **Approximate dynamic**: Trade accuracy for speed dynamically
---
## 12. Resources & References
### Foundational Papers
- [Thorup 2007: Fully-Dynamic Min-Cut](https://link.springer.com/content/pdf/10.1007/s00493-007-0045-2.pdf)
- [Sleator & Tarjan 1983: Link-Cut Trees](https://www.cs.cmu.edu/~sleator/papers/dynamic-trees.pdf)
- [Nagamochi & Ibaraki 1992: Sparsification](https://eprints.cs.univie.ac.at/5324/1/paper.pdf)
### Recent Breakthroughs (2024-2025)
- [Jin, Sun, Thorup SODA'24: Subpolynomial Small Cuts](https://arxiv.org/abs/2401.09700)
- [El-Hayek, Henzinger, Li SODA'25: Subpolynomial Approximate](https://arxiv.org/html/2412.15069)
- [Dec 2025: Deterministic Exact](https://arxiv.org/html/2512.13105v1)
### Data Structures
- [Link-Cut Trees Wikipedia](https://en.wikipedia.org/wiki/Link/cut_tree)
- [Euler Tour Trees Tutorial (Codeforces)](https://codeforces.com/blog/entry/102087)
- [MIT Lecture on Dynamic Trees](https://courses.csail.mit.edu/6.851/spring12/scribe/L19.pdf)
### Derandomization
- [Polylog Network Decomposition](https://arxiv.org/abs/1907.10937)
- [Expander Decomposition](https://arxiv.org/abs/2007.14898)
- [Local Rounding Methods](https://dl.acm.org/doi/10.1145/3742476)
### Implementations
- [Karger's Algorithm (Rust)](https://gvelim.github.io/CSX0003RUST/graph_min_cut.html)
- [rustworkx Stoer-Wagner](https://qiskit.org/ecosystem/rustworkx/dev/apiref/rustworkx.stoer_wagner_min_cut.html)
- [Link-Cut Tree (C++)](https://github.com/Amritha16/Link-Cut-Tree)
- [Nagamochi-Ibaraki (Python)](https://github.com/Rajan100994/Nagamochi_Ibaraki)
### Surveys & Tutorials
- [Recent Advances in Fully Dynamic Graph Algorithms](https://eprints.cs.univie.ac.at/7316/1/Recent_Advances.pdf)
- [Practical Fully Dynamic Min-Cut Algorithms](https://arxiv.org/pdf/2101.05033)
---
## 13. Timeline Estimate for Rust Implementation
### Conservative Estimate (Solo Developer)
| Phase | Duration | Deliverable |
|-------|----------|-------------|
| Phase 1: Core structures | 2 weeks | ETT, Union-Find, basic graph |
| Phase 2: Static algorithms | 2 weeks | Karger-Stein, Nagamochi-Ibaraki, Stoer-Wagner |
| Phase 3: Thorup algorithm | 4 weeks | Õ(√n) dynamic min-cut |
| Phase 4: Optimization | 2 weeks | Sparsification integration, batch updates |
| Phase 5: Testing & docs | 2 weeks | Comprehensive test suite, documentation |
| **Total for v1.0** | **12 weeks** | **Production-ready Thorup implementation** |
| Phase 6: Advanced (optional) | 6-12 weeks | Subpolynomial algorithms (research) |
### Aggressive Estimate (Experienced Team)
| Phase | Duration | Deliverable |
|-------|----------|-------------|
| Parallel development | 4 weeks | All core components simultaneously |
| Integration & testing | 2 weeks | End-to-end system |
| Optimization | 2 weeks | Performance tuning |
| **Total for v1.0** | **8 weeks** | **Production-ready system** |
---
## 14. Recommended First Steps (Next 48 Hours)
1. **Set up project structure**:
```bash
cargo new ruvector-mincut
cd ruvector-mincut
# Add dependencies: petgraph, rand, criterion (benchmarks)
```
2. **Implement Union-Find**:
- Simple, well-understood
- Needed for all algorithms
- Good warm-up exercise
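A minimal version of this warm-up (path halving plus union by rank):

```rust
// Minimal union-find with path halving and union by rank.
pub struct UnionFind {
    parent: Vec<usize>,
    rank: Vec<u8>,
}

impl UnionFind {
    pub fn new(n: usize) -> Self {
        UnionFind { parent: (0..n).collect(), rank: vec![0; n] }
    }

    pub fn find(&mut self, mut x: usize) -> usize {
        while self.parent[x] != x {
            self.parent[x] = self.parent[self.parent[x]]; // path halving
            x = self.parent[x];
        }
        x
    }

    // Returns false if x and y were already connected.
    pub fn union(&mut self, x: usize, y: usize) -> bool {
        let (mut rx, mut ry) = (self.find(x), self.find(y));
        if rx == ry {
            return false;
        }
        if self.rank[rx] < self.rank[ry] {
            std::mem::swap(&mut rx, &mut ry);
        }
        self.parent[ry] = rx;
        if self.rank[rx] == self.rank[ry] {
            self.rank[rx] += 1;
        }
        true
    }
}
```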
3. **Implement basic graph**:
- Adjacency list representation
- Edge insertion/deletion
- Basic queries (degree, neighbors)
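A minimal sketch of such a graph (undirected and simple; `DynGraph` is an illustrative name, not the final API):

```rust
use std::collections::{HashMap, HashSet};

// Minimal adjacency-list dynamic graph: edge insertion/deletion plus
// degree and neighbor queries, as listed in the step above.
pub struct DynGraph {
    adj: HashMap<usize, HashSet<usize>>,
}

impl DynGraph {
    pub fn new() -> Self {
        DynGraph { adj: HashMap::new() }
    }

    // Returns false if the edge was already present.
    pub fn insert_edge(&mut self, u: usize, v: usize) -> bool {
        let a = self.adj.entry(u).or_default().insert(v);
        let b = self.adj.entry(v).or_default().insert(u);
        a && b
    }

    // Returns false if the edge was absent.
    pub fn delete_edge(&mut self, u: usize, v: usize) -> bool {
        let a = self.adj.get_mut(&u).map_or(false, |s| s.remove(&v));
        let b = self.adj.get_mut(&v).map_or(false, |s| s.remove(&u));
        a && b
    }

    pub fn degree(&self, u: usize) -> usize {
        self.adj.get(&u).map_or(0, |s| s.len())
    }

    pub fn neighbors(&self, u: usize) -> Vec<usize> {
        self.adj.get(&u).map_or_else(Vec::new, |s| s.iter().copied().collect())
    }
}
```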
4. **Implement Karger-Stein**:
- Simple randomized algorithm
- Use for testing correctness
- ~200 lines of code
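The contraction core can be sketched in a few dozen lines. This version is illustrative, not the full Karger-Stein recursion: it assumes a connected input graph, and a small LCG stands in for the `rand` crate so the sketch stays dependency-free and reproducible.

```rust
// Karger's contraction step: contract uniformly random edges until two
// super-vertices remain; the edges still crossing them form a candidate
// cut. Union-find records which original vertices have been merged.
fn contract_once(n: usize, edges: &[(usize, usize)], seed: &mut u64) -> usize {
    let mut rand = |m: usize| -> usize {
        *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        ((*seed >> 33) as usize) % m
    };
    fn find(p: &mut Vec<usize>, mut x: usize) -> usize {
        while p[x] != x {
            p[x] = p[p[x]];
            x = p[x];
        }
        x
    }
    let mut parent: Vec<usize> = (0..n).collect();
    let mut components = n;
    while components > 2 {
        let (u, v) = edges[rand(edges.len())];
        let (ru, rv) = (find(&mut parent, u), find(&mut parent, v));
        if ru != rv {
            parent[ru] = rv; // contract the chosen edge
            components -= 1;
        }
    }
    // Candidate cut: edges whose endpoints lie in different super-vertices.
    edges.iter()
        .filter(|&&(u, v)| find(&mut parent, u) != find(&mut parent, v))
        .count()
}

// Repeat independent trials and keep the smallest cut found; with enough
// trials the true minimum is reported with high probability.
fn karger_min_cut(n: usize, edges: &[(usize, usize)], trials: usize) -> usize {
    let mut seed = 0x9E3779B97F4A7C15u64;
    (0..trials).map(|_| contract_once(n, edges, &mut seed)).min().unwrap()
}
```

On a cycle, any contraction outcome splits the graph into two arcs, so the reported cut is always exactly 2.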
5. **Create test framework**:
- Generate random graphs
- Known min-cut examples
- Property-based tests
6. **Research one paper in depth**:
- Suggested: Thorup 2007
- Read carefully, take notes
- Understand proof sketch
---
## Conclusion
The field of dynamic minimum cut has experienced revolutionary progress in 2024-2025, breaking the long-standing √n barrier with subpolynomial algorithms. For practical Rust implementation, I recommend:
**v1.0 Target**: Thorup's Õ(√n) algorithm
- Well-understood and documented
- Practical performance on real graphs
- Achievable in 12 weeks
- Battle-tested approach
**v2.0 Enhancement**: Jin-Sun-Thorup for small cuts
- Research frontier
- Exact subpolynomial for important special case
- Demonstrates cutting-edge capabilities
**v3.0 Ambitious**: El-Hayek et al. approximate algorithm
- Most general subpolynomial algorithm
- Highly complex implementation
- Consider only after v1.0 proven successful
The key to success: **start simple, test extensively, optimize incrementally**. The research papers are impressive, but a working, well-tested Õ(√n) implementation beats a buggy n^{o(1)} implementation every time.
---
**Document Version**: 1.0
**Last Updated**: December 21, 2025
**Next Review**: After Phase 1 implementation