Merge commit 'd803bfe2b1fe7f5e219e50ac20d6801a0a58ac75' as 'vendor/ruvector'
# SOTA Integration Analysis: agentic-robotics + ruvector

**Document Class:** State of the Art Research Analysis
**Version:** 1.0.0
**Date:** 2026-02-27
**Authors:** RuVector Research Team
**Status:** Technical Proposal

---

## Table of Contents

1. [Executive Summary](#1-executive-summary)
2. [SOTA Context](#2-sota-context)
3. [Framework Profiles](#3-framework-profiles)
4. [Integration Thesis](#4-integration-thesis)
5. [Synergy Map](#5-synergy-map)
6. [Technical Compatibility Assessment](#6-technical-compatibility-assessment)
7. [Key Integration Vectors](#7-key-integration-vectors)
8. [Performance Projections](#8-performance-projections)
9. [Risk Assessment](#9-risk-assessment)
10. [References](#10-references)

---
## 1. Executive Summary

**agentic-robotics** (v0.1.3) is a Rust-native robotics middleware framework that reimplements the ROS communication substrate from scratch, achieving sub-microsecond latency (540ns serialization, 30ns channel messaging) through Zenoh pub/sub, CDR/rkyv zero-copy serialization, and a dual-runtime real-time executor. **ruvector** (v2.0.5) is a comprehensive Rust workspace comprising 100+ crates that span vector database operations (HNSW indexing, hyperbolic embeddings), graph neural networks, attention mechanisms, neuromorphic computing (spiking networks, EWC plasticity, BTSP learning), formal verification, FPGA transformer inference, distributed consensus (Raft), and self-optimizing neural architectures (SONA). Both frameworks are built on overlapping Rust dependency stacks (tokio, serde, rkyv, crossbeam, rayon, parking_lot, nalgebra, NAPI-RS) and target largely identical deployment surfaces: native Rust, Node.js via NAPI, and (for ruvector today, agentic-robotics soon) browser/edge via WASM.

Integrating these two systems creates a platform that does not exist in the current robotics or ML landscape: a single Rust workspace where real-time sensor streams from physical robots flow directly into vector-indexed memory, graph neural network inference, neuromorphic processing, and formally verified decision pipelines -- all at sub-microsecond transport latency and with deterministic scheduling guarantees. This positions the combined platform uniquely against ROS2+PyTorch stacks (which incur Python FFI overhead and GC pauses), NVIDIA Isaac Sim (which requires GPU-heavy infrastructure), and Drake/MuJoCo (which focus on simulation rather than production middleware). The integration is not merely additive; it is multiplicative -- real-time robotics perception fused with learned vector representations and bio-inspired cognition enables closed-loop systems that perceive, learn, and act within a single deterministic runtime.

---
## 2. SOTA Context

### 2.1 Current Landscape of Robotics + ML Integration

The robotics industry has converged on a standard stack: **ROS2** (DDS/RTPS middleware) for communication, **Python** for ML inference (PyTorch, TensorFlow), and **C++** for real-time control loops. This architecture has known pathologies:

| Problem | Root Cause | Impact |
|---------|-----------|--------|
| Python FFI overhead | Cross-language serialization between C++ control and Python ML | 10-100us per inference call |
| GC pauses | Python garbage collector interrupts real-time loops | Unbounded worst-case latency |
| Serialization tax | ROS2 CDR encoding/decoding at every topic boundary | 1-5us per message |
| Memory fragmentation | Allocator pressure from high-frequency message passing | Throughput degradation over time |
| Deployment complexity | Separate runtimes for control (C++), ML (Python), middleware (DDS) | 3+ processes, IPC overhead |

**Key platforms in the current SOTA:**

- **ROS2 Humble/Iron/Jazzy** -- The industry standard. DDS-based pub/sub with rclcpp/rclpy clients. Supports real-time via the rmw (ROS Middleware) layer. Bottleneck: serialization and multi-process IPC.
- **NVIDIA Isaac Sim / Isaac ROS** -- GPU-accelerated simulation and perception. Requires NVIDIA hardware. Tight coupling to the CUDA ecosystem.
- **Drake (MIT/Toyota)** -- Model-based design with multibody physics. Strong formal methods (Lyapunov stability). No native ML integration; relies on external Python bridges.
- **MuJoCo (DeepMind)** -- Physics simulation for RL. Excellent contact dynamics. No production deployment story.
- **PyBullet** -- Lightweight simulation for RL research. Python-only. Not real-time capable.
- **Pinocchio / Crocoddyl** -- Rigid body dynamics and optimal control in C++. Strong math but no perception or ML stack.
- **micro-ROS** -- ROS2 for microcontrollers. Limited ML capability on embedded targets.
- **Zenoh** -- Next-gen pub/sub middleware, used by agentic-robotics as its transport layer. Lower latency than DDS but no ML integration.

### 2.2 The Missing Layer

No existing platform provides a unified Rust runtime that integrates:

1. Real-time robotics middleware (sub-microsecond messaging)
2. Vector-indexed memory (HNSW approximate nearest neighbor search)
3. Graph neural network inference on sensor topologies
4. Neuromorphic processing (spiking networks, BTSP learning)
5. Formally verified decision pipelines
6. Edge deployment (WASM, embedded, NAPI)

This is the gap that agentic-robotics + ruvector fills. The closest analog would be assembling ROS2 + FAISS + PyG + Brian2 + Lean4 + emscripten -- six separate ecosystems with incompatible memory models, runtime assumptions, and deployment targets. The integrated Rust workspace eliminates all cross-language boundaries and provides a single compilation unit from sensor to actuator.

### 2.3 Academic Context

Recent work motivating this integration:

- **PointNet++ / PointTransformer** (Qi et al., 2017; Zhao et al., 2021) -- Point cloud processing with attention mechanisms. agentic-robotics provides PointCloud messages; ruvector-attention provides the attention layers.
- **Neural Radiance Fields for Robotics** (Yen-Chen et al., 2022) -- NeRF-based scene understanding requires fast vector lookups for radiance field queries; HNSW indexing accelerates this by orders of magnitude.
- **Spiking Neural Networks for Robotic Control** (Bing et al., 2018) -- Bio-inspired controllers with temporal coding. ruvector-nervous-system implements spiking networks with e-prop learning rules.
- **Formal Verification of Robotic Systems** (Luckcuck et al., 2019) -- Safety-critical autonomy requires verified decision logic. ruvector-verified provides proof-carrying operations via lean-agentic dependent types.
- **Real-Time Graph Neural Networks** (Gao & Ji, 2022) -- GNN inference on dynamic sensor graphs within control loop deadlines. ruvector-gnn on HNSW topology with ruvector-sparse-inference provides this.

---
## 3. Framework Profiles

### 3.1 Side-by-Side Comparison

| Dimension | agentic-robotics (v0.1.3) | ruvector (v2.0.5) |
|-----------|--------------------------|-------------------|
| **Primary Domain** | Real-time robotics middleware | Vector DB, ML, and cognitive architecture |
| **Crate Count** | 6 | 100+ |
| **Rust Edition** | 2021 | 2021 |
| **Min Rust Version** | 1.70 | 1.77 |
| **License** | MIT / Apache-2.0 | MIT |
| **Async Runtime** | Tokio (dual-pool: 2 HiPri + 4 LoPri) | Tokio (multi-thread) |
| **Serialization** | CDR, JSON, rkyv | rkyv, bincode, serde/JSON |
| **Lock-Free Primitives** | Crossbeam channels | Crossbeam, DashMap, parking_lot |
| **Parallelism** | Rayon (in executor) | Rayon (workspace-wide) |
| **Math Library** | nalgebra, wide (SIMD) | nalgebra, ndarray, simsimd |
| **Node.js Bindings** | NAPI-RS 3.0 (cdylib) | NAPI-RS 2.16 (cdylib) |
| **WASM Support** | Not yet (planned) | wasm-bindgen 0.2 (20+ WASM crates) |
| **Networking** | Zenoh 1.0, rustdds 0.11 | TCP (cluster), HTTP (server) |
| **Persistence** | None (in-memory) | REDB, memmap2, PostgreSQL |
| **Benchmarking** | Criterion 0.5, HDR histogram | Criterion 0.5, proptest |
| **Build Profile** | LTO fat, opt-level 3, codegen-units 1 | LTO fat, opt-level 3, codegen-units 1 |
| **MCP Support** | agentic-robotics-mcp (JSON-RPC 2.0) | mcp-gate (Coherence Gate MCP) |
| **Embedded** | Embassy/RTIC feature flags | RVF eBPF kernel, FPGA backends |
| **Formal Verification** | None | lean-agentic dependent types |

### 3.2 agentic-robotics Crate Architecture

```
agentic-robotics workspace
|
|-- agentic-robotics-core          Pub/sub messaging, CDR/rkyv serialization,
|   |                              Zenoh middleware, Crossbeam channels,
|   |                              Message trait, RobotState/PointCloud/Pose
|   |
|   |-- agentic-robotics-rt        Dual Tokio runtime (2+4 threads),
|   |                              BinaryHeap priority scheduler,
|   |                              HDR histogram latency tracking
|   |
|   |-- agentic-robotics-mcp       MCP 2025-11 server, JSON-RPC 2.0,
|   |                              Tool registration, stdio + SSE (Axum)
|   |
|   |-- agentic-robotics-embedded  Embassy/RTIC feature flags,
|   |                              EmbeddedPriority, tick rate config
|   |
|   |-- agentic-robotics-node      NAPI-RS bindings: AgenticNode,
|                                  AgenticPublisher, AgenticSubscriber
|
|-- agentic-robotics-benchmarks    Criterion: CDR vs JSON, pubsub latency,
                                   executor perf, message size scaling
```

### 3.3 ruvector Crate Architecture (Grouped by Domain)

```
ruvector workspace (100+ crates)
|
|-- VECTOR DATABASE
|   |-- ruvector-core                HNSW indexing, SIMD distance, quantization,
|   |                                REDB persistence, embeddings, arena allocator
|   |-- ruvector-collections         Collection management
|   |-- ruvector-filter              Query filtering and expression engine
|   |-- ruvector-server              HTTP API server
|   |-- ruvector-postgres            PostgreSQL storage backend
|   |-- ruvector-snapshot            Point-in-time snapshots
|
|-- GRAPH & GNN
|   |-- ruvector-graph               Graph data structures
|   |-- ruvector-gnn                 GNN layers on HNSW topology, EWC, cold-tier
|   |-- ruvector-graph-transformer   Proof-gated mutation (8 verified modules)
|   |-- ruvector-dag                 DAG operations
|
|-- ATTENTION & TRANSFORMERS
|   |-- ruvector-attention           Geometric, graph, sparse, sheaf attention
|   |-- ruvector-mincut              Mincut attention partitioning
|   |-- ruvector-mincut-gated-transformer  Gated transformer with mincut
|   |-- ruvector-fpga-transformer    FPGA backend, deterministic latency
|   |-- ruvector-sparse-inference    PowerInfer-style sparse neural inference
|
|-- NEUROMORPHIC / COGNITIVE
|   |-- ruvector-nervous-system      Spiking networks, BTSP, EWC plasticity, HDC
|   |-- ruvector-cognitive-container WASM cognitive containers
|   |-- sona                         Self-Optimizing Neural Architecture (SONA)
|   |-- ruvector-coherence           Coherence measurement for attention
|
|-- DISTRIBUTED / CONSENSUS
|   |-- ruvector-cluster             Distributed sharding
|   |-- ruvector-raft                Raft consensus for metadata
|   |-- ruvector-replication         Data replication
|   |-- ruvector-delta-*             Delta indexing, consensus, graph
|
|-- VERIFICATION & MATH
|   |-- ruvector-verified            Formal proofs via lean-agentic
|   |-- ruvector-math                OT, mixed-curvature, topology-gated
|   |-- ruvector-solver              Constraint solver
|   |-- ruQu / ruqu-*                Quantum-inspired algorithms
|
|-- LLM / INFERENCE
|   |-- ruvllm                       LLM runtime
|   |-- ruvector-temporal-tensor     Temporal tensor compression
|   |-- prime-radiant                Foundation model infrastructure
|
|-- FORMAT & RUNTIME
|   |-- rvf/*                        RVF container format (types, wire, quant,
|   |                                crypto, manifest, index, runtime, kernel,
|   |                                eBPF, launch, server, CLI)
|   |-- rvlite                       Lightweight runtime
|
|-- BINDINGS (per-module)
    |-- *-node                       NAPI-RS bindings (8+ crates)
    |-- *-wasm                       wasm-bindgen bindings (20+ crates)
```

---
## 4. Integration Thesis

### 4.1 Core Argument

The fundamental insight is that **robotics is a perception-cognition-action loop**, and each phase maps to a distinct ruvector subsystem:

```
AGENTIC-ROBOTICS                                 RUVECTOR
(Transport & Scheduling)                         (Intelligence & Memory)

Sensors ──> [Zenoh Pub/Sub] ──> [CDR/rkyv Deserialize] ──> [Vector Indexing]
                 |                                               |
           [RT Executor]                                  [GNN Inference]
           [Priority Sched]                               [Attention Mech]
                 |                                               |
           [Deadline Guard]                               [Nervous System]
                 |                                               |
Actuators <── [Publisher] <── [CDR/rkyv Serialize] <── [Verified Decision]
```

Today, this loop spans multiple processes, languages, and memory spaces. The integrated platform runs it in a single address space with zero-copy message passing between stages.

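The loop above can be sketched end-to-end with nothing but the standard library. In this illustrative skeleton, `mpsc` channels stand in for the Zenoh pub/sub bus and the crossbeam channels, and the cognition stage is a placeholder reduction rather than real inference:

```rust
// Std-only sketch of the perception-cognition-action loop. Channels stand
// in for Zenoh pub/sub; the "cognition" reduction is a placeholder for the
// vector-indexing + GNN/attention stages.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (sensor_tx, sensor_rx) = mpsc::channel::<[f32; 3]>(); // raw observations
    let (decision_tx, decision_rx) = mpsc::channel::<f32>();  // actuator commands

    // Cognition stage: consume observations, emit decisions.
    let cognition = thread::spawn(move || {
        for obs in sensor_rx {
            // Placeholder for indexing + inference: average the reading
            let command = obs.iter().sum::<f32>() / obs.len() as f32;
            decision_tx.send(command).unwrap();
        }
    });

    // Transport stage: sensors publish, actuators subscribe.
    for i in 0..3 {
        sensor_tx.send([i as f32, 0.5, 1.0]).unwrap();
    }
    drop(sensor_tx); // close the input side so both loops terminate

    for command in decision_rx {
        println!("actuate: {command}");
    }
    cognition.join().unwrap();
}
```

In the integrated design each arrow in the diagram is a zero-copy handoff rather than a channel send of an owned value; the skeleton only shows the single-address-space topology.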
### 4.2 Why This Is Multiplicative, Not Additive

### 4.2 Why This Is Multiplicative, Not Additive

Consider a concrete scenario: a mobile robot performing visual-semantic SLAM (Simultaneous Localization and Mapping).

**Without integration (traditional ROS2 + Python stack):**

1. Camera image arrives via DDS (1-5us serialization)
2. Image forwarded to Python feature extractor via bridge (50-100us FFI overhead)
3. Features converted to vectors in NumPy (copy overhead)
4. Vector search in FAISS for loop closure detection (separate process, IPC)
5. Graph optimization in g2o (C++, another process)
6. Map update published back through DDS (1-5us)
7. **Total pipeline latency: 200-500us minimum, unbounded worst-case due to GC**

**With integration (agentic-robotics + ruvector):**

1. Camera image arrives via Zenoh (540ns serialization via rkyv)
2. Feature extraction via ruvector-sparse-inference (Rust, same process)
3. Zero-copy vector handoff to ruvector-core HNSW for loop closure (30ns channel)
4. Graph optimization via ruvector-graph-transformer (same process)
5. Map update published via Zenoh Publisher (540ns serialization)
6. **Total pipeline latency: 5-20us typical, bounded worst-case via RT executor**

This is a 10-100x improvement in end-to-end latency with deterministic scheduling guarantees.

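The claimed improvement can be sanity-checked with a back-of-envelope budget. The per-stage numbers below are illustrative assumptions chosen to fall inside the ranges quoted above, not measurements:

```rust
// Back-of-envelope latency budget for the two pipelines. Stage estimates
// are assumptions consistent with the ranges in the text, not benchmarks.
fn total_ns(stages: &[(&str, u64)]) -> u64 {
    stages.iter().map(|(_, ns)| ns).sum()
}

fn main() {
    // Traditional ROS2 + Python stack (lower-bound estimates, in ns)
    let ros2 = [
        ("DDS serialization", 1_000),
        ("Python FFI bridge", 50_000),
        ("NumPy copy", 10_000),
        ("FAISS IPC round-trip", 100_000),
        ("g2o IPC round-trip", 50_000),
        ("DDS publish", 1_000),
    ];

    // Integrated Rust pipeline (estimates, in ns)
    let integrated = [
        ("rkyv deserialize", 540),
        ("sparse-inference handoff", 30),
        ("HNSW loop-closure query", 2_000),
        ("graph-transformer update", 2_000),
        ("rkyv publish", 540),
    ];

    let a = total_ns(&ros2);
    let b = total_ns(&integrated);
    println!("ROS2 stack lower bound: {} us", a / 1_000);
    println!("Integrated lower bound: {} us", b / 1_000);
    println!("Speedup: {:.0}x", a as f64 / b as f64);
}
```

Under these assumptions the totals land at roughly 212us vs 5us, i.e. inside the 200-500us and 5-20us envelopes stated above.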
### 4.3 Unique Capabilities Enabled

The integration enables capabilities that neither framework can provide alone:

| Capability | Requires agentic-robotics | Requires ruvector | Neither Alone |
|-----------|--------------------------|-------------------|---------------|
| Real-time semantic SLAM | Sensor transport | Vector HNSW search | End-to-end pipeline |
| Neuromorphic robot control | RT executor, pub/sub | Spiking networks, BTSP | Closed-loop spiking control |
| Verified autonomous decisions | Message transport | Formal proofs (lean-agentic) | Verified perception-to-action |
| Swarm intelligence with shared memory | Multi-robot Zenoh mesh | Distributed vector DB (Raft) | Shared spatial memory across robots |
| On-device learning | Embedded runtime | SONA, EWC plasticity | Continual learning on edge |
| Point cloud understanding | PointCloud messages | GNN on point topology | Real-time 3D scene graphs |

---
## 5. Synergy Map

### 5.1 Module-to-Module Mapping

| agentic-robotics Module | ruvector Module(s) | Integration Point | Value Created |
|------------------------|--------------------|--------------------|---------------|
| `agentic-robotics-core::Message` trait | `ruvector-core::types` | Implement `Message` for vector types; embed vectors in robot messages | Typed vector transport over Zenoh |
| `agentic-robotics-core::PointCloud` | `ruvector-gnn` | Feed point clouds into GNN layers operating on kNN graph topology | Real-time 3D scene understanding |
| `agentic-robotics-core::RobotState` | `ruvector-core::VectorDB` | Index robot state trajectories as vectors for similarity search and anomaly detection | Experience-based planning |
| `agentic-robotics-core::Pose` | `ruvector-math` (mixed-curvature) | Represent poses in SE(3) manifold with hyperbolic embeddings | Geometrically faithful pose retrieval |
| `agentic-robotics-core::Publisher` | `ruvector-delta-core` | Publish delta-encoded state changes instead of full snapshots | 3-6x bandwidth reduction |
| `agentic-robotics-core::Subscriber` | `ruvector-attention` | Apply attention-gated filtering on incoming message streams | Selective perception |
| `agentic-robotics-core::serialization` (rkyv) | `ruvector-core` (rkyv 0.8) | Shared zero-copy serialization; no re-encoding between systems | Zero overhead at boundary |
| `agentic-robotics-core::Zenoh` | `ruvector-cluster` | Use Zenoh as transport for distributed vector DB cluster communication | Unified network layer |
| `agentic-robotics-rt::ROS3Executor` | `ruvector-sparse-inference` | Schedule ML inference tasks with priority and deadline guarantees | Deterministic inference latency |
| `agentic-robotics-rt::PriorityScheduler` | `ruvector-nervous-system` | Priority-schedule spiking network ticks within control loops | Real-time neuromorphic control |
| `agentic-robotics-rt::LatencyTracker` | `ruvector-profiler` | Unified latency histograms across robotics and ML pipelines | End-to-end observability |
| `agentic-robotics-mcp` | `mcp-gate` | Bridge robotics MCP tools with coherence-gated ML tools | Unified MCP tool surface |
| `agentic-robotics-embedded` | `ruvector-fpga-transformer` | FPGA inference co-processor controlled by embedded runtime | Hardware-accelerated edge AI |
| `agentic-robotics-embedded` | `ruvector-nervous-system` (HDC) | Hyperdimensional computing on microcontrollers for lightweight cognition | Ultra-low-power robot cognition |
| `agentic-robotics-node` | `ruvector-node`, `ruvector-gnn-node` | Unified TypeScript API for robotics + ML | Single JS/TS development surface |
| `agentic-robotics-benchmarks` | `ruvector-bench` | Combined benchmark suite measuring end-to-end pipeline performance | Integrated performance regression testing |

### 5.2 Data Flow Diagram

```
            AGENTIC-ROBOTICS LAYER
================================================
 Sensors          Zenoh Mesh           Actuators
 [LiDAR] ----+                    +---- [Motors]
 [Camera] ---+--> Pub/Sub Bus <---+---- [Grippers]
 [IMU] ------+    (30ns chan)     +---- [LEDs]
 [Force] ----+         |          +---- [Speakers]
                       |
                rkyv zero-copy
                       |
================================================
              INTEGRATION BRIDGE
================================================
                       |
        +--------------+--------------+
        |              |              |
  [Vector Index]   [GNN Layer]  [Nervous Sys]
  ruvector-core   ruvector-gnn  ruvector-ns
  HNSW search     Point graph   Spiking nets
  ~2.5K qps       GNN forward   BTSP learn
        |              |              |
        +--------------+--------------+
                       |
                  [Attention]
              ruvector-attention
              Graph/sparse/sheaf
                       |
               [Decision Engine]
               ruvector-verified
               Proof-carrying ops
                ruvector-solver
                       |
                [Delta Publish]
              ruvector-delta-core
               Compressed output
                       |
================================================
            AGENTIC-ROBOTICS LAYER
================================================
                       |
                Zenoh Publisher
               (540ns serialize)
                       |
                   Actuators
```

---
## 6. Technical Compatibility Assessment

### 6.1 Shared Dependency Matrix

| Dependency | agentic-robotics Version | ruvector Version | Compatible | Notes |
|-----------|-------------------------|-----------------|------------|-------|
| `tokio` | 1.47 (full) | 1.41 (rt-multi-thread, sync, macros) | YES | Semver compatible; workspace unifies to 1.47 |
| `serde` | 1.0 (derive) | 1.0 (derive) | YES | Identical |
| `serde_json` | 1.0 | 1.0 | YES | Identical |
| `rkyv` | 0.8 | 0.8 | YES | Identical; critical for zero-copy bridge |
| `crossbeam` | 0.8 | 0.8 | YES | Identical |
| `rayon` | 1.10 | 1.10 | YES | Identical |
| `parking_lot` | 0.12 | 0.12 | YES | Identical |
| `nalgebra` | (via wide SIMD) | 0.33 | YES | ruvector uses nalgebra directly |
| `napi` | 3.0 | 2.16 | MINOR | Both use NAPI-RS; version gap manageable via workspace |
| `napi-derive` | 3.0 | 2.16 | MINOR | Same as above |
| `criterion` | 0.5 | 0.5 | YES | Identical |
| `anyhow` | 1.0 | 1.0 | YES | Identical |
| `thiserror` | 1.0/2.0 (mixed) | 2.0 | MINOR | thiserror 1.x and 2.x can coexist; align to 2.0 |
| `tracing` | 0.1 | 0.1 | YES | Identical |
| `rand` | 0.8 | 0.8 | YES | Identical |
| `wasm-bindgen` | Not used | 0.2 | N/A | ruvector only; agentic-robotics can adopt |

**Compatibility Score: 12/16 fully compatible, 3 minor version gaps, 1 dependency not shared. No blocking conflicts.**

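The version gaps above can be closed at the workspace level. A sketch of a unified `[workspace.dependencies]` table, assuming a single merged Cargo workspace (the `napi = "3.0"` pin assumes ruvector's bindings are ported up from 2.16):

```toml
# Hypothetical merged-workspace manifest: pin shared dependencies once so
# both dependency trees resolve to a single copy. A single rkyv version is
# critical, since archived types cross the zero-copy bridge.
[workspace.dependencies]
tokio = { version = "1.47", features = ["rt-multi-thread", "sync", "macros"] }
serde = { version = "1.0", features = ["derive"] }
rkyv = "0.8"
crossbeam = "0.8"
rayon = "1.10"
parking_lot = "0.12"
nalgebra = "0.33"
thiserror = "2.0"   # align the mixed 1.x/2.x usage to 2.0
napi = "3.0"        # assumes ruvector bindings migrate from 2.16
criterion = "0.5"
```

Member crates would then declare e.g. `tokio = { workspace = true }` so every crate inherits the same resolved version.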
### 6.2 Rust Edition and Toolchain

| Parameter | agentic-robotics | ruvector | Action Required |
|-----------|-----------------|----------|----------------|
| Rust edition | 2021 | 2021 | None |
| Minimum Rust version | 1.70 | 1.77 | Align to 1.77 (ruvector minimum) |
| Resolver | 2 | 2 | None |
| LTO profile | fat | fat | None |
| opt-level (release) | 3 | 3 | None |
| codegen-units (release) | 1 | 1 | None |
| strip (release) | true | true | None |
| panic strategy | unwind | unwind | None |

### 6.3 Build Profile Alignment

Both frameworks use identical aggressive release profiles:

```toml
[profile.release]
opt-level = 3
lto = "fat"
codegen-units = 1
strip = true
panic = "unwind"

[profile.bench]
inherits = "release"
debug = true
```

This means integrated benchmarks will reflect production-equivalent binary optimization with no profile conflicts.

### 6.4 NAPI Binding Parity

Both frameworks produce `cdylib` artifacts for Node.js consumption via NAPI-RS:

| Feature | agentic-robotics-node | ruvector-node |
|---------|----------------------|---------------|
| Crate type | cdylib | cdylib |
| NAPI version | 3.0 | 2.16 |
| Build tool | napi-build 2.3 | napi-build 2.1 |
| Async support | Tokio bridge | Tokio bridge |
| Features | napi9, async, tokio_rt | napi9, async, tokio_rt |

**Integration path:** Create a unified `ruvector-robotics-node` crate that re-exports both `agentic-robotics-node` and `ruvector-node` types, providing a single `.node` binary for TypeScript consumers.

### 6.5 WASM Parity

ruvector has extensive WASM support (20+ crates). agentic-robotics does not yet compile to WASM. Integration plan:

1. agentic-robotics-core can be compiled to WASM by gating Zenoh/DDS behind feature flags and using WebSocket-based transport
2. rkyv serialization works in WASM (already proven by ruvector)
3. Crossbeam channels work in WASM with the `wasm32-unknown-unknown` target
4. The RT executor needs a WASM-compatible scheduler (requestAnimationFrame or Web Workers)

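Step 1 can be sketched as feature gating in a hypothetical `agentic-robotics-core` manifest; the `wasm-transport` feature name and the WebSocket transport are assumptions, not existing flags:

```toml
# Hypothetical feature gating so the crate builds for wasm32-unknown-unknown
# without pulling in Zenoh/DDS (which require native sockets).
[features]
default = ["zenoh-transport"]
zenoh-transport = ["dep:zenoh", "dep:rustdds"]
wasm-transport = []   # WebSocket-based transport for browser/edge targets

[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
zenoh = { version = "1.0", optional = true }
rustdds = { version = "0.11", optional = true }
```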
---
## 7. Key Integration Vectors

### 7.1 Vector-Indexed Robot Memory

**Concept:** Every robot observation (sensor reading, state, event) is indexed as a vector in HNSW, creating an experience database that supports approximate nearest neighbor queries for analogical reasoning.

**Implementation:**

```rust
use agentic_robotics_core::{Message, RobotState, Subscriber};
use ruvector_core::{VectorDB, HnswIndex, DistanceMetric};

/// Bridge: robot state -> vector index
struct RobotMemory {
    db: VectorDB,
    state_sub: Subscriber<RobotState>,
}

/// Encode a robot state as a 6-D vector [pos_x, pos_y, pos_z, vel_x, vel_y, vel_z]
fn encode(state: &RobotState) -> Vec<f32> {
    state.position.iter()
        .chain(state.velocity.iter())
        .map(|&v| v as f32)
        .collect()
}

impl RobotMemory {
    async fn index_experience(&mut self) -> anyhow::Result<()> {
        while let Some(state) = self.state_sub.recv().await {
            self.db.insert(state.timestamp as u64, &encode(&state))?;
        }
        Ok(())
    }

    /// Find the k most similar past experiences to the current state
    fn recall(&self, current: &RobotState, k: usize) -> Vec<(u64, f32)> {
        self.db.search(&encode(current), k)
    }
}
```

**Expected Impact:** Enables experience-based planning where the robot recalls similar past situations and their outcomes before making decisions.

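As a self-contained illustration of what `recall` computes, the sketch below brute-forces nearest neighbours over 6-D state vectors using only the standard library; in the real design the HNSW index replaces this linear scan with an approximate sub-linear search:

```rust
// Std-only stand-in for vector recall: exact nearest-neighbour search over
// (timestamp, 6-D state vector) pairs. Illustrative only; HNSW replaces it.
fn l2(a: &[f32; 6], b: &[f32; 6]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f32>().sqrt()
}

/// Return the k (timestamp, distance) pairs closest to `query`, nearest first.
fn recall(db: &[(u64, [f32; 6])], query: &[f32; 6], k: usize) -> Vec<(u64, f32)> {
    let mut scored: Vec<(u64, f32)> =
        db.iter().map(|(id, v)| (*id, l2(v, query))).collect();
    scored.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    scored.truncate(k);
    scored
}

fn main() {
    let db = vec![
        (1, [0.0, 0.0, 0.0, 0.3, 0.0, 0.0]),
        (2, [5.0, 5.0, 0.0, 0.0, 0.0, 0.0]),
        (3, [0.1, 0.1, 0.0, 0.1, 0.0, 0.0]),
    ];
    let query = [0.0, 0.1, 0.0, 0.1, 0.0, 0.0];
    // Experience 3 is the closest match, experience 2 is far away
    println!("{:?}", recall(&db, &query, 2));
}
```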
### 7.2 Real-Time GNN on Point Cloud Topology

**Concept:** Transform incoming PointCloud messages into a kNN graph and run GNN inference to produce per-point semantic embeddings within the RT executor's deadline.

**Implementation:**

```rust
use agentic_robotics_core::{PointCloud, Point3D};
use agentic_robotics_rt::{ROS3Executor, Priority, Deadline};
use ruvector_gnn::layer::GNNLayer;
use ruvector_core::index::HnswIndex;

struct PointCloudProcessor {
    gnn: GNNLayer,
    knn_index: HnswIndex,
}

impl PointCloudProcessor {
    /// Process a point cloud within a 1ms deadline
    fn process(&mut self, cloud: &PointCloud) -> Vec<Vec<f32>> {
        // Step 1: Build kNN graph from point positions (~100us for 10K points)
        let points_as_vectors: Vec<Vec<f32>> = cloud.points.iter()
            .map(|p| vec![p.x, p.y, p.z])
            .collect();
        self.knn_index.rebuild(&points_as_vectors);

        // Step 2: Extract adjacency from HNSW layers
        let adjacency = self.knn_index.get_adjacency(/* k = */ 16);

        // Step 3: GNN forward pass on the graph (~200-500us)
        self.gnn.forward(&points_as_vectors, &adjacency)
    }
}

// Schedule with the RT executor
async fn run_perception(executor: &ROS3Executor, processor: &mut PointCloudProcessor) {
    executor.spawn_rt(
        Priority::High,
        Deadline::from_millis(1), // 1ms hard deadline
        async {
            // Receive and process the point cloud
            let cloud = receive_point_cloud().await;
            let embeddings = processor.process(&cloud);
            publish_embeddings(embeddings).await;
        },
    ).unwrap();
}
```

**Expected Impact:** Real-time 3D scene understanding at 1kHz with bounded latency, replacing Python-based point cloud processing pipelines.

### 7.3 Neuromorphic Robot Controller

**Concept:** Replace PID controllers with spiking neural networks from ruvector-nervous-system, trained online via BTSP (Behavioral Time-Scale Plasticity) learning rules, executing within the real-time scheduler.

**Implementation:**

```rust
use agentic_robotics_core::{RobotState, Publisher};
use agentic_robotics_rt::{Priority, Deadline};
use ruvector_nervous_system::spiking::{SpikingNetwork, LIFNeuron};
use ruvector_nervous_system::plasticity::btsp::BTSPRule;

struct NeuromorphicController {
    network: SpikingNetwork<LIFNeuron>,
    learning_rule: BTSPRule,
    cmd_pub: Publisher<VelocityCommand>,
}

impl NeuromorphicController {
    /// Run one control tick (target: <100us)
    fn tick(&mut self, state: &RobotState, dt_us: u64) {
        // Encode robot state as spike trains (rate coding)
        let input_spikes = self.encode_state(state);

        // Propagate through the spiking network
        let output_spikes = self.network.step(input_spikes, dt_us);

        // Online learning: adjust synaptic weights
        self.learning_rule.update(&mut self.network, dt_us);

        // Decode output spikes to motor commands
        let command = self.decode_command(output_spikes);

        // Publish to actuators
        self.cmd_pub.publish_sync(&command);
    }

    fn encode_state(&self, state: &RobotState) -> Vec<f64> {
        // Rate-code position and velocity into spike frequencies
        state.position.iter()
            .chain(state.velocity.iter())
            .map(|&v| (v * 100.0).clamp(0.0, 1000.0)) // Hz
            .collect()
    }

    fn decode_command(&self, spikes: Vec<f64>) -> VelocityCommand {
        VelocityCommand {
            linear: [spikes[0] / 100.0, spikes[1] / 100.0, 0.0],
            angular: [0.0, 0.0, spikes[2] / 100.0],
        }
    }
}
```

**Expected Impact:** Bio-inspired controllers that adapt online to changing dynamics without retraining, operating within real-time bounds.

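To make the spiking step concrete, here is a single leaky integrate-and-fire neuron stepped with Euler integration, std-only; the `tau_us` and `threshold` values are illustrative and are not ruvector-nervous-system defaults:

```rust
// One LIF neuron, Euler-integrated: a minimal sketch of what a spiking
// network does per neuron per tick. Parameters are illustrative.
struct Lif {
    v: f64,          // membrane potential
    tau_us: f64,     // membrane time constant
    threshold: f64,  // spike threshold
}

impl Lif {
    /// Integrate the input drive for dt_us microseconds; return true on spike.
    fn step(&mut self, input: f64, dt_us: f64) -> bool {
        // Euler step of dv/dt = (input - v) / tau
        self.v += (input - self.v) * (dt_us / self.tau_us);
        if self.v >= self.threshold {
            self.v = 0.0; // reset after spike
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut n = Lif { v: 0.0, tau_us: 1_000.0, threshold: 0.8 };
    let mut spikes = 0;
    for _ in 0..50 {
        if n.step(1.0, 100.0) { spikes += 1; } // constant drive, 100us ticks
    }
    println!("spikes in 5ms: {spikes}");
}
```

With constant drive the neuron fires periodically; stronger input (higher rate-coded frequency) shortens the period, which is what makes rate coding of state variables work as a control signal.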
### 7.4 Formally Verified Decision Pipeline
|
||||
|
||||
**Concept:** Use ruvector-verified to attach lean-agentic proofs to decision outputs, guaranteeing that autonomous actions satisfy formal safety specifications before being published to actuators.
|
||||
|
||||
**Implementation:**
|
||||
|
||||

```rust
use agentic_robotics_core::Publisher;
use ruvector_verified::{ProofContext, VerifiedOp, ProofCarrying};
use ruvector_solver::ConstraintSolver;

struct VerifiedAutonomy {
    proof_ctx: ProofContext,
    solver: ConstraintSolver,
    cmd_pub: Publisher<VerifiedCommand>,
}

impl VerifiedAutonomy {
    /// Generate a command with a machine-checkable safety proof
    fn decide(&self, perception: &SceneGraph) -> anyhow::Result<VerifiedCommand> {
        // Step 1: Solver produces candidate action
        let candidate = self.solver.solve(perception)?;

        // Step 2: Generate formal proof that the action satisfies safety invariants:
        // - No collision with obstacles within the safety margin
        // - Velocity within joint limits
        // - Torque within actuator bounds
        let proof = self.proof_ctx.prove(
            "safety_invariant",
            &[
                ("no_collision", candidate.min_obstacle_distance > 0.5),
                ("velocity_bound", candidate.max_velocity < 2.0),
                ("torque_bound", candidate.max_torque < 100.0),
            ],
        )?;

        // Step 3: Attach proof to command (proof-carrying code pattern)
        Ok(VerifiedCommand {
            action: candidate,
            proof: proof.serialize(),
        })
    }
}
```

**Expected Impact:** Provably safe autonomous decisions -- a requirement for deployment in safety-critical domains (surgical robotics, autonomous vehicles, industrial automation).

### 7.5 Distributed Swarm with Shared Vector Memory

**Concept:** Multiple robots share a distributed vector database over Zenoh transport, using Raft consensus for consistent spatial memory. Each robot indexes its observations and queries the swarm's collective experience.

**Implementation:**

```rust
use agentic_robotics_core::Zenoh;
use ruvector_cluster::ShardedDB;
use ruvector_raft::RaftNode;

struct SwarmMemory {
    zenoh: Zenoh,
    local_shard: ShardedDB,
    raft: RaftNode,
}

impl SwarmMemory {
    /// Index a local observation and replicate it to the swarm
    async fn observe(&mut self, observation: Observation) -> anyhow::Result<()> {
        let vector = observation.to_vector();

        // Index locally
        self.local_shard.insert(observation.id, &vector)?;

        // Propose to the Raft cluster for replicated metadata
        self.raft.propose(RaftEntry::Insert {
            id: observation.id,
            shard: self.local_shard.shard_id(),
            vector_hash: hash(&vector),
        }).await?;

        // Publish an observation summary to the swarm via Zenoh
        self.zenoh.publish(
            "/swarm/observations",
            &observation.summary(),
        ).await?;

        Ok(())
    }

    /// Query the entire swarm's collective memory
    async fn recall_swarm(&self, query: &[f32], k: usize) -> anyhow::Result<Vec<SwarmResult>> {
        // Scatter the query to all shards via Zenoh
        let responses = self.zenoh.query_all(
            "/swarm/memory/search",
            &SearchRequest { vector: query.to_vec(), k },
        ).await?;

        // Gather and merge per-shard results into a global top-k
        let mut results: Vec<SwarmResult> = responses.into_iter()
            .flat_map(|r| r.results)
            .collect();
        results.sort_by(|a, b| a.distance.partial_cmp(&b.distance).unwrap());
        results.truncate(k);
        Ok(results)
    }
}
```
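
The gather/merge step at the end of the recall path is pure and easy to verify in isolation. A minimal sketch, with `SwarmResult` as a local stand-in for the real result type:

```rust
#[derive(Debug, Clone, PartialEq)]
struct SwarmResult {
    id: u64,
    distance: f32,
}

// Merge per-shard result lists into a single global top-k,
// mirroring the sort/truncate logic in recall_swarm.
fn merge_top_k(responses: Vec<Vec<SwarmResult>>, k: usize) -> Vec<SwarmResult> {
    let mut results: Vec<SwarmResult> = responses.into_iter().flatten().collect();
    results.sort_by(|a, b| a.distance.partial_cmp(&b.distance).unwrap());
    results.truncate(k);
    results
}

fn main() {
    let shard_a = vec![
        SwarmResult { id: 1, distance: 0.9 },
        SwarmResult { id: 2, distance: 0.1 },
    ];
    let shard_b = vec![SwarmResult { id: 3, distance: 0.5 }];

    let top2 = merge_top_k(vec![shard_a, shard_b], 2);
    assert_eq!(top2[0].id, 2); // closest match first
    assert_eq!(top2[1].id, 3);
}
```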

**Expected Impact:** Multi-robot systems that collectively build and query a shared spatial understanding, enabling coordination without a central server.

### 7.6 MCP-Unified Tool Surface

**Concept:** Merge agentic-robotics-mcp (robot control tools) and mcp-gate (ML/coherence tools) into a unified MCP server that exposes both robotics actions and ML inference as LLM-callable tools.

**Implementation:**

```rust
// Unified MCP tool registry combining robotics and ML capabilities
//
// Robot tools (from agentic-robotics-mcp):
//   - move_robot(x, y, z)     -> Move to position
//   - get_sensor(sensor_id)   -> Read sensor value
//   - set_gripper(open: bool) -> Control gripper
//
// ML tools (from mcp-gate):
//   - vector_search(query, k) -> Nearest neighbor search
//   - gnn_infer(graph)        -> GNN inference on graph
//   - verify_action(action)   -> Formal verification
//
// Combined tools (new):
//   - perceive_and_plan(scene) -> End-to-end perception -> planning
//   - learn_from_demo(demo)    -> One-shot learning from demonstration

struct UnifiedMcpServer {
    robotics_tools: AgenticRoboticsMcp,
    ml_tools: McpGate,
}
```

**Expected Impact:** LLM-driven robot control with full access to both physical actions and learned models through a single protocol.
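
The unification amounts to putting both tool surfaces behind one name-to-handler map. A hedged sketch of that dispatch idea, with a placeholder string-based handler signature standing in for the MCP JSON-RPC call shape (the real server would delegate to `AgenticRoboticsMcp` and `McpGate`):

```rust
use std::collections::HashMap;

// Placeholder handler type; the real signature carries JSON-RPC params.
type ToolHandler = Box<dyn Fn(&str) -> String>;

struct UnifiedMcpServer {
    tools: HashMap<String, ToolHandler>,
}

impl UnifiedMcpServer {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }

    fn register(&mut self, name: &str, handler: ToolHandler) {
        self.tools.insert(name.to_string(), handler);
    }

    // One namespace for both surfaces: an LLM sees a single tool list.
    fn call(&self, name: &str, args: &str) -> Option<String> {
        self.tools.get(name).map(|h| h(args))
    }
}

fn main() {
    let mut server = UnifiedMcpServer::new();
    // Robotics tool (agentic-robotics-mcp side) and ML tool (mcp-gate side).
    server.register("move_robot", Box::new(|args| format!("moving to {args}")));
    server.register("vector_search", Box::new(|args| format!("searching {args}")));

    assert_eq!(server.call("move_robot", "1,2,0").unwrap(), "moving to 1,2,0");
    assert!(server.call("unknown_tool", "").is_none());
}
```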

### 7.7 FPGA-Accelerated Edge Inference in RT Loop

**Concept:** Use ruvector-fpga-transformer as a co-processor within the agentic-robotics-embedded runtime, offloading transformer inference to FPGA while the CPU handles control.

```
CPU (agentic-robotics-embedded)         FPGA (ruvector-fpga-transformer)
================================        ================================
[Sensor Read]    -----> DMA  -------->  [Quantized Attention]
[RT Scheduler]                          [Q4 MatMul Pipeline]
[Control Loop]   <----- DMA  <--------  [Softmax (LUT/PWL)]
[Actuator Write]                        [Top-K Selection]

                                        Deterministic: <500us per token
```

**Expected Impact:** Transformer-based perception or language understanding running on edge hardware with deterministic latency, suitable for embedded robotic platforms without GPU.
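
On the CPU side, the contract implied by the diagram is that each offloaded inference either returns within its fixed budget or the control loop falls back. A hedged sketch of that deadline wrapper (the closure stands in for the DMA round trip; `with_deadline` is an illustrative helper, not an agentic-robotics API):

```rust
use std::time::{Duration, Instant};

// Run `f`, returning Ok(result) if it completed within `deadline`,
// or Err(elapsed) so the caller can take a fallback path.
fn with_deadline<T>(deadline: Duration, f: impl FnOnce() -> T) -> Result<T, Duration> {
    let start = Instant::now();
    let out = f();
    let elapsed = start.elapsed();
    if elapsed <= deadline {
        Ok(out)
    } else {
        Err(elapsed)
    }
}

fn main() {
    // 500us per-token budget from the diagram above.
    let budget = Duration::from_micros(500);
    let result = with_deadline(budget, || 42u32);
    assert_eq!(result.unwrap(), 42);
}
```

A real integration would also need to cancel or ignore the late FPGA result rather than merely detect it, which is where the RT executor's deadline scheduling comes in.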

### 7.8 Temporal Tensor Compression for Sensor Streams

**Concept:** Use ruvector-temporal-tensor to compress high-frequency sensor streams (IMU at 1kHz, LiDAR at 20Hz) with tiered quantization, reducing storage and network bandwidth while maintaining temporal fidelity.

```rust
use agentic_robotics_core::Subscriber;
use ruvector_temporal_tensor::{TemporalCompressor, QuantTier};

struct SensorCompressor {
    compressor: TemporalCompressor,
    imu_sub: Subscriber<ImuReading>,
}

impl SensorCompressor {
    async fn compress_stream(&mut self) {
        while let Some(reading) = self.imu_sub.recv().await {
            let tensor = reading.to_tensor(); // [accel_xyz, gyro_xyz, mag_xyz] = 9D

            // Hot tier:  full precision (recent 100ms)
            // Warm tier: FP16 quantized (recent 10s)
            // Cold tier: INT8 quantized (historical)
            self.compressor.ingest(tensor, reading.timestamp);
        }
    }
}
```

**Expected Impact:** 4-32x compression of sensor history with tiered precision, enabling long-horizon reasoning on resource-constrained robots.
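
The low end of that range follows directly from the cold tier: storing f32 samples as INT8 is a 4x size reduction. A minimal sketch of symmetric INT8 quantization (illustrative only; `quantize_int8` and `dequantize` are not the ruvector-temporal-tensor API):

```rust
// Symmetric INT8 quantization: map [-max_abs, max_abs] onto [-127, 127].
fn quantize_int8(samples: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = samples.iter().fold(0.0f32, |m, &v| m.max(v.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = samples.iter().map(|&v| (v / scale).round() as i8).collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    // One 9D IMU sample: accel, gyro, magnetometer.
    let imu = [0.12f32, -0.34, 9.81, 0.0, 0.5, -9.81, 0.01, 0.02, 0.03];
    let (q, scale) = quantize_int8(&imu);

    // 4 bytes -> 1 byte per sample: 4x compression for the cold tier.
    assert_eq!(imu.len() * 4 / q.len(), 4);

    // Rounding error is bounded by half a quantization step.
    let back = dequantize(&q, scale);
    for (a, b) in imu.iter().zip(back.iter()) {
        assert!((a - b).abs() <= scale / 2.0 + 1e-6);
    }
}
```

The larger 32x figure would come from combining coarser quantization with temporal delta encoding across adjacent samples.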

---

## 8. Performance Projections

### 8.1 Latency Budget for Integrated Pipeline

Target: Complete perception-to-action loop within 1ms (1kHz control rate).

| Stage | Component | Projected Latency | Basis |
|-------|-----------|-------------------|-------|
| Sensor deserialize | agentic-robotics-core (rkyv) | 540ns | Measured benchmark |
| Channel transport | Crossbeam (lock-free) | 30ns | Measured benchmark |
| Vector indexing (HNSW) | ruvector-core | 50-200us | Benchmarked: ~2.5K qps on 10K vectors |
| GNN forward pass | ruvector-gnn | 100-500us | Estimated from layer complexity |
| Attention gating | ruvector-attention | 10-50us | Benchmarked sparse attention |
| Decision + verify | ruvector-verified + solver | 10-100us | Benchmarked proof generation |
| Delta encoding | ruvector-delta-core | 1-5us | Estimated from compression benchmarks |
| Command serialize | agentic-robotics-core (rkyv) | 540ns | Measured benchmark |
| Channel transport | Crossbeam (lock-free) | 30ns | Measured benchmark |
| **Total** | **End-to-end** | **~200-900us** | **Within 1ms budget** |
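
The claim that even the worst-case column stays under the 1ms budget can be checked mechanically. A small sketch summing the upper bound of each stage from the table:

```rust
// Worst-case projected latency per stage, in nanoseconds (from the table above).
fn worst_case_total_ns() -> u64 {
    let stages_ns: [(&str, u64); 9] = [
        ("sensor deserialize", 540),
        ("channel transport", 30),
        ("HNSW indexing", 200_000),
        ("GNN forward pass", 500_000),
        ("attention gating", 50_000),
        ("decision + verify", 100_000),
        ("delta encoding", 5_000),
        ("command serialize", 540),
        ("channel transport", 30),
    ];
    stages_ns.iter().map(|(_, ns)| ns).sum()
}

fn main() {
    let total = worst_case_total_ns();
    assert!(total < 1_000_000, "exceeds 1ms budget: {total}ns");
    println!("worst case: ~{}us", total / 1000); // ~856us
}
```

The sum (~856us) leaves roughly 150us of headroom at 1kHz, which is thin; the projections assume the GNN stage rarely hits its 500us ceiling.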

### 8.2 Throughput Projections

| Metric | Standalone agentic-robotics | Standalone ruvector | Integrated |
|--------|----------------------------|---------------------|------------|
| Message throughput | 33M msgs/sec (channel) | N/A | 33M msgs/sec (unchanged) |
| Serialization rate | 1.85M ser/sec | ~500K vectors/sec (HNSW insert) | 500K vectors/sec (bottleneck: HNSW) |
| Inference throughput | N/A | ~2.5K queries/sec (HNSW search) | 2.5K queries/sec (parallel with messaging) |
| GNN forward passes | N/A | ~1-10K/sec (layer dependent) | 1-10K/sec (scheduled by RT executor) |
| Spiking network ticks | N/A | ~100K ticks/sec (1K neurons) | 100K ticks/sec (bounded by deadline) |

### 8.3 Memory Footprint

| Component | Estimated Memory | Notes |
|-----------|------------------|-------|
| agentic-robotics runtime | 10-50 MB | Zenoh session + Tokio + channel buffers |
| ruvector-core (10K vectors, 512D) | 20-40 MB | HNSW graph + vector storage |
| ruvector-gnn (3-layer) | 5-20 MB | Weight matrices + activation buffers |
| ruvector-nervous-system (1K neurons) | 1-5 MB | Spike history + synaptic weights |
| ruvector-verified (proof cache) | 1-10 MB | Proof arena + verification state |
| **Total** | **40-130 MB** | **Suitable for embedded Linux (RPi 4+)** |

### 8.4 Comparison with Competing Stacks

| Stack | E2E Latency | Memory | Languages | Deployment |
|-------|-------------|--------|-----------|------------|
| ROS2 + PyTorch + FAISS | 200-500us (unbounded) | 500MB-2GB | C++/Python | Multi-process |
| Isaac ROS + TensorRT | 50-200us | 2-8GB (GPU) | C++/Python/CUDA | GPU required |
| Drake + JAX | 100-1000us | 500MB-1GB | C++/Python | Multi-process |
| **agentic-robotics + ruvector** | **200-900us (bounded)** | **40-130MB** | **Rust (single)** | **Single process** |

Key differentiator: **bounded worst-case latency** from a single-process Rust runtime with no GC, no FFI, and deterministic scheduling.

---

## 9. Risk Assessment

### 9.1 Technical Risks

| Risk | Severity | Likelihood | Mitigation |
|------|----------|------------|------------|
| **NAPI version mismatch** (3.0 vs 2.16) | Low | Medium | Align workspace to NAPI 3.0; backward-compatible API changes are minimal |
| **thiserror version split** (1.x vs 2.x) | Low | Low | Both versions can coexist in a cargo workspace; align to 2.0 over time |
| **Zenoh dependency weight** (~50 transitive deps) | Medium | High | Feature-gate Zenoh behind a `robotics` flag; allow in-process-only mode without Zenoh |
| **HNSW rebuild latency in RT loop** | High | Medium | Use incremental insert (not full rebuild); pre-allocate graph capacity; schedule rebuilds in a low-priority pool |
| **GNN inference exceeding RT deadline** | High | Medium | Profile and prune GNN layers; use sparse inference; fall back to a simpler model under deadline pressure |
| **rkyv version drift** | Medium | Low | Currently identical (0.8); pin in workspace Cargo.toml |
| **Embedded memory constraints** | High | Medium | Feature-gate ML components; provide a `no_std`-compatible subset; use INT4/INT8 quantization |
| **Build time increase** (100+ crates) | Medium | High | Use workspace feature flags; conditional compilation; incremental builds |
| **Zenoh + Raft consensus interaction** | Medium | Medium | Separate concerns: Zenoh for real-time messaging, Raft for metadata only; do not run Raft proposals in the RT critical path |
| **WASM target for agentic-robotics** | Medium | Medium | Requires abstracting the Zenoh transport; use a WebSocket fallback; gate DDS behind a feature flag |
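
The HNSW rebuild mitigation reduces to a familiar pattern: the RT path only enqueues rebuild requests, and a background worker drains them. A minimal sketch with std primitives (the real system would hand this to the agentic-robotics-rt low-priority pool; `RebuildRequest` is a hypothetical job type):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical rebuild job; stands in for an HNSW compaction task.
struct RebuildRequest {
    shard: usize,
}

// Enqueue `n` rebuild requests and drain them on a background worker,
// keeping the expensive work off the caller's (RT) thread.
fn run_rebuilds(n: usize) -> usize {
    let (tx, rx) = mpsc::channel::<RebuildRequest>();

    // Low-priority worker: rebuilds happen off the RT critical path.
    let worker = thread::spawn(move || {
        let mut rebuilt = 0usize;
        while let Ok(req) = rx.recv() {
            let _ = req.shard; // the expensive index rebuild would run here
            rebuilt += 1;
        }
        rebuilt
    });

    // The RT path only enqueues; it never blocks on a rebuild.
    for shard in 0..n {
        tx.send(RebuildRequest { shard }).expect("worker alive");
    }
    drop(tx); // close the channel so the worker drains and exits
    worker.join().expect("worker panicked")
}

fn main() {
    assert_eq!(run_rebuilds(3), 3);
}
```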

### 9.2 Architectural Risks

| Risk | Description | Mitigation |
|------|-------------|------------|
| **Scope creep** | Integration surface is massive (100+ crates x 6 crates) | Prioritize 3 integration vectors first: vector memory, GNN perception, verified decisions |
| **Abstraction leakage** | ruvector internals bleeding into the robotics API | Define clean trait boundaries; use newtype wrappers for cross-crate types |
| **Testing complexity** | End-to-end tests require both robotics and ML components | Create an integration test harness with mock sensors and deterministic GNN weights |
| **Documentation debt** | Two large codebases with different documentation styles | Establish unified doc standards; generate cross-reference API docs |

### 9.3 Recommended Phasing

| Phase | Scope | Timeline | Deliverable |
|-------|-------|----------|-------------|
| **Phase 1** | Zero-copy bridge (rkyv shared types, Message trait impl) | 2-4 weeks | `ruvector-robotics-bridge` crate |
| **Phase 2** | Vector-indexed robot memory + RT scheduling of HNSW search | 4-6 weeks | `ruvector-robotics-memory` crate |
| **Phase 3** | GNN on PointCloud + attention pipeline | 6-8 weeks | `ruvector-robotics-perception` crate |
| **Phase 4** | Neuromorphic controller + verified decision pipeline | 8-12 weeks | `ruvector-robotics-cognition` crate |
| **Phase 5** | Distributed swarm memory + unified MCP + WASM target | 12-16 weeks | Full integration release |

---

## 10. References

### Repositories

1. **agentic-robotics** -- https://github.com/ruvnet/agentic-robotics
2. **ruvector** -- https://github.com/ruvnet/ruvector
3. **Zenoh** (pub/sub middleware) -- https://github.com/eclipse-zenoh/zenoh
4. **NAPI-RS** (Node.js bindings) -- https://github.com/napi-rs/napi-rs
5. **rkyv** (zero-copy serialization) -- https://github.com/rkyv/rkyv
6. **lean-agentic** (formal verification) -- https://crates.io/crates/lean-agentic

### Key Papers

7. Qi, C.R., Yi, L., Su, H., & Guibas, L.J. (2017). "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space." *NeurIPS*.
8. Zhao, H., Jiang, L., Jia, J., Torr, P., & Koltun, V. (2021). "Point Transformer." *ICCV*.
9. Yen-Chen, L., Florence, P., Barron, J.T., Rodriguez, A., Isola, P., & Lin, T.-Y. (2022). "NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields." *ICRA*.
10. Bing, Z., Meschede, C., Rohrbein, F., Huang, K., & Knoll, A.C. (2018). "A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks." *Frontiers in Neurorobotics*.
11. Luckcuck, M., Farrell, M., Dennis, L.A., Dixon, C., & Fisher, M. (2019). "Formal Specification and Verification of Autonomous Robotic Systems: A Survey." *ACM Computing Surveys*.
12. Gao, H. & Ji, S. (2022). "Graph Neural Networks for Real-Time Dynamic Inference." *IEEE TPAMI*.
13. Ongaro, D. & Ousterhout, J. (2014). "In Search of an Understandable Consensus Algorithm (Raft)." *USENIX ATC*.
14. Malkov, Y.A. & Yashunin, D.A. (2020). "Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs." *IEEE TPAMI*.

### Standards

15. OMG DDS (Data Distribution Service) Specification -- https://www.omg.org/spec/DDS/
16. OMG CDR (Common Data Representation) -- https://www.omg.org/spec/CDR/
17. MCP (Model Context Protocol) 2025-11 Specification -- https://modelcontextprotocol.io/
18. JSON-RPC 2.0 Specification -- https://www.jsonrpc.org/specification

---

*This document represents a technical analysis of integration feasibility. All performance figures for agentic-robotics are from measured benchmarks; ruvector figures are from benchmarked crate operations. Integrated pipeline projections are estimates based on component-level measurements and should be validated with end-to-end benchmarks during Phase 1.*

---

**File:** `vendor/ruvector/docs/research/agentic-robotics/architecture-synergy.md` (vendored, 555 lines)

# Architecture Compatibility and Synergy Analysis

**Document Class:** Technical Architecture Review
**Version:** 1.0.0
**Date:** 2026-02-27

---

## 1. Dependency Compatibility Matrix

### Shared Dependencies (Exact or Compatible Versions)

| Dependency | agentic-robotics | ruvector | Status | Resolution |
|------------|------------------|----------|--------|------------|
| tokio | 1.47 (full) | 1.41 (rt-multi-thread, sync, macros) | Minor mismatch | Upgrade ruvector to 1.47 |
| serde | 1.0 (derive) | 1.0 (derive) | Compatible | No action |
| serde_json | 1.0 | 1.0 | Compatible | No action |
| rkyv | 0.8 | 0.8 | Compatible | No action |
| crossbeam | 0.8 | 0.8 | Compatible | No action |
| rayon | 1.10 | 1.10 | Compatible | No action |
| parking_lot | 0.12 | 0.12 | Compatible | No action |
| nalgebra | 0.33 | 0.33 (no-default-features) | Compatible | Unify feature flags |
| thiserror | 2.0 | 2.0 | Compatible | No action |
| anyhow | 1.0 | 1.0 | Compatible | No action |
| tracing | 0.1 | 0.1 | Compatible | No action |
| tracing-subscriber | 0.3 | 0.3 (env-filter) | Compatible | No action |
| criterion | 0.5 (html_reports) | 0.5 (html_reports) | Compatible | No action |
| rand | 0.8 | 0.8 | Compatible | No action |

### agentic-robotics-Unique Dependencies

| Dependency | Version | Size Impact | Feature-Gate Strategy |
|------------|---------|-------------|-----------------------|
| zenoh | 1.0 | Large (~50+ transitive) | `feature = "robotics"` |
| rustdds | 0.11 | Medium (~20 transitive) | `feature = "robotics-dds"` |
| cdr | 0.2 | Small | `feature = "robotics"` |
| hdrhistogram | 7.5 | Small | `feature = "robotics-rt"` |
| wide | 0.7 | Small | `feature = "robotics-simd"` |
| axum | 0.7 | Medium | `feature = "robotics-sse"` |

### ruvector-Unique Dependencies

| Dependency | Version | Notes |
|------------|---------|-------|
| redb | 2.1 | Storage backend |
| memmap2 | 0.9 | Memory-mapped files |
| hnsw_rs | 0.3 (patched) | HNSW index (patched for WASM) |
| simsimd | 5.9 | SIMD distance functions |
| ndarray | 0.16 | N-dimensional arrays |
| dashmap | 6.1 | Concurrent hashmap |
| lean-agentic | 0.1.0 | Formal verification |
| wasm-bindgen | 0.2 | WASM interop |

### Version Conflict Resolution Plan

**tokio 1.41 -> 1.47:**
- Minor version bump, fully backward compatible
- New features in 1.47 (improved multi-thread scheduling) benefit both
- Change: `Cargo.toml` workspace `tokio = { version = "1.47", ... }`

**napi 2.16 -> 3.0:**
- Breaking change: napi 3.0 has different macro syntax
- Strategy: Maintain separate NAPI versions per crate until a coordinated upgrade
- OR: Upgrade all ruvector-node crates to napi 3.0 (recommended)

---

## 2. Architecture Layer Mapping

```
+=========================================================================+
|                 UNIFIED COGNITIVE ROBOTICS PLATFORM                     |
+=========================================================================+
|                                                                         |
|  APPLICATION LAYER                                                      |
|  +----------------------------+  +-----------------------------------+ |
|  | Robot Applications         |  | ML/AI Applications                | |
|  |  - Autonomous navigation   |  |  - Vector search                  | |
|  |  - Swarm coordination      |  |  - Graph reasoning                | |
|  |  - Manipulation control    |  |  - Attention inference            | |
|  +----------------------------+  +-----------------------------------+ |
|               |                                  |                      |
|  MCP LAYER (AI TOOL INTERFACE)                                          |
|  +-------------------------------------------------------------------+ |
|  | agentic-robotics-mcp + ruvector MCP tools                         | |
|  |  - robot_move, sensor_read   | vector_search, gnn_classify        | |
|  |  - path_plan, status_query   | attention_focus, memory_recall     | |
|  +-------------------------------------------------------------------+ |
|               |                                  |                      |
|  SCHEDULING LAYER                                                       |
|  +-------------------------------------------------------------------+ |
|  | agentic-robotics-rt (Dual Runtime)                                | |
|  |  HIGH-PRIORITY (2 threads)      |  LOW-PRIORITY (4 threads)       | |
|  |  - Control loops (<1ms)         |  - Planning (>1ms)              | |
|  |  - Sensor processing            |  - Index rebuilds               | |
|  |  - GNN inference (urgent)       |  - Batch vector ops             | |
|  |  - Attention (time-critical)    |  - Training updates             | |
|  +-------------------------------------------------------------------+ |
|               |                                  |                      |
|  MESSAGING LAYER                                                        |
|  +------------------------------+  +---------------------------------+ |
|  | agentic-robotics-core        |  | ruvector-cluster                | |
|  |  - Publisher<T>/Subscriber<T>|  |  - Raft consensus               | |
|  |  - Zenoh pub/sub             |  |  - Replication                  | |
|  |  - CDR/JSON serialization    |  |  - Delta consensus              | |
|  +------------------------------+  +---------------------------------+ |
|               |                                  |                      |
|  COMPUTE LAYER                                                          |
|  +----------------------------+  +-----------------------------------+ |
|  | Robotics Compute           |  | ML Compute                        | |
|  |  - Kinematic solvers       |  |  - HNSW indexing (ruvector-core)  | |
|  |  - Path planning           |  |  - GNN forward (ruvector-gnn)     | |
|  |  - State estimation        |  |  - Attention (ruvector-attention) | |
|  |                            |  |  - Graph transformer              | |
|  |                            |  |  - Sparse inference               | |
|  +----------------------------+  +-----------------------------------+ |
|               |                                  |                      |
|  STORAGE LAYER                                                          |
|  +-------------------------------------------------------------------+ |
|  | ruvector-core (redb + memmap2)   |  ruvector-postgres             | |
|  |  - Vector persistence            |  - SQL storage backend         | |
|  |  - Index snapshots               |  - Graph persistence           | |
|  +-------------------------------------------------------------------+ |
|                                                                         |
|  BINDING LAYER                                                          |
|  +----------------------------+  +-----------------------------------+ |
|  | NAPI (Node.js)             |  | WASM (Browser/Edge)               | |
|  |  agentic-robotics-node     |  |  ruvector-*-wasm (20+ crates)     | |
|  |  ruvector-*-node (10+)     |  |  agentic-robotics-embedded        | |
|  +----------------------------+  +-----------------------------------+ |
|                                                                         |
+=========================================================================+
```

---

## 3. Data Flow Integration

### Sensor-to-Decision Pipeline

```
[LiDAR Sensor]
      |
      v
[PointCloud Message] ──> agentic-robotics-core Publisher
      |
      | (zero-copy via shared memory / crossbeam channel)
      v
[BRIDGE: PointCloud -> Vec<f32>] ──> ruvector-robotics-bridge
      |
      ├──> [HNSW Spatial Index] ──> ruvector-core
      |          |
      |          v
      |    [Nearest Obstacles] (k-NN search, <500us)
      |          |
      ├──> [Scene Graph Build] ──> ruvector-graph
      |          |
      |          v
      |    [Graph Transformer] ──> ruvector-graph-transformer
      |          |
      |          v
      |    [Scene Understanding] (spatial reasoning)
      |          |
      └──> [GNN Classification] ──> ruvector-gnn
                 |
                 v
      [Object Classes + Confidence]
                 |
                 v
      [Decision Fusion] ──> ruvector-attention (weighted)
                 |
                 v
      [Action Command] ──> agentic-robotics-core Publisher -> /cmd_vel
```

### Data Type Mappings

```rust
// Bridge conversions. Note: because both sides are foreign types, Rust's
// orphan rule prevents these exact `From` impls in a bridge crate; in
// practice they become free functions or newtype wrappers. The `From`
// form below is shorthand for the mapping itself.

// Bridge: PointCloud -> HNSW-indexable vectors
impl From<&PointCloud> for Vec<Vec<f32>> {
    fn from(cloud: &PointCloud) -> Self {
        cloud.points.iter()
            .map(|p| vec![p.x, p.y, p.z])
            .collect()
    }
}

// Bridge: RobotState -> feature vector for temporal tensor
impl From<&RobotState> for Vec<f64> {
    fn from(state: &RobotState) -> Self {
        let mut v = Vec::with_capacity(7);
        v.extend_from_slice(&state.position);
        v.extend_from_slice(&state.velocity);
        v.push(state.timestamp as f64);
        v
    }
}

// Bridge: Pose -> graph node features
impl From<&Pose> for Vec<f64> {
    fn from(pose: &Pose) -> Self {
        let mut v = Vec::with_capacity(7);
        v.extend_from_slice(&pose.position);
        v.extend_from_slice(&pose.orientation);
        v
    }
}
```

---

## 4. Shared Pattern Analysis

### Concurrency Patterns

Both frameworks extensively use the same concurrency primitives:

**Arc<RwLock<T>> (read-heavy shared state):**
```rust
// agentic-robotics-mcp: tool registry
tools: Arc<RwLock<HashMap<String, (McpTool, ToolHandler)>>>

// ruvector-core: vector index (conceptual)
index: Arc<RwLock<HnswIndex>>
```

**Arc<Mutex<T>> (write-heavy shared state):**
```rust
// agentic-robotics-rt: scheduler queue
scheduler: Arc<Mutex<PriorityScheduler>>

// agentic-robotics-rt: latency histogram
histogram: Arc<Mutex<Histogram<u64>>>
```

**Crossbeam channels (message passing):**
```rust
// agentic-robotics-core: subscriber
let (sender, receiver) = channel::unbounded();

// ruvector uses crossbeam for parallel processing pipelines
```

### Serialization Strategies

Both frameworks support the same serialization stack:

| Format | agentic-robotics | ruvector | Use Case |
|--------|------------------|----------|----------|
| serde (JSON) | Primary for NAPI | Configuration | Debug, interop |
| rkyv 0.8 | Derive macros on all types | Storage backend | Zero-copy persistence |
| CDR | Robot message wire format | N/A | DDS compatibility |
| bincode | N/A | Compact binary | Network transfer |

**Key insight:** Both derive `rkyv::{Archive, Serialize, Deserialize}` on core types, enabling zero-copy data sharing.

### Error Handling

```rust
// Both use thiserror for typed errors
#[derive(Error, Debug)]
pub enum Error {
    #[error("...")] Variant(String),
    #[error("...")] Io(#[from] std::io::Error),
    #[error("...")] Other(#[from] anyhow::Error),
}
pub type Result<T> = std::result::Result<T, Error>;
```

### NAPI Binding Patterns

```rust
// agentic-robotics-node (napi 3.0)
#[napi]
pub struct AgenticNode { ... }

#[napi]
impl AgenticNode {
    #[napi(constructor)]
    pub fn new(name: String) -> Result<Self> { ... }

    #[napi]
    pub async fn create_publisher(&self, topic: String) -> Result<AgenticPublisher> { ... }
}

// ruvector-node (napi 2.16) - same pattern, older version
#[napi]
pub struct VectorIndex { ... }

#[napi]
impl VectorIndex {
    #[napi(constructor)]
    pub fn new(dimensions: u32) -> Result<Self> { ... }

    #[napi]
    pub async fn search(&self, query: Vec<f64>, k: u32) -> Result<SearchResults> { ... }
}
```

### Benchmark Patterns

Both use Criterion 0.5 with identical patterns:
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_operation(c: &mut Criterion) {
    let mut group = c.benchmark_group("GroupName");
    group.bench_function("name", |b| {
        b.iter(|| { black_box(operation()); })
    });
    group.finish();
}
criterion_group!(benches, benchmark_operation);
criterion_main!(benches);
```

---

## 5. Integration Architecture Proposal

### Tier 1: Bridge Layer (Minimal Integration)

New crate: `ruvector-robotics-bridge`

```rust
//! Bridge between agentic-robotics messages and ruvector operations

use std::sync::Arc;

use agentic_robotics_core::message::{PointCloud, RobotState, Pose, Message};
use agentic_robotics_core::Subscriber;
use parking_lot::RwLock; // shared dependency of both workspaces
use ruvector_core::types::Vector;
use ruvector_core::HnswIndex; // index type path illustrative

/// Convert PointCloud to indexable vectors
pub fn pointcloud_to_vectors(cloud: &PointCloud) -> Vec<Vector> {
    cloud.points.iter()
        .map(|p| Vector::from_slice(&[p.x as f64, p.y as f64, p.z as f64]))
        .collect()
}

/// Convert RobotState to feature vector
pub fn state_to_vector(state: &RobotState) -> Vector {
    let mut data = Vec::with_capacity(7);
    data.extend_from_slice(&state.position);
    data.extend_from_slice(&state.velocity);
    data.push(state.timestamp as f64);
    Vector::from_vec(data)
}

/// Auto-indexing subscriber: indexes incoming PointClouds
pub struct IndexingSubscriber {
    subscriber: Subscriber<PointCloud>,
    index: Arc<RwLock<HnswIndex>>,
}

impl IndexingSubscriber {
    pub async fn run(&self) {
        loop {
            if let Ok(cloud) = self.subscriber.recv_async().await {
                let vectors = pointcloud_to_vectors(&cloud);
                let mut idx = self.index.write();
                for v in vectors { idx.insert(&v); }
            }
        }
    }
}
```
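
The conversion contract itself can be exercised without either workspace. A standalone sketch where `Point`, `PointCloud`, and `Vector` are simplified local stand-ins, not the real crate types:

```rust
// Minimal stand-ins for the bridge's input/output types.
struct Point {
    x: f32,
    y: f32,
    z: f32,
}

struct PointCloud {
    points: Vec<Point>,
}

// Simplified Vector: the real ruvector type wraps storage and metadata.
#[derive(Debug, PartialEq)]
struct Vector(Vec<f64>);

// Same shape as the bridge function: one 3D vector per point, widened to f64.
fn pointcloud_to_vectors(cloud: &PointCloud) -> Vec<Vector> {
    cloud.points.iter()
        .map(|p| Vector(vec![p.x as f64, p.y as f64, p.z as f64]))
        .collect()
}

fn main() {
    let cloud = PointCloud {
        points: vec![
            Point { x: 1.0, y: 2.0, z: 3.0 },
            Point { x: 4.0, y: 5.0, z: 6.0 },
        ],
    };
    let vs = pointcloud_to_vectors(&cloud);
    assert_eq!(vs.len(), 2);
    assert_eq!(vs[0], Vector(vec![1.0, 2.0, 3.0]));
}
```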

### Tier 2: Fusion Layer (Deep Integration)

New crate: `ruvector-robotics-perception`

```rust
//! Perception pipeline: sensor data -> ML inference -> decisions

use std::sync::Arc;

use agentic_robotics_rt::{ROS3Executor, Priority, Deadline};
use ruvector_gnn::GraphNeuralNetwork;
use ruvector_attention::AttentionMechanism;

pub struct PerceptionPipeline {
    executor: ROS3Executor,
    gnn: Arc<GraphNeuralNetwork>,
    attention: Arc<AttentionMechanism>,
}

impl PerceptionPipeline {
    /// Process sensor data with RT-scheduled ML inference
    pub fn process(&self, cloud: PointCloud) {
        let gnn = self.gnn.clone();
        let attention = self.attention.clone();

        // High-priority: real-time obstacle detection
        self.executor.spawn_high(async move {
            let scene_graph = build_scene_graph(&cloud);
            let gnn_output = gnn.forward(&scene_graph);
            let focused = attention.apply(&gnn_output);
            // Publish the decision built from `focused`
        });
    }
}
```

### Tier 3: Unified Cognitive Platform (Long-term)

```
ruvector-cognitive-robotics
|-- agentic-robotics-core (sensing + actuation)
|-- agentic-robotics-rt (scheduling)
|-- agentic-robotics-mcp (AI interface)
|-- ruvector-core (vector memory)
|-- ruvector-gnn (spatial reasoning)
|-- ruvector-attention (selective focus)
|-- ruvector-nervous-system (cognitive architecture)
|-- ruvector-temporal-tensor (temporal reasoning)
|-- sona (self-learning)
```

---

## 6. Build System Integration

### Workspace Member Additions

Add to `Cargo.toml` `[workspace] members`:
```toml
members = [
    # ... existing 114 members ...
    "crates/agentic-robotics-core",
    "crates/agentic-robotics-rt",
    "crates/agentic-robotics-mcp",
    "crates/agentic-robotics-embedded",
    "crates/agentic-robotics-node",
    "crates/agentic-robotics-benchmarks",
    # New integration crates
    "crates/ruvector-robotics-bridge",
]
```

### Workspace Dependency Additions

```toml
[workspace.dependencies]
# New robotics dependencies
zenoh = { version = "1.0", optional = true }
rustdds = { version = "0.11", optional = true }
cdr = { version = "0.2", optional = true }
hdrhistogram = "7.5"
wide = "0.7"
```

### Feature Flag Strategy

```toml
# In ruvector-core/Cargo.toml
[features]
default = ["storage"]
robotics = ["agentic-robotics-core"]
robotics-rt = ["robotics", "agentic-robotics-rt"]
robotics-mcp = ["robotics", "agentic-robotics-mcp"]
robotics-full = ["robotics-rt", "robotics-mcp", "agentic-robotics-embedded"]
```

---

## 7. Performance Budget Analysis

### Latency Budget for Sensor-to-Decision Pipeline

| Stage | Budget | Mechanism |
|-------|--------|-----------|
| Sensor deserialization | 540ns | CDR zero-copy |
| PointCloud -> vectors | ~100ns | Direct memory map, no allocation |
| HNSW k-NN search (10K) | ~400us | O(log n) with SIMD distance |
| Scene graph construction | ~50us | Pre-allocated graph structures |
| GNN forward pass | ~200us | Small model, RT-scheduled |
| Attention application | ~100us | Single-head, focused features |
| Decision serialization | ~540ns | CDR output |
| **Total** | **<1ms** | Meets hard RT requirement |
|
||||
|
||||
### Memory Layout Optimization

```
Shared memory region (mmap):
+------------------------------------------+
| PointCloud (rkyv archived)               |
|   points: [Point3D; N]    <-- contiguous |
|   intensities: [f32; N]   <-- SIMD-ready |
+------------------------------------------+
| HNSW Index Vectors                       |
|   [f32; 3] x N     <-- same memory layout|
+------------------------------------------+
| GNN Graph (adjacency + features)         |
|   nodes: [f32; D] x M                    |
|   edges: [(u32, u32)] x E                |
+------------------------------------------+
```

Both `Point3D { x, y, z }` (three `f32` fields) and HNSW vectors `[f32; 3]` have identical memory layout -- provided `Point3D` is `#[repr(C)]` -- enabling zero-copy conversion via `unsafe { std::slice::from_raw_parts(...) }` when performance is critical.

---

## 8. NAPI/WASM Binding Unification Strategy

### Current State

| Binding | agentic-robotics | ruvector | Count |
|---------|------------------|----------|-------|
| NAPI (Node.js) | 1 crate (napi 3.0) | 10+ crates (napi 2.16) | 11+ |
| WASM | 0 (planned) | 20+ crates (wasm-bindgen 0.2) | 20+ |

### Unified TypeScript API Design

```typescript
// @ruvector/platform - unified package
import { RobotNode, VectorIndex, GnnModel } from '@ruvector/platform';

// Create robot node with integrated vector search
const node = new RobotNode('perception_bot');
const index = new VectorIndex({ dimensions: 3, metric: 'l2' });
const gnn = await GnnModel.load('./scene_classifier.model');

// Subscribe to LiDAR, auto-index, classify
const lidar = await node.subscribe('/lidar/points');
lidar.onMessage(async (cloud) => {
  // Index points for spatial search
  await index.insertBatch(cloud.points);

  // Find nearest obstacles
  const obstacles = await index.search(node.position, { k: 20 });

  // Classify scene
  const scene = gnn.classify(obstacles);

  // Publish decision
  await node.publish('/nav/command', scene.safePath);
});
```

### WASM Build Strategy

```
Phase 1: ruvector WASM crates work standalone (current)
Phase 2: agentic-robotics-core builds to WASM (remove Zenoh, use web-sys channels)
Phase 3: Combined ruvector-robotics-wasm with unified API
Phase 4: Web-based robot simulator using combined WASM + WebGL
```

---

## Conclusion

The architectural compatibility between agentic-robotics and ruvector is exceptionally high:

- 14/16 shared dependencies are version-compatible
- Identical Rust edition, build profiles, and coding patterns
- Complementary rather than overlapping functionality
- Both use rkyv 0.8, enabling zero-copy data sharing
- NAPI binding patterns are structurally identical

The primary integration challenges are manageable:

1. Zenoh dependency tree size (mitigated by feature flags)
2. NAPI version mismatch (coordinated upgrade to 3.0)
3. tokio minor version bump (backward compatible)

The synergy potential is substantial: no existing framework combines real-time robotics middleware with native vector database operations, GNN inference, and MCP tool exposure in a unified Rust workspace.

967
vendor/ruvector/docs/research/agentic-robotics/crate-review.md
vendored
Normal file

# Agentic Robotics Crate-by-Crate Deep Review

**Date**: 2026-02-27
**Reviewer**: Research Agent
**Source Location**: `/home/user/ruvector/crates/agentic-robotics-*/`
**Total Crates**: 6
**Total Lines of Rust**: 2,635 (source + benchmarks)

---

## Table of Contents

1. [Crate 1: agentic-robotics-core](#crate-1-agentic-robotics-core)
2. [Crate 2: agentic-robotics-rt](#crate-2-agentic-robotics-rt)
3. [Crate 3: agentic-robotics-mcp](#crate-3-agentic-robotics-mcp)
4. [Crate 4: agentic-robotics-embedded](#crate-4-agentic-robotics-embedded)
5. [Crate 5: agentic-robotics-node](#crate-5-agentic-robotics-node)
6. [Crate 6: agentic-robotics-benchmarks](#crate-6-agentic-robotics-benchmarks)
7. [Cross-Crate Dependency Graph](#cross-crate-dependency-graph)
8. [Overall Assessment](#overall-assessment)
9. [Integration Roadmap for ruvector](#integration-roadmap-for-ruvector)

---

## Crate 1: agentic-robotics-core

**Path**: `/home/user/ruvector/crates/agentic-robotics-core/`
**Line Count**: 705 lines (669 source + 36 bench)
**Complexity Estimate**: Low-Medium
**Code Quality Rating**: B

### File Inventory

| File | Lines | Purpose |
|------|-------|---------|
| `src/lib.rs` | 45 | Root module, re-exports, `init()` function |
| `src/message.rs` | 119 | Message trait and concrete message types |
| `src/publisher.rs` | 85 | Generic typed publisher with stats tracking |
| `src/subscriber.rs` | 91 | Generic typed subscriber with crossbeam channels |
| `src/service.rs` | 127 | RPC service server (Queryable) and client (Service) |
| `src/middleware.rs` | 66 | Zenoh middleware abstraction (placeholder) |
| `src/serialization.rs` | 107 | CDR/JSON/rkyv serialization pipeline |
| `src/error.rs` | 29 | Error types using thiserror |
| `benches/message_passing.rs` | 36 | Criterion benchmark for publish + serialization |

### Purpose

ROS3 Core provides the foundational pub/sub messaging layer, modeled after ROS2 but rewritten in Rust. It targets microsecond-scale determinism with Zenoh as the middleware transport (currently stubbed) and supports CDR, JSON, and rkyv serialization formats.

### API Surface

#### Traits

```rust
pub trait Message: Serialize + for<'de> Deserialize<'de> + Send + Sync + 'static {
    fn type_name() -> &'static str;
    fn version() -> &'static str { "1.0" }
}
```

The `Message` trait is the central abstraction. It requires serde bounds plus `Send + Sync + 'static` for async safety. A blanket implementation exists for `serde_json::Value`, enabling generic JSON message passing.

#### Public Types

| Type | Module | Description |
|------|--------|-------------|
| `Message` (trait) | `message` | Core message trait with type_name/version |
| `RobotState` | `message` | position: [f64; 3], velocity: [f64; 3], timestamp: i64 |
| `Point3D` | `message` | x/y/z as f32, Copy-able |
| `PointCloud` | `message` | Vec<Point3D> + Vec<f32> intensities + timestamp |
| `Pose` | `message` | position: [f64; 3], orientation: [f64; 4] (quaternion) |
| `Publisher<T: Message>` | `publisher` | Generic typed publisher with stats |
| `Subscriber<T: Message>` | `subscriber` | Generic typed subscriber with crossbeam channels |
| `Queryable<Req, Res>` | `service` | RPC server with handler function |
| `Service<Req, Res>` | `service` | RPC client (stub) |
| `ServiceHandler<Req, Res>` | `service` | `Arc<dyn Fn(Req) -> Result<Res> + Send + Sync>` |
| `Zenoh` | `middleware` | Zenoh session wrapper (placeholder) |
| `ZenohConfig` | `middleware` | mode, connect, listen configuration |
| `Format` | `serialization` | Enum: Cdr, Rkyv, Json |
| `Serializer` | `serialization` | Format-aware serializer wrapper |
| `Error` | `error` | Zenoh/Serialization/Connection/Timeout/Config/Io/Other |
| `Result<T>` | `error` | `std::result::Result<T, Error>` |

#### Public Functions

| Function | Module | Signature |
|----------|--------|-----------|
| `init()` | `lib` | `fn init() -> Result<()>` -- initializes tracing |
| `serialize_cdr<T: Serialize>` | `serialization` | `fn(msg: &T) -> Result<Vec<u8>>` |
| `deserialize_cdr<T: Deserialize>` | `serialization` | `fn(data: &[u8]) -> Result<T>` |
| `serialize_rkyv<T: Serialize>` | `serialization` | `fn(msg: &T) -> Result<Vec<u8>>` -- **STUB, returns Err** |
| `serialize_json<T: Serialize>` | `serialization` | `fn(msg: &T) -> Result<String>` |
| `deserialize_json<T: Deserialize>` | `serialization` | `fn(data: &str) -> Result<T>` |

### Architecture Analysis

#### Message Flow

```
Application Code
      |
      v
Publisher<T>.publish(&msg)
      |
      v
Serializer.serialize(msg) --> Format dispatch
      |        |        |
      v        v        v
     CDR     JSON    rkyv (stub)
      |
      v
[Wire: Zenoh placeholder -- no actual network send]
      |
      v
Stats update (messages_sent++, bytes_sent += len)
```

The publish path is: `Publisher::publish()` -> `Serializer::serialize()` -> stats update. There is no actual Zenoh network transmission -- the middleware layer (`middleware.rs`) is a placeholder that creates an `Arc<RwLock<()>>`. The publisher simply serializes and tracks byte counts.

#### Serialization Pipeline

Three formats are supported, at different maturity levels:

- **CDR (Common Data Representation)**: Fully functional via the `cdr` crate. Uses big-endian encoding (`CdrBe`). This is the default format and provides a DDS-compatible wire representation.
- **JSON**: Fully functional via `serde_json`. Used primarily for debugging and for the NAPI boundary in the node crate.
- **rkyv (zero-copy)**: Declared in derives (`Archive, RkyvSerialize, RkyvDeserialize`) on message types, but the `serialize_rkyv()` function returns `Err("rkyv serialization not fully implemented")`. The rkyv derives are present on `RobotState`, `Point3D`, `PointCloud`, and `Pose`, so the infrastructure is prepared but the serialization function is incomplete.

#### Subscriber Architecture

The subscriber uses `crossbeam::channel::unbounded()` for message delivery. This is a multi-producer multi-consumer channel, but in the current implementation messages never actually arrive because there is no Zenoh transport connection. The `recv_async()` method wraps the blocking `crossbeam::channel::recv()` in `tokio::task::spawn_blocking()`, which is correct for bridging sync/async boundaries but adds thread-pool overhead.

The subscriber holds both a `Receiver` and a shared `Arc<Sender>` (`_sender`), which keeps the channel alive. The `Clone` implementation shares both the receiver and sender, enabling multiple concurrent readers -- though crossbeam's `Receiver::clone()` actually creates another consumer on the same channel, meaning messages are load-balanced (each message goes to one reader), not broadcast.

#### Service/RPC

The `Queryable` struct is a synchronous handler wrapped as `Arc<dyn Fn(Req) -> Result<Res>>`. Despite `handle()` being an `async fn`, the actual handler execution is synchronous -- there is no `.await` in the handler call itself. The `Service` client is fully stubbed, always returning an error.

### Dependency Analysis

| Dependency | Purpose | Weight |
|------------|---------|--------|
| `zenoh` (workspace) | Middleware transport | Heavy -- Zenoh pulls in many transitive deps |
| `rustdds` (workspace) | DDS compatibility | Heavy -- full DDS implementation |
| `tokio` (workspace) | Async runtime | Standard |
| `serde` + `serde_json` | Serialization | Standard |
| `cdr` (workspace) | CDR binary encoding | Light |
| `rkyv` (workspace) | Zero-copy archives | Medium |
| `anyhow` + `thiserror` | Error handling | Light |
| `tracing` + `tracing-subscriber` | Logging | Light |
| `parking_lot` (workspace) | Fast mutexes/rwlocks | Light |
| `crossbeam` (workspace) | Lock-free channels | Light |

**Notable**: Both `zenoh` and `rustdds` are listed as dependencies but neither is actually used in the source code. They are workspace-level declarations for future integration. This inflates compile time and binary size significantly for no runtime benefit.

### Code Quality Assessment

**Test Coverage**: 8 tests across 6 modules. Tests cover:
- `init()` success path
- `RobotState` default values and type_name
- `PointCloud` default and type_name
- Publisher publish + stats verification
- Subscriber creation and try_recv (empty)
- Queryable handler execution + stats
- Service client creation
- Zenoh session creation

Tests are present but shallow -- they only exercise the happy path and default construction. No tests for:
- Serialization round-trips across all formats
- Error paths (malformed data, channel disconnect)
- Concurrent publisher/subscriber interaction
- PointCloud with actual data
- Pose message operations

**Error Handling**: Uses `thiserror` with a 7-variant `Error` enum. Error propagation is clean with the `?` operator. The `anyhow` integration via `#[from]` on the `Other` variant provides a catch-all. However, `serialize_rkyv()` returns a string error instead of a proper rkyv-specific error.

**Documentation**: Module-level doc comments on all files. Individual function docs are present on public API methods. No examples in doc comments.

**Safety**: No `unsafe` code. All synchronization uses `parking_lot` (which has well-audited unsafe internally) and `crossbeam`. PhantomData usage is correct for zero-sized type markers.

**Concerns**:
1. `serialize_rkyv()` is misleadingly typed -- it accepts `T: Serialize` (serde), not `T: rkyv::Serialize`, so it could never actually perform rkyv serialization.
2. `Publisher::publish()` is async but contains no actual await points (serialization is sync, stats update is sync). The method could be synchronous.
3. The `Zenoh` struct holds `_config` and `_inner` with underscore prefixes, indicating they are acknowledged as unused placeholders.
4. `tracing_subscriber::fmt().init()` in `init()` will panic if called twice (a standard tracing limitation). There is no guard against double-init.

### Integration Points with ruvector

1. **PointCloud <-> Vector Data**: `PointCloud` stores `Vec<Point3D>` (3D f32 vectors) and `Vec<f32>` intensities. This maps directly to ruvector's core vector storage. A thin adapter could expose `PointCloud.points` as a collection of 3-dimensional vectors for HNSW indexing, nearest-neighbor search, or GNN node features.

2. **Message Trait for Distributed Vectors**: The `Message` trait could be implemented on ruvector's core types (e.g., embedding vectors, search results) to enable pub/sub distribution of vector operations across nodes.

3. **Serialization Synergy**: ruvector already has its own serialization needs. The CDR format could be used for DDS-compatible vector streaming (e.g., real-time sensor embeddings). The rkyv format, once completed, aligns with ruvector's zero-copy philosophy.

4. **Publisher/Subscriber for Vector Streaming**: Real-time embedding pipelines (sensor data -> encoder -> vector -> HNSW insert) could use the pub/sub pattern with typed publishers for specific vector dimensions.

---

## Crate 2: agentic-robotics-rt

**Path**: `/home/user/ruvector/crates/agentic-robotics-rt/`
**Line Count**: 512 lines (483 source + 29 bench)
**Complexity Estimate**: Medium
**Code Quality Rating**: B-

### File Inventory

| File | Lines | Purpose |
|------|-------|---------|
| `src/lib.rs` | 60 | RTPriority enum, re-exports |
| `src/executor.rs` | 157 | Dual-runtime executor (high/low priority) |
| `src/scheduler.rs` | 121 | BinaryHeap priority scheduler |
| `src/latency.rs` | 145 | HDR histogram latency tracking |
| `benches/latency.rs` | 29 | Criterion benchmarks |

### Purpose

Provides a dual-runtime real-time execution framework. The core idea is to maintain two separate Tokio runtimes: a high-priority runtime with 2 worker threads for sub-millisecond-deadline tasks (control loops), and a low-priority runtime with 4 worker threads for relaxed-deadline tasks (planning, perception). Tasks are routed between runtimes based on their deadline requirements.

### Architecture

#### Dual-Runtime Design

```
            ROS3Executor
           /            \
   tokio_rt_high    tokio_rt_low
    (2 threads)      (4 threads)
  "ros3-rt-high"   "ros3-rt-low"
         |                |
  deadline < 1ms   deadline >= 1ms
  (control loops)  (planning tasks)
```

The `ROS3Executor` creates two independent Tokio multi-threaded runtimes during construction. The routing decision in `spawn_rt()` is simple: if `deadline.0 < Duration::from_millis(1)`, the task goes to the high-priority runtime; otherwise it goes to the low-priority runtime. This is a coarse-grained approach -- the actual Tokio scheduler within each runtime does not respect priorities, so the "high-priority" runtime is simply a smaller, dedicated thread pool.

#### Priority System

Two overlapping priority systems exist:

1. **RTPriority** (in `lib.rs`): a 5-level enum (Background=0, Low=1, Normal=2, High=3, Critical=4). Supports `From<u8>` conversion, saturating at Critical for values >= 4.

2. **Priority** (in `executor.rs`): a simple newtype wrapper `Priority(pub u8)`. Used by the executor's `spawn_rt()`.

The `spawn_rt()` method converts `Priority(u8)` to `RTPriority` via `.into()` for debug logging, but **the priority value is never actually used for scheduling**. Only the deadline threshold determines runtime assignment. The `PriorityScheduler` instance is stored in the executor but **never consulted during spawn**.

#### PriorityScheduler

The scheduler maintains a `BinaryHeap<ScheduledTask>` with ordering: higher `RTPriority` first, then earlier deadline (reverse chronological within the same priority). The `Ord` implementation is correct for a max-heap with priority-first, deadline-second ordering.

However, the scheduler is entirely disconnected from the executor. The `schedule()` method creates `ScheduledTask` entries with `Instant::now() + deadline` and an auto-incrementing `task_id`, but the executor never calls `schedule()` or `next_task()`. The scheduler exists as infrastructure for a future implementation.

#### LatencyTracker

The most complete component. Uses `hdrhistogram::Histogram<u64>` with 3 significant digits for microsecond-precision latency tracking. Key features:

- **Thread-safe**: Histogram wrapped in `Arc<Mutex<Histogram>>` (parking_lot mutex)
- **Non-blocking record**: `record()` uses `try_lock()` -- measurements are silently dropped if the mutex is contended, preventing latency measurement from introducing latency
- **RAII measurement**: `LatencyMeasurement` guard records elapsed time on drop
- **Rich statistics**: `LatencyStats` provides min, max, mean, p50, p90, p99, p99.9 percentiles
- **Display implementation**: Human-readable output with units

### API Surface

#### Public Types

| Type | Module | Description |
|------|--------|-------------|
| `RTPriority` | `lib` | 5-level priority enum (Background..Critical) |
| `ROS3Executor` | `executor` | Dual-runtime task executor |
| `Priority` | `executor` | `Priority(pub u8)` newtype |
| `Deadline` | `executor` | `Deadline(pub Duration)` newtype with `From<Duration>` |
| `PriorityScheduler` | `scheduler` | BinaryHeap-based priority task queue |
| `ScheduledTask` | `scheduler` | Task entry with priority, deadline, task_id |
| `LatencyTracker` | `latency` | HDR histogram-based latency tracker |
| `LatencyStats` | `latency` | Statistics snapshot (count, min, max, mean, percentiles) |
| `LatencyMeasurement` | `latency` | RAII drop guard for automatic timing |

#### Key Methods

```rust
// Executor
impl ROS3Executor {
    pub fn new() -> Result<Self>
    pub fn spawn_rt<F: Future>(&self, priority: Priority, deadline: Deadline, task: F)
    pub fn spawn_high<F: Future>(&self, task: F)  // Priority(3), 500us deadline
    pub fn spawn_low<F: Future>(&self, task: F)   // Priority(1), 100ms deadline
    pub fn spawn_blocking<F, R>(&self, f: F) -> JoinHandle<R>
    pub fn high_priority_runtime(&self) -> &Runtime
    pub fn low_priority_runtime(&self) -> &Runtime
}

// Scheduler
impl PriorityScheduler {
    pub fn new() -> Self
    pub fn schedule(&mut self, priority: RTPriority, deadline: Duration) -> u64
    pub fn next_task(&mut self) -> Option<ScheduledTask>
    pub fn pending_tasks(&self) -> usize
    pub fn clear(&mut self)
}

// Latency
impl LatencyTracker {
    pub fn new(name: impl Into<String>) -> Self
    pub fn record(&self, duration: Duration)
    pub fn stats(&self) -> LatencyStats
    pub fn reset(&self)
    pub fn measure(&self) -> LatencyMeasurement
}
```

### Dependency Analysis

| Dependency | Purpose | Weight |
|------------|---------|--------|
| `agentic-robotics-core` (path) | Core types | Internal |
| `tokio` (workspace) | Async runtimes | Standard |
| `parking_lot` (workspace) | Fast mutexes | Light |
| `crossbeam` (workspace) | Lock-free primitives | Light -- **unused in source** |
| `rayon` (workspace) | Data parallelism | Medium -- **unused in source** |
| `anyhow` (workspace) | Error handling | Light |
| `thiserror` (workspace) | Error derives | Light -- **unused in source** |
| `tracing` (workspace) | Logging | Light |
| `hdrhistogram` (workspace) | Latency histograms | Light |

**Notable**: `crossbeam`, `rayon`, and `thiserror` are declared as dependencies but not used in any source file. The `agentic-robotics-core` dependency is declared but also not directly used -- no imports from it exist in the rt crate's source.

### Code Quality Assessment

**Test Coverage**: Tests across 3 modules:
- RTPriority u8 conversion round-trip
- Executor creation success
- Spawn high priority (no completion verification -- uses `thread::sleep` with no final assertion)
- Scheduler priority ordering (3-task dequeue order)
- LatencyTracker record + stats verification
- LatencyMeasurement RAII guard

The `test_spawn_high_priority` test is effectively a no-op -- it spawns a task and sleeps, but never checks the `completed` AtomicBool's final value.

**Error Handling**: The `Default` implementation for `ROS3Executor` calls `.expect()`, which will panic on failure. This is appropriate for a default constructor but could be surprising.

**Safety**: No `unsafe` code. All thread safety via `Arc<Mutex<>>` and `Arc<AtomicBool>`.

**Concerns**:
1. The `PriorityScheduler` is completely disconnected from the `ROS3Executor`. The scheduler is created and stored but never used for routing decisions.
2. The 1ms deadline threshold is hardcoded with no configuration mechanism.
3. Creating two full Tokio runtimes (6 threads total) is heavyweight. On a system with few cores, this could cause contention.
4. The `spawn_rt()` return type is `()` -- callers cannot await completion or get results from spawned tasks. Only `spawn_blocking()` returns a `JoinHandle`.
5. `ScheduledTask.task_id` is a `u64` counter that will overflow after 2^64 tasks. Not practically concerning, but worth noting that the design assumes a monotonic non-wrapping counter.

### Integration Points with ruvector

1. **Attention Mechanism Scheduling**: ruvector's attention computations (flash attention, multi-head attention) have different latency profiles. The dual-runtime pattern could route real-time inference (< 1ms deadline) to the high-priority pool while batch retraining goes to the low-priority pool.

2. **GNN Inference RT**: Graph neural network forward passes for time-sensitive applications (e.g., real-time recommendation) could use `spawn_high()` to guarantee dedicated thread resources.

3. **LatencyTracker for Vector Search**: The HDR histogram tracker would be valuable for monitoring HNSW search latency distributions in production. The RAII `measure()` guard pattern integrates cleanly with ruvector's search functions.

4. **Priority-Based Query Routing**: The scheduler design (once connected) could route vector queries by importance -- critical real-time queries to dedicated threads, background batch queries to the shared pool.

---

## Crate 3: agentic-robotics-mcp

**Path**: `/home/user/ruvector/crates/agentic-robotics-mcp/`
**Line Count**: 506 lines
**Complexity Estimate**: Medium
**Code Quality Rating**: B+

### File Inventory

| File | Lines | Purpose |
|------|-------|---------|
| `src/lib.rs` | 349 | MCP types, McpServer, request handling, tests |
| `src/server.rs` | 56 | ServerBuilder, helper functions |
| `src/transport.rs` | 101 | StdioTransport + conditional SSE transport |

### Purpose

Implements a Model Context Protocol (MCP) 2025-11 compliant server. MCP enables AI assistants (like Claude) to interact with external tools via a standardized JSON-RPC 2.0 protocol. This crate exposes robot capabilities as MCP tools, with both stdio and SSE (Server-Sent Events) transport options.

### Architecture

#### Request Handling Pipeline

```
Transport (stdio or SSE)
      |
      v
JSON-RPC 2.0 Parse --> McpRequest
      |
      v
McpServer.handle_request()
      |
      +-- "initialize" --> protocol version + capabilities
      +-- "tools/list" --> enumerate registered tools
      +-- "tools/call" --> dispatch to registered handler
      +-- <unknown>    --> -32601 Method Not Found
```

The server maintains a `HashMap<String, (McpTool, ToolHandler)>` behind `Arc<RwLock<>>` (tokio RwLock). Tool registration is async due to the write lock. Request handling reads the tool map with a read lock for tool listing and invocation.

#### Transport Layer

**Stdio Transport**: Reads line-delimited JSON from stdin, writes responses to stdout. The main loop is:
1. Read a line from stdin via `AsyncBufReadExt`
2. Parse it as an `McpRequest`
3. Dispatch to `McpServer::handle_request()`
4. Serialize the response to JSON
5. Write to stdout with a newline delimiter and flush

**SSE Transport** (feature-gated behind `sse`): Uses `axum` with two routes:
- `POST /mcp`: Accepts a JSON McpRequest, returns a JSON McpResponse
- `GET /mcp/stream`: Returns an SSE stream (currently only sends a "connected" event)

The SSE implementation is minimal -- it does not implement bidirectional communication or event streaming for ongoing operations.

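To make the stdio loop concrete, a hypothetical `tools/call` exchange might look like the following (the `echo` tool is illustrative; each object travels on a single line over the transport, pretty-printed here for readability -- request first, then response):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "echo", "arguments": {"text": "hello robot"}}}

{"jsonrpc": "2.0", "id": 1,
 "result": {"content": [{"type": "text", "text": "hello robot"}]}}
```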
### API Surface

#### Public Types

| Type | Module | Description |
|------|--------|-------------|
| `McpTool` | `lib` | name, description, input_schema (JSON Value) |
| `McpRequest` | `lib` | jsonrpc, id, method, params -- JSON-RPC 2.0 request |
| `McpResponse` | `lib` | jsonrpc, id, result, error -- JSON-RPC 2.0 response |
| `McpError` | `lib` | code (i32), message, optional data |
| `ToolResult` | `lib` | content: Vec<ContentItem>, optional is_error flag |
| `ContentItem` | `lib` | Tagged enum: Text, Resource, Image |
| `ToolHandler` | `lib` | `Arc<dyn Fn(Value) -> Result<ToolResult> + Send + Sync>` |
| `McpServer` | `lib` | Main server with tool registry |
| `ServerInfo` | `lib` | name, version, description |
| `ServerBuilder` | `server` | Builder pattern for McpServer |
| `StdioTransport` | `transport` | Stdio-based transport |

#### Key Methods

```rust
impl McpServer {
    pub fn new(name: impl Into<String>, version: impl Into<String>) -> Self
    pub async fn register_tool(&self, tool: McpTool, handler: ToolHandler) -> Result<()>
    pub async fn handle_request(&self, request: McpRequest) -> McpResponse
}

impl ServerBuilder {
    pub fn new(name: impl Into<String>) -> Self
    pub fn version(mut self, version: impl Into<String>) -> Self
    pub fn build(self) -> McpServer
}

impl StdioTransport {
    pub fn new(server: McpServer) -> Self
    pub async fn run(&self) -> Result<()>
}

// Helper functions
pub fn tool<F>(f: F) -> ToolHandler
pub fn text_response(text: impl Into<String>) -> ToolResult
pub fn error_response(error: impl Into<String>) -> ToolResult
```

#### Constants

```rust
pub const MCP_VERSION: &str = "2025-11-15";
```

### Protocol Compliance

The implementation covers the core MCP 2025-11 operations:
- `initialize`: Returns protocol version, capabilities (tools + resources), server info
- `tools/list`: Returns all registered tools
- `tools/call`: Dispatches to handler by name with arguments

Missing MCP features:
- `resources/list`, `resources/read` -- resources capability declared but not implemented
- `prompts/list`, `prompts/get` -- not implemented
- `notifications/initialized` -- server does not handle the post-init notification
- `sampling` -- not implemented
- Tool annotations (readOnlyHint, destructiveHint, openWorldHint)

Error codes used:
- `-32601`: Method not found (standard JSON-RPC)
- `-32602`: Invalid params (standard JSON-RPC)
- `-32000`: Tool execution failure (server error range)

### Dependency Analysis

| Dependency | Purpose | Weight |
|------------|---------|--------|
| `agentic-robotics-core` (path) | Core types | Internal -- **not imported in source** |
| `tokio` (workspace) | Async runtime + IO | Standard |
| `serde` + `serde_json` | JSON-RPC serialization | Standard |
| `anyhow` (workspace) | Error handling | Light |
| `thiserror` (workspace) | Error derives | Light -- **unused in source** |
| `tracing` (workspace) | Logging | Light -- **unused in source** |
| `axum` (optional, `sse` feature) | HTTP server | Medium |
| `tokio-stream` (optional, `sse` feature) | Stream utilities | Light |

**Notable**: `agentic-robotics-core`, `thiserror`, and `tracing` are declared but not imported or used. The crate is functionally independent of the core crate.

### Code Quality Assessment

**Test Coverage**: 3 async tests in `lib.rs`:

- `test_mcp_initialize`: Verifies the initialize response has a result and no error
- `test_mcp_list_tools`: Registers one tool, verifies the list returns it
- `test_mcp_call_tool`: Registers an echo tool, calls it, verifies success

Tests are well-structured and exercise the full request/response cycle. There are no tests for:

- Error paths (missing tool, invalid params, malformed request)
- The transport layer (stdio, SSE)
- Concurrent tool registration and invocation

**Error Handling**: Uses JSON-RPC error codes correctly. The `handle_call_tool` method has proper null checks for params and tool name. The `ToolHandler` returns `anyhow::Result`, which is caught and converted to MCP error responses.

**Documentation**: Good module-level docs. The `lib.rs` doc comment accurately describes the MCP version and transport options.

**Safety**: No `unsafe` code. Uses `tokio::sync::RwLock` (not `parking_lot`) for the tool registry, which is correct since the lock is held across `.await` points in `handle_request`.

**Concerns**:

1. `ContentItem::Resource` uses `mimeType` (camelCase) as a Rust field name instead of the idiomatic `mime_type` with `#[serde(rename = "mimeType")]`. This works but violates Rust naming conventions.
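   A minimal sketch of the idiomatic pattern (the `ResourceItem` struct here is a hypothetical reduction of the described type):

   ```rust
   use serde::Serialize;

   // Hypothetical reduced resource item: snake_case Rust field, camelCase wire name.
   #[derive(Serialize)]
   struct ResourceItem {
       uri: String,
       #[serde(rename = "mimeType")]
       mime_type: String,
   }

   /// Serialize to the camelCase wire format while keeping the Rust field idiomatic.
   fn to_wire(item: &ResourceItem) -> String {
       serde_json::to_string(item).expect("plain strings always serialize")
   }
   ```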
2. The tool handler `Arc<dyn Fn(Value) -> Result<ToolResult>>` is synchronous. Long-running tool operations will block the server's async runtime; the handler should be `Arc<dyn Fn(Value) -> BoxFuture<Result<ToolResult>>>` for proper async support.
3. `serde_json::to_value(result).unwrap()` in `handle_call_tool` can panic if `ToolResult` serialization fails. It should use `?` or map the failure to an error response.
4. On a parse failure, the stdio transport reports the error via `eprintln!` instead of returning a JSON-RPC error response to the caller.
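The boxed-future handler shape from concern 2 can be sketched in a self-contained form (names are hypothetical, and `String` stands in for `serde_json::Value`/`ToolResult` to keep the example std-only):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;

// Async-capable handler signature; the current crate's handler is synchronous.
type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;
type AsyncToolHandler = Arc<dyn Fn(String) -> BoxFuture<Result<String, String>> + Send + Sync>;

/// Wrap an async closure into the boxed-future handler type, mirroring the
/// existing `tool()` helper but without blocking the runtime on long calls.
fn tool_async<F, Fut>(f: F) -> AsyncToolHandler
where
    F: Fn(String) -> Fut + Send + Sync + 'static,
    Fut: Future<Output = Result<String, String>> + Send + 'static,
{
    Arc::new(move |args| Box::pin(f(args)))
}
```

The server would then `.await` the returned future inside `handle_call_tool`, letting slow tools yield to other requests.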
### Integration Points with ruvector

1. **Vector Search as MCP Tool**: ruvector's HNSW search could be exposed as an MCP tool:

   ```json
   {
     "name": "vector_search",
     "description": "Search for nearest neighbors in vector space",
     "input_schema": {
       "type": "object",
       "properties": {
         "query": { "type": "array", "items": { "type": "number" } },
         "k": { "type": "integer" },
         "ef_search": { "type": "integer" }
       }
     }
   }
   ```

2. **GNN Inference as MCP Tool**: Expose graph neural network forward passes for agentic reasoning about graph-structured data.

3. **Attention Computation as MCP Tool**: Multi-head or flash attention could be exposed so that external AI systems can use ruvector's optimized attention kernels.

4. **Embedding Generation**: Wrap ruvector's encoding capabilities as MCP tools for real-time embedding generation from sensor data.

---

## Crate 4: agentic-robotics-embedded

**Path**: `/home/user/ruvector/crates/agentic-robotics-embedded/`
**Line Count**: 41 lines
**Complexity Estimate**: Minimal
**Code Quality Rating**: C

### File Inventory

| File | Lines | Purpose |
|------|-------|---------|
| `src/lib.rs` | 41 | Priority enum + config struct |

### Purpose

Intended to provide embedded-systems support using the Embassy and RTIC frameworks. Currently contains only configuration types and an enum -- no actual embedded runtime integration.

### API Surface

#### Public Types

| Type | Description |
|------|-------------|
| `EmbeddedPriority` | 4-level enum: Low=0, Normal=1, High=2, Critical=3 |
| `EmbeddedConfig` | `tick_rate_hz: u32` (default 1000), `stack_size: usize` (default 4096) |

### Current State

This crate is a skeleton. The Cargo.toml declares:

- `agentic-robotics-core` as a dependency (unused)
- `serde`, `anyhow`, `thiserror` as dependencies (unused)
- `embassy` and `rtic` as feature flags that enable nothing (the actual embassy-executor and rtic dependencies are commented out)

The `EmbeddedPriority` enum overlaps with `RTPriority` from the rt crate but has only 4 levels instead of 5 (it is missing `Background`), and there is no conversion between them.

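A thin `From` impl would close that gap. A self-contained sketch, with both priority types mirrored locally from the descriptions above (the real definitions live in the embedded and rt crates):

```rust
// Local mirrors of the two priority types; definitions assumed from the doc.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum EmbeddedPriority { Low = 0, Normal = 1, High = 2, Critical = 3 }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RTPriority { Background, Low, Normal, High, Critical }

impl From<EmbeddedPriority> for RTPriority {
    fn from(p: EmbeddedPriority) -> Self {
        // Background has no embedded counterpart, so the mapping is total
        // in this direction only.
        match p {
            EmbeddedPriority::Low => RTPriority::Low,
            EmbeddedPriority::Normal => RTPriority::Normal,
            EmbeddedPriority::High => RTPriority::High,
            EmbeddedPriority::Critical => RTPriority::Critical,
        }
    }
}
```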
### Dependency Analysis

| Dependency | Purpose | Weight |
|------------|---------|--------|
| `agentic-robotics-core` (path) | Core types | **Unused** |
| `serde` (workspace) | Serialization | **Unused** |
| `anyhow` (workspace) | Error handling | **Unused** |
| `thiserror` (workspace) | Error derives | **Unused** |

All 4 dependencies are declared, but none are imported or used in the source code.

### Code Quality Assessment

**Test Coverage**: 1 test verifying `EmbeddedConfig::default()` values.

**Concerns**:

1. The crate provides no functionality beyond two simple types.
2. The Embassy and RTIC dependencies are commented out, so the `embassy` and `rtic` feature flags are no-ops.
3. The `EmbeddedPriority` enum is redundant with `RTPriority` from the rt crate.
4. The `EmbeddedConfig` fields are too generic -- `tick_rate_hz` and `stack_size` are meaningful only in the context of a real embedded runtime.

### Integration Points with ruvector

1. **Edge Deployment**: A completed embedded crate could enable deployment of ruvector-lite quantized models on embedded ARM/RISC-V devices. The config types would need to include memory constraints for vector storage, quantization levels, and inference batch sizes.

2. **Limited Utility Currently**: In its present state, this crate provides no integration value and would need significant development to support actual embedded runtimes.

---

## Crate 5: agentic-robotics-node

**Path**: `/home/user/ruvector/crates/agentic-robotics-node/`
**Line Count**: 236 lines (233 source + 3 build.rs)
**Complexity Estimate**: Medium
**Code Quality Rating**: B+

### File Inventory

| File | Lines | Purpose |
|------|-------|---------|
| `src/lib.rs` | 233 | NAPI bindings: AgenticNode, AgenticPublisher, AgenticSubscriber |
| `build.rs` | 3 | napi-build setup |

### Purpose

Provides Node.js/TypeScript bindings for the agentic-robotics-core pub/sub system via NAPI-RS. This enables JavaScript applications to create publishers and subscribers, publish JSON messages, and interact with the robotics middleware from Node.js.

### Architecture

The crate uses the `napi-derive` macro system to generate N-API bindings. Three main types are exposed to JavaScript:

```
AgenticNode (factory)
  |
  +-- create_publisher(topic) --> AgenticPublisher
  |     |
  |     +-- publish(json_string)
  |     +-- get_topic()
  |     +-- get_stats() --> PublisherStats { messages, bytes }
  |
  +-- create_subscriber(topic) --> AgenticSubscriber
  |     |
  |     +-- try_recv() --> Option<String>
  |     +-- recv() --> String (blocking via spawn_blocking)
  |     +-- get_topic()
  |
  +-- list_publishers() --> Vec<String>
  +-- list_subscribers() --> Vec<String>
  +-- get_name() --> String
  +-- get_version() --> String (static)
```

The NAPI boundary serializes all messages as JSON strings. The `AgenticNode` uses `Publisher<serde_json::Value>` with `Format::Json` explicitly (not CDR) because `serde_json::Value` cannot be CDR-serialized (CDR requires a fixed schema). This is the correct design decision for the JavaScript interop layer.

Internal state is managed with `Arc<RwLock<HashMap<String, Arc<T>>>>` for both publishers and subscribers, using `tokio::sync::RwLock` for async-safe access.

### API Surface (NAPI-exported)

| Class | Method | Async | Returns |
|-------|--------|-------|---------|
| `AgenticNode` | `new(name)` | No (constructor) | `AgenticNode` |
| `AgenticNode` | `get_name()` | No | `String` |
| `AgenticNode` | `create_publisher(topic)` | Yes | `AgenticPublisher` |
| `AgenticNode` | `create_subscriber(topic)` | Yes | `AgenticSubscriber` |
| `AgenticNode` | `get_version()` | No (static) | `String` |
| `AgenticNode` | `list_publishers()` | Yes | `Vec<String>` |
| `AgenticNode` | `list_subscribers()` | Yes | `Vec<String>` |
| `AgenticPublisher` | `publish(data)` | Yes | `void` |
| `AgenticPublisher` | `get_topic()` | No | `String` |
| `AgenticPublisher` | `get_stats()` | No | `PublisherStats` |
| `AgenticSubscriber` | `get_topic()` | No | `String` |
| `AgenticSubscriber` | `try_recv()` | Yes | `Option<String>` |
| `AgenticSubscriber` | `recv()` | Yes | `String` |
| `PublisherStats` | (object) | -- | `{ messages: i64, bytes: i64 }` |

### Dependency Analysis

| Dependency | Purpose | Weight |
|------------|---------|--------|
| `agentic-robotics-core` (path) | Core pub/sub types | Internal -- **actually used** |
| `napi` (workspace) | N-API runtime | Medium |
| `napi-derive` (workspace) | Proc macros for `#[napi]` | Medium (compile-time) |
| `tokio` (workspace) | Async runtime | Standard |
| `serde` + `serde_json` | JSON at NAPI boundary | Standard |
| `anyhow` (workspace) | Error handling | Light |
| `napi-build` (build-dep) | Build script support | Light (build-time) |

This is the only non-benchmark crate that actually uses `agentic-robotics-core` types (`Publisher`, `Subscriber`) in its source code.

### Code Quality Assessment

**Test Coverage**: 5 async tests:

- Node creation and name verification
- Publisher creation with topic verification
- Publish JSON and stats verification (message count = 1)
- Subscriber creation with topic verification
- Listing publishers after creating 2 publishers

Tests are clean and verify the full NAPI-exposed API surface. The publish test verifies end-to-end JSON serialization through the publisher.

**Error Handling**: NAPI errors are constructed via `Error::from_reason()` with descriptive messages. Both JSON parse errors and publish/receive errors are mapped to NAPI `Error` types. This is the correct pattern for NAPI bindings.

**Documentation**: Module-level doc comment, plus function-level comments on all `#[napi]` methods.

**Safety**: `#![deny(clippy::all)]` is enabled -- the only crate with an explicit clippy directive. No `unsafe` code (NAPI-RS generates the necessary unsafe FFI internally).

**Concerns**:

1. `PublisherStats` uses `i64` instead of `u64` because NAPI does not support unsigned 64-bit integers in JavaScript (BigInt would be needed). Stats will therefore overflow at 2^63 instead of 2^64 -- acceptable for practical use.
2. The `try_recv()` method is marked `async`, but the underlying `Subscriber::try_recv()` is synchronous. The async wrapper adds unnecessary overhead for a non-blocking operation.
3. The `recv()` method delegates to `Subscriber::recv_async()`, which uses `spawn_blocking` internally. This works but creates a double-async layering that could be simplified.

### Integration Points with ruvector

1. **Matches Existing NAPI Pattern**: ruvector already has `ruvector-node`, `ruvector-gnn-node`, etc. The agentic-robotics-node crate follows the same NAPI-RS pattern with `#[napi]` macros, the `cdylib` crate type, and a `napi-build` build script, so integration would follow established conventions.

2. **Unified Node.js API**: A combined NAPI module could expose both vector operations (search, insert, GNN inference) and robotics pub/sub in a single npm package, enabling Node.js applications to do real-time sensor-data processing with ML inference.

3. **JSON Bridge**: The JSON serialization at the NAPI boundary is compatible with ruvector's existing JSON-based APIs. Vector data could be passed as JSON arrays, matching the pattern already used here.

---

## Crate 6: agentic-robotics-benchmarks

**Path**: `/home/user/ruvector/crates/agentic-robotics-benchmarks/`
**Line Count**: 635 lines (all bench code)
**Complexity Estimate**: Low (benchmark harness code)
**Code Quality Rating**: B-

### File Inventory

| File | Lines | Purpose |
|------|-------|---------|
| `benches/message_serialization.rs` | 187 | CDR/JSON ser/deser benchmarks, size comparison, scaling |
| `benches/pubsub_latency.rs` | 193 | Publisher/subscriber creation, latency, throughput |
| `benches/executor_performance.rs` | 255 | Executor creation, task spawning, scheduling overhead |

### Purpose

Comprehensive Criterion benchmark suite covering the core and rt crates. Provides performance characterization for serialization, pub/sub operations, and executor task management.

### Benchmark Coverage

#### message_serialization.rs

| Benchmark Group | Benchmarks | Description |
|-----------------|------------|-------------|
| CDR Serialization | RobotState, Pose, PointCloud_1k | CDR encode with throughput tracking |
| CDR Deserialization | RobotState, Pose | CDR decode from pre-serialized bytes |
| JSON vs CDR | CDR_serialize, JSON_serialize, CDR_deserialize, JSON_deserialize | Head-to-head format comparison with size reporting |
| Message Size Scaling | PointCloud at 100, 1K, 10K, 100K points | Serialization scaling characteristics |

**Note**: This benchmark file references a `Pose` struct with `frame_id: String` and a `PointCloud` with `points: Vec<[f32; 3]>` and `frame_id: String`. These differ from the actual types in `agentic-robotics-core/src/message.rs`, where `Pose` has no `frame_id` field and `PointCloud` uses `Vec<Point3D>`, not `Vec<[f32; 3]>`. This benchmark will not compile against the current core crate without modification.

#### pubsub_latency.rs

| Benchmark Group | Benchmarks | Description |
|-----------------|------------|-------------|
| Publisher Creation | create_publisher | Publisher construction overhead |
| Subscriber Creation | create_subscriber | Subscriber construction overhead |
| Publish Latency | single_publish | Single message publish time |
| Publish Throughput | batch_publish (10, 100, 1000) | Burst publishing at different batch sizes |
| End-to-End Latency | pubsub_roundtrip | Full publish cycle (no actual receive) |
| Serializer Comparison | CDR_publish, JSON_publish | Publish with different formats |
| Concurrent Publishers | concurrent (1, 2, 4, 8) | Multiple publisher scaling |

**Note**: This benchmark references `Publisher::new(topic, Serializer::Cdr)` -- a two-argument constructor and a `Serializer::Cdr` enum variant. The actual core crate uses `Publisher::new(topic)` (single argument, defaulting to CDR) and `Serializer::new(Format::Cdr)` (a struct, not an enum). These API mismatches mean this benchmark will not compile against the current core crate.

#### executor_performance.rs

| Benchmark Group | Benchmarks | Description |
|-----------------|------------|-------------|
| Executor Creation | create_executor | Runtime initialization cost |
| Task Spawning | spawn_high_priority, spawn_low_priority | Per-task spawn overhead |
| Scheduler Overhead | priority_low/high, deadline_check_fast/slow | Scheduling decision cost |
| Task Distribution | spawn_tasks (10, 100, 1000) | Bulk task spawning with mixed priorities |
| Async Task Execution | execute_sync_task, execute_with_yield | Task execution overhead |
| Priority Handling | mixed_priorities | Interleaved priority levels |
| Deadline Distribution | tight_deadlines, loose_deadlines | High vs low priority runtime routing |

**Note**: This benchmark references `Priority::High`, `Priority::Medium`, and `Priority::Low` as enum variants and a `PriorityScheduler::should_use_high_priority()` method. The actual rt crate uses `Priority(pub u8)` as a newtype (not an enum), and `PriorityScheduler` has no `should_use_high_priority()` method. These API mismatches mean this benchmark will not compile against the current rt crate.
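Migrating the benchmarks therefore means switching from enum variants to the newtype form. A sketch of the described rt-crate shape (the constant names and values are assumptions, added only to show how the old `Low`/`Medium`/`High` call sites could map over):

```rust
/// Newtype priority as described for the rt crate: higher value = higher priority.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct Priority(pub u8);

impl Priority {
    // Illustrative levels for porting benchmarks off Priority::Low/Medium/High.
    pub const LOW: Priority = Priority(0);
    pub const MEDIUM: Priority = Priority(128);
    pub const HIGH: Priority = Priority(255);
}
```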

### Dependency Analysis

| Dependency | Purpose | Weight |
|------------|---------|--------|
| `agentic-robotics-core` (path) | Core types for benchmarking | Internal |
| `agentic-robotics-rt` (path) | RT types for benchmarking | Internal |
| `criterion` 0.5 | Benchmark framework | Medium |
| `tokio` 1.40 | Async runtime | Standard |
| `serde` 1.0 | Serialization | Standard |
| `serde_json` 1.0 | JSON | Standard |

**Notable**: Dependencies are specified with explicit versions (not workspace references), unlike the other crates, so they can drift from the workspace versions. The crate has `publish = false`.

### Code Quality Assessment

**Compilation Status**: The benchmarks will NOT compile against the current source crates due to multiple API mismatches:

1. `Pose` struct fields differ (benchmarks expect `frame_id`; the source has none)
2. `PointCloud` field types differ (`Vec<[f32; 3]>` vs `Vec<Point3D>`)
3. The `Publisher` constructor signature differs (2-arg vs 1-arg)
4. `Serializer` is used as an enum variant, not a struct
5. `Priority` is used as an enum, not a newtype
6. `PriorityScheduler::should_use_high_priority()` does not exist

This suggests the benchmarks were written against a different (possibly planned or previous) version of the API.

**Benchmark Design**: Despite the compilation issues, the benchmark structure is well-designed:

- Uses `Throughput::Bytes` for size-aware benchmarking
- Scaling benchmarks span multiple orders of magnitude (100 to 100K points)
- Concurrent publisher benchmarks test scaling characteristics
- `iter_custom` is used correctly for measuring async operation latency
- `black_box` is applied consistently to prevent dead-code elimination

**Concerns**:

1. The `benchmark_task_distribution` test creates a new `ROS3Executor` (with 6 threads) per iteration -- this is extremely expensive and will dominate the benchmark.
2. `futures::executor::block_on` is used in the pubsub benchmarks instead of Criterion's `b.to_async()` pattern, which adds overhead from blocking the thread.
3. There is no warm-up or steady-state verification for the executor benchmarks.

### Integration Points with ruvector

1. **Combined Benchmark Suite**: The benchmark patterns could be extended to test combined robotics + ML workloads, e.g., serialize PointCloud -> HNSW insert -> search -> publish results.

2. **Latency Profiling**: The serialization benchmarks provide a template for benchmarking ruvector's own serialization paths (vector encoding, index persistence).

3. **Scaling Characterization**: The message-size scaling pattern (100 to 100K points) applies directly to benchmarking vector search at different collection sizes.

---

## Cross-Crate Dependency Graph

```
agentic-robotics-benchmarks (publish=false)
  |
  +---> agentic-robotics-core
  +---> agentic-robotics-rt
          |
          +---> agentic-robotics-core

agentic-robotics-node
  |
  +---> agentic-robotics-core (ACTUALLY USED)

agentic-robotics-mcp
  |
  +---> agentic-robotics-core (declared but unused)

agentic-robotics-embedded
  |
  +---> agentic-robotics-core (declared but unused)

agentic-robotics-rt
  |
  +---> agentic-robotics-core (declared but unused)
```

**Key observation**: Only `agentic-robotics-node` actually imports and uses types from `agentic-robotics-core`; the other three crates (`rt`, `mcp`, `embedded`) declare it as a dependency but do not use it. The benchmarks crate references core types by the old crate names (`ros3_core`, `ros3_rt`), which suggests the crates were renamed from `ros3-*` to `agentic-robotics-*` without the benchmarks being updated.

---

## Overall Assessment

### Summary Table

| Crate | Lines | Tests | Quality | Compilable | Maturity |
|-------|-------|-------|---------|------------|----------|
| core | 705 | 8 | B | Yes | Alpha -- middleware stubbed |
| rt | 512 | 5 | B- | Yes | Alpha -- scheduler disconnected |
| mcp | 506 | 3 | B+ | Yes | Beta -- core protocol working |
| embedded | 41 | 1 | C | Yes | Skeleton -- no functionality |
| node | 236 | 5 | B+ | Yes (with napi) | Alpha -- working NAPI bindings |
| benchmarks | 635 | 0 | B- | **No** -- API mismatches | Broken -- needs API updates |

### Total Metrics

- **Total source lines**: 2,635 (including benchmarks)
- **Total tests**: 22
- **Unsafe code**: None across all crates
- **Broken crates**: 1 (benchmarks -- API mismatches with current source)
- **Unused dependencies**: 12 instances across all crates

### Strengths

1. **Clean type design**: The `Message` trait and `Publisher<T>`/`Subscriber<T>` generics are well-structured with proper `Send + Sync + 'static` bounds.
2. **Serialization flexibility**: CDR + JSON + rkyv (future) covers binary efficiency, debugging, and zero-copy use cases.
3. **LatencyTracker**: The HDR-histogram-based latency tracker is production-quality, with RAII guards and non-blocking recording.
4. **MCP implementation**: The MCP server is the most complete component, with proper JSON-RPC 2.0 handling and a clean tool-registration API.
5. **NAPI bindings**: Follow established patterns and work correctly with JSON serialization at the boundary.

### Weaknesses

1. **Placeholder code**: The Zenoh middleware, rkyv serialization, Service client, and embedded support are all stubs.
2. **Disconnected components**: The PriorityScheduler is never used by the executor, and the core crate is declared as a dependency by 4 crates but actually used by only 1.
3. **API drift**: The benchmarks reference a different API surface than the source provides, indicating either that the API changed after the benchmarks were written or that the benchmarks target a planned future API.
4. **Shallow testing**: Tests cover happy paths only -- no error-path testing, no concurrent-access testing, no integration tests across crates.
5. **Dependency bloat**: `zenoh` and `rustdds` in core are heavyweight dependencies that are never used, and multiple crates declare `crossbeam`, `rayon`, `thiserror`, or `tracing` without importing them.

### Safety Assessment

| Concern | Status |
|---------|--------|
| Unsafe code | None -- all crates are safe Rust |
| Thread safety | Proper use of Arc, Mutex, RwLock throughout |
| Error handling | thiserror/anyhow pattern, but some `.unwrap()` in the MCP server |
| Panic risk | `Default::default()` on ROS3Executor uses `.expect()` |
| Memory safety | No raw pointers, no manual memory management |
| Input validation | Minimal -- the MCP server validates JSON structure but not content |

---

## Integration Roadmap for ruvector

### Phase 1: Direct Utility (No modification needed)

1. **LatencyTracker adoption**: Import `agentic-robotics-rt::LatencyTracker` into ruvector's profiling infrastructure for HDR-histogram-based latency monitoring of HNSW search, GNN inference, and attention computation.
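   The RAII-guard recording pattern behind that tracker can be sketched std-only (the real `LatencyTracker` records into an HDR histogram; this illustrative stand-in pushes raw nanoseconds into a `Vec`):

   ```rust
   use std::sync::{Arc, Mutex};
   use std::time::Instant;

   // Std-only sketch of the RAII latency-guard pattern; names mirror the doc.
   #[derive(Clone, Default)]
   struct LatencyTracker {
       samples: Arc<Mutex<Vec<u128>>>,
   }

   impl LatencyTracker {
       fn guard(&self) -> LatencyGuard {
           LatencyGuard { start: Instant::now(), tracker: self.clone() }
       }
       fn count(&self) -> usize {
           self.samples.lock().unwrap().len()
       }
   }

   struct LatencyGuard {
       start: Instant,
       tracker: LatencyTracker,
   }

   impl Drop for LatencyGuard {
       fn drop(&mut self) {
           // Record on scope exit, so every measured region is captured exactly once.
           self.tracker.samples.lock().unwrap().push(self.start.elapsed().as_nanos());
       }
   }
   ```

   Wrapping an HNSW search or GNN forward pass then reduces to holding a guard for the duration of the call.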
2. **MCP tool exposure**: Use `agentic-robotics-mcp::McpServer` to expose ruvector capabilities as MCP tools:
   - `vector_search` -- HNSW nearest-neighbor queries
   - `vector_insert` -- Add vectors to collections
   - `gnn_inference` -- Graph neural network forward pass
   - `attention_compute` -- Multi-head/flash attention
   - `collection_stats` -- Index statistics and health

### Phase 2: Adapter Layer (Thin wrappers)

3. **PointCloud <-> Vector adapter**: Create a bidirectional conversion between `PointCloud` (robotics 3D sensor data) and ruvector's vector types. This enables real-time HNSW indexing of LiDAR/depth sensor data:

   ```rust
   impl From<&PointCloud> for Vec<[f32; 3]> { ... }
   impl From<Vec<[f32; 3]>> for PointCloud { ... }
   ```

4. **Message trait for vector types**: Implement `Message` on ruvector's core vector/embedding types to enable pub/sub distribution.

5. **NAPI unification**: Combine `agentic-robotics-node` with `ruvector-node` into a single npm package, or create an interop layer.

### Phase 3: Deep Integration (Architectural changes)

6. **Dual-runtime for ML workloads**: Adapt the ROS3Executor pattern for ruvector:
   - High-priority runtime: real-time inference queries (< 1ms SLA)
   - Low-priority runtime: batch indexing, model training, compaction
   - Connect the PriorityScheduler so it actually routes tasks by deadline
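The deadline-based routing in item 6 can be sketched std-only (two single-threaded worker "lanes" standing in for the two tokio runtimes; all names and the 1ms threshold are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Minimal dual-lane sketch: route closures to a high- or low-priority worker
// based on the caller-declared deadline, mirroring the split described above.
struct DualRuntime {
    high: mpsc::Sender<Box<dyn FnOnce() + Send>>,
    low: mpsc::Sender<Box<dyn FnOnce() + Send>>,
}

impl DualRuntime {
    fn new() -> Self {
        DualRuntime { high: spawn_worker(), low: spawn_worker() }
    }

    fn spawn(&self, deadline: Duration, task: Box<dyn FnOnce() + Send>) {
        // Tasks with sub-millisecond deadlines go to the high-priority lane.
        let lane = if deadline < Duration::from_millis(1) { &self.high } else { &self.low };
        lane.send(task).expect("worker alive");
    }
}

fn spawn_worker() -> mpsc::Sender<Box<dyn FnOnce() + Send>> {
    let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() + Send>>();
    thread::spawn(move || for task in rx { task() });
    tx
}
```

In the real integration the lanes would be two tokio runtimes with different thread counts and priorities, as the ROS3Executor pattern prescribes.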

7. **Zenoh-based distributed vectors**: Once the Zenoh middleware is completed, use it for distributed vector-index replication (similar to ruvector-replication, but over Zenoh instead of custom protocols).

8. **CDR serialization for vector wire format**: Use CDR as the wire format for vector data in DDS-compatible robotics environments, enabling direct integration with ROS2 systems.

### Recommended Priority Order

1. LatencyTracker (immediate value, no risk)
2. MCP tool exposure (high value for agentic workflows)
3. PointCloud adapter (enables robotics use cases)
4. NAPI unification (reduces maintenance burden)
5. Dual-runtime (significant architectural benefit, higher risk)
6. Zenoh distributed vectors (depends on Zenoh middleware completion)

### Prerequisites Before Integration

- Fix benchmark compilation errors (update to the current API)
- Remove unused dependencies from all crates (zenoh, rustdds, rayon, crossbeam where unused)
- Connect the PriorityScheduler to the ROS3Executor
- Complete the rkyv serialization implementation or remove the stub
- Add error-path tests across all crates
- Resolve the `ros3_core`/`ros3_rt` crate-name references in the benchmarks
769
vendor/ruvector/docs/research/agentic-robotics/integration-roadmap.md
vendored
Normal file

# Integration Roadmap: agentic-robotics into ruvector

**Document Class:** Implementation Plan
**Version:** 1.0.0
**Date:** 2026-02-27
**Timeline:** 18 weeks (6 phases)

---

## Phase 0: Foundation (Week 1-2)

### Objective

Integrate the agentic-robotics crates into the ruvector workspace with clean builds and passing tests.

### 0.1 Workspace Integration

Add the 6 agentic-robotics crates as workspace members. Required changes to the root `Cargo.toml`:

```toml
[workspace]
members = [
    # ... existing 114 members ...
    # Agentic Robotics Integration
    "crates/agentic-robotics-core",
    "crates/agentic-robotics-rt",
    "crates/agentic-robotics-mcp",
    "crates/agentic-robotics-embedded",
    "crates/agentic-robotics-node",
    "crates/agentic-robotics-benchmarks",
]
```

### 0.2 Dependency Resolution

New workspace dependencies to add:

```toml
[workspace.dependencies]
# Robotics middleware
zenoh = "1.0"
rustdds = "0.11"
cdr = "0.2"
hdrhistogram = "7.5"
wide = "0.7"
```

### 0.3 Version Alignment

| Dependency | Current ruvector | agentic-robotics | Action |
|-----------|-----------------|-----------------|--------|
| tokio | 1.41 | 1.47 | Upgrade to 1.47 |
| napi | 2.16 | 3.0 | Keep separate initially |
| thiserror | 2.0 | 2.0 | No action |
| rkyv | 0.8 | 0.8 | No action |

### 0.4 Crate Cargo.toml Adaptation

Each agentic-robotics crate needs its Cargo.toml updated to reference ruvector workspace dependencies instead of its own workspace:

```toml
# Before (agentic-robotics workspace)
[dependencies]
tokio = { workspace = true }

# After (ruvector workspace - add explicit versions where needed)
[dependencies]
tokio = { version = "1.47", features = ["full", "rt-multi-thread", "time"] }
```

Better still: add the robotics-specific deps to the ruvector `[workspace.dependencies]` section.

### 0.5 CI Pipeline Updates

Add to `.github/workflows/`:

- Build the agentic-robotics crates in CI
- Run the agentic-robotics tests
- Feature-gate robotics builds to avoid slowing the default CI

### Deliverables

- [ ] All 6 crates compile within the ruvector workspace
- [ ] All existing ruvector tests still pass
- [ ] All agentic-robotics tests pass
- [ ] CI pipeline updated

---

## Phase 1: Bridge Layer (Week 3-4)

### Objective

Create an adapter crate that converts between agentic-robotics and ruvector data types.

### New Crate: `ruvector-robotics-bridge`

```toml
[package]
name = "ruvector-robotics-bridge"
version.workspace = true
edition.workspace = true

[dependencies]
agentic-robotics-core = { path = "../agentic-robotics-core" }
ruvector-core = { path = "../ruvector-core", features = ["storage"] }
tokio = { workspace = true }
serde = { workspace = true }
tracing = { workspace = true }
```

### 1.1 Data Type Converters

```rust
//! src/converters.rs

use agentic_robotics_core::message::{PointCloud, Point3D, RobotState, Pose};

/// Convert a PointCloud to a Vec of 3D vectors for HNSW indexing
pub fn pointcloud_to_vectors(cloud: &PointCloud) -> Vec<Vec<f32>> {
    cloud.points.iter()
        .map(|p| vec![p.x, p.y, p.z])
        .collect()
}

/// Convert a PointCloud to a Vec of f64 vectors (ruvector native format)
pub fn pointcloud_to_f64_vectors(cloud: &PointCloud) -> Vec<Vec<f64>> {
    cloud.points.iter()
        .map(|p| vec![p.x as f64, p.y as f64, p.z as f64])
        .collect()
}

/// Convert a RobotState to a 7D feature vector [px, py, pz, vx, vy, vz, t]
pub fn state_to_vector(state: &RobotState) -> Vec<f64> {
    let mut v = Vec::with_capacity(7);
    v.extend_from_slice(&state.position);
    v.extend_from_slice(&state.velocity);
    v.push(state.timestamp as f64);
    v
}

/// Convert a Pose to a 7D feature vector [px, py, pz, qx, qy, qz, qw]
pub fn pose_to_vector(pose: &Pose) -> Vec<f64> {
    let mut v = Vec::with_capacity(7);
    v.extend_from_slice(&pose.position);
    v.extend_from_slice(&pose.orientation);
    v
}

/// Convert HNSW search results back to Point3D
pub fn vectors_to_points(vectors: &[Vec<f64>]) -> Vec<Point3D> {
    vectors.iter()
        .map(|v| Point3D {
            x: v[0] as f32,
            y: v[1] as f32,
            z: v.get(2).copied().unwrap_or(0.0) as f32,
        })
        .collect()
}
```
|
||||
### 1.2 Auto-Indexing Subscriber

```rust
//! src/indexing.rs

use agentic_robotics_core::subscriber::Subscriber;
use agentic_robotics_core::message::PointCloud;
use ruvector_core::vector_db::VectorDb;
use std::sync::Arc;
use tokio::sync::RwLock;

/// Subscriber that automatically indexes incoming PointCloud data
pub struct IndexingSubscriber {
    subscriber: Subscriber<PointCloud>,
    db: Arc<RwLock<VectorDb>>,
    frame_count: u64,
}

impl IndexingSubscriber {
    pub fn new(topic: &str, db: Arc<RwLock<VectorDb>>) -> Self {
        Self {
            subscriber: Subscriber::new(topic),
            db,
            frame_count: 0,
        }
    }

    /// Run the indexing loop (call from RT executor)
    pub async fn run(&mut self) {
        loop {
            match self.subscriber.recv_async().await {
                Ok(cloud) => {
                    let vectors = super::converters::pointcloud_to_f64_vectors(&cloud);
                    let mut db = self.db.write().await;
                    for vector in &vectors {
                        let _ = db.insert(vector);
                    }
                    self.frame_count += 1;
                    tracing::debug!("Indexed frame {} ({} points)", self.frame_count, vectors.len());
                }
                Err(e) => {
                    tracing::warn!("Subscriber error: {}", e);
                }
            }
        }
    }
}
```

### 1.3 Search Publisher

```rust
//! src/search.rs

use agentic_robotics_core::publisher::Publisher;
use agentic_robotics_core::message::Message;
use ruvector_core::vector_db::VectorDb;
use serde::{Serialize, Deserialize};

/// Search result message published back to robot topics
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SearchResult {
    pub query_id: u64,
    pub neighbors: Vec<Neighbor>,
    pub latency_us: u64,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Neighbor {
    pub id: u64,
    pub distance: f64,
    pub vector: Vec<f64>,
}

impl Message for SearchResult {
    fn type_name() -> &'static str {
        "ruvector_msgs/SearchResult"
    }
}
```

### Deliverables
- [ ] `ruvector-robotics-bridge` crate created
- [ ] Converter functions with unit tests
- [ ] IndexingSubscriber with integration test
- [ ] SearchResult message type defined
- [ ] Documentation with usage examples

---

## Phase 2: Perception Pipeline (Week 5-7)

### Objective
Build GNN-based perception using ruvector ML modules with RT scheduling.

### New Crate: `ruvector-robotics-perception`

### 2.1 Scene Graph Builder

```rust
//! Convert PointCloud + obstacles into ruvector graph for GNN processing

pub struct SceneGraphBuilder {
    adjacency_radius: f64,
    max_nodes: usize,
}

impl SceneGraphBuilder {
    pub fn build(&self, cloud: &PointCloud, obstacles: &[Obstacle]) -> SceneGraph {
        let mut graph = SceneGraph::new();

        // Add obstacle nodes with spatial features
        for (i, obs) in obstacles.iter().enumerate() {
            graph.add_node(i, obs.to_features());
        }

        // Add edges based on spatial proximity
        for i in 0..obstacles.len() {
            for j in (i + 1)..obstacles.len() {
                let dist = distance(&obstacles[i].position, &obstacles[j].position);
                if dist < self.adjacency_radius {
                    graph.add_edge(i, j, dist);
                }
            }
        }

        graph
    }
}
```

### 2.2 RT-Scheduled Inference

```rust
//! Schedule GNN inference on the high-priority runtime

use std::time::Duration;

use agentic_robotics_rt::{ROS3Executor, Priority, Deadline};

pub struct PerceptionEngine {
    executor: ROS3Executor,
    scene_builder: SceneGraphBuilder,
}

impl PerceptionEngine {
    /// Process point cloud with deadline-aware inference
    pub fn process(&self, cloud: PointCloud, deadline_us: u64) {
        let builder = self.scene_builder.clone();
        let deadline = Duration::from_micros(deadline_us);

        self.executor.spawn_rt(
            Priority(3),
            Deadline(deadline),
            async move {
                let scene = builder.build(&cloud, &[]);
                // GNN forward pass here
                tracing::info!("Scene processed: {} nodes", scene.node_count());
            },
        );
    }
}
```

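The "attention-weighted decision fusion" deliverable below can be sketched with plain softmax weighting: each sensor channel proposes a decision vector with a relevance score, and the fused decision is the weight-blended sum. This is a minimal self-contained illustration, not the ruvector attention API.

```rust
/// Softmax over relevance scores (max-subtracted for numerical stability).
fn softmax(scores: &[f64]) -> Vec<f64> {
    let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Fuse per-channel decision vectors by their softmax attention weights.
fn fuse(decisions: &[Vec<f64>], scores: &[f64]) -> Vec<f64> {
    let weights = softmax(scores);
    let dim = decisions[0].len();
    let mut fused = vec![0.0; dim];
    for (d, w) in decisions.iter().zip(&weights) {
        for (out, x) in fused.iter_mut().zip(d) {
            *out += w * x;
        }
    }
    fused
}

fn main() {
    // Lidar strongly suggests "steer left" (negative), camera weakly suggests right.
    let decisions = vec![vec![-1.0, 0.2], vec![1.0, 0.1]];
    let fused = fuse(&decisions, &[2.0, 0.0]);
    assert!(fused[0] < 0.0); // higher-scoring lidar channel dominates
    println!("fused decision: {:?}", fused);
}
```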
### Deliverables
- [ ] SceneGraphBuilder from PointCloud data
- [ ] GNN-based object classification pipeline
- [ ] RT-scheduled inference with latency tracking
- [ ] Attention-weighted decision fusion
- [ ] Benchmark: sensor-to-decision < 2ms target

---

## Phase 3: MCP Tool Exposure (Week 8-9)

### Objective
Register ruvector capabilities as MCP tools accessible to AI assistants.

### 3.1 Tool Definitions

Register these 10 tools with the MCP server:

```rust
use agentic_robotics_mcp::{McpServer, McpTool, tool, text_response};
use serde_json::json;

pub async fn register_ruvector_tools(server: &McpServer) -> anyhow::Result<()> {
    // 1. Vector Search
    server.register_tool(
        McpTool {
            name: "vector_search".into(),
            description: "Search for nearest vectors in HNSW index".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "query": { "type": "array", "items": { "type": "number" } },
                    "k": { "type": "integer", "default": 10 },
                    "metric": { "type": "string", "enum": ["l2", "cosine", "dot"] }
                },
                "required": ["query"]
            }),
        },
        tool(|args| {
            let query: Vec<f64> = serde_json::from_value(args["query"].clone())?;
            let k = args["k"].as_u64().unwrap_or(10) as usize;
            // Perform HNSW search with `query`
            let _ = query;
            Ok(text_response(format!("Found {} nearest neighbors", k)))
        }),
    ).await?;

    // 2. GNN Classify
    server.register_tool(
        McpTool {
            name: "gnn_classify".into(),
            description: "Classify a graph structure using GNN".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "nodes": { "type": "array" },
                    "edges": { "type": "array" }
                }
            }),
        },
        tool(|_args| Ok(text_response("Classification: obstacle"))),
    ).await?;

    // 3. Attention Focus
    server.register_tool(
        McpTool {
            name: "attention_focus".into(),
            description: "Apply attention to select important features".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "features": { "type": "array", "items": { "type": "array" } },
                    "query": { "type": "array", "items": { "type": "number" } }
                }
            }),
        },
        tool(|_args| Ok(text_response("Attention weights computed"))),
    ).await?;

    // 4. Trajectory Predict
    server.register_tool(
        McpTool {
            name: "trajectory_predict".into(),
            description: "Predict future trajectory from state history".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "states": { "type": "array" },
                    "horizon": { "type": "integer", "default": 10 }
                }
            }),
        },
        tool(|_args| Ok(text_response("Trajectory predicted: 10 waypoints"))),
    ).await?;

    // 5. Scene Graph Analyze
    server.register_tool(
        McpTool {
            name: "scene_analyze".into(),
            description: "Build and analyze scene graph from sensor data".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "points": { "type": "array" },
                    "radius": { "type": "number", "default": 1.0 }
                }
            }),
        },
        tool(|_args| Ok(text_response("Scene: 12 objects, 3 obstacles"))),
    ).await?;

    // 6. Anomaly Detect
    server.register_tool(
        McpTool {
            name: "anomaly_detect".into(),
            description: "Detect anomalies using sparse inference".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "data": { "type": "array", "items": { "type": "number" } },
                    "threshold": { "type": "number", "default": 0.95 }
                }
            }),
        },
        tool(|_args| Ok(text_response("No anomalies detected (score: 0.12)"))),
    ).await?;

    // 7. Memory Store
    server.register_tool(
        McpTool {
            name: "memory_store".into(),
            description: "Store an experience/episode in AgentDB memory".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "key": { "type": "string" },
                    "content": { "type": "string" },
                    "embedding": { "type": "array", "items": { "type": "number" } }
                },
                "required": ["key", "content"]
            }),
        },
        tool(|_args| Ok(text_response("Episode stored successfully"))),
    ).await?;

    // 8. Memory Recall
    server.register_tool(
        McpTool {
            name: "memory_recall".into(),
            description: "Recall similar memories via semantic search".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "query": { "type": "string" },
                    "k": { "type": "integer", "default": 5 }
                },
                "required": ["query"]
            }),
        },
        tool(|_args| Ok(text_response("Recalled 5 similar episodes"))),
    ).await?;

    // 9. Cluster Status
    server.register_tool(
        McpTool {
            name: "cluster_status".into(),
            description: "Get distributed cluster health and status".into(),
            input_schema: json!({
                "type": "object",
                "properties": {}
            }),
        },
        tool(|_| Ok(text_response("Cluster: 3 nodes, leader: node-0, healthy"))),
    ).await?;

    // 10. Model Update
    server.register_tool(
        McpTool {
            name: "model_update".into(),
            description: "Trigger online model fine-tuning with new data".into(),
            input_schema: json!({
                "type": "object",
                "properties": {
                    "model_id": { "type": "string" },
                    "training_data": { "type": "array" }
                },
                "required": ["model_id"]
            }),
        },
        tool(|args| {
            let model_id = args["model_id"].as_str().unwrap_or("default");
            Ok(text_response(format!("Model {} update queued", model_id)))
        }),
    ).await?;

    Ok(())
}
```

### Deliverables
- [ ] 10 MCP tools registered and functional
- [ ] Each tool backed by actual ruvector operations
- [ ] MCP tool integration tests
- [ ] TypeScript example using tools via MCP client

---

## Phase 4: Cognitive Robotics (Week 10-13)

### Objective
Integrate ruvector's cognitive architecture modules for autonomous robot intelligence.

### 4.1 Nervous System Integration

Connect `ruvector-nervous-system` to the robot sensing/actuation cycle:
- Sensory cortex: processes PointCloud and sensor data
- Motor cortex: generates movement commands
- Prefrontal cortex: planning and decision making
- Hippocampus: spatial memory via HNSW index
- Cerebellum: fine motor control calibration

### 4.2 SONA Self-Learning

Integrate the `sona` crate for autonomous skill acquisition:
- Robot experiences stored as episodes in AgentDB
- Self-optimizing neural architecture learns from rewards
- Policy improvement without manual retraining
- Transfer learning across robot configurations

### 4.3 Economy System

Use `ruvector-economy-wasm` for resource-aware planning:
- Energy budget for robot operations
- Computational budget for inference tasks
- Task prioritization based on resource availability
- Multi-robot resource negotiation

### 4.4 Delta Consensus for Multi-Robot

Use `ruvector-delta-consensus` for fleet coordination:
- Shared world model across robot fleet
- Consistent task assignment via distributed consensus
- Fault-tolerant operation with Raft leader election
- Incremental state synchronization

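The incremental-synchronization idea can be sketched with a version-tagged delta log that each robot applies idempotently. The types below are illustrative stand-ins, not the `ruvector-delta-consensus` API:

```rust
use std::collections::HashMap;

/// A single delta: one key updated to a new value at a logical version.
#[derive(Debug, Clone)]
struct Delta {
    version: u64,
    key: String,
    value: f64,
}

/// Minimal shared world model that applies deltas incrementally and in order.
#[derive(Default)]
struct WorldModel {
    version: u64,
    state: HashMap<String, f64>,
}

impl WorldModel {
    /// Apply only deltas newer than the current version; returns how many applied.
    fn apply(&mut self, deltas: &[Delta]) -> usize {
        let mut applied = 0;
        for d in deltas {
            if d.version > self.version {
                self.state.insert(d.key.clone(), d.value);
                self.version = d.version;
                applied += 1;
            }
        }
        applied
    }
}

fn main() {
    let mut local = WorldModel::default();
    let log = vec![
        Delta { version: 1, key: "robot_1.x".into(), value: 0.5 },
        Delta { version: 2, key: "robot_2.x".into(), value: 3.0 },
    ];
    // First sync applies both deltas; replaying the same log applies none.
    assert_eq!(local.apply(&log), 2);
    assert_eq!(local.apply(&log), 0);
    println!("world model at version {}", local.version);
}
```

Because replay is a no-op, robots can re-send delta batches after a partition without corrupting the shared model.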
### Deliverables
- [ ] Nervous system integration crate
- [ ] SONA learning pipeline for robot skills
- [ ] Resource-aware task planner
- [ ] Multi-robot consensus coordination
- [ ] Demo: autonomous navigation with learning

---

## Phase 5: Edge Deployment (Week 14-16)

### Objective
Optimize for constrained deployment targets.

### 5.1 WASM Builds
- `ruvector-robotics-bridge-wasm` for browser-based simulation
- Web-based robot control interface
- WASM-compiled GNN for edge inference

### 5.2 Embedded Deployment
- Integrate `agentic-robotics-embedded` with `rvlite`
- Minimal vector search on ARM Cortex-M
- `ruvector-sparse-inference` for compact models
- Feature-flag everything non-essential

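Feature-flagging the non-essential pieces could look like the following Cargo sketch, using optional dependencies behind `dep:` features; the crate and feature names are illustrative, not the final layout:

```toml
# Hypothetical umbrella-crate manifest fragment; names are illustrative.
[features]
default = []
robotics = ["dep:agentic-robotics-core"]
robotics-rt = ["robotics", "dep:agentic-robotics-rt"]
robotics-mcp = ["robotics", "dep:agentic-robotics-mcp"]

[dependencies]
agentic-robotics-core = { path = "../agentic-robotics-core", optional = true }
agentic-robotics-rt = { path = "../agentic-robotics-rt", optional = true }
agentic-robotics-mcp = { path = "../agentic-robotics-mcp", optional = true }
```

With this layout, `cargo build` stays lean by default and `cargo build --features robotics-rt` pulls in only the real-time crates.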
### 5.3 FPGA Acceleration
- `ruvector-fpga-transformer` for hardware-accelerated inference
- Custom attention kernels for real-time processing
- FPGA + ARM SoC deployment profile

### 5.4 Resource Profiles

| Profile | Memory | CPU | Capabilities |
|---------|--------|----------|---------------------------|
| Full | 1GB+ | 4+ cores | All features |
| Standard | 256MB | 2 cores | Core + GNN + search |
| Lite | 64MB | 1 core | Search + sparse inference |
| Embedded | 4MB | MCU | Minimal vector search |

### Deliverables
- [ ] WASM build for browser simulation
- [ ] Embedded build for ARM targets
- [ ] FPGA deployment configuration
- [ ] Resource profile documentation

---

## Phase 6: Benchmarking and Validation (Week 17-18)

### Objective
Comprehensive performance validation of the integrated platform.

### 6.1 Latency Targets

| Pipeline | Target | Measurement Method |
|----------|--------|--------------------|
| Sensor ingestion | <1us | CDR deserialization benchmark |
| PointCloud -> vectors | <10us | Conversion benchmark |
| HNSW search (10K vectors) | <500us | Criterion benchmark |
| GNN inference (small graph) | <1ms | RT-scheduled benchmark |
| End-to-end sensor->decision | <2ms | Full pipeline benchmark |
| MCP tool call | <5ms | JSON-RPC round-trip |

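A minimal percentile-latency harness for these targets can be sketched with `std::time::Instant`; this is a stand-in for the Criterion/hdrhistogram tooling the plan calls for, and the workload shown is an assumed example:

```rust
use std::time::Instant;

/// Measure a closure `iterations` times and report (p50, p99) in microseconds.
fn measure<F: FnMut()>(mut f: F, iterations: usize) -> (u128, u128) {
    let mut samples: Vec<u128> = Vec::with_capacity(iterations);
    for _ in 0..iterations {
        let start = Instant::now();
        f();
        samples.push(start.elapsed().as_micros());
    }
    samples.sort_unstable();
    let p50 = samples[iterations / 2];
    let p99 = samples[(iterations * 99) / 100];
    (p50, p99)
}

fn main() {
    // Stand-in workload: converting 10K points to f64 vectors.
    let points: Vec<[f32; 3]> = (0..10_000).map(|i| [i as f32, 0.0, 0.0]).collect();
    let (p50, p99) = measure(
        || {
            let _vectors: Vec<Vec<f64>> = points
                .iter()
                .map(|p| vec![p[0] as f64, p[1] as f64, p[2] as f64])
                .collect();
        },
        100,
    );
    println!("PointCloud -> vectors: p50={}us p99={}us", p50, p99);
}
```

Reporting p99 rather than the mean matters here: an RT pipeline misses deadlines on its tail latency, not its average.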
### 6.2 Throughput Targets

| Operation | Target | Conditions |
|-----------|--------|------------|
| Message serialization | >1M/sec | CDR format |
| Vector insertions | >100K/sec | During RT control |
| Vector searches | >10K/sec | Concurrent with control |
| GNN classifications | >500/sec | RT-scheduled |

### 6.3 Memory Targets

| Deployment | Target | Includes |
|------------|--------|----------|
| Full platform | <500MB | All modules loaded |
| Core + perception | <200MB | Without cognitive |
| Edge deployment | <100MB | rvlite + sparse |
| Embedded | <4MB | Minimal search |

### 6.4 Competitive Comparison

| Metric | ruvector+robotics | ROS2+PyTorch | Isaac Sim | Drake |
|--------|-------------------|--------------|-----------|-------|
| Sensor->Decision | <2ms | 10-50ms | GPU-dependent | 5-20ms |
| Memory (edge) | <100MB | >1GB | >4GB | >500MB |
| ML native | Yes | Via bridge | CUDA only | No |
| MCP support | Yes | No | No | No |
| WASM deploy | Yes | No | No | No |
| Language safety | Rust | C++/Python | Python | C++ |

### Deliverables
- [ ] Complete benchmark suite (Criterion)
- [ ] CI-gated performance regression tests
- [ ] Competitive comparison report
- [ ] Performance optimization guide

---

## Dependency Resolution Table

| Dependency | ruvector Version | agentic-robotics Version | Unified Version | Notes |
|------------|------------------|--------------------------|-----------------|-------|
| tokio | 1.41 | 1.47 | 1.47 | Minor bump, compatible |
| serde | 1.0 | 1.0 | 1.0 | Identical |
| serde_json | 1.0 | 1.0 | 1.0 | Identical |
| rkyv | 0.8 | 0.8 | 0.8 | Identical |
| crossbeam | 0.8 | 0.8 | 0.8 | Identical |
| rayon | 1.10 | 1.10 | 1.10 | Identical |
| parking_lot | 0.12 | 0.12 | 0.12 | Identical |
| nalgebra | 0.33 | 0.33 | 0.33 | Unify features |
| napi | 2.16 | 3.0 | Separate | Coordinate upgrade later |
| napi-derive | 2.16 | 3.0 | Separate | Coordinate upgrade later |
| criterion | 0.5 | 0.5 | 0.5 | Identical |
| thiserror | 2.0 | 2.0 | 2.0 | Identical |
| anyhow | 1.0 | 1.0 | 1.0 | Identical |
| tracing | 0.1 | 0.1 | 0.1 | Identical |
| rand | 0.8 | 0.8 | 0.8 | Identical |
| zenoh | N/A | 1.0 | 1.0 | New addition |
| rustdds | N/A | 0.11 | 0.11 | New addition |
| cdr | N/A | 0.2 | 0.2 | New addition |
| hdrhistogram | N/A | 7.5 | 7.5 | New addition |
| wide | N/A | 0.7 | 0.7 | New addition |

---

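The unified versions above would land in the root manifest so member crates can inherit them. A possible `[workspace.dependencies]` fragment, with feature lists assumed for illustration:

```toml
[workspace.dependencies]
tokio = { version = "1.47", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
rkyv = "0.8"
nalgebra = "0.33"
# New additions from agentic-robotics, feature-gated at the crate level
zenoh = "1.0"
rustdds = "0.11"
cdr = "0.2"
hdrhistogram = "7.5"
wide = "0.7"
```

Member crates then declare `tokio = { workspace = true }` and pick up the single resolved version.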
## Risk Register

| # | Risk | Probability | Impact | Mitigation |
|---|------|-------------|--------|------------|
| 1 | Zenoh dependency tree adds 50+ transitive deps | High | Medium | Feature-gate behind `robotics` flag |
| 2 | Tokio version mismatch causes runtime conflicts | Low | High | Upgrade to 1.47 in Phase 0 |
| 3 | NAPI 2.16 vs 3.0 prevents unified Node.js package | Medium | Medium | Separate npm packages initially |
| 4 | Combined workspace compile time exceeds CI limits | High | Medium | Incremental builds, feature flags, split CI |
| 5 | Zenoh runtime conflicts with ruvector async code | Low | High | Isolate Zenoh in dedicated Tokio runtime |
| 6 | GNN inference exceeds RT deadline budget | Medium | High | Model quantization, early exit, async fallback |
| 7 | Memory pressure from combined modules on edge | Medium | Medium | rvlite profile, lazy module loading |
| 8 | Benchmark API drift in agentic-robotics-benchmarks | High | Low | Fix benchmarks in Phase 0 |
| 9 | MCP tool handlers need async but current API is sync | Medium | Medium | Add `AsyncToolHandler` variant |
| 10 | CDR serialization overhead for large vector payloads | Low | Low | Use rkyv for internal paths, CDR only at boundary |
| 11 | Embedded targets incompatible with ruvector std deps | Medium | Medium | Strict no_std boundary in rvlite |
| 12 | Multi-robot consensus overhead exceeds latency budget | Low | Medium | Async consensus, eventual consistency |

---

## Success Metrics

### Phase 0
- All 6 agentic-robotics crates compile in the ruvector workspace
- Zero test regressions in existing ruvector tests
- CI pipeline includes robotics builds

### Phase 1
- Bridge crate converts all 3 message types (PointCloud, RobotState, Pose)
- IndexingSubscriber indexes 10K points/frame at >100 FPS
- Search latency <500us for 10K indexed vectors

### Phase 2
- End-to-end perception pipeline: sensor -> GNN -> decision
- Inference latency <1ms on standard hardware
- Attention mechanism reduces feature space by >50%

### Phase 3
- 10 MCP tools registered and callable
- Tool call round-trip <5ms
- TypeScript client can invoke all tools

### Phase 4
- Robot learns a new navigation skill from 100 episodes
- Multi-robot fleet maintains a consistent world model
- Resource-aware planner reduces energy usage by >20%

### Phase 5
- WASM build under 5MB
- Embedded build under 4MB
- FPGA inference <100us

### Phase 6
- All latency targets met
- All throughput targets met
- Performance regression CI gates active

---

## Open Questions

1. **Zenoh vs custom transport**: Should we use Zenoh for all inter-module communication, or keep crossbeam channels for intra-process messaging and use Zenoh only for inter-process?

2. **NAPI version strategy**: Should we upgrade all ruvector `*-node` crates to napi 3.0 in Phase 0, or maintain version separation?

3. **Workspace partitioning**: Should agentic-robotics crates live under `crates/agentic-robotics-*/` or be renamed to `crates/ruvector-robotics-*/` for consistency?

4. **Feature flag granularity**: One `robotics` feature flag or separate flags per capability (`robotics-core`, `robotics-rt`, `robotics-mcp`)?

5. **GNN model format**: What format for pre-trained GNN models? ONNX? A custom ruvector format? In-memory only?

6. **MCP async handlers**: The current `ToolHandler` type is synchronous. Should we extend it with an `AsyncToolHandler` for ruvector operations that are inherently async?

7. **Testing strategy**: Integration tests between robotics and ML modules -- how do we mock sensor data realistically?

8. **Multi-robot consensus protocol**: Raft (agentic-robotics-core via Zenoh) vs Delta Consensus (ruvector-delta-consensus)? Or both, for different consistency levels?

9. **WASM deployment scope**: Which ruvector modules should be available in the WASM robotics build? Full GNN or inference-only?

10. **Formal verification**: Can `lean-agentic` verify safety properties of the combined robotics+ML pipeline?
436
vendor/ruvector/docs/research/agentic-robotics/user-guide.md
vendored
Normal file
# ruvector-robotics User Guide

## Table of Contents

1. [Getting Started](#getting-started)
2. [Bridge Module](#bridge-module)
3. [Perception Module](#perception-module)
4. [Cognitive Module](#cognitive-module)
5. [MCP Module](#mcp-module)
6. [Integration Patterns](#integration-patterns)
7. [Advanced Usage](#advanced-usage)

---

## Getting Started

### Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
ruvector-robotics = { path = "crates/ruvector-robotics" }
```

### Minimal Example

```rust
use ruvector_robotics::bridge::{Point3D, PointCloud, SpatialIndex};
use ruvector_robotics::perception::PerceptionPipeline;

fn main() {
    // 1. Create sensor data
    let points = vec![
        Point3D::new(1.0, 0.0, 0.0),
        Point3D::new(0.0, 1.0, 0.0),
        Point3D::new(5.0, 5.0, 5.0),
    ];
    let cloud = PointCloud::new(points, 1000);

    // 2. Index for spatial search
    let mut index = SpatialIndex::new(3);
    index.insert_point_cloud(&cloud);

    // 3. Find nearest obstacles
    let nearest = index.search_nearest(&[0.0, 0.0, 0.0], 2).unwrap();
    println!("Nearest 2 points: {:?}", nearest);

    // 4. Detect obstacles
    let pipeline = PerceptionPipeline::default();
    let obstacles = pipeline
        .detect_obstacles(&cloud, [0.0, 0.0, 0.0], 10.0)
        .unwrap();
    println!("Detected {} obstacles", obstacles.len());
}
```

---

## Bridge Module

The bridge module provides core types shared across all robotics subsystems.

### Core Types

| Type | Description | Fields |
|------|-------------|--------|
| `Point3D` | 3D point (f32) | x, y, z |
| `PointCloud` | Collection of points | points, intensities, normals, timestamp_us |
| `RobotState` | Kinematic state | position, velocity, acceleration, timestamp_us |
| `Pose` | 6-DOF pose | position, orientation (Quaternion) |
| `SensorFrame` | Synchronized sensor bundle | cloud, state, pose |
| `OccupancyGrid` | 2D occupancy map | width, height, resolution, data |
| `SceneObject` | Detected object | id, center, extent, confidence, label |
| `SceneGraph` | Object relationships | objects, edges |
| `Trajectory` | Predicted path | waypoints, timestamps, confidence |

### SpatialIndex

A flat brute-force index for nearest-neighbor search:

```rust
use ruvector_robotics::bridge::{SpatialIndex, DistanceMetric};

// Create index with cosine distance
let mut index = SpatialIndex::with_metric(128, DistanceMetric::Cosine);

// Insert vectors
index.insert_vectors(&[vec![1.0; 128], vec![0.5; 128]]);

// k-NN search
let results = index.search_nearest(&vec![0.9; 128], 5).unwrap();

// Radius search
let within = index.search_radius(&vec![0.9; 128], 0.5).unwrap();
```

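For intuition, a flat brute-force k-NN search amounts to scoring every stored vector against the query and keeping the k smallest distances. The sketch below is illustrative, not the actual `SpatialIndex` internals:

```rust
/// Brute-force k-NN: score every stored vector, sort, keep the k best.
fn knn_brute_force(stored: &[Vec<f32>], query: &[f32], k: usize) -> Vec<(usize, f32)> {
    let mut scored: Vec<(usize, f32)> = stored
        .iter()
        .enumerate()
        .map(|(i, v)| {
            // Squared Euclidean (L2) distance; skipping sqrt keeps the ordering.
            let d: f32 = v.iter().zip(query).map(|(a, b)| (a - b) * (a - b)).sum();
            (i, d)
        })
        .collect();
    scored.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    scored.truncate(k);
    scored
}

fn main() {
    let stored = vec![vec![0.0, 0.0], vec![1.0, 0.0], vec![5.0, 5.0]];
    let nearest = knn_brute_force(&stored, &[0.1, 0.0], 2);
    assert_eq!(nearest[0].0, 0); // closest is the origin point
    assert_eq!(nearest[1].0, 1);
    println!("nearest ids: {:?}", nearest.iter().map(|n| n.0).collect::<Vec<_>>());
}
```

This is O(n) per query, which is why the integration plan swaps it for HNSW once point counts grow.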
### Converters

Convert between robotics types and flat vectors:

```rust
use ruvector_robotics::bridge::{PointCloud, Point3D, converters};

let cloud = PointCloud::new(vec![Point3D::new(1.0, 2.0, 3.0)], 0);

// To vectors for indexing
let vecs = converters::point_cloud_to_vectors(&cloud);
// -> [[1.0, 2.0, 3.0]]

// Back to point cloud
let cloud2 = converters::vectors_to_point_cloud(&vecs, 0).unwrap();
```

---

## Perception Module

### Obstacle Detection

```rust
use ruvector_robotics::perception::{ObstacleDetector, PerceptionConfig};

let config = PerceptionConfig::default();
let detector = ObstacleDetector::new(config.obstacle);

// Detect from point cloud
let obstacles = detector.detect(&cloud, &[0.0, 0.0, 0.0]);

// Classify obstacles
let classified = detector.classify_obstacles(&obstacles);
for c in &classified {
    println!("{:?}: {:?} (confidence: {:.2})", c.class, c.obstacle.center, c.confidence);
}
```

### Scene Graph Construction

```rust
use ruvector_robotics::perception::{SceneGraphBuilder, PerceptionConfig};
use ruvector_robotics::bridge::SceneObject;

let config = PerceptionConfig::default();
let builder = SceneGraphBuilder::new(config.scene_graph);

// From point cloud (clusters -> objects -> edges)
let graph = builder.build_from_point_cloud(&cloud);

// From pre-detected objects
let objects = vec![
    SceneObject::new(0, [0.0, 0.0, 0.0], [0.5, 0.5, 0.5]),
    SceneObject::new(1, [3.0, 0.0, 0.0], [0.5, 0.5, 0.5]),
];
let graph = builder.build_from_objects(&objects);
```

### Full Perception Pipeline

```rust
use ruvector_robotics::perception::PerceptionPipeline;

let pipeline = PerceptionPipeline::default();

// Obstacle detection + clustering
let obstacles = pipeline.detect_obstacles(&cloud, [0.0, 0.0, 0.0], 20.0).unwrap();

// Trajectory prediction
let traj = pipeline.predict_trajectory([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 10, 0.1).unwrap();

// Attention focusing
let focused = pipeline.focus_attention(&cloud, [5.0, 5.0, 0.0], 2.0).unwrap();

// Anomaly detection
let anomalies = pipeline.detect_anomalies(&cloud).unwrap();
```

---

## Cognitive Module

### Behavior Trees

Composable reactive control structures:

```rust
use ruvector_robotics::cognitive::{BehaviorTree, BehaviorNode, BehaviorStatus};

// Build a patrol tree
let root = BehaviorNode::Sequence(vec![
    BehaviorNode::Condition("is_battery_ok".into()),
    BehaviorNode::Selector(vec![
        BehaviorNode::Sequence(vec![
            BehaviorNode::Condition("obstacle_detected".into()),
            BehaviorNode::Action("avoid_obstacle".into()),
        ]),
        BehaviorNode::Action("move_forward".into()),
    ]),
    BehaviorNode::Action("update_map".into()),
]);

let mut tree = BehaviorTree::new(root);

// Set condition and action states
tree.set_condition("is_battery_ok", true);
tree.set_condition("obstacle_detected", false);
tree.set_action_result("move_forward", BehaviorStatus::Success);
tree.set_action_result("update_map", BehaviorStatus::Success);

let status = tree.tick();
assert_eq!(status, BehaviorStatus::Success);
```

### Cognitive Core

The central perceive-think-act-learn loop:

```rust
use ruvector_robotics::cognitive::{
    CognitiveCore, CognitiveConfig, CognitiveMode, Percept, Outcome,
};

let config = CognitiveConfig {
    mode: CognitiveMode::Deliberative,
    attention_threshold: 0.5,
    learning_rate: 0.01,
    max_percepts: 100,
};
let mut core = CognitiveCore::new(config);

// 1. Perceive
let percept = Percept {
    source: "lidar".into(),
    data: vec![1.0, 2.0, 3.0],
    confidence: 0.95,
    timestamp: 1000,
};
core.perceive(percept);

// 2. Think -> Decision
if let Some(decision) = core.think() {
    println!("Decision: {} (utility: {:.2})", decision.reasoning, decision.utility);

    // 3. Act
    let cmd = core.act(decision);
    println!("Action: {:?}", cmd.action);

    // 4. Learn
    core.learn(Outcome {
        success: true,
        reward: 1.0,
        description: "Obstacle avoided".into(),
    });
}
```

### Memory System
|
||||
|
||||
Three-tier memory architecture:
|
||||
|
||||
```rust
|
||||
use ruvector_robotics::cognitive::{WorkingMemory, EpisodicMemory, SemanticMemory, MemoryItem, Episode};
|
||||
|
||||
// Working memory (bounded buffer)
|
||||
let mut working = WorkingMemory::new(10);
|
||||
working.add(MemoryItem {
|
||||
key: "obstacle_1".into(),
|
||||
data: vec![1.0, 2.0, 3.0],
|
||||
importance: 0.8,
|
||||
timestamp: 1000,
|
||||
access_count: 0,
|
||||
});
|
||||
|
||||
// Episodic memory (experience replay)
|
||||
let mut episodic = EpisodicMemory::new(100);
|
||||
episodic.store(Episode {
|
||||
percepts: vec![vec![1.0, 2.0], vec![3.0, 4.0]],
|
||||
actions: vec!["move".into(), "turn".into()],
|
||||
reward: 1.0,
|
||||
timestamp: 1000,
|
||||
});
|
||||
let similar = episodic.recall_similar(&[1.0, 2.0], 3);
|
||||
|
||||
// Semantic memory (concept storage)
|
||||
let mut semantic = SemanticMemory::new();
|
||||
semantic.store("obstacle", vec![1.0, 0.0, 0.0]);
|
||||
semantic.store("goal", vec![0.0, 1.0, 0.0]);
|
||||
let nearest = semantic.find_similar(&[0.9, 0.1, 0.0], 1);
|
||||
```

### Swarm Coordination

```rust
use ruvector_robotics::cognitive::{
    SwarmCoordinator, SwarmConfig, RobotCapabilities, SwarmTask, Formation, FormationType,
};

let mut swarm = SwarmCoordinator::new(SwarmConfig {
    max_robots: 10,
    communication_range: 50.0,
    consensus_threshold: 0.6,
});

// Register robots
swarm.register_robot(RobotCapabilities {
    id: 1,
    max_speed: 2.0,
    payload: 5.0,
    sensors: vec!["lidar".into(), "camera".into()],
});

// Assign tasks
let tasks = vec![SwarmTask {
    id: 1,
    description: "Survey area A".into(),
    location: [10.0, 20.0, 0.0],
    required_capabilities: vec!["camera".into()],
    priority: 5,
}];
let assignments = swarm.assign_tasks(&tasks);

// Compute formation
let formation = Formation {
    formation_type: FormationType::Circle,
    spacing: 3.0,
    center: [0.0, 0.0, 0.0],
};
let positions = swarm.compute_formation(&formation);
```

---

## MCP Module

### Tool Registry

```rust
use ruvector_robotics::mcp::{RoboticsToolRegistry, ToolCategory};

let registry = RoboticsToolRegistry::new();

// List all 15 tools
for tool in registry.list_tools() {
    println!("{}: {}", tool.name, tool.description);
}

// Filter by category
let perception_tools = registry.list_by_category(ToolCategory::Perception);
println!("Perception tools: {}", perception_tools.len());

// Get MCP schema
let schema = registry.to_mcp_schema();
println!("{}", serde_json::to_string_pretty(&schema).unwrap());
```

---

## Integration Patterns

### Sensor → Perception → Cognition → Action

```rust
use ruvector_robotics::bridge::*;
use ruvector_robotics::perception::*;
use ruvector_robotics::cognitive::*;

// 1. Sensor data arrives
let cloud = PointCloud::new(/* sensor points */, timestamp);

// 2. Perception processes it
let pipeline = PerceptionPipeline::default();
let obstacles = pipeline.detect_obstacles(&cloud, robot_pos, 20.0).unwrap();

// 3. Cognitive core makes decisions
let mut core = CognitiveCore::new(CognitiveConfig::default());
for obs in &obstacles {
    core.perceive(Percept {
        source: "perception".into(),
        data: obs.position.to_vec(),
        confidence: obs.confidence as f64,
        timestamp: 0,
    });
}

// 4. Think and act
if let Some(decision) = core.think() {
    let action = core.act(decision);
    // Send action to robot motors
}
```

### Multi-Robot Coordination

Each robot runs its own `CognitiveCore`; a `SwarmCoordinator` manages task allocation across robots, and `ConsensusResult` enables group decision-making.
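As a standalone illustration of the allocation step, the following sketch implements a greedy capability-matching assigner in plain std Rust. The `Robot`/`Task` types and the first-fit policy are illustrative assumptions, not the `SwarmCoordinator` API:

```rust
// Standalone sketch: greedy task assignment by capability match,
// approximating what a swarm-level allocator might do. Illustrative only.

#[derive(Debug, Clone)]
struct Robot {
    id: u32,
    sensors: Vec<String>,
}

#[derive(Debug, Clone)]
struct Task {
    id: u32,
    required: Vec<String>,
    priority: u8,
}

/// Assign each task (highest priority first) to the first idle robot
/// that has every required sensor. Returns (task_id, robot_id) pairs.
fn assign_tasks(robots: &[Robot], tasks: &[Task]) -> Vec<(u32, u32)> {
    let mut tasks: Vec<Task> = tasks.to_vec();
    tasks.sort_by(|a, b| b.priority.cmp(&a.priority));
    let mut busy: Vec<u32> = Vec::new();
    let mut assignments = Vec::new();
    for task in &tasks {
        if let Some(robot) = robots.iter().find(|r| {
            !busy.contains(&r.id)
                && task.required.iter().all(|s| r.sensors.contains(s))
        }) {
            busy.push(robot.id);
            assignments.push((task.id, robot.id));
        }
    }
    assignments
}

fn main() {
    let robots = vec![
        Robot { id: 1, sensors: vec!["lidar".into()] },
        Robot { id: 2, sensors: vec!["lidar".into(), "camera".into()] },
    ];
    let tasks = vec![
        Task { id: 10, required: vec!["camera".into()], priority: 5 },
        Task { id: 11, required: vec!["lidar".into()], priority: 3 },
    ];
    // The camera task goes to robot 2; the lidar task falls to robot 1.
    assert_eq!(assign_tasks(&robots, &tasks), vec![(10, 2), (11, 1)]);
}
```

A real coordinator would also weigh distance, payload, and consensus votes; this sketch isolates only the capability-matching core.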

---

## Advanced Usage

### Custom Distance Metrics

The `SpatialIndex` supports Euclidean, Cosine, and Manhattan distances. Choose based on your data:

- **Euclidean**: Best for spatial point clouds (default)
- **Cosine**: Best for high-dimensional feature vectors
- **Manhattan**: Best for grid-aligned environments
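The trade-offs above can be seen in a minimal, dependency-free sketch of the three metrics (standalone functions, not the crate's `SpatialIndex` internals):

```rust
// Standalone metric sketches. Note cosine is a *distance* here
// (1 - cosine similarity), so 0.0 means "same direction".

fn euclidean(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f32>().sqrt()
}

fn manhattan(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).abs()).sum()
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    1.0 - dot / (na * nb)
}

fn main() {
    let (a, b) = ([0.0, 0.0], [3.0, 4.0]);
    assert_eq!(euclidean(&a, &b), 5.0); // straight-line distance
    assert_eq!(manhattan(&a, &b), 7.0); // grid (taxicab) distance
    // Scaled copies of a feature vector are "identical" under cosine,
    // which is why cosine suits magnitude-invariant embeddings:
    assert!(cosine(&[1.0, 2.0], &[2.0, 4.0]).abs() < 1e-6);
}
```

The cosine property shown in the last assertion is the practical reason to prefer it for feature vectors: it compares direction, not magnitude.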

### Behavior Tree Patterns

Common patterns:

- **Patrol**: `Sequence[CheckBattery, Selector[AvoidObstacle, MoveForward], UpdateMap]`
- **Explore**: `Selector[GoToFrontier, RandomWalk, ReturnToBase]`
- **Emergency**: `Sequence[StopMotors, SendAlert, WaitForHelp]`
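To make the Patrol pattern concrete, here is a minimal self-contained behavior-tree sketch. Its `Node`/`tick` types are illustrative stand-ins, not the crate's `BehaviorNode` API:

```rust
// Minimal behavior-tree sketch with closure leaves (illustrative only).

#[derive(Debug, PartialEq, Clone, Copy)]
enum Status { Success, Failure }

enum Node {
    Sequence(Vec<Node>),
    Selector(Vec<Node>),
    Leaf(Box<dyn Fn() -> Status>),
}

fn tick(node: &Node) -> Status {
    match node {
        // Sequence fails on the first failing child.
        Node::Sequence(children) => {
            for c in children {
                if tick(c) == Status::Failure {
                    return Status::Failure;
                }
            }
            Status::Success
        }
        // Selector succeeds on the first succeeding child.
        Node::Selector(children) => {
            for c in children {
                if tick(c) == Status::Success {
                    return Status::Success;
                }
            }
            Status::Failure
        }
        Node::Leaf(f) => f(),
    }
}

fn main() {
    // Patrol: Sequence[CheckBattery, Selector[AvoidObstacle, MoveForward], UpdateMap]
    let battery_ok = true;
    let obstacle = false;
    let patrol = Node::Sequence(vec![
        Node::Leaf(Box::new(move || if battery_ok { Status::Success } else { Status::Failure })),
        Node::Selector(vec![
            // No obstacle: AvoidObstacle fails, so the Selector falls
            // through to MoveForward.
            Node::Leaf(Box::new(move || if obstacle { Status::Success } else { Status::Failure })),
            Node::Leaf(Box::new(|| Status::Success)), // MoveForward
        ]),
        Node::Leaf(Box::new(|| Status::Success)), // UpdateMap
    ]);
    assert_eq!(tick(&patrol), Status::Success);
}
```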

### Memory Consolidation

The three-tier memory system models human memory:

- **Working Memory**: Current sensor data, bounded to prevent overload
- **Episodic Memory**: Past experiences for pattern matching
- **Semantic Memory**: Learned concepts and relationships
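One plausible consolidation pass — a hypothetical policy, not the crate's implementation — promotes sufficiently important working-memory items into long-term semantic storage:

```rust
// Standalone consolidation sketch (hypothetical policy): items above an
// importance threshold migrate from working memory to the semantic store.

use std::collections::HashMap;

#[derive(Clone)]
struct MemoryItem {
    key: String,
    data: Vec<f32>,
    importance: f32,
}

/// Move items whose importance exceeds `threshold` into the semantic
/// store, keeping the rest in working memory.
fn consolidate(
    working: &mut Vec<MemoryItem>,
    semantic: &mut HashMap<String, Vec<f32>>,
    threshold: f32,
) {
    let mut kept = Vec::new();
    for item in working.drain(..) {
        if item.importance > threshold {
            semantic.insert(item.key, item.data);
        } else {
            kept.push(item);
        }
    }
    *working = kept;
}

fn main() {
    let mut working = vec![
        MemoryItem { key: "obstacle".into(), data: vec![1.0, 0.0], importance: 0.9 },
        MemoryItem { key: "noise".into(), data: vec![0.1, 0.1], importance: 0.2 },
    ];
    let mut semantic = HashMap::new();
    consolidate(&mut working, &mut semantic, 0.5);
    assert!(semantic.contains_key("obstacle"));
    assert_eq!(working.len(), 1); // "noise" stays in working memory
}
```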

### Performance Tuning

Key parameters to adjust:

- `SceneGraphConfig::cluster_radius` — smaller values yield more (finer-grained) objects but slower clustering
- `ObstacleConfig::safety_margin` — larger values give more conservative avoidance
- `CognitiveConfig::attention_threshold` — higher values focus the core on only the most important percepts
- `SwarmConfig::consensus_threshold` — higher values require more agreement before group decisions

---

## API Reference

Full API documentation is generated with `cargo doc -p ruvector-robotics --open`.