Agentic Robotics Crate-by-Crate Deep Review
Date: 2026-02-27
Reviewer: Research Agent
Source Location: /home/user/ruvector/crates/agentic-robotics-*/
Total Crates: 6
Total Lines of Rust: 2,635 (source + benchmarks)
Table of Contents
- Crate 1: agentic-robotics-core
- Crate 2: agentic-robotics-rt
- Crate 3: agentic-robotics-mcp
- Crate 4: agentic-robotics-embedded
- Crate 5: agentic-robotics-node
- Crate 6: agentic-robotics-benchmarks
- Cross-Crate Dependency Graph
- Overall Assessment
- Integration Roadmap for ruvector
Crate 1: agentic-robotics-core
Path: /home/user/ruvector/crates/agentic-robotics-core/
Line Count: 705 lines (669 source + 36 bench)
Complexity Estimate: Low-Medium
Code Quality Rating: B
File Inventory
| File | Lines | Purpose |
|---|---|---|
| `src/lib.rs` | 45 | Root module, re-exports, `init()` function |
| `src/message.rs` | 119 | `Message` trait and concrete message types |
| `src/publisher.rs` | 85 | Generic typed publisher with stats tracking |
| `src/subscriber.rs` | 91 | Generic typed subscriber with crossbeam channels |
| `src/service.rs` | 127 | RPC service server (`Queryable`) and client (`Service`) |
| `src/middleware.rs` | 66 | Zenoh middleware abstraction (placeholder) |
| `src/serialization.rs` | 107 | CDR/JSON/rkyv serialization pipeline |
| `src/error.rs` | 29 | Error types using `thiserror` |
| `benches/message_passing.rs` | 36 | Criterion benchmark for publish + serialization |
Purpose
ROS3 Core provides the foundational pub/sub messaging layer modeled after ROS2 but rewritten in Rust. It targets microsecond-scale determinism with Zenoh as the middleware transport (currently stubbed) and supports CDR, JSON, and rkyv serialization formats.
API Surface
Traits
pub trait Message: Serialize + for<'de> Deserialize<'de> + Send + Sync + 'static {
fn type_name() -> &'static str;
fn version() -> &'static str { "1.0" }
}
The Message trait is the central abstraction. It requires serde bounds plus Send + Sync + 'static for async safety. A blanket implementation exists for serde_json::Value, enabling generic JSON message passing.
Public Types
| Type | Module | Description |
|---|---|---|
| `Message` (trait) | `message` | Core message trait with `type_name`/`version` |
| `RobotState` | `message` | `position: [f64; 3]`, `velocity: [f64; 3]`, `timestamp: i64` |
| `Point3D` | `message` | x/y/z as `f32`, `Copy`-able |
| `PointCloud` | `message` | `Vec<Point3D>` + `Vec<f32>` intensities + timestamp |
| `Pose` | `message` | `position: [f64; 3]`, `orientation: [f64; 4]` (quaternion) |
| `Publisher<T: Message>` | `publisher` | Generic typed publisher with stats |
| `Subscriber<T: Message>` | `subscriber` | Generic typed subscriber with crossbeam channels |
| `Queryable<Req, Res>` | `service` | RPC server with handler function |
| `Service<Req, Res>` | `service` | RPC client (stub) |
| `ServiceHandler<Req, Res>` | `service` | `Arc<dyn Fn(Req) -> Result<Res> + Send + Sync>` |
| `Zenoh` | `middleware` | Zenoh session wrapper (placeholder) |
| `ZenohConfig` | `middleware` | mode, connect, listen configuration |
| `Format` | `serialization` | Enum: `Cdr`, `Rkyv`, `Json` |
| `Serializer` | `serialization` | Format-aware serializer wrapper |
| `Error` | `error` | Zenoh/Serialization/Connection/Timeout/Config/Io/Other |
| `Result<T>` | `error` | `std::result::Result<T, Error>` |
Public Functions
| Function | Module | Signature |
|---|---|---|
| `init()` | `lib` | `fn init() -> Result<()>` -- initializes tracing |
| `serialize_cdr<T: Serialize>` | `serialization` | `fn(msg: &T) -> Result<Vec<u8>>` |
| `deserialize_cdr<T: Deserialize>` | `serialization` | `fn(data: &[u8]) -> Result<T>` |
| `serialize_rkyv<T: Serialize>` | `serialization` | `fn(msg: &T) -> Result<Vec<u8>>` -- STUB, returns `Err` |
| `serialize_json<T: Serialize>` | `serialization` | `fn(msg: &T) -> Result<String>` |
| `deserialize_json<T: Deserialize>` | `serialization` | `fn(data: &str) -> Result<T>` |
Architecture Analysis
Message Flow
Application Code
|
v
Publisher<T>.publish(&msg)
|
v
Serializer.serialize(msg) --> Format dispatch
| | |
v v v
CDR JSON rkyv (stub)
|
v
[Wire: Zenoh placeholder -- no actual network send]
|
v
Stats update (messages_sent++, bytes_sent += len)
The publish path is: Publisher::publish() -> Serializer::serialize() -> stats update. There is no actual Zenoh network transmission -- the middleware layer (middleware.rs) is a placeholder that creates an Arc<RwLock<()>>. The publisher simply serializes and tracks byte counts.
Serialization Pipeline
Three formats are supported but at different maturity levels:
- CDR (Common Data Representation): Fully functional via the `cdr` crate. Uses big-endian encoding (`CdrBe`). This is the default format and provides DDS-compatible wire representation.
- JSON: Fully functional via `serde_json`. Used primarily for debugging and for the NAPI boundary in the node crate.
- rkyv (zero-copy): Declared in derives (`Archive`, `RkyvSerialize`, `RkyvDeserialize`) on message types, but the `serialize_rkyv()` function returns `Err("rkyv serialization not fully implemented")`. The rkyv derives are present on `RobotState`, `Point3D`, `PointCloud`, and `Pose`, so the infrastructure is prepared but the serialization function is incomplete.
Subscriber Architecture
The subscriber uses crossbeam::channel::unbounded() for message delivery. This is a multi-producer multi-consumer channel but in the current implementation, messages never actually arrive because there is no Zenoh transport connection. The recv_async() method wraps the blocking crossbeam::channel::recv() in tokio::task::spawn_blocking(), which is correct for bridging sync/async boundaries but adds thread pool overhead.
The subscriber holds both a Receiver and a shared Arc<Sender> (_sender), which keeps the channel alive. The Clone implementation shares both the receiver and sender, enabling multiple concurrent readers -- though crossbeam's Receiver::clone() actually creates another consumer on the same channel, meaning messages are load-balanced (each message goes to one reader), not broadcast.
Service/RPC
The Queryable struct is a synchronous handler wrapped as Arc<dyn Fn(Req) -> Result<Res>>. Despite handle() being async fn, the actual handler execution is synchronous -- there is no .await in the handler call itself. The Service client is fully stubbed, always returning an error.
Dependency Analysis
| Dependency | Purpose | Weight |
|---|---|---|
| `zenoh` (workspace) | Middleware transport | Heavy -- Zenoh pulls in many transitive deps |
| `rustdds` (workspace) | DDS compatibility | Heavy -- full DDS implementation |
| `tokio` (workspace) | Async runtime | Standard |
| `serde` + `serde_json` | Serialization | Standard |
| `cdr` (workspace) | CDR binary encoding | Light |
| `rkyv` (workspace) | Zero-copy archives | Medium |
| `anyhow` + `thiserror` | Error handling | Light |
| `tracing` + `tracing-subscriber` | Logging | Light |
| `parking_lot` (workspace) | Fast mutexes/rwlocks | Light |
| `crossbeam` (workspace) | Lock-free channels | Light |
Notable: Both zenoh and rustdds are listed as dependencies but neither is actually used in the source code. They are workspace-level declarations for future integration. This inflates compile time and binary size significantly for no runtime benefit.
Code Quality Assessment
Test Coverage: 8 tests across 6 modules. Tests cover:
- `init()` success path
- `RobotState` default values and type_name
- `PointCloud` default and type_name
- Publisher publish + stats verification
- Subscriber creation and try_recv (empty)
- Queryable handler execution + stats
- Service client creation
- Zenoh session creation
Tests are present but shallow -- they only test the happy path and default construction. No tests for:
- Serialization round-trips across all formats
- Error paths (malformed data, channel disconnect)
- Concurrent publisher/subscriber interaction
- PointCloud with actual data
- Pose message operations
Error Handling: Uses `thiserror` with a 7-variant `Error` enum. Error propagation is clean with the `?` operator. The `anyhow` integration via `#[from]` on the `Other` variant provides a catch-all. However, `serialize_rkyv()` returns a string error instead of a proper rkyv-specific error variant.
Documentation: Module-level doc comments on all files. Individual function docs are present on public API methods. No examples in doc comments.
Safety: No unsafe code. All synchronization uses parking_lot (which has well-audited unsafe internally) and crossbeam. PhantomData usage is correct for zero-sized type markers.
Concerns:
- `serialize_rkyv()` is misleadingly typed -- it accepts `T: Serialize` (serde), not `T: rkyv::Serialize`, so it could never actually perform rkyv serialization.
- `Publisher::publish()` is async but contains no actual await points (serialization is sync, stats update is sync). The method could be synchronous.
- The `Zenoh` struct holds `_config` and `_inner` with underscore prefixes, indicating they are acknowledged as unused placeholders.
- `tracing_subscriber::fmt().init()` in `init()` will panic if called twice (standard tracing limitation). No guard against double-init.
Integration Points with ruvector
- **PointCloud <-> Vector Data**: `PointCloud` stores `Vec<Point3D>` (3D f32 vectors) and `Vec<f32>` intensities. This maps directly to ruvector's core vector storage. A thin adapter could expose `PointCloud.points` as a collection of 3-dimensional vectors for HNSW indexing, nearest-neighbor search, or GNN node features.
- **Message Trait for Distributed Vectors**: The `Message` trait could be implemented on ruvector's core types (e.g., embedding vectors, search results) to enable pub/sub distribution of vector operations across nodes.
- **Serialization Synergy**: ruvector already has its own serialization needs. The CDR format could be used for DDS-compatible vector streaming (e.g., real-time sensor embeddings). The rkyv format, once completed, aligns with ruvector's zero-copy philosophy.
- **Publisher/Subscriber for Vector Streaming**: Real-time embedding pipelines (sensor data -> encoder -> vector -> HNSW insert) could use the pub/sub pattern with typed publishers for specific vector dimensions.
Crate 2: agentic-robotics-rt
Path: /home/user/ruvector/crates/agentic-robotics-rt/
Line Count: 512 lines (483 source + 29 bench)
Complexity Estimate: Medium
Code Quality Rating: B-
File Inventory
| File | Lines | Purpose |
|---|---|---|
| `src/lib.rs` | 60 | `RTPriority` enum, re-exports |
| `src/executor.rs` | 157 | Dual-runtime executor (high/low priority) |
| `src/scheduler.rs` | 121 | `BinaryHeap` priority scheduler |
| `src/latency.rs` | 145 | HDR histogram latency tracking |
| `benches/latency.rs` | 29 | Criterion benchmarks |
Purpose
Provides a dual-runtime real-time execution framework. The core idea is to maintain two separate Tokio runtimes: a high-priority runtime with 2 worker threads for sub-millisecond deadline tasks (control loops), and a low-priority runtime with 4 worker threads for relaxed-deadline tasks (planning, perception). Tasks are routed between runtimes based on their deadline requirements.
Architecture
Dual-Runtime Design
ROS3Executor
/ \
tokio_rt_high tokio_rt_low
(2 threads) (4 threads)
"ros3-rt-high" "ros3-rt-low"
| |
deadline < 1ms deadline >= 1ms
(control loops) (planning tasks)
The ROS3Executor creates two independent Tokio multi-threaded runtimes during construction. The routing decision in spawn_rt() is simple: if deadline.0 < Duration::from_millis(1), the task goes to the high-priority runtime; otherwise it goes to the low-priority runtime. This is a coarse-grained approach -- the actual Tokio scheduler within each runtime does not respect priorities, so the "high-priority" runtime is simply a smaller, dedicated thread pool.
Priority System
Two overlapping priority systems exist:
- `RTPriority` (in `lib.rs`): 5-level enum (Background=0, Low=1, Normal=2, High=3, Critical=4). Supports `From<u8>` conversion with saturation at Critical for values >= 4.
- `Priority` (in `executor.rs`): Simple newtype wrapper `Priority(pub u8)`. Used by the executor's `spawn_rt()`.
The spawn_rt() method converts Priority(u8) to RTPriority via .into() for debug logging, but the priority value is never actually used for scheduling. Only the deadline threshold determines runtime assignment. The PriorityScheduler instance is stored in the executor but never consulted during spawn.
PriorityScheduler
The scheduler maintains a BinaryHeap<ScheduledTask> with ordering: higher RTPriority first, then earlier deadline (reverse chronological within same priority). The Ord implementation is correct for a max-heap with priority-first, deadline-second ordering.
However, the scheduler is entirely disconnected from the executor. The schedule() method creates ScheduledTask entries with Instant::now() + deadline and auto-incrementing task_id, but the executor never calls schedule() or next_task(). The scheduler exists as infrastructure for a future implementation.
LatencyTracker
The most complete component. Uses hdrhistogram::Histogram<u64> with 3 significant digits for microsecond-precision latency tracking. Key features:
- Thread-safe: histogram wrapped in `Arc<Mutex<Histogram>>` (parking_lot mutex)
- Non-blocking record: `record()` uses `try_lock()` -- measurements are silently dropped if the mutex is contended, preventing latency measurement from introducing latency
- RAII measurement: the `LatencyMeasurement` guard records elapsed time on drop
- Rich statistics: `LatencyStats` provides min, max, mean, p50, p90, p99, and p99.9 percentiles
- Display implementation: human-readable output with units
API Surface
Public Types
| Type | Module | Description |
|---|---|---|
| `RTPriority` | `lib` | 5-level priority enum (Background..Critical) |
| `ROS3Executor` | `executor` | Dual-runtime task executor |
| `Priority` | `executor` | `Priority(pub u8)` newtype |
| `Deadline` | `executor` | `Deadline(pub Duration)` newtype with `From<Duration>` |
| `PriorityScheduler` | `scheduler` | `BinaryHeap`-based priority task queue |
| `ScheduledTask` | `scheduler` | Task entry with priority, deadline, task_id |
| `LatencyTracker` | `latency` | HDR histogram-based latency tracker |
| `LatencyStats` | `latency` | Statistics snapshot (count, min, max, mean, percentiles) |
| `LatencyMeasurement` | `latency` | RAII drop guard for automatic timing |
Key Methods
// Executor
impl ROS3Executor {
pub fn new() -> Result<Self>
pub fn spawn_rt<F: Future>(&self, priority: Priority, deadline: Deadline, task: F)
pub fn spawn_high<F: Future>(&self, task: F) // Priority(3), 500us deadline
pub fn spawn_low<F: Future>(&self, task: F) // Priority(1), 100ms deadline
pub fn spawn_blocking<F, R>(&self, f: F) -> JoinHandle<R>
pub fn high_priority_runtime(&self) -> &Runtime
pub fn low_priority_runtime(&self) -> &Runtime
}
// Scheduler
impl PriorityScheduler {
pub fn new() -> Self
pub fn schedule(&mut self, priority: RTPriority, deadline: Duration) -> u64
pub fn next_task(&mut self) -> Option<ScheduledTask>
pub fn pending_tasks(&self) -> usize
pub fn clear(&mut self)
}
// Latency
impl LatencyTracker {
pub fn new(name: impl Into<String>) -> Self
pub fn record(&self, duration: Duration)
pub fn stats(&self) -> LatencyStats
pub fn reset(&self)
pub fn measure(&self) -> LatencyMeasurement
}
Dependency Analysis
| Dependency | Purpose | Weight |
|---|---|---|
| `agentic-robotics-core` (path) | Core types | Internal |
| `tokio` (workspace) | Async runtimes | Standard |
| `parking_lot` (workspace) | Fast mutexes | Light |
| `crossbeam` (workspace) | Lock-free primitives | Light -- unused in source |
| `rayon` (workspace) | Data parallelism | Medium -- unused in source |
| `anyhow` (workspace) | Error handling | Light |
| `thiserror` (workspace) | Error derives | Light -- unused in source |
| `tracing` (workspace) | Logging | Light |
| `hdrhistogram` (workspace) | Latency histograms | Light |
Notable: crossbeam, rayon, and thiserror are declared as dependencies but not used in any source file. The agentic-robotics-core dependency is declared but also not directly used -- no imports from it exist in the rt crate's source.
Code Quality Assessment
Test Coverage: 5 tests across 3 modules:
- `RTPriority` u8 conversion round-trip
- Executor creation success
- Spawn high priority (no completion verification -- uses `thread::sleep`, then no assertion)
- Scheduler priority ordering (3-task dequeue order)
- `LatencyTracker` record + stats verification
- `LatencyMeasurement` RAII guard
The test_spawn_high_priority test is effectively a no-op -- it spawns a task and sleeps but never checks the completed AtomicBool's final value.
Error Handling: The Default implementation for ROS3Executor calls .expect() which will panic on failure. This is appropriate for a default constructor but could be surprising.
Safety: No unsafe code. All thread safety via Arc<Mutex<>> and Arc<AtomicBool>.
Concerns:
- The `PriorityScheduler` is completely disconnected from the `ROS3Executor`. The scheduler is created and stored but never used for routing decisions.
- The 1ms deadline threshold is hardcoded with no configuration mechanism.
- Creating two full Tokio runtimes (6 threads total) is heavyweight. On a system with few cores, this could cause contention.
- The `spawn_rt()` return type is `()` -- callers cannot await completion or get results from spawned tasks. Only `spawn_blocking()` returns a `JoinHandle`.
- `ScheduledTask.task_id` is a `u64` counter that will overflow after 2^64 tasks. Not practically concerning, but worth noting the design assumes a monotonic non-wrapping counter.
Integration Points with ruvector
- **Attention Mechanism Scheduling**: ruvector's attention computations (flash attention, multi-head attention) have different latency profiles. The dual-runtime pattern could route real-time inference (< 1ms deadline) to the high-priority pool while batch retraining goes to the low-priority pool.
- **GNN Inference RT**: Graph neural network forward passes for time-sensitive applications (e.g., real-time recommendation) could use `spawn_high()` to guarantee dedicated thread resources.
- **LatencyTracker for Vector Search**: The HDR histogram tracker would be valuable for monitoring HNSW search latency distributions in production. The RAII `measure()` guard pattern integrates cleanly with ruvector's search functions.
- **Priority-Based Query Routing**: The scheduler design (once connected) could route vector queries by importance -- critical real-time queries to dedicated threads, background batch queries to the shared pool.
Crate 3: agentic-robotics-mcp
Path: /home/user/ruvector/crates/agentic-robotics-mcp/
Line Count: 506 lines
Complexity Estimate: Medium
Code Quality Rating: B+
File Inventory
| File | Lines | Purpose |
|---|---|---|
| `src/lib.rs` | 349 | MCP types, `McpServer`, request handling, tests |
| `src/server.rs` | 56 | `ServerBuilder`, helper functions |
| `src/transport.rs` | 101 | `StdioTransport` + conditional SSE transport |
Purpose
Implements a Model Context Protocol (MCP) 2025-11 compliant server. MCP enables AI assistants (like Claude) to interact with external tools via a standardized JSON-RPC 2.0 protocol. This crate exposes robot capabilities as MCP tools, with both stdio and SSE (Server-Sent Events) transport options.
Architecture
Request Handling Pipeline
Transport (stdio or SSE)
|
v
JSON-RPC 2.0 Parse --> McpRequest
|
v
McpServer.handle_request()
|
+-- "initialize" --> protocol version + capabilities
+-- "tools/list" --> enumerate registered tools
+-- "tools/call" --> dispatch to registered handler
+-- <unknown> --> -32601 Method Not Found
The server maintains a HashMap<String, (McpTool, ToolHandler)> behind Arc<RwLock<>> (tokio RwLock). Tool registration is async due to the write lock. Request handling reads the tool map with a read lock for tool listing and invocation.
Transport Layer
Stdio Transport: Reads line-delimited JSON from stdin, writes responses to stdout. The main loop is:
- Read a line from stdin via `AsyncBufReadExt`
- Parse it as `McpRequest`
- Dispatch to `McpServer::handle_request()`
- Serialize the response to JSON
- Write to stdout with a newline delimiter and flush
SSE Transport (feature-gated behind `sse`): Uses axum with two routes:
- `POST /mcp`: Accepts a JSON `McpRequest`, returns a JSON `McpResponse`
- `GET /mcp/stream`: Returns an SSE stream (currently only sends a "connected" event)
The SSE implementation is minimal -- it does not implement bidirectional communication or event streaming for ongoing operations.
API Surface
Public Types
| Type | Module | Description |
|---|---|---|
| `McpTool` | `lib` | name, description, input_schema (JSON Value) |
| `McpRequest` | `lib` | jsonrpc, id, method, params -- JSON-RPC 2.0 request |
| `McpResponse` | `lib` | jsonrpc, id, result, error -- JSON-RPC 2.0 response |
| `McpError` | `lib` | code (i32), message, optional data |
| `ToolResult` | `lib` | `content: Vec<ContentItem>`, optional is_error flag |
| `ContentItem` | `lib` | Tagged enum: Text, Resource, Image |
| `ToolHandler` | `lib` | `Arc<dyn Fn(Value) -> Result<ToolResult> + Send + Sync>` |
| `McpServer` | `lib` | Main server with tool registry |
| `ServerInfo` | `lib` | name, version, description |
| `ServerBuilder` | `server` | Builder pattern for `McpServer` |
| `StdioTransport` | `transport` | Stdio-based transport |
Key Methods
impl McpServer {
pub fn new(name: impl Into<String>, version: impl Into<String>) -> Self
pub async fn register_tool(&self, tool: McpTool, handler: ToolHandler) -> Result<()>
pub async fn handle_request(&self, request: McpRequest) -> McpResponse
}
impl ServerBuilder {
pub fn new(name: impl Into<String>) -> Self
pub fn version(mut self, version: impl Into<String>) -> Self
pub fn build(self) -> McpServer
}
impl StdioTransport {
pub fn new(server: McpServer) -> Self
pub async fn run(&self) -> Result<()>
}
// Helper functions
pub fn tool<F>(f: F) -> ToolHandler
pub fn text_response(text: impl Into<String>) -> ToolResult
pub fn error_response(error: impl Into<String>) -> ToolResult
Constants
pub const MCP_VERSION: &str = "2025-11-15";
Protocol Compliance
The implementation covers the core MCP 2025-11 operations:
- `initialize`: Returns protocol version, capabilities (tools + resources), server info
- `tools/list`: Returns all registered tools
- `tools/call`: Dispatches to handler by name with arguments
Missing MCP features:
- `resources/list`, `resources/read` -- resources capability declared but not implemented
- `prompts/list`, `prompts/get` -- not implemented
- `notifications/initialized` -- server does not handle the post-init notification
- `sampling` -- not implemented
- Tool annotations (readOnlyHint, destructiveHint, openWorldHint)
Error codes used:
- `-32601`: Method not found (standard JSON-RPC)
- `-32602`: Invalid params (standard JSON-RPC)
- `-32000`: Tool execution failure (server error range)
Dependency Analysis
| Dependency | Purpose | Weight |
|---|---|---|
| `agentic-robotics-core` (path) | Core types | Internal -- not imported in source |
| `tokio` (workspace) | Async runtime + IO | Standard |
| `serde` + `serde_json` | JSON-RPC serialization | Standard |
| `anyhow` (workspace) | Error handling | Light |
| `thiserror` (workspace) | Error derives | Light -- unused in source |
| `tracing` (workspace) | Logging | Light -- unused in source |
| `axum` (optional, `sse` feature) | HTTP server | Medium |
| `tokio-stream` (optional, `sse` feature) | Stream utilities | Light |
Notable: agentic-robotics-core, thiserror, and tracing are declared but not imported or used. The crate is functionally independent of the core crate.
Code Quality Assessment
Test Coverage: 3 async tests in lib.rs:
- `test_mcp_initialize`: Verifies the initialize response has a result and no error
- `test_mcp_list_tools`: Registers one tool, verifies the list returns it
- `test_mcp_call_tool`: Registers an echo tool, calls it, verifies success
Tests are well-structured and test the full request/response cycle. No tests for:
- Error paths (missing tool, invalid params, malformed request)
- Transport layer (stdio, SSE)
- Concurrent tool registration and invocation
Error Handling: Uses JSON-RPC error codes correctly. The handle_call_tool method has proper null checks for params and tool name. The ToolHandler returns anyhow::Result which is caught and converted to MCP error responses.
Documentation: Good module-level docs. The lib.rs doc comment accurately describes the MCP version and transport options.
Safety: No unsafe code. Uses tokio::sync::RwLock (not parking_lot) for the tool registry, which is correct since the lock is held across .await points in handle_request.
Concerns:
- `ContentItem::Resource` uses `mimeType` (camelCase) as a Rust field name instead of idiomatic `mime_type` with `#[serde(rename = "mimeType")]`. This works but violates Rust naming conventions.
- The tool handler `Arc<dyn Fn(Value) -> Result<ToolResult>>` is synchronous. Long-running tool operations will block the server's async runtime. It should be `Arc<dyn Fn(Value) -> BoxFuture<Result<ToolResult>>>` for proper async support.
- `serde_json::to_value(result).unwrap()` in `handle_call_tool` can panic if `ToolResult` serialization fails. Should use `?` or map to an error response.
- The stdio transport's error handling on parse failure uses `eprintln!` instead of returning a JSON-RPC error response to the caller.
Integration Points with ruvector
- **Vector Search as MCP Tool**: ruvector's HNSW search could be exposed as an MCP tool:

  ```json
  {
    "name": "vector_search",
    "description": "Search for nearest neighbors in vector space",
    "input_schema": {
      "type": "object",
      "properties": {
        "query": { "type": "array", "items": { "type": "number" } },
        "k": { "type": "integer" },
        "ef_search": { "type": "integer" }
      }
    }
  }
  ```

- **GNN Inference as MCP Tool**: Expose graph neural network forward passes for agentic reasoning about graph-structured data.
- **Attention Computation as MCP Tool**: Multi-head attention or flash attention could be exposed so external AI systems can use ruvector's optimized attention kernels.
- **Embedding Generation**: Wrap ruvector's encoding capabilities as MCP tools for real-time embedding generation from sensor data.
Crate 4: agentic-robotics-embedded
Path: /home/user/ruvector/crates/agentic-robotics-embedded/
Line Count: 41 lines
Complexity Estimate: Minimal
Code Quality Rating: C
File Inventory
| File | Lines | Purpose |
|---|---|---|
| `src/lib.rs` | 41 | Priority enum + config struct |
Purpose
Intended to provide embedded systems support using Embassy and RTIC frameworks. Currently contains only configuration types and an enum -- no actual embedded runtime integration.
API Surface
Public Types
| Type | Description |
|---|---|
| `EmbeddedPriority` | 4-level enum: Low=0, Normal=1, High=2, Critical=3 |
| `EmbeddedConfig` | `tick_rate_hz: u32` (default 1000), `stack_size: usize` (default 4096) |
Current State
This crate is a skeleton. The Cargo.toml declares:
- `agentic-robotics-core` as a dependency (unused)
- `serde`, `anyhow`, `thiserror` as dependencies (unused)
- `embassy` and `rtic` as feature flags that enable nothing (the actual embassy-executor and rtic dependencies are commented out)
The EmbeddedPriority enum overlaps with RTPriority from the rt crate but with only 4 levels instead of 5 (missing Background). There is no conversion between them.
Dependency Analysis
| Dependency | Purpose | Weight |
|---|---|---|
| `agentic-robotics-core` (path) | Core types | Unused |
| `serde` (workspace) | Serialization | Unused |
| `anyhow` (workspace) | Error handling | Unused |
| `thiserror` (workspace) | Error derives | Unused |
All 4 dependencies are declared but none are imported or used in source code.
Code Quality Assessment
Test Coverage: 1 test verifying EmbeddedConfig::default() values.
Concerns:
- This crate provides no functionality beyond two simple types.
- The Embassy and RTIC dependencies are commented out, meaning the `embassy` and `rtic` feature flags are no-ops.
- The `EmbeddedPriority` enum is redundant with `RTPriority` from the rt crate.
- `EmbeddedConfig` fields are too generic -- `tick_rate_hz` and `stack_size` are meaningful only in the context of a real embedded runtime.
Integration Points with ruvector
- **Edge Deployment**: A completed embedded crate could enable deployment of ruvector-lite quantized models on embedded ARM/RISC-V devices. The config types would need to include memory constraints for vector storage, quantization levels, and inference batch sizes.
- **Limited Utility Currently**: In its current state, this crate provides no integration value. It would need significant development to support actual embedded runtimes.
Crate 5: agentic-robotics-node
Path: /home/user/ruvector/crates/agentic-robotics-node/
Line Count: 236 lines (233 source + 3 build.rs)
Complexity Estimate: Medium
Code Quality Rating: B+
File Inventory
| File | Lines | Purpose |
|---|---|---|
| `src/lib.rs` | 233 | NAPI bindings: `AgenticNode`, `AgenticPublisher`, `AgenticSubscriber` |
| `build.rs` | 3 | napi-build setup |
Purpose
Provides Node.js/TypeScript bindings for the agentic-robotics-core pub/sub system via NAPI-RS. This enables JavaScript applications to create publishers and subscribers, publish JSON messages, and interact with the robotics middleware from Node.js.
Architecture
The crate uses the napi-derive macro system to generate N-API bindings. Three main types are exposed to JavaScript:
AgenticNode (factory)
|
+-- create_publisher(topic) --> AgenticPublisher
| |
| +-- publish(json_string)
| +-- get_topic()
| +-- get_stats() --> PublisherStats { messages, bytes }
|
+-- create_subscriber(topic) --> AgenticSubscriber
| |
| +-- try_recv() --> Option<String>
| +-- recv() --> String (blocking via spawn_blocking)
| +-- get_topic()
|
+-- list_publishers() --> Vec<String>
+-- list_subscribers() --> Vec<String>
+-- get_name() --> String
+-- get_version() --> String (static)
The NAPI boundary serializes all messages as JSON strings. The AgenticNode uses Publisher<serde_json::Value> with Format::Json explicitly (not CDR) because serde_json::Value cannot be CDR-serialized (CDR requires a fixed schema). This is a correct design decision for the JavaScript interop layer.
Internal state is managed with Arc<RwLock<HashMap<String, Arc<T>>>> for both publishers and subscribers, using tokio::sync::RwLock for async-safe access.
API Surface (NAPI-exported)
| Class | Method | Async | Returns |
|---|---|---|---|
| `AgenticNode` | `new(name)` | No (constructor) | `AgenticNode` |
| `AgenticNode` | `get_name()` | No | `String` |
| `AgenticNode` | `create_publisher(topic)` | Yes | `AgenticPublisher` |
| `AgenticNode` | `create_subscriber(topic)` | Yes | `AgenticSubscriber` |
| `AgenticNode` | `get_version()` | No (static) | `String` |
| `AgenticNode` | `list_publishers()` | Yes | `Vec<String>` |
| `AgenticNode` | `list_subscribers()` | Yes | `Vec<String>` |
| `AgenticPublisher` | `publish(data)` | Yes | void |
| `AgenticPublisher` | `get_topic()` | No | `String` |
| `AgenticPublisher` | `get_stats()` | No | `PublisherStats` |
| `AgenticSubscriber` | `get_topic()` | No | `String` |
| `AgenticSubscriber` | `try_recv()` | Yes | `Option<String>` |
| `AgenticSubscriber` | `recv()` | Yes | `String` |
| `PublisherStats` | (object) | -- | `{ messages: i64, bytes: i64 }` |
Dependency Analysis
| Dependency | Purpose | Weight |
|---|---|---|
| `agentic-robotics-core` (path) | Core pub/sub types | Internal -- actually used |
| `napi` (workspace) | N-API runtime | Medium |
| `napi-derive` (workspace) | Proc macros for `#[napi]` | Medium (compile-time) |
| `tokio` (workspace) | Async runtime | Standard |
| `serde` + `serde_json` | JSON at NAPI boundary | Standard |
| `anyhow` (workspace) | Error handling | Light |
| `napi-build` (build-dep) | Build script support | Light (build-time) |
This is the only non-benchmark crate that actually uses agentic-robotics-core types (Publisher, Subscriber) in its source code.
Code Quality Assessment
Test Coverage: 5 async tests:
- Node creation and name verification
- Publisher creation with topic verification
- Publish JSON and stats verification (messages count = 1)
- Subscriber creation with topic verification
- List publishers after creating 2 publishers
Tests are clean and verify the full NAPI-exposed API surface. The publish test verifies end-to-end JSON serialization through the publisher.
Error Handling: NAPI errors are constructed via Error::from_reason() with descriptive messages. Both JSON parse errors and publish/receive errors are mapped to NAPI Error types. This is the correct pattern for NAPI bindings.
Documentation: Module-level doc comment. Function-level comments on all #[napi] methods.
Safety: #![deny(clippy::all)] is enabled -- the only crate with an explicit clippy directive. No unsafe code (NAPI-RS generates the necessary unsafe FFI internally).
Concerns:
- `PublisherStats` uses `i64` instead of `u64` because NAPI does not support unsigned 64-bit integers in JavaScript (BigInt would be needed). This means stats will overflow at 2^63 instead of 2^64. Acceptable for practical use.
- The `try_recv()` method is marked `async` but the underlying `Subscriber::try_recv()` is synchronous. The async wrapper adds unnecessary overhead for a non-blocking operation.
- The `recv()` method delegates to `Subscriber::recv_async()`, which uses `spawn_blocking` internally. This works but creates a double-async layering that could be simplified.
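The `i64` overflow concern can be contained with a saturating conversion at the NAPI boundary. A minimal sketch, assuming the internal counters are `u64` and only the JavaScript-facing struct needs `i64` (the field names mirror the `PublisherStats` shape described above; the helper is hypothetical):

```rust
// Sketch: convert internal u64 counters to the i64 fields NAPI can
// represent, saturating at i64::MAX instead of wrapping on overflow.
// The struct mirrors the PublisherStats shape described in this review.
pub struct PublisherStats {
    pub messages: i64,
    pub bytes: i64,
}

pub fn to_napi_stats(messages: u64, bytes: u64) -> PublisherStats {
    PublisherStats {
        messages: i64::try_from(messages).unwrap_or(i64::MAX),
        bytes: i64::try_from(bytes).unwrap_or(i64::MAX),
    }
}
```

Saturation turns a silent wrap into a pinned maximum, which is easier to spot in downstream dashboards than a negative counter.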
Integration Points with ruvector
- Matches Existing NAPI Pattern: ruvector already has `ruvector-node`, `ruvector-gnn-node`, etc. The agentic-robotics-node crate follows the same NAPI-RS pattern with `#[napi]` macros, `cdylib` crate type, and a `napi-build` build script. Integration would follow established conventions.
- Unified Node.js API: A combined NAPI module could expose both vector operations (search, insert, GNN inference) and robotics pub/sub in a single npm package, enabling Node.js applications to do real-time sensor data processing with ML inference.
- JSON Bridge: The JSON serialization at the NAPI boundary is compatible with ruvector's existing JSON-based APIs. Vector data could be passed as JSON arrays, matching the pattern already used here.
Crate 6: agentic-robotics-benchmarks
Path: /home/user/ruvector/crates/agentic-robotics-benchmarks/
Line Count: 635 lines (all bench code)
Complexity Estimate: Low (benchmark harness code)
Code Quality Rating: B-
File Inventory
| File | Lines | Purpose |
|---|---|---|
| `benches/message_serialization.rs` | 187 | CDR/JSON ser/deser benchmarks, size comparison, scaling |
| `benches/pubsub_latency.rs` | 193 | Publisher/subscriber creation, latency, throughput |
| `benches/executor_performance.rs` | 255 | Executor creation, task spawning, scheduling overhead |
Purpose
Comprehensive Criterion benchmark suite covering the core and rt crates. Provides performance characterization for serialization, pub/sub operations, and executor task management.
Benchmark Coverage
message_serialization.rs
| Benchmark Group | Benchmarks | Description |
|---|---|---|
| CDR Serialization | RobotState, Pose, PointCloud_1k | CDR encode with throughput tracking |
| CDR Deserialization | RobotState, Pose | CDR decode from pre-serialized bytes |
| JSON vs CDR | CDR_serialize, JSON_serialize, CDR_deserialize, JSON_deserialize | Head-to-head format comparison with size reporting |
| Message Size Scaling | PointCloud at 100, 1K, 10K, 100K points | Serialization scaling characteristics |
Note: This benchmark file references a Pose struct with frame_id: String and a PointCloud with points: Vec<[f32; 3]> and frame_id: String. These differ from the actual types in agentic-robotics-core/src/message.rs where Pose has no frame_id field and PointCloud uses Vec<Point3D> not Vec<[f32; 3]>. This benchmark will not compile against the current core crate without modifications.
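To make the mismatch concrete, here is a side-by-side sketch of the two shapes; the exact field lists are assumptions reconstructed from the review notes above, and the shim function is hypothetical:

```rust
// Shape the benchmark file assumes (field set is an assumption
// beyond what this review states):
#[allow(dead_code)]
struct BenchPointCloud {
    points: Vec<[f32; 3]>,
    frame_id: String,
}

// Shape the core crate defines in message.rs, per this review:
#[allow(dead_code)]
struct Point3D { x: f32, y: f32, z: f32 }

#[allow(dead_code)]
struct CorePointCloud {
    points: Vec<Point3D>,
}

// A small shim like this would let the benchmarks drive the core type:
fn to_core_points(points: &[[f32; 3]]) -> Vec<Point3D> {
    points.iter().map(|p| Point3D { x: p[0], y: p[1], z: p[2] }).collect()
}
```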
pubsub_latency.rs
| Benchmark Group | Benchmarks | Description |
|---|---|---|
| Publisher Creation | create_publisher | Publisher construction overhead |
| Subscriber Creation | create_subscriber | Subscriber construction overhead |
| Publish Latency | single_publish | Single message publish time |
| Publish Throughput | batch_publish (10, 100, 1000) | Burst publishing at different batch sizes |
| End-to-End Latency | pubsub_roundtrip | Full publish cycle (no actual receive) |
| Serializer Comparison | CDR_publish, JSON_publish | Publish with different formats |
| Concurrent Publishers | concurrent (1, 2, 4, 8) | Multiple publisher scaling |
Note: This benchmark references Publisher::new(topic, Serializer::Cdr) with a two-argument constructor and Serializer::Cdr enum variant. The actual core crate uses Publisher::new(topic) (single argument, defaults to CDR) and Serializer::new(Format::Cdr) (struct, not enum). These API mismatches mean this benchmark will not compile against the current core crate.
executor_performance.rs
| Benchmark Group | Benchmarks | Description |
|---|---|---|
| Executor Creation | create_executor | Runtime initialization cost |
| Task Spawning | spawn_high_priority, spawn_low_priority | Per-task spawn overhead |
| Scheduler Overhead | priority_low/high, deadline_check_fast/slow | Scheduling decision cost |
| Task Distribution | spawn_tasks (10, 100, 1000) | Bulk task spawning with mixed priorities |
| Async Task Execution | execute_sync_task, execute_with_yield | Task execution overhead |
| Priority Handling | mixed_priorities | Interleaved priority levels |
| Deadline Distribution | tight_deadlines, loose_deadlines | High vs low priority runtime routing |
Note: This benchmark references Priority::High, Priority::Medium, Priority::Low as enum variants and PriorityScheduler::should_use_high_priority(). The actual rt crate uses Priority(pub u8) as a newtype (not an enum) and PriorityScheduler has no should_use_high_priority() method. These API mismatches mean this benchmark will not compile against the current rt crate.
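One low-cost fix on the benchmark side is a set of named constants over the newtype. A sketch, assuming `Priority(pub u8)` as the review describes; the concrete `u8` values are illustrative assumptions:

```rust
// Priority as the newtype the rt crate defines, per this review.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct Priority(pub u8);

// Enum-like names the benchmarks expect, recovered as constants.
// The concrete values 0/128/255 are assumptions, not the crate's.
impl Priority {
    pub const LOW: Priority = Priority(0);
    pub const MEDIUM: Priority = Priority(128);
    pub const HIGH: Priority = Priority(255);
}
```

This keeps the newtype's ordering semantics while letting benchmark code read like the enum version it was written against.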
Dependency Analysis
| Dependency | Purpose | Weight |
|---|---|---|
| `agentic-robotics-core` (path) | Core types for benchmarking | Internal |
| `agentic-robotics-rt` (path) | RT types for benchmarking | Internal |
| `criterion` 0.5 | Benchmark framework | Medium |
| `tokio` 1.40 | Async runtime | Standard |
| `serde` 1.0 | Serialization | Standard |
| `serde_json` 1.0 | JSON | Standard |
Notable: Dependencies are specified with explicit versions (not workspace), unlike the other crates. This means they could drift from workspace versions. The crate has publish = false.
Code Quality Assessment
Compilation Status: The benchmarks will NOT compile against the current source crates due to multiple API mismatches:
- `Pose` struct fields differ (benchmarks expect `frame_id`; the source has none)
- `PointCloud` field types differ (`Vec<[f32; 3]>` vs `Vec<Point3D>`)
- `Publisher` constructor signature differs (2-arg vs 1-arg)
- `Serializer` is used as an enum variant, not a struct
- `Priority` is used as an enum, not a newtype
- `PriorityScheduler::should_use_high_priority()` does not exist
This suggests the benchmarks were written against a different (possibly planned or previous) version of the API.
Benchmark Design: Despite the compilation issues, the benchmark structure is well-designed:
- Uses `Throughput::Bytes` for size-aware benchmarking
- Scaling benchmarks test across multiple orders of magnitude (100 to 100K points)
- Concurrent publisher benchmarks test scaling characteristics
- `iter_custom` is used correctly for measuring async operation latency
- `black_box` is applied consistently to prevent dead code elimination
Concerns:
- The `benchmark_task_distribution` test creates a new `ROS3Executor` (with 6 threads) per iteration -- this is extremely expensive and will dominate the benchmark.
- `futures::executor::block_on` is used in pubsub benchmarks instead of Criterion's `b.to_async()` pattern. This adds overhead from blocking the thread.
- No warm-up or steady-state verification for executor benchmarks.
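The executor-per-iteration issue is the standard hoist-the-setup fix. A std-only sketch of the pattern (`FakeExecutor` is a hypothetical stand-in; the real benchmark would hoist `ROS3Executor` construction out of the timed loop the same way):

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for an expensive-to-build executor.
struct FakeExecutor;
impl FakeExecutor {
    fn new(_threads: usize) -> Self {
        FakeExecutor
    }
    fn spawn_noop(&self) {}
}

// Build the executor once, outside the timed loop, so the measurement
// captures only per-task spawn cost rather than construction cost.
fn time_spawns(iters: u64) -> Duration {
    let executor = FakeExecutor::new(6); // hoisted out of the loop
    let start = Instant::now();
    for _ in 0..iters {
        executor.spawn_noop();
    }
    start.elapsed()
}
```

Criterion's `iter_custom` closure can call a function shaped like `time_spawns` directly, since it expects a `Duration` covering exactly the iterations measured.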
Integration Points with ruvector
- Combined Benchmark Suite: The benchmark patterns could be extended to test combined robotics + ML workloads, e.g., serialize PointCloud -> HNSW insert -> search -> publish results.
- Latency Profiling: The serialization benchmarks provide a template for benchmarking ruvector's own serialization paths (vector encoding, index persistence).
- Scaling Characterization: The message size scaling pattern (100 to 100K points) directly applies to benchmarking vector search with different collection sizes.
Cross-Crate Dependency Graph
```
agentic-robotics-benchmarks (publish = false)
|
+---> agentic-robotics-core
+---> agentic-robotics-rt
      |
      +---> agentic-robotics-core

agentic-robotics-node
|
+---> agentic-robotics-core (ACTUALLY USED)

agentic-robotics-mcp
|
+---> agentic-robotics-core (declared but unused)

agentic-robotics-embedded
|
+---> agentic-robotics-core (declared but unused)

agentic-robotics-rt
|
+---> agentic-robotics-core (declared but unused)
```
Key observation: Only agentic-robotics-node actually imports and uses types from agentic-robotics-core. The other 3 crates (rt, mcp, embedded) declare it as a dependency but do not use it. The benchmarks crate references core types by the old crate name (ros3_core, ros3_rt), which suggests the crates were renamed from ros3-* to agentic-robotics-* but the benchmarks were not updated.
Overall Assessment
Summary Table
| Crate | Lines | Tests | Quality | Compilable | Maturity |
|---|---|---|---|---|---|
| core | 705 | 8 | B | Yes | Alpha -- middleware stubbed |
| rt | 512 | 5 | B- | Yes | Alpha -- scheduler disconnected |
| mcp | 506 | 3 | B+ | Yes | Beta -- core protocol working |
| embedded | 41 | 1 | C | Yes | Skeleton -- no functionality |
| node | 236 | 5 | B+ | Yes (with napi) | Alpha -- working NAPI bindings |
| benchmarks | 635 | 0 | B- | No -- API mismatches | Broken -- needs API updates |
Total Metrics
- Total source lines: 2,635 (including benchmarks)
- Total tests: 22
- Unsafe code: None across all crates
- Broken crates: 1 (benchmarks -- API mismatches with current source)
- Unused dependencies: 12 instances across all crates
Strengths
- Clean type design: The `Message` trait and the `Publisher<T>` / `Subscriber<T>` generics are well-structured with proper `Send + Sync + 'static` bounds.
- Serialization flexibility: CDR + JSON + rkyv (future) covers binary efficiency, debugging, and zero-copy use cases.
- LatencyTracker: The HDR histogram-based latency tracker is production-quality with RAII guards and non-blocking recording.
- MCP implementation: The MCP server is the most complete component, with proper JSON-RPC 2.0 handling and a clean tool registration API.
- NAPI bindings: Follow established patterns and work correctly with JSON serialization at the boundary.
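The RAII-guard idea noted for LatencyTracker can be sketched with std types only. The real crate records into an HDR histogram and is non-blocking; here a `Vec` of samples stands in, and all names are illustrative:

```rust
use std::cell::RefCell;
use std::time::{Duration, Instant};

// Std-only stand-in for an HDR-histogram-backed latency tracker.
struct Recorder {
    samples: RefCell<Vec<Duration>>,
}

// Guard starts a timer on creation and records on drop (RAII),
// so callers cannot forget to stop the measurement.
struct Guard<'a> {
    start: Instant,
    rec: &'a Recorder,
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        self.rec.samples.borrow_mut().push(self.start.elapsed());
    }
}

impl Recorder {
    fn new() -> Self {
        Recorder { samples: RefCell::new(Vec::new()) }
    }
    fn start(&self) -> Guard<'_> {
        Guard { start: Instant::now(), rec: self }
    }
}
```

The measured region is simply the guard's lexical scope, which is what makes the pattern hard to misuse.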
Weaknesses
- Placeholder code: Zenoh middleware, rkyv serialization, Service client, and embedded support are all stubs.
- Disconnected components: The PriorityScheduler is never used by the executor. The core crate is declared as a dependency by 4 crates but actually used by only 1.
- API drift: The benchmarks reference a different API surface than what exists in the source, indicating either the API changed after benchmarks were written or the benchmarks target a planned future API.
- Shallow testing: Tests cover happy paths only. No error path testing, no concurrent access testing, no integration tests across crates.
- Dependency bloat:
`zenoh` and `rustdds` in core are heavyweight dependencies that are never used. Multiple crates declare `crossbeam`, `rayon`, `thiserror`, and `tracing` without importing them.
Safety Assessment
| Concern | Status |
|---|---|
| Unsafe code | None -- all crates are safe Rust |
| Thread safety | Proper use of Arc, Mutex, RwLock throughout |
| Error handling | thiserror/anyhow pattern, but some .unwrap() in MCP server |
| Panic risk | Default::default() on ROS3Executor uses .expect() |
| Memory safety | No raw pointers, no manual memory management |
| Input validation | Minimal -- MCP server validates JSON structure but not content |
Integration Roadmap for ruvector
Phase 1: Direct Utility (No modification needed)
- LatencyTracker adoption: Import `agentic-robotics-rt::LatencyTracker` into ruvector's profiling infrastructure for HDR histogram-based latency monitoring of HNSW search, GNN inference, and attention computation.
- MCP tool exposure: Use `agentic-robotics-mcp::McpServer` to expose ruvector capabilities as MCP tools:
  - `vector_search` -- HNSW nearest-neighbor queries
  - `vector_insert` -- Add vectors to collections
  - `gnn_inference` -- Graph neural network forward pass
  - `attention_compute` -- Multi-head/flash attention
  - `collection_stats` -- Index statistics and health
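For a sense of scale, a tool registry in this spirit needs little code. A hypothetical std-only sketch — the actual `McpServer` registration signature is not shown in this review, so every name here is illustrative:

```rust
use std::collections::HashMap;

// Hypothetical tool registry; illustrative only, not the real
// agentic-robotics-mcp API. Tools map a JSON-ish argument string
// to a result string.
type Tool = Box<dyn Fn(&str) -> String>;

struct ToolRegistry {
    tools: HashMap<String, Tool>,
}

impl ToolRegistry {
    fn new() -> Self {
        ToolRegistry { tools: HashMap::new() }
    }
    fn register(&mut self, name: &str, f: Tool) {
        self.tools.insert(name.to_string(), f);
    }
    fn call(&self, name: &str, args: &str) -> Option<String> {
        self.tools.get(name).map(|f| f(args))
    }
}
```

Registering `vector_search` would then be a one-liner wrapping ruvector's HNSW query path.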
Phase 2: Adapter Layer (Thin wrappers)
- PointCloud <-> Vector adapter: Create a bidirectional conversion between `PointCloud` (robotics 3D sensor data) and ruvector's vector types. This enables real-time HNSW indexing of LiDAR/depth sensor data:

  ```rust
  impl From<&PointCloud> for Vec<[f32; 3]> { ... }
  impl From<Vec<[f32; 3]>> for PointCloud { ... }
  ```

- Message trait for vector types: Implement `Message` on ruvector's core vector/embedding types to enable pub/sub distribution.
- NAPI unification: Combine `agentic-robotics-node` with `ruvector-node` into a single npm package or create an interop layer.
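The stubbed adapter impls flesh out to only a few lines. A sketch, assuming `PointCloud` wraps a `Vec<Point3D>` of plain `f32` fields as described earlier in this review (the struct definitions here are reconstructed assumptions):

```rust
// Assumed shapes, reconstructed from this review's notes.
#[derive(Clone, Copy)]
pub struct Point3D { pub x: f32, pub y: f32, pub z: f32 }

pub struct PointCloud { pub points: Vec<Point3D> }

// PointCloud -> flat arrays suitable for vector indexing.
impl From<&PointCloud> for Vec<[f32; 3]> {
    fn from(pc: &PointCloud) -> Self {
        pc.points.iter().map(|p| [p.x, p.y, p.z]).collect()
    }
}

// Flat arrays -> PointCloud for publishing back out.
impl From<Vec<[f32; 3]>> for PointCloud {
    fn from(v: Vec<[f32; 3]>) -> Self {
        PointCloud {
            points: v.into_iter()
                .map(|a| Point3D { x: a[0], y: a[1], z: a[2] })
                .collect(),
        }
    }
}
```

Both impls are orphan-rule legal because `PointCloud` is local to the crate defining them.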
Phase 3: Deep Integration (Architectural changes)
- Dual-runtime for ML workloads: Adapt the ROS3Executor pattern for ruvector:
  - High-priority runtime: real-time inference queries (< 1 ms SLA)
  - Low-priority runtime: batch indexing, model training, compaction
  - Connect the PriorityScheduler to actually route tasks by deadline
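The deadline-based routing policy can be expressed in a few lines. A sketch, assuming the threshold tracks the < 1 ms SLA noted above (the cutoff value and all names are assumptions, not the rt crate's API):

```rust
use std::time::Duration;

// Which runtime a task should land on.
#[derive(Debug, PartialEq)]
enum Runtime {
    HighPriority, // real-time inference
    LowPriority,  // batch indexing, training, compaction
}

// Route by deadline: anything tighter than 1 ms goes to the
// high-priority runtime. The 1 ms cutoff mirrors the SLA above.
fn route_by_deadline(deadline: Duration) -> Runtime {
    if deadline < Duration::from_millis(1) {
        Runtime::HighPriority
    } else {
        Runtime::LowPriority
    }
}
```

Wiring a function like this into task spawn paths is what "connect the PriorityScheduler" amounts to in practice.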
- Zenoh-based distributed vectors: When the Zenoh middleware is completed, use it for distributed vector index replication (similar to ruvector-replication but over Zenoh instead of custom protocols).
- CDR serialization for vector wire format: Use CDR as the wire format for vector data in DDS-compatible robotics environments, enabling direct integration with ROS2 systems.
Recommended Priority Order
1. LatencyTracker (immediate value, no risk)
2. MCP tool exposure (high value for agentic workflows)
3. PointCloud adapter (enables robotics use cases)
4. NAPI unification (reduces maintenance burden)
5. Dual-runtime (significant architectural benefit, higher risk)
6. Zenoh distributed vectors (depends on Zenoh middleware completion)
Prerequisites Before Integration
- Fix benchmark compilation errors (update to current API)
- Remove unused dependencies from all crates (zenoh, rustdds, rayon, crossbeam where unused)
- Connect PriorityScheduler to ROS3Executor
- Complete rkyv serialization implementation or remove the stub
- Add error path tests across all crates
- Resolve the `ros3_core`/`ros3_rt` crate name references in benchmarks