Merge commit 'd803bfe2b1fe7f5e219e50ac20d6801a0a58ac75' as 'vendor/ruvector'

This commit is contained in:
ruv
2026-02-28 14:39:40 -05:00
7854 changed files with 3522914 additions and 0 deletions

README.md

@@ -0,0 +1,232 @@
# @ruvector/learning-wasm - Ultra-Fast MicroLoRA for WebAssembly
[![npm version](https://img.shields.io/npm/v/ruvector-learning-wasm.svg)](https://www.npmjs.com/package/ruvector-learning-wasm)
[![License: MIT OR Apache-2.0](https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg)](https://github.com/ruvnet/ruvector)
[![Bundle Size](https://img.shields.io/badge/bundle%20size-15KB%20gzip-green.svg)](https://www.npmjs.com/package/ruvector-learning-wasm)
[![WebAssembly](https://img.shields.io/badge/WebAssembly-654FF0?logo=webassembly&logoColor=white)](https://webassembly.org/)
Ultra-fast **Low-Rank Adaptation (LoRA)** for WebAssembly with sub-100 microsecond adaptation latency. Designed for real-time per-operator-type learning in query optimization systems, edge AI, and browser-based machine learning applications.
## Key Features
- **Rank-2 LoRA Architecture**: Minimal parameter count (2d parameters per adapter) for efficient edge deployment
- **Sub-100us Adaptation Latency**: Instant weight updates enabling real-time learning
- **Per-Operator Scoping**: Separate adapters for 17 different operator types (scan, filter, join, aggregate, etc.)
- **Zero-Allocation Forward Pass**: Direct memory access for maximum performance
- **Trajectory Buffer**: Track learning history with success rate analytics
- **WASM-Optimized**: `no_std`-compatible with minimal allocations
## Installation
```bash
npm install ruvector-learning-wasm
# or
yarn add ruvector-learning-wasm
# or
pnpm add ruvector-learning-wasm
```
## Quick Start
### TypeScript/JavaScript
```typescript
import init, { WasmMicroLoRA } from 'ruvector-learning-wasm';
// Initialize WASM module
await init();
// Create a MicroLoRA engine (256-dim embeddings)
const lora = new WasmMicroLoRA(256, 0.1, 0.01);
// Forward pass with typed arrays
const input = new Float32Array(256).fill(0.1);
const output = lora.forward_array(input);
// Adapt based on gradient
const gradient = new Float32Array(256);
gradient.fill(0.05);
lora.adapt_array(gradient);
// Or use reward-based adaptation; per the type definitions, the engine's
// internal input buffer is used as the gradient
lora.adapt_with_reward(0.15); // 15% improvement
console.log(`Adaptations: ${lora.adapt_count()}`);
console.log(`Delta norm: ${lora.delta_norm()}`);
```
### Zero-Allocation Forward Pass
For maximum performance, use direct memory access:
```typescript
// init() resolves to the module's exports (InitOutput), including its memory
const { memory } = await init();
const inputData = new Float32Array(256).fill(0.1);
// Get buffer pointers
const inputPtr = lora.get_input_ptr();
const outputPtr = lora.get_output_ptr();
// Write directly to WASM memory
const inputView = new Float32Array(memory.buffer, inputPtr, 256);
inputView.set(inputData);
// Execute forward pass (zero allocation)
lora.forward();
// Read output directly from WASM memory
const result = new Float32Array(memory.buffer, outputPtr, 256);
```
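Note that `memory.buffer` is replaced whenever WASM memory grows, which detaches previously created views. A minimal sketch of a wrapper that re-creates the views on every call (reusing `lora` and `memory` from the snippet above):

```typescript
// Safe zero-allocation forward: rebuild views each call, since a memory.grow()
// inside the module would detach any Float32Array created earlier.
function forwardZeroAlloc(input: Float32Array): Float32Array {
  const dim = lora.dim();
  const inView = new Float32Array(memory.buffer, lora.get_input_ptr(), dim);
  inView.set(input);   // copy input into the WASM-side buffer
  lora.forward();      // zero-allocation forward pass
  return new Float32Array(memory.buffer, lora.get_output_ptr(), dim);
}
```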
### Per-Operator Scoped LoRA
```typescript
import { WasmScopedLoRA } from 'ruvector-learning-wasm';
const scopedLora = new WasmScopedLoRA(256, 0.1, 0.01);
// Operator types: 0=Scan, 1=Filter, 2=Join, 3=Aggregate, 4=Project, 5=Sort, ...
const SCAN_OP = 0;
const JOIN_OP = 2;
// Forward pass for specific operator
const scanOutput = scopedLora.forward_array(SCAN_OP, input);
// Adapt specific operator based on improvement
scopedLora.adapt_with_reward(JOIN_OP, 0.25);
// Get operator name
console.log(WasmScopedLoRA.scope_name(SCAN_OP)); // "Scan"
// Check per-operator statistics
console.log(`Scan adaptations: ${scopedLora.adapt_count(SCAN_OP)}`);
console.log(`Total adaptations: ${scopedLora.total_adapt_count()}`);
```
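How the improvement signal is computed is left to the caller. One plausible convention, an assumption consistent with the fractional values used above, is relative speedup versus a baseline plan:

```typescript
// Hypothetical feedback hook: reward an operator's adapter with its measured
// speedup. The (baseline - actual) / baseline convention is an assumption;
// 0.25 here means "25% faster than baseline".
function rewardOperator(opType: number, execMs: number, baselineMs: number): void {
  const improvement = (baselineMs - execMs) / baselineMs;
  scopedLora.adapt_with_reward(opType, improvement);
}
```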
### Trajectory Tracking
```typescript
import { WasmTrajectoryBuffer } from 'ruvector-learning-wasm';
const buffer = new WasmTrajectoryBuffer(1000, 256);
// Record trajectories
const embedding = new Float32Array(256).fill(0.1);
buffer.record(
  embedding,  // Float32Array (256-dim)
  2,          // operator type (JOIN)
  5,          // attention mechanism used
  45.2,       // actual execution time (ms)
  120.5       // baseline execution time (ms)
);
// Analyze learning progress
console.log(`Success rate: ${(buffer.success_rate() * 100).toFixed(1)}%`);
console.log(`Best improvement: ${buffer.best_improvement()}x`);
console.log(`Mean improvement: ${buffer.mean_improvement()}x`);
console.log(`Best attention mechanism: ${buffer.best_attention()}`);
// Count high-quality trajectories
const topCount = buffer.high_quality_count(0.5); // trajectories with >50% improvement
```
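In the example above, a 45.2 ms execution against a 120.5 ms baseline corresponds to roughly a 2.7x improvement if the ratio is defined as `baseline_ms / execution_ms` (an assumption; the type definitions do not spell out the formula).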
## Architecture
```
Input Embedding (d-dim)
        |
        v
  +-----------+
  | A: d x 2  |   Down projection (d -> 2)
  +-----------+
        |
        v
  +-----------+
  | B: 2 x d  |   Up projection (2 -> d)
  +-----------+
        |
        v
Delta W = alpha * (A @ B)    (rank-2, d x d)
        |
        v
Output = Input + Input @ Delta W
```
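Written out, the forward pass never needs to materialize the d x d matrix `Delta W`; it applies the two thin projections in sequence. A plain-TypeScript sketch of the rank-2 update (illustrative only; the shipped kernel is the WASM implementation):

```typescript
// Rank-2 LoRA forward: out = x + alpha * (x @ A) @ B
// A: d x 2 (down projection), B: 2 x d (up projection), both row-major.
function loraForward(x: Float32Array, A: Float32Array, B: Float32Array,
                     alpha: number): Float32Array {
  const d = x.length;
  // Down projection: h = x @ A, a 2-vector
  const h = [0, 0];
  for (let i = 0; i < d; i++) {
    h[0] += x[i] * A[i * 2];
    h[1] += x[i] * A[i * 2 + 1];
  }
  // Up projection plus residual: out[j] = x[j] + alpha * (h @ B)[j]
  const out = new Float32Array(d);
  for (let j = 0; j < d; j++) {
    out[j] = x[j] + alpha * (h[0] * B[j] + h[1] * B[d + j]);
  }
  return out;
}
```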
## Performance Benchmarks
| Operation | Latency | Throughput |
|-----------|---------|------------|
| Forward (256-dim) | ~15us | 66K ops/sec |
| Adapt (gradient) | ~25us | 40K ops/sec |
| Forward (zero-alloc) | ~8us | 125K ops/sec |
| Scoped forward | ~20us | 50K ops/sec |
| Trajectory record | ~5us | 200K ops/sec |
Tested on Chrome 120+ / Node.js 20+ with WASM SIMD support.
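Figures like these are environment-dependent; a rough harness for measuring the allocating forward path yourself (iteration and warmup counts are arbitrary choices):

```typescript
// Micro-benchmark for forward_array (requires an initialized `lora` engine).
const input = new Float32Array(256).fill(0.1);
for (let i = 0; i < 1_000; i++) lora.forward_array(input); // warmup (JIT, caches)

const iters = 100_000;
const t0 = performance.now();
for (let i = 0; i < iters; i++) lora.forward_array(input);
const usPerOp = ((performance.now() - t0) * 1000) / iters;
console.log(`forward_array: ~${usPerOp.toFixed(1)} us/op`);
```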
## API Reference
### WasmMicroLoRA
| Method | Description |
|--------|-------------|
| `new(dim?, alpha?, learning_rate?)` | Create engine (defaults: 256, 0.1, 0.01) |
| `forward_array(input)` | Forward pass with Float32Array (allocates output) |
| `forward()` | Zero-allocation forward using internal buffers |
| `adapt_array(gradient)` | Adapt with gradient vector |
| `adapt()` | Adapt using the input buffer as gradient |
| `adapt_with_reward(improvement)` | Reward-based adaptation |
| `delta_norm()` | Get weight change magnitude |
| `adapt_count()` / `forward_count()` | Adaptation / forward-pass counts |
| `param_count()` / `dim()` | Parameter count / embedding dimension |
| `get_input_ptr()` / `get_output_ptr()` | Buffer pointers for direct memory access |
| `reset()` | Reset to initial state |
### WasmScopedLoRA
| Method | Description |
|--------|-------------|
| `new(dim?, alpha?, learning_rate?)` | Create scoped manager (defaults: 256, 0.1, 0.01) |
| `forward_array(op_type, input)` | Forward pass for an operator |
| `forward(op_type)` | Zero-allocation forward for an operator (0-16) |
| `adapt_array(op_type, gradient)` | Operator-specific gradient adaptation |
| `adapt_with_reward(op_type, improvement)` | Operator-specific reward adaptation |
| `scope_name(op_type)` | Get operator name (static) |
| `adapt_count(op_type)` / `forward_count(op_type)` | Per-operator counters |
| `total_adapt_count()` / `total_forward_count()` | Totals across all operators |
| `delta_norm(op_type)` | Weight change magnitude for an operator |
| `reset_scope(op_type)` / `reset_all()` | Reset one adapter / all adapters |
| `set_category_fallback(enabled)` | Enable category fallback |
### WasmTrajectoryBuffer
| Method | Description |
|--------|-------------|
| `new(capacity?, embedding_dim?)` | Create buffer (embedding_dim defaults to 256) |
| `record(embedding, op_type, attention_type, exec_ms, baseline_ms)` | Record a trajectory |
| `success_rate()` | Get success rate (0.0-1.0) |
| `best_improvement()` / `mean_improvement()` | Best / mean improvement ratio |
| `best_attention()` | Best-performing attention type |
| `high_quality_count(threshold)` | Count of trajectories above threshold |
| `count_by_operator(op_type)` | Trajectory count for an operator |
| `len()` / `is_empty()` / `total_count()` | Buffer size queries |
| `variance()` | Get variance |
| `reset()` | Clear the buffer |
## Use Cases
- **Query Optimization**: Learn optimal attention mechanisms per SQL operator
- **Edge AI Personalization**: Real-time model adaptation on user devices
- **Browser ML**: In-browser fine-tuning without server round-trips
- **Federated Learning**: Lightweight local adaptation for aggregation
- **Reinforcement Learning**: Fast policy adaptation from rewards
## Bundle Size
- **WASM binary**: ~39KB (uncompressed)
- **Gzip compressed**: ~15KB
- **JavaScript glue**: ~5KB
## Related Packages
- [ruvector-attention-unified-wasm](https://www.npmjs.com/package/ruvector-attention-unified-wasm) - 18+ attention mechanisms
- [ruvector-nervous-system-wasm](https://www.npmjs.com/package/ruvector-nervous-system-wasm) - Bio-inspired neural components
- [ruvector-economy-wasm](https://www.npmjs.com/package/ruvector-economy-wasm) - CRDT credit economy
## License
MIT OR Apache-2.0
## Links
- [GitHub Repository](https://github.com/ruvnet/ruvector)
- [Full Documentation](https://ruv.io)
- [Bug Reports](https://github.com/ruvnet/ruvector/issues)
---
**Keywords**: LoRA, Low-Rank Adaptation, machine learning, WASM, WebAssembly, neural network, edge AI, adaptation, fine-tuning, query optimization, real-time learning, micro LoRA, rank-2, browser ML

package.json

@@ -0,0 +1,43 @@
{
  "name": "@ruvector/learning-wasm",
  "type": "module",
  "collaborators": [
    "rUv <ruvnet@users.noreply.github.com>"
  ],
  "author": "RuVector Team <ruvnet@users.noreply.github.com>",
  "description": "Ultra-fast MicroLoRA adaptation for WASM - rank-2 LoRA with <100us latency for per-operator learning",
  "version": "0.1.29",
  "license": "MIT OR Apache-2.0",
  "repository": {
    "type": "git",
    "url": "https://github.com/ruvnet/ruvector"
  },
  "bugs": {
    "url": "https://github.com/ruvnet/ruvector/issues"
  },
  "files": [
    "ruvector_learning_wasm_bg.wasm",
    "ruvector_learning_wasm.js",
    "ruvector_learning_wasm.d.ts",
    "ruvector_learning_wasm_bg.wasm.d.ts",
    "README.md"
  ],
  "main": "ruvector_learning_wasm.js",
  "homepage": "https://ruv.io",
  "types": "ruvector_learning_wasm.d.ts",
  "sideEffects": [
    "./snippets/*"
  ],
  "keywords": [
    "lora",
    "machine-learning",
    "wasm",
    "neural-network",
    "adaptation",
    "ruvector",
    "webassembly",
    "ai",
    "deep-learning",
    "micro-lora"
  ]
}

ruvector_learning_wasm.d.ts

@@ -0,0 +1,292 @@
/* tslint:disable */
/* eslint-disable */
export class WasmMicroLoRA {
free(): void;
[Symbol.dispose](): void;
/**
* Get delta norm (weight change magnitude)
*/
delta_norm(): number;
/**
* Adapt with typed array gradient
*/
adapt_array(gradient: Float32Array): void;
/**
* Get adaptation count
*/
adapt_count(): bigint;
/**
* Get parameter count
*/
param_count(): number;
/**
* Forward pass with typed array input (allocates output)
*/
forward_array(input: Float32Array): Float32Array;
/**
* Get forward pass count
*/
forward_count(): bigint;
/**
* Get pointer to input buffer for direct memory access
*/
get_input_ptr(): number;
/**
* Get pointer to output buffer for direct memory access
*/
get_output_ptr(): number;
/**
* Adapt with improvement reward using input buffer as gradient
*/
adapt_with_reward(improvement: number): void;
/**
* Get embedding dimension
*/
dim(): number;
/**
* Create a new MicroLoRA engine
*
* @param dim - Embedding dimension (default 256, max 256)
* @param alpha - Scaling factor (default 0.1)
* @param learning_rate - Learning rate (default 0.01)
*/
constructor(dim?: number | null, alpha?: number | null, learning_rate?: number | null);
/**
* Adapt using input buffer as gradient
*/
adapt(): void;
/**
* Reset the engine
*/
reset(): void;
/**
* Forward pass using internal buffers (zero-allocation)
*
* Write input to get_input_ptr(), call forward(), read from get_output_ptr()
*/
forward(): void;
}
export class WasmScopedLoRA {
free(): void;
[Symbol.dispose](): void;
/**
* Get delta norm for operator
*/
delta_norm(op_type: number): number;
/**
* Get operator scope name
*/
static scope_name(op_type: number): string;
/**
* Adapt with typed array
*/
adapt_array(op_type: number, gradient: Float32Array): void;
/**
* Get adapt count for operator
*/
adapt_count(op_type: number): bigint;
/**
* Reset specific operator adapter
*/
reset_scope(op_type: number): void;
/**
* Forward pass with typed array
*/
forward_array(op_type: number, input: Float32Array): Float32Array;
/**
* Get forward count for operator
*/
forward_count(op_type: number): bigint;
/**
* Get input buffer pointer
*/
get_input_ptr(): number;
/**
* Get output buffer pointer
*/
get_output_ptr(): number;
/**
* Adapt with improvement reward
*/
adapt_with_reward(op_type: number, improvement: number): void;
/**
* Get total adapt count
*/
total_adapt_count(): bigint;
/**
* Get total forward count
*/
total_forward_count(): bigint;
/**
* Enable/disable category fallback
*/
set_category_fallback(enabled: boolean): void;
/**
* Create a new scoped LoRA manager
*
* @param dim - Embedding dimension (max 256)
* @param alpha - Scaling factor (default 0.1)
* @param learning_rate - Learning rate (default 0.01)
*/
constructor(dim?: number | null, alpha?: number | null, learning_rate?: number | null);
/**
* Adapt for operator type using input buffer as gradient
*/
adapt(op_type: number): void;
/**
* Forward pass for operator type (uses internal buffers)
*
* @param op_type - Operator type (0-16)
*/
forward(op_type: number): void;
/**
* Reset all adapters
*/
reset_all(): void;
}
export class WasmTrajectoryBuffer {
free(): void;
[Symbol.dispose](): void;
/**
* Get total count
*/
total_count(): bigint;
/**
* Get success rate
*/
success_rate(): number;
/**
* Get best attention type
*/
best_attention(): number;
/**
* Get best improvement
*/
best_improvement(): number;
/**
* Get mean improvement
*/
mean_improvement(): number;
/**
* Get trajectory count for operator
*/
count_by_operator(op_type: number): number;
/**
* Get high quality trajectory count
*/
high_quality_count(threshold: number): number;
/**
* Get buffer length
*/
len(): number;
/**
* Create a new trajectory buffer
*
* @param capacity - Maximum number of trajectories to store
* @param embedding_dim - Dimension of embeddings (default 256)
*/
constructor(capacity?: number | null, embedding_dim?: number | null);
/**
* Reset buffer
*/
reset(): void;
/**
* Record a trajectory
*
* @param embedding - Embedding vector (Float32Array)
* @param op_type - Operator type (0-16)
* @param attention_type - Attention mechanism used
* @param execution_ms - Actual execution time
* @param baseline_ms - Baseline execution time
*/
record(embedding: Float32Array, op_type: number, attention_type: number, execution_ms: number, baseline_ms: number): void;
/**
* Check if empty
*/
is_empty(): boolean;
/**
* Get variance
*/
variance(): number;
}
export type InitInput = RequestInfo | URL | Response | BufferSource | WebAssembly.Module;
export interface InitOutput {
readonly memory: WebAssembly.Memory;
readonly __wbg_wasmmicrolora_free: (a: number, b: number) => void;
readonly __wbg_wasmscopedlora_free: (a: number, b: number) => void;
readonly __wbg_wasmtrajectorybuffer_free: (a: number, b: number) => void;
readonly wasmmicrolora_adapt: (a: number) => void;
readonly wasmmicrolora_adapt_array: (a: number, b: number, c: number) => void;
readonly wasmmicrolora_adapt_count: (a: number) => bigint;
readonly wasmmicrolora_adapt_with_reward: (a: number, b: number) => void;
readonly wasmmicrolora_delta_norm: (a: number) => number;
readonly wasmmicrolora_dim: (a: number) => number;
readonly wasmmicrolora_forward: (a: number) => void;
readonly wasmmicrolora_forward_array: (a: number, b: number, c: number, d: number) => void;
readonly wasmmicrolora_forward_count: (a: number) => bigint;
readonly wasmmicrolora_get_input_ptr: (a: number) => number;
readonly wasmmicrolora_get_output_ptr: (a: number) => number;
readonly wasmmicrolora_new: (a: number, b: number, c: number) => number;
readonly wasmmicrolora_param_count: (a: number) => number;
readonly wasmmicrolora_reset: (a: number) => void;
readonly wasmscopedlora_adapt: (a: number, b: number) => void;
readonly wasmscopedlora_adapt_array: (a: number, b: number, c: number, d: number) => void;
readonly wasmscopedlora_adapt_count: (a: number, b: number) => bigint;
readonly wasmscopedlora_adapt_with_reward: (a: number, b: number, c: number) => void;
readonly wasmscopedlora_delta_norm: (a: number, b: number) => number;
readonly wasmscopedlora_forward: (a: number, b: number) => void;
readonly wasmscopedlora_forward_array: (a: number, b: number, c: number, d: number, e: number) => void;
readonly wasmscopedlora_forward_count: (a: number, b: number) => bigint;
readonly wasmscopedlora_get_input_ptr: (a: number) => number;
readonly wasmscopedlora_get_output_ptr: (a: number) => number;
readonly wasmscopedlora_new: (a: number, b: number, c: number) => number;
readonly wasmscopedlora_reset_all: (a: number) => void;
readonly wasmscopedlora_reset_scope: (a: number, b: number) => void;
readonly wasmscopedlora_scope_name: (a: number, b: number) => void;
readonly wasmscopedlora_set_category_fallback: (a: number, b: number) => void;
readonly wasmscopedlora_total_adapt_count: (a: number) => bigint;
readonly wasmscopedlora_total_forward_count: (a: number) => bigint;
readonly wasmtrajectorybuffer_best_attention: (a: number) => number;
readonly wasmtrajectorybuffer_best_improvement: (a: number) => number;
readonly wasmtrajectorybuffer_count_by_operator: (a: number, b: number) => number;
readonly wasmtrajectorybuffer_high_quality_count: (a: number, b: number) => number;
readonly wasmtrajectorybuffer_is_empty: (a: number) => number;
readonly wasmtrajectorybuffer_len: (a: number) => number;
readonly wasmtrajectorybuffer_mean_improvement: (a: number) => number;
readonly wasmtrajectorybuffer_new: (a: number, b: number) => number;
readonly wasmtrajectorybuffer_record: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
readonly wasmtrajectorybuffer_reset: (a: number) => void;
readonly wasmtrajectorybuffer_success_rate: (a: number) => number;
readonly wasmtrajectorybuffer_total_count: (a: number) => bigint;
readonly wasmtrajectorybuffer_variance: (a: number) => number;
readonly __wbindgen_export: (a: number, b: number) => number;
readonly __wbindgen_add_to_stack_pointer: (a: number) => number;
readonly __wbindgen_export2: (a: number, b: number, c: number) => void;
}
export type SyncInitInput = BufferSource | WebAssembly.Module;
/**
* Instantiates the given `module`, which can either be bytes or
* a precompiled `WebAssembly.Module`.
*
* @param {{ module: SyncInitInput }} module - Passing `SyncInitInput` directly is deprecated.
*
* @returns {InitOutput}
*/
export function initSync(module: { module: SyncInitInput } | SyncInitInput): InitOutput;
/**
* If `module_or_path` is {RequestInfo} or {URL}, makes a request and
* for everything else, calls `WebAssembly.instantiate` directly.
*
* @param {{ module_or_path: InitInput | Promise<InitInput> }} module_or_path - Passing `InitInput` directly is deprecated.
*
* @returns {Promise<InitOutput>}
*/
export default function __wbg_init (module_or_path?: { module_or_path: InitInput | Promise<InitInput> } | InitInput | Promise<InitInput>): Promise<InitOutput>;

ruvector_learning_wasm.js

@@ -0,0 +1,648 @@
let wasm;
function getArrayF32FromWasm0(ptr, len) {
ptr = ptr >>> 0;
return getFloat32ArrayMemory0().subarray(ptr / 4, ptr / 4 + len);
}
let cachedDataViewMemory0 = null;
function getDataViewMemory0() {
if (cachedDataViewMemory0 === null || cachedDataViewMemory0.buffer.detached === true || (cachedDataViewMemory0.buffer.detached === undefined && cachedDataViewMemory0.buffer !== wasm.memory.buffer)) {
cachedDataViewMemory0 = new DataView(wasm.memory.buffer);
}
return cachedDataViewMemory0;
}
let cachedFloat32ArrayMemory0 = null;
function getFloat32ArrayMemory0() {
if (cachedFloat32ArrayMemory0 === null || cachedFloat32ArrayMemory0.byteLength === 0) {
cachedFloat32ArrayMemory0 = new Float32Array(wasm.memory.buffer);
}
return cachedFloat32ArrayMemory0;
}
function getStringFromWasm0(ptr, len) {
ptr = ptr >>> 0;
return decodeText(ptr, len);
}
let cachedUint8ArrayMemory0 = null;
function getUint8ArrayMemory0() {
if (cachedUint8ArrayMemory0 === null || cachedUint8ArrayMemory0.byteLength === 0) {
cachedUint8ArrayMemory0 = new Uint8Array(wasm.memory.buffer);
}
return cachedUint8ArrayMemory0;
}
function isLikeNone(x) {
return x === undefined || x === null;
}
function passArrayF32ToWasm0(arg, malloc) {
const ptr = malloc(arg.length * 4, 4) >>> 0;
getFloat32ArrayMemory0().set(arg, ptr / 4);
WASM_VECTOR_LEN = arg.length;
return ptr;
}
let cachedTextDecoder = new TextDecoder('utf-8', { ignoreBOM: true, fatal: true });
cachedTextDecoder.decode();
const MAX_SAFARI_DECODE_BYTES = 2146435072;
let numBytesDecoded = 0;
function decodeText(ptr, len) {
numBytesDecoded += len;
if (numBytesDecoded >= MAX_SAFARI_DECODE_BYTES) {
cachedTextDecoder = new TextDecoder('utf-8', { ignoreBOM: true, fatal: true });
cachedTextDecoder.decode();
numBytesDecoded = len;
}
return cachedTextDecoder.decode(getUint8ArrayMemory0().subarray(ptr, ptr + len));
}
let WASM_VECTOR_LEN = 0;
const WasmMicroLoRAFinalization = (typeof FinalizationRegistry === 'undefined')
? { register: () => {}, unregister: () => {} }
: new FinalizationRegistry(ptr => wasm.__wbg_wasmmicrolora_free(ptr >>> 0, 1));
const WasmScopedLoRAFinalization = (typeof FinalizationRegistry === 'undefined')
? { register: () => {}, unregister: () => {} }
: new FinalizationRegistry(ptr => wasm.__wbg_wasmscopedlora_free(ptr >>> 0, 1));
const WasmTrajectoryBufferFinalization = (typeof FinalizationRegistry === 'undefined')
? { register: () => {}, unregister: () => {} }
: new FinalizationRegistry(ptr => wasm.__wbg_wasmtrajectorybuffer_free(ptr >>> 0, 1));
/**
* WASM-exposed MicroLoRA engine
*/
export class WasmMicroLoRA {
__destroy_into_raw() {
const ptr = this.__wbg_ptr;
this.__wbg_ptr = 0;
WasmMicroLoRAFinalization.unregister(this);
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_wasmmicrolora_free(ptr, 0);
}
/**
* Get delta norm (weight change magnitude)
* @returns {number}
*/
delta_norm() {
const ret = wasm.wasmmicrolora_delta_norm(this.__wbg_ptr);
return ret;
}
/**
* Adapt with typed array gradient
* @param {Float32Array} gradient
*/
adapt_array(gradient) {
const ptr0 = passArrayF32ToWasm0(gradient, wasm.__wbindgen_export);
const len0 = WASM_VECTOR_LEN;
wasm.wasmmicrolora_adapt_array(this.__wbg_ptr, ptr0, len0);
}
/**
* Get adaptation count
* @returns {bigint}
*/
adapt_count() {
const ret = wasm.wasmmicrolora_adapt_count(this.__wbg_ptr);
return BigInt.asUintN(64, ret);
}
/**
* Get parameter count
* @returns {number}
*/
param_count() {
const ret = wasm.wasmmicrolora_param_count(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Forward pass with typed array input (allocates output)
* @param {Float32Array} input
* @returns {Float32Array}
*/
forward_array(input) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArrayF32ToWasm0(input, wasm.__wbindgen_export);
const len0 = WASM_VECTOR_LEN;
wasm.wasmmicrolora_forward_array(retptr, this.__wbg_ptr, ptr0, len0);
var r0 = getDataViewMemory0().getInt32(retptr + 4 * 0, true);
var r1 = getDataViewMemory0().getInt32(retptr + 4 * 1, true);
var v2 = getArrayF32FromWasm0(r0, r1).slice();
wasm.__wbindgen_export2(r0, r1 * 4, 4);
return v2;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Get forward pass count
* @returns {bigint}
*/
forward_count() {
const ret = wasm.wasmmicrolora_forward_count(this.__wbg_ptr);
return BigInt.asUintN(64, ret);
}
/**
* Get pointer to input buffer for direct memory access
* @returns {number}
*/
get_input_ptr() {
const ret = wasm.wasmmicrolora_get_input_ptr(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Get pointer to output buffer for direct memory access
* @returns {number}
*/
get_output_ptr() {
const ret = wasm.wasmmicrolora_get_output_ptr(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Adapt with improvement reward using input buffer as gradient
* @param {number} improvement
*/
adapt_with_reward(improvement) {
wasm.wasmmicrolora_adapt_with_reward(this.__wbg_ptr, improvement);
}
/**
* Get embedding dimension
* @returns {number}
*/
dim() {
const ret = wasm.wasmmicrolora_dim(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Create a new MicroLoRA engine
*
* @param dim - Embedding dimension (default 256, max 256)
* @param alpha - Scaling factor (default 0.1)
* @param learning_rate - Learning rate (default 0.01)
* @param {number | null} [dim]
* @param {number | null} [alpha]
* @param {number | null} [learning_rate]
*/
constructor(dim, alpha, learning_rate) {
const ret = wasm.wasmmicrolora_new(isLikeNone(dim) ? 0x100000001 : (dim) >>> 0, isLikeNone(alpha) ? 0x100000001 : Math.fround(alpha), isLikeNone(learning_rate) ? 0x100000001 : Math.fround(learning_rate));
this.__wbg_ptr = ret >>> 0;
WasmMicroLoRAFinalization.register(this, this.__wbg_ptr, this);
return this;
}
/**
* Adapt using input buffer as gradient
*/
adapt() {
wasm.wasmmicrolora_adapt(this.__wbg_ptr);
}
/**
* Reset the engine
*/
reset() {
wasm.wasmmicrolora_reset(this.__wbg_ptr);
}
/**
* Forward pass using internal buffers (zero-allocation)
*
* Write input to get_input_ptr(), call forward(), read from get_output_ptr()
*/
forward() {
wasm.wasmmicrolora_forward(this.__wbg_ptr);
}
}
if (Symbol.dispose) WasmMicroLoRA.prototype[Symbol.dispose] = WasmMicroLoRA.prototype.free;
/**
* WASM-exposed Scoped LoRA manager
*/
export class WasmScopedLoRA {
__destroy_into_raw() {
const ptr = this.__wbg_ptr;
this.__wbg_ptr = 0;
WasmScopedLoRAFinalization.unregister(this);
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_wasmscopedlora_free(ptr, 0);
}
/**
* Get delta norm for operator
* @param {number} op_type
* @returns {number}
*/
delta_norm(op_type) {
const ret = wasm.wasmscopedlora_delta_norm(this.__wbg_ptr, op_type);
return ret;
}
/**
* Get operator scope name
* @param {number} op_type
* @returns {string}
*/
static scope_name(op_type) {
let deferred1_0;
let deferred1_1;
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.wasmscopedlora_scope_name(retptr, op_type);
var r0 = getDataViewMemory0().getInt32(retptr + 4 * 0, true);
var r1 = getDataViewMemory0().getInt32(retptr + 4 * 1, true);
deferred1_0 = r0;
deferred1_1 = r1;
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_export2(deferred1_0, deferred1_1, 1);
}
}
/**
* Adapt with typed array
* @param {number} op_type
* @param {Float32Array} gradient
*/
adapt_array(op_type, gradient) {
const ptr0 = passArrayF32ToWasm0(gradient, wasm.__wbindgen_export);
const len0 = WASM_VECTOR_LEN;
wasm.wasmscopedlora_adapt_array(this.__wbg_ptr, op_type, ptr0, len0);
}
/**
* Get adapt count for operator
* @param {number} op_type
* @returns {bigint}
*/
adapt_count(op_type) {
const ret = wasm.wasmscopedlora_adapt_count(this.__wbg_ptr, op_type);
return BigInt.asUintN(64, ret);
}
/**
* Reset specific operator adapter
* @param {number} op_type
*/
reset_scope(op_type) {
wasm.wasmscopedlora_reset_scope(this.__wbg_ptr, op_type);
}
/**
* Forward pass with typed array
* @param {number} op_type
* @param {Float32Array} input
* @returns {Float32Array}
*/
forward_array(op_type, input) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArrayF32ToWasm0(input, wasm.__wbindgen_export);
const len0 = WASM_VECTOR_LEN;
wasm.wasmscopedlora_forward_array(retptr, this.__wbg_ptr, op_type, ptr0, len0);
var r0 = getDataViewMemory0().getInt32(retptr + 4 * 0, true);
var r1 = getDataViewMemory0().getInt32(retptr + 4 * 1, true);
var v2 = getArrayF32FromWasm0(r0, r1).slice();
wasm.__wbindgen_export2(r0, r1 * 4, 4);
return v2;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Get forward count for operator
* @param {number} op_type
* @returns {bigint}
*/
forward_count(op_type) {
const ret = wasm.wasmscopedlora_forward_count(this.__wbg_ptr, op_type);
return BigInt.asUintN(64, ret);
}
/**
* Get input buffer pointer
* @returns {number}
*/
get_input_ptr() {
const ret = wasm.wasmscopedlora_get_input_ptr(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Get output buffer pointer
* @returns {number}
*/
get_output_ptr() {
const ret = wasm.wasmscopedlora_get_output_ptr(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Adapt with improvement reward
* @param {number} op_type
* @param {number} improvement
*/
adapt_with_reward(op_type, improvement) {
wasm.wasmscopedlora_adapt_with_reward(this.__wbg_ptr, op_type, improvement);
}
/**
* Get total adapt count
* @returns {bigint}
*/
total_adapt_count() {
const ret = wasm.wasmscopedlora_total_adapt_count(this.__wbg_ptr);
return BigInt.asUintN(64, ret);
}
/**
* Get total forward count
* @returns {bigint}
*/
total_forward_count() {
const ret = wasm.wasmscopedlora_total_forward_count(this.__wbg_ptr);
return BigInt.asUintN(64, ret);
}
/**
* Enable/disable category fallback
* @param {boolean} enabled
*/
set_category_fallback(enabled) {
wasm.wasmscopedlora_set_category_fallback(this.__wbg_ptr, enabled);
}
/**
* Create a new scoped LoRA manager
*
* @param dim - Embedding dimension (max 256)
* @param alpha - Scaling factor (default 0.1)
* @param learning_rate - Learning rate (default 0.01)
* @param {number | null} [dim]
* @param {number | null} [alpha]
* @param {number | null} [learning_rate]
*/
constructor(dim, alpha, learning_rate) {
const ret = wasm.wasmscopedlora_new(isLikeNone(dim) ? 0x100000001 : (dim) >>> 0, isLikeNone(alpha) ? 0x100000001 : Math.fround(alpha), isLikeNone(learning_rate) ? 0x100000001 : Math.fround(learning_rate));
this.__wbg_ptr = ret >>> 0;
WasmScopedLoRAFinalization.register(this, this.__wbg_ptr, this);
return this;
}
/**
* Adapt for operator type using input buffer as gradient
* @param {number} op_type
*/
adapt(op_type) {
wasm.wasmscopedlora_adapt(this.__wbg_ptr, op_type);
}
/**
* Forward pass for operator type (uses internal buffers)
*
* @param op_type - Operator type (0-16)
* @param {number} op_type
*/
forward(op_type) {
wasm.wasmscopedlora_forward(this.__wbg_ptr, op_type);
}
/**
* Reset all adapters
*/
reset_all() {
wasm.wasmscopedlora_reset_all(this.__wbg_ptr);
}
}
if (Symbol.dispose) WasmScopedLoRA.prototype[Symbol.dispose] = WasmScopedLoRA.prototype.free;
/**
* WASM-exposed trajectory buffer
*/
export class WasmTrajectoryBuffer {
__destroy_into_raw() {
const ptr = this.__wbg_ptr;
this.__wbg_ptr = 0;
WasmTrajectoryBufferFinalization.unregister(this);
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_wasmtrajectorybuffer_free(ptr, 0);
}
/**
* Get total count
* @returns {bigint}
*/
total_count() {
const ret = wasm.wasmtrajectorybuffer_total_count(this.__wbg_ptr);
return BigInt.asUintN(64, ret);
}
/**
* Get success rate
* @returns {number}
*/
success_rate() {
const ret = wasm.wasmtrajectorybuffer_success_rate(this.__wbg_ptr);
return ret;
}
/**
* Get best attention type
* @returns {number}
*/
best_attention() {
const ret = wasm.wasmtrajectorybuffer_best_attention(this.__wbg_ptr);
return ret;
}
/**
* Get best improvement
* @returns {number}
*/
best_improvement() {
const ret = wasm.wasmtrajectorybuffer_best_improvement(this.__wbg_ptr);
return ret;
}
/**
* Get mean improvement
* @returns {number}
*/
mean_improvement() {
const ret = wasm.wasmtrajectorybuffer_mean_improvement(this.__wbg_ptr);
return ret;
}
/**
* Get trajectory count for operator
* @param {number} op_type
* @returns {number}
*/
count_by_operator(op_type) {
const ret = wasm.wasmtrajectorybuffer_count_by_operator(this.__wbg_ptr, op_type);
return ret >>> 0;
}
/**
* Get high quality trajectory count
* @param {number} threshold
* @returns {number}
*/
high_quality_count(threshold) {
const ret = wasm.wasmtrajectorybuffer_high_quality_count(this.__wbg_ptr, threshold);
return ret >>> 0;
}
/**
* Get buffer length
* @returns {number}
*/
len() {
const ret = wasm.wasmtrajectorybuffer_len(this.__wbg_ptr);
return ret >>> 0;
}
/**
* Create a new trajectory buffer
*
* @param capacity - Maximum number of trajectories to store
* @param embedding_dim - Dimension of embeddings (default 256)
* @param {number | null} [capacity]
* @param {number | null} [embedding_dim]
*/
constructor(capacity, embedding_dim) {
const ret = wasm.wasmtrajectorybuffer_new(isLikeNone(capacity) ? 0x100000001 : (capacity) >>> 0, isLikeNone(embedding_dim) ? 0x100000001 : (embedding_dim) >>> 0);
this.__wbg_ptr = ret >>> 0;
WasmTrajectoryBufferFinalization.register(this, this.__wbg_ptr, this);
return this;
}
/**
* Reset buffer
*/
reset() {
wasm.wasmtrajectorybuffer_reset(this.__wbg_ptr);
}
/**
* Record a trajectory
*
* @param embedding - Embedding vector (Float32Array)
* @param op_type - Operator type (0-16)
* @param attention_type - Attention mechanism used
* @param execution_ms - Actual execution time
* @param baseline_ms - Baseline execution time
* @param {Float32Array} embedding
* @param {number} op_type
* @param {number} attention_type
* @param {number} execution_ms
* @param {number} baseline_ms
*/
record(embedding, op_type, attention_type, execution_ms, baseline_ms) {
const ptr0 = passArrayF32ToWasm0(embedding, wasm.__wbindgen_export);
const len0 = WASM_VECTOR_LEN;
wasm.wasmtrajectorybuffer_record(this.__wbg_ptr, ptr0, len0, op_type, attention_type, execution_ms, baseline_ms);
}
/**
* Check if empty
* @returns {boolean}
*/
is_empty() {
const ret = wasm.wasmtrajectorybuffer_is_empty(this.__wbg_ptr);
return ret !== 0;
}
/**
* Get variance
* @returns {number}
*/
variance() {
const ret = wasm.wasmtrajectorybuffer_variance(this.__wbg_ptr);
return ret;
}
}
if (Symbol.dispose) WasmTrajectoryBuffer.prototype[Symbol.dispose] = WasmTrajectoryBuffer.prototype.free;
const EXPECTED_RESPONSE_TYPES = new Set(['basic', 'cors', 'default']);
async function __wbg_load(module, imports) {
if (typeof Response === 'function' && module instanceof Response) {
if (typeof WebAssembly.instantiateStreaming === 'function') {
try {
return await WebAssembly.instantiateStreaming(module, imports);
} catch (e) {
const validResponse = module.ok && EXPECTED_RESPONSE_TYPES.has(module.type);
if (validResponse && module.headers.get('Content-Type') !== 'application/wasm') {
console.warn("`WebAssembly.instantiateStreaming` failed because your server does not serve Wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. Original error:\n", e);
} else {
throw e;
}
}
}
const bytes = await module.arrayBuffer();
return await WebAssembly.instantiate(bytes, imports);
} else {
const instance = await WebAssembly.instantiate(module, imports);
if (instance instanceof WebAssembly.Instance) {
return { instance, module };
} else {
return instance;
}
}
}
function __wbg_get_imports() {
const imports = {};
imports.wbg = {};
imports.wbg.__wbg___wbindgen_throw_dd24417ed36fc46e = function(arg0, arg1) {
throw new Error(getStringFromWasm0(arg0, arg1));
};
return imports;
}
function __wbg_finalize_init(instance, module) {
wasm = instance.exports;
__wbg_init.__wbindgen_wasm_module = module;
cachedDataViewMemory0 = null;
cachedFloat32ArrayMemory0 = null;
cachedUint8ArrayMemory0 = null;
return wasm;
}
function initSync(module) {
if (wasm !== undefined) return wasm;
if (typeof module !== 'undefined') {
if (Object.getPrototypeOf(module) === Object.prototype) {
({module} = module)
} else {
console.warn('using deprecated parameters for `initSync()`; pass a single object instead')
}
}
const imports = __wbg_get_imports();
if (!(module instanceof WebAssembly.Module)) {
module = new WebAssembly.Module(module);
}
const instance = new WebAssembly.Instance(module, imports);
return __wbg_finalize_init(instance, module);
}
async function __wbg_init(module_or_path) {
if (wasm !== undefined) return wasm;
if (typeof module_or_path !== 'undefined') {
if (Object.getPrototypeOf(module_or_path) === Object.prototype) {
({module_or_path} = module_or_path)
} else {
console.warn('using deprecated parameters for the initialization function; pass a single object instead')
}
}
if (typeof module_or_path === 'undefined') {
module_or_path = new URL('ruvector_learning_wasm_bg.wasm', import.meta.url);
}
const imports = __wbg_get_imports();
if (typeof module_or_path === 'string' || (typeof Request === 'function' && module_or_path instanceof Request) || (typeof URL === 'function' && module_or_path instanceof URL)) {
module_or_path = fetch(module_or_path);
}
const { instance, module } = await __wbg_load(await module_or_path, imports);
return __wbg_finalize_init(instance, module);
}
export { initSync };
export default __wbg_init;

ruvector_learning_wasm_bg.wasm.d.ts

@@ -0,0 +1,53 @@
/* tslint:disable */
/* eslint-disable */
export const memory: WebAssembly.Memory;
export const __wbg_wasmmicrolora_free: (a: number, b: number) => void;
export const __wbg_wasmscopedlora_free: (a: number, b: number) => void;
export const __wbg_wasmtrajectorybuffer_free: (a: number, b: number) => void;
export const wasmmicrolora_adapt: (a: number) => void;
export const wasmmicrolora_adapt_array: (a: number, b: number, c: number) => void;
export const wasmmicrolora_adapt_count: (a: number) => bigint;
export const wasmmicrolora_adapt_with_reward: (a: number, b: number) => void;
export const wasmmicrolora_delta_norm: (a: number) => number;
export const wasmmicrolora_dim: (a: number) => number;
export const wasmmicrolora_forward: (a: number) => void;
export const wasmmicrolora_forward_array: (a: number, b: number, c: number, d: number) => void;
export const wasmmicrolora_forward_count: (a: number) => bigint;
export const wasmmicrolora_get_input_ptr: (a: number) => number;
export const wasmmicrolora_get_output_ptr: (a: number) => number;
export const wasmmicrolora_new: (a: number, b: number, c: number) => number;
export const wasmmicrolora_param_count: (a: number) => number;
export const wasmmicrolora_reset: (a: number) => void;
export const wasmscopedlora_adapt: (a: number, b: number) => void;
export const wasmscopedlora_adapt_array: (a: number, b: number, c: number, d: number) => void;
export const wasmscopedlora_adapt_count: (a: number, b: number) => bigint;
export const wasmscopedlora_adapt_with_reward: (a: number, b: number, c: number) => void;
export const wasmscopedlora_delta_norm: (a: number, b: number) => number;
export const wasmscopedlora_forward: (a: number, b: number) => void;
export const wasmscopedlora_forward_array: (a: number, b: number, c: number, d: number, e: number) => void;
export const wasmscopedlora_forward_count: (a: number, b: number) => bigint;
export const wasmscopedlora_get_input_ptr: (a: number) => number;
export const wasmscopedlora_get_output_ptr: (a: number) => number;
export const wasmscopedlora_new: (a: number, b: number, c: number) => number;
export const wasmscopedlora_reset_all: (a: number) => void;
export const wasmscopedlora_reset_scope: (a: number, b: number) => void;
export const wasmscopedlora_scope_name: (a: number, b: number) => void;
export const wasmscopedlora_set_category_fallback: (a: number, b: number) => void;
export const wasmscopedlora_total_adapt_count: (a: number) => bigint;
export const wasmscopedlora_total_forward_count: (a: number) => bigint;
export const wasmtrajectorybuffer_best_attention: (a: number) => number;
export const wasmtrajectorybuffer_best_improvement: (a: number) => number;
export const wasmtrajectorybuffer_count_by_operator: (a: number, b: number) => number;
export const wasmtrajectorybuffer_high_quality_count: (a: number, b: number) => number;
export const wasmtrajectorybuffer_is_empty: (a: number) => number;
export const wasmtrajectorybuffer_len: (a: number) => number;
export const wasmtrajectorybuffer_mean_improvement: (a: number) => number;
export const wasmtrajectorybuffer_new: (a: number, b: number) => number;
export const wasmtrajectorybuffer_record: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
export const wasmtrajectorybuffer_reset: (a: number) => void;
export const wasmtrajectorybuffer_success_rate: (a: number) => number;
export const wasmtrajectorybuffer_total_count: (a: number) => bigint;
export const wasmtrajectorybuffer_variance: (a: number) => number;
export const __wbindgen_export: (a: number, b: number) => number;
export const __wbindgen_add_to_stack_pointer: (a: number) => number;
export const __wbindgen_export2: (a: number, b: number, c: number) => void;