Adds a persistent read-only HTTP listener (default port 8765, LAN-bound)
serving a dedicated subset of Numa's API for iOS/Android companion apps,
replacing the one-shot server that setup_phone used to spin up:
- GET /health — enriched JSON with version, hostname, LAN IP, SNI, DoT
  config, mobile API port, CA fingerprint, and features (shared handler
  with the main API on port 5380)
- GET /ca.pem — public CA certificate (shared handler)
- GET /mobileconfig — full iOS profile (CA trust + DNS settings pinned
  to the current LAN IP)
- GET /ca.mobileconfig — CA-only iOS profile (trust anchor without a DNS
  override, for the iOS companion app's programmatic DNS flow via
  NEDNSSettingsManager)
All routes are idempotent GETs. The mobile API never serves the
state-mutating routes that live on the main API (overrides, blocking
toggle, service CRUD, cache flush), so it is safe to expose on the LAN
regardless of the main API's bind address. The CA private key is never
served by any route.
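For illustration, a minimal axum sketch of that route surface; the stub
handlers and bind address are placeholders, not the real mobile_api.rs
wiring (which reuses api::health and api::serve_ca and carries server
state):

    // Sketch only: stubs stand in for the shared api::health / api::serve_ca
    // handlers and for the two mobileconfig builders.
    use axum::{routing::get, Router};

    async fn health() -> &'static str { "{\"status\":\"ok\"}" }
    async fn serve_ca() -> &'static str { "-----BEGIN CERTIFICATE-----" }
    async fn full_profile() -> &'static str { "<plist/>" }    // /mobileconfig
    async fn ca_only_profile() -> &'static str { "<plist/>" } // /ca.mobileconfig

    #[tokio::main]
    async fn main() {
        let app = Router::new()
            .route("/health", get(health))
            .route("/ca.pem", get(serve_ca))
            .route("/mobileconfig", get(full_profile))
            .route("/ca.mobileconfig", get(ca_only_profile));
        // Illustrative bind; the real listener uses the [mobile] port/bind_addr.
        let listener = tokio::net::TcpListener::bind("0.0.0.0:8765").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }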
Opt-in via `[mobile] enabled = true`. The default is false so existing
installs do not silently expose a LAN listener after upgrading; our
committed numa.toml template enables it explicitly for spike testing.
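A rough shape of that section as a config struct (field names come from
this commit; the serde usage, types, and bind_addr default are assumptions):

    use serde::Deserialize;

    #[derive(Deserialize)]
    #[serde(default)]
    pub struct MobileConfig {
        pub enabled: bool,             // opt-in: no LAN listener unless true
        pub port: u16,                 // listener port, 8765 by default
        pub bind_addr: Option<String>, // e.g. a specific LAN interface address
    }

    impl Default for MobileConfig {
        fn default() -> Self {
            Self { enabled: false, port: 8765, bind_addr: None }
        }
    }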
New modules:
- src/mobileconfig.rs — ProfileMode::{Full, CaOnly} enum with the plist
  builder lifted from setup_phone.rs. Full and CaOnly share the CA
  payload UUID (same trust anchor) but have distinct top-level UUIDs
  so they coexist as separate installable profiles on iOS (see the
  sketch after this list).
- src/health.rs — HealthMeta cached metadata built once at startup
from config + CA fingerprint (SHA-256 of the PEM via ring), and the
HealthResponse JSON shape shared between the main and mobile APIs.
- src/mobile_api.rs — axum Router for the persistent listener. Reuses
api::health and api::serve_ca from the main API; owns the two
mobileconfig handlers.
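A sketch of that UUID layout (the UUID values are placeholders; only the
sharing pattern is the point):

    #[derive(Clone, Copy)]
    pub enum ProfileMode { Full, CaOnly }

    pub fn profile_uuids(mode: ProfileMode) -> (&'static str, &'static str) {
        // Shared CA payload UUID: iOS treats both profiles as carrying the
        // same trust anchor.
        const CA_PAYLOAD_UUID: &str = "00000000-0000-0000-0000-0000000000CA";
        // Distinct top-level UUIDs: the two profiles install side by side.
        let top_level = match mode {
            ProfileMode::Full => "00000000-0000-0000-0000-0000000000F1",
            ProfileMode::CaOnly => "00000000-0000-0000-0000-0000000000C1",
        };
        (top_level, CA_PAYLOAD_UUID)
    }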
Modified:
- src/api.rs — health() returns the enriched HealthResponse, now pub.
serve_ca is now pub so mobile_api can reuse it.
- src/config.rs — MobileConfig section (enabled, port, bind_addr).
- src/ctx.rs — health_meta: HealthMeta field on ServerCtx.
- src/main.rs — builds HealthMeta at startup, spawns mobile API
listener if enabled.
- src/lan.rs — build_announcement takes &HealthMeta and writes
enriched TXT records (version, api_port, proto, dot_port, ca_fp).
SRV port now reports the mobile API port; peer discovery still
reads TXT `services=` so this is backwards compatible. Always
announces even when no .numa services are registered, so the iOS
companion app can discover Numa via mDNS regardless of service
state.
- src/setup_phone.rs — reduced from 267 to 100 lines. The CLI is now
a thin QR wrapper over the persistent /mobileconfig endpoint; the
hand-rolled one-shot HTTP server (accept_loop, RUST_OK_HEADERS,
RUST_NOT_FOUND, download counter) is gone.
- src/dot.rs — test fixture updated with HealthMeta::test_fixture().
- numa.toml — commented [mobile] section, enabled = true for spike.
Tests: 136 unit tests passing (5 new in mobileconfig, 3 new in health).
cargo clippy clean. Integration sanity check: curl'd /health, /ca.pem,
/mobileconfig, /ca.mobileconfig against a running numa — all return
200 with correct content types and valid response bodies.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reverts the NUMA_DATA_DIR env var added in the previous commit and
replaces it with a [server] data_dir TOML field. Numa already has a
well-developed config system; adding a parallel env-var mechanism
for a single knob was wrong.
The principle: TOML is for application behavior configuration. Env
vars are for bootstrap values (HOME, SUDO_USER to discover paths
before config loads) and standard ecosystem conventions (RUST_LOG).
data_dir is neither — it's an app knob, so it belongs in the TOML.
Changes:
- lib.rs::data_dir() reverts to the platform-specific fallback only
- config.rs adds `data_dir: Option<PathBuf>` to ServerConfig
- main.rs resolves config.server.data_dir with fallback to
numa::data_dir() and passes it to build_tls_config, then stores the
resolved path on ctx.data_dir for downstream consumers
- tls.rs::build_tls_config takes `data_dir: &Path` as an explicit
parameter instead of calling crate::data_dir() behind the caller's
back. regenerate_tls and dot.rs self_signed_tls now pass
&ctx.data_dir, honoring whatever path the config resolved to
- tests/integration.sh Suite 6 uses `data_dir = "$NUMA_DATA"` in its
test TOML instead of the NUMA_DATA_DIR env var prefix
- numa.toml gains a commented-out data_dir example
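The resolution order, as a self-contained sketch (the default path is a
placeholder, not the real platform-specific one):

    use std::path::PathBuf;

    // Stand-in for lib.rs::data_dir(), the platform-specific fallback.
    fn default_data_dir() -> PathBuf {
        PathBuf::from("/var/lib/numa") // placeholder
    }

    // config.server.data_dir (TOML) wins; otherwise fall back to the default.
    fn resolve_data_dir(from_config: Option<PathBuf>) -> PathBuf {
        from_config.unwrap_or_else(default_data_dir)
    }

    fn main() {
        assert_eq!(resolve_data_dir(None), PathBuf::from("/var/lib/numa"));
        assert_eq!(resolve_data_dir(Some("/data".into())), PathBuf::from("/data"));
    }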
No behavior change for existing production deployments (the default
path is unchanged). Test harness is now fully config-driven, and
containerized deploys can override data_dir via mount+config without
needing env var injection.
127/127 unit tests pass, Suite 6 passes end-to-end.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Refactor handle_query into transport-agnostic resolve_query that returns
a BytePacketBuffer, keeping the UDP path zero-alloc. Add a TLS listener
on port 853 with persistent connections, idle timeout, connection limits,
and coalesced writes. Supports user-provided certs or self-signed CA
fallback. Includes 5 integration tests.
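As a small illustration of the coalesced-write point (not the actual dot.rs
code): DNS over TLS uses TCP framing, so each response carries a 2-byte
big-endian length prefix (RFC 7858), and coalescing means the prefix and
body go out in a single write.

    fn frame_dns_response(response: &[u8]) -> Vec<u8> {
        // One buffer holding prefix + body, flushed in a single write call.
        let mut framed = Vec::with_capacity(2 + response.len());
        framed.extend_from_slice(&(response.len() as u16).to_be_bytes());
        framed.extend_from_slice(response);
        framed
    }

    fn main() {
        let response = [0u8; 12]; // a header-only DNS message, for illustration
        let framed = frame_dns_response(&response);
        assert_eq!(&framed[..2], &12u16.to_be_bytes());
        assert_eq!(framed.len(), 14);
    }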
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: auto recursive mode, fix Linux install
Auto mode (new default): probes a root server on startup; uses
recursive resolution if outbound DNS works, falls back to Quad9 DoH
if blocked. Dashboard shows mode indicator (green/yellow).
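The startup probe boils down to something like this sketch (the root server
IP, timeout, and hand-encoded query are illustrative, not the actual
implementation):

    use std::net::UdpSocket;
    use std::time::Duration;

    // Send a minimal "NS ." query to one root server; any reply at all means
    // outbound port-53 traffic works and recursive resolution is viable.
    fn outbound_dns_works() -> bool {
        let query: [u8; 17] = [
            0x12, 0x34, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, // id, flags, QDCOUNT=1, ANCOUNT=0
            0x00, 0x00, 0x00, 0x00,                         // NSCOUNT=0, ARCOUNT=0
            0x00,                                           // root name "."
            0x00, 0x02,                                     // QTYPE = NS
            0x00, 0x01,                                     // QCLASS = IN
        ];
        let Ok(sock) = UdpSocket::bind("0.0.0.0:0") else { return false };
        let _ = sock.set_read_timeout(Some(Duration::from_secs(2)));
        if sock.send_to(&query, "198.41.0.4:53").is_err() { // a.root-servers.net
            return false;
        }
        let mut buf = [0u8; 512];
        sock.recv_from(&mut buf).is_ok()
    }

    fn main() {
        println!("recursive viable: {}", outbound_dns_works());
    }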
Linux install fixes:
- Add DNSStubListener=no to the systemd-resolved drop-in (frees port 53)
- Configure DNS before starting service (correct ordering)
- Skip 127.0.0.53 in upstream detection
- `numa install` now does everything (service + DNS + CA)
- `numa uninstall` mirrors install (stop service + restore DNS)
- Extract is_loopback_or_stub() for consistent filtering
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: enable DNSSEC validation by default
With recursive as the default mode, DNSSEC validation completes the
trustless resolution chain. Strict mode remains off by default.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: forward search domains to VPC resolver on Linux
Parse search/domain lines from resolv.conf and create conditional
forwarding rules to the original nameserver or AWS VPC resolver
(169.254.169.253). Fixes internal hostname resolution on cloud VMs
where recursive mode can't resolve private DNS zones.
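Roughly, the parsing side looks like this sketch (the conditional-forwarding
plumbing is omitted and the function name is illustrative):

    // Collect search/domain entries from resolv.conf; each one gets a
    // conditional forwarding rule pointing at the original nameserver or
    // the AWS VPC resolver (169.254.169.253).
    fn search_domains(resolv_conf: &str) -> Vec<String> {
        resolv_conf
            .lines()
            .filter_map(|line| {
                let line = line.trim();
                line.strip_prefix("search ")
                    .or_else(|| line.strip_prefix("domain "))
            })
            .flat_map(|rest| rest.split_whitespace().map(str::to_owned))
            .collect()
    }

    fn main() {
        let conf = "nameserver 10.0.0.2\nsearch ec2.internal corp.example\n";
        assert_eq!(search_domains(conf), vec!["ec2.internal", "corp.example"]);
    }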
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: single-pass resolv.conf parsing, eliminate redundancies
Parse resolv.conf once for both upstream and search domains instead
of 2-3 reads. Extract CLOUD_VPC_RESOLVER constant. Use &'static str
for mode in StatsResponse. Remove dead read_upstream_from_file.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: macOS install health check, harden recursive probe
Verify numa is listening (API port) before redirecting system DNS on
macOS — if the service fails to start (e.g. port 53 in use), unload
the service and abort instead of breaking DNS. Probe up to 3 root
hints before declaring recursive mode unavailable. Validate IPs parsed
from resolvectl output so fragments of IPv6 addresses are not picked up
as nameservers. Extract a DEFAULT_API_PORT constant.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: widen make_rule cfg gate to include Linux
make_rule was gated to macOS-only but discover_linux() calls it for
search domain forwarding rules. CI failed on Linux with E0425.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: forward mode as default, recursive opt-in
Forward mode (transparent proxy to system DNS) is now the default.
Recursive and auto modes are explicit opt-in via config. This avoids
bypassing corporate DNS policies, captive portals, VPC private zones,
and parental controls on first install.
- Move #[default] from Auto to Forward on UpstreamMode
- DNSSEC defaults to off (no-op in forward mode)
- 3-way match in main: Forward/Recursive/Auto with clean separation
- Post-install message suggests mode = "recursive" for sovereignty
- Update README, site, and launch drafts messaging
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: SRTT-based nameserver selection for recursive resolver
BIND-style Smoothed RTT (EWMA) tracking per NS IP address. The resolver
learns which nameservers respond fastest and prefers them, eliminating
cascading timeouts from slow/unreachable IPv6 servers.
- New src/srtt.rs: SrttCache with record_rtt, record_failure, sort_by_rtt
- EWMA formula: new = (old * 7 + sample) / 8, 5s failure penalty, 5min decay
- TCP penalty (+100ms) lets SRTT naturally deprioritize IPv6-over-TCP
- Enabled flag embedded in SrttCache (no-op when disabled)
- Batch eviction (64 entries) for O(1) amortized writes at capacity
- Configurable via [upstream] srtt = true/false (default: true)
- Benchmark script: scripts/benchmark.sh (full, cold, warm, compare-all)
- Benchmarks show 12x avg improvement, 0% queries >1s (was 58%)
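A compact sketch of the cache described above (method names and the EWMA
constants come from this list; treating a failure as a 5s sample and
sorting unknown servers first are assumptions about the details):

    use std::collections::HashMap;
    use std::net::IpAddr;

    struct SrttCache {
        estimates: HashMap<IpAddr, u64>, // smoothed RTT per nameserver IP, in ms
    }

    impl SrttCache {
        // EWMA from the list above: new = (old * 7 + sample) / 8.
        fn record_rtt(&mut self, ns: IpAddr, sample_ms: u64) {
            let est = self.estimates.entry(ns).or_insert(sample_ms);
            *est = (*est * 7 + sample_ms) / 8;
        }

        // Failures are fed in as a 5s sample so dead servers sink quickly.
        fn record_failure(&mut self, ns: IpAddr) {
            self.record_rtt(ns, 5_000);
        }

        // Try the lowest-SRTT servers first; unknown IPs sort first (optimistic).
        fn sort_by_rtt(&self, ips: &mut [IpAddr]) {
            ips.sort_by_key(|ip| self.estimates.get(ip).copied().unwrap_or(0));
        }
    }

    fn main() {
        let mut cache = SrttCache { estimates: HashMap::new() };
        let ns: IpAddr = "192.0.2.1".parse().unwrap();
        cache.record_rtt(ns, 40);
        cache.record_rtt(ns, 80); // (40 * 7 + 80) / 8 = 45
        assert_eq!(cache.estimates[&ns], 45);
        cache.record_failure(ns); // (45 * 7 + 5000) / 8 = 664
        assert_eq!(cache.estimates[&ns], 664);
        let mut servers = vec![ns, "192.0.2.2".parse().unwrap()];
        cache.sort_by_rtt(&mut servers);
        assert_eq!(servers[0], "192.0.2.2".parse::<IpAddr>().unwrap()); // unknown first
    }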
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: show DNSSEC and SRTT status in dashboard + API
Add dnssec and srtt boolean fields to /stats API response.
Display on/off indicators in the dashboard footer.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: apply SRTT decay before EWMA so recovered servers rehabilitate
Without decay-before-EWMA, a server penalized at 5000ms stayed near
that value even after recovery — the stale raw penalty was used as the
EWMA base instead of the decayed estimate. Extract decayed_srtt()
helper and call it in record_rtt() before the smoothing step.
Also restores removed "why" comments in send_query / resolve_recursive.
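In sketch form, with an illustrative decay curve (halving per idle 5
minutes; the real decayed_srtt() may differ), the corrected ordering is:

    use std::time::{Duration, Instant};

    struct Entry { srtt_ms: u64, last_update: Instant }

    // Illustrative decay: halve the estimate for every idle 5 minutes.
    fn decayed_srtt(e: &Entry, now: Instant) -> u64 {
        let halvings = (now.duration_since(e.last_update).as_secs() / 300) as u32;
        e.srtt_ms >> halvings.min(10)
    }

    // Decay first, then smooth, so a recovered server rehabilitates.
    fn record_rtt(e: &mut Entry, sample_ms: u64, now: Instant) {
        let base = decayed_srtt(e, now);
        e.srtt_ms = (base * 7 + sample_ms) / 8;
        e.last_update = now;
    }

    fn main() {
        let t0 = Instant::now();
        let mut e = Entry { srtt_ms: 5_000, last_update: t0 }; // penalized earlier
        let later = t0 + Duration::from_secs(600); // 10 idle minutes
        record_rtt(&mut e, 40, later);
        assert_eq!(e.srtt_ms, 1_098); // (5000 >> 2) = 1250, then (1250*7 + 40)/8
    }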
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add install/upgrade instructions, smarter benchmark priming
README: document `numa install`, `numa service`, Homebrew upgrade,
and `make deploy` workflows. Benchmark: replace fixed `sleep 4` with
`wait_for_priming` that polls cache entry count for stability.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- LAN discovery disabled by default (opt-in via [lan] enabled = true)
- Replace custom JSON multicast (239.255.70.78:5390) with standard mDNS
(_numa._tcp.local on 224.0.0.251:5353) using existing DNS parser
- Instance ID in TXT record for multi-instance self-filtering
- API and proxy bind to 127.0.0.1 by default (0.0.0.0 when LAN enabled)
- Path-based routing: longest prefix match with optional prefix stripping
via [[services]] routes = [{path, port, strip?}]
- REST API: GET/POST/DELETE /services/{name}/routes
- Dashboard shows route lines per service when configured
- Segment-boundary route matching (prevents /api matching /apiary)
- Route path validation (rejects path traversal)
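A sketch of the matching rules above (the route struct and helper names are
illustrative, not the actual proxy code):

    struct Route { path: String, port: u16, strip: bool }

    // A route matches only on whole path segments, so "/api" matches "/api"
    // and "/api/users" but never "/apiary".
    fn segment_match(prefix: &str, path: &str) -> bool {
        path == prefix
            || path.starts_with(prefix)
                && (prefix.ends_with('/') || path.as_bytes()[prefix.len()] == b'/')
    }

    // Longest matching prefix wins.
    fn pick_route<'a>(routes: &'a [Route], path: &str) -> Option<&'a Route> {
        routes
            .iter()
            .filter(|r| segment_match(&r.path, path))
            .max_by_key(|r| r.path.len())
    }

    fn main() {
        let routes = [
            Route { path: "/api".into(), port: 4000, strip: true },
            Route { path: "/".into(), port: 3000, strip: false },
        ];
        assert_eq!(pick_route(&routes, "/api/users").unwrap().port, 4000);
        assert_eq!(pick_route(&routes, "/apiary").unwrap().port, 3000);
        assert!(pick_route(&routes, "/api").unwrap().strip); // prefix stripped upstream
    }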
Closes #11
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Numa instances on the same network auto-discover each other's .numa
services. No config, no cloud — just multicast on 239.255.70.78:5390.
- PeerStore with lazy expiry (90s timeout, 30s broadcast interval)
- DNS resolves remote .numa services to peer's LAN IP (not localhost)
- Proxy forwards to peer IP for remote services
- Graceful degradation if multicast bind fails
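Lazy expiry, in sketch form (the struct shape and key type are assumptions;
the 90s timeout is from this commit):

    use std::collections::HashMap;
    use std::net::IpAddr;
    use std::time::{Duration, Instant};

    const PEER_TIMEOUT: Duration = Duration::from_secs(90);

    struct Peer { ip: IpAddr, last_seen: Instant }

    struct PeerStore { peers: HashMap<String, Peer> } // keyed by service name

    impl PeerStore {
        // Expired entries are dropped on the next read; no background reaper.
        fn lookup(&mut self, service: &str, now: Instant) -> Option<IpAddr> {
            let fresh = match self.peers.get(service) {
                Some(p) => now.duration_since(p.last_seen) <= PEER_TIMEOUT,
                None => return None,
            };
            if fresh {
                self.peers.get(service).map(|p| p.ip)
            } else {
                self.peers.remove(service);
                None
            }
        }
    }

    fn main() {
        let now = Instant::now();
        let mut store = PeerStore { peers: HashMap::new() };
        store.peers.insert(
            "api.numa".to_string(),
            Peer { ip: "192.168.1.20".parse().unwrap(), last_seen: now },
        );
        assert!(store.lookup("api.numa", now).is_some());
        let later = now + PEER_TIMEOUT + Duration::from_secs(1);
        assert!(store.lookup("api.numa", later).is_none()); // lazily expired
    }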
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
HTTP reverse proxy on port 80 lets developers use clean domain names
(frontend.numa, api.numa) instead of localhost:PORT. Includes WebSocket
upgrade support for HMR, TCP health checks, dashboard UI panel, and
REST API for service management. numa.numa is preconfigured for the
dashboard itself.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Default upstream auto-detected from system resolver (scutil/resolv.conf)
instead of hardcoding Google 8.8.8.8. Falls back to Quad9 (9.9.9.9).
- Single scutil --dns pass for both upstream detection and forwarding rules
- Linux: reads backup resolv.conf if current only has loopback
- Service start/stop is now coupled with DNS config (installed on start, uninstalled on stop)
- Install script for one-line binary install from GitHub Releases
- GitHub Actions release workflow: builds for macOS/Linux x86_64/aarch64
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- DNS-level ad blocking: 385K+ domains via Hagezi Pro blocklist, subdomain
matching, one-click allowlist, pause/toggle, background refresh every 24h
- Live dashboard at :5380 with real-time stats, query log, override
management (create/edit/delete), blocking controls
- System DNS auto-discovery: parses scutil --dns on macOS to find
conditional forwarding rules (Tailscale, VPN split-DNS)
- REST API expanded to 18 endpoints (blocking, overrides, diagnostics)
- Startup banner with colored system info
- Performance benchmarks (bench/dns-bench.sh)
- Landing page updated with new positioning and comparison table
- CI, Dockerfile, LICENSE, development plan docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>