Mirrors `[upstream] address` — `upstream` accepts a string or an array
of strings, builds an `UpstreamPool`, and routes queries through
`forward_with_failover_raw` so SRTT ordering and failover apply to
matched `[[forwarding]]` rules the same way they do for the default
pool.
Single-string rules keep their current behavior (one-element pool,
equivalent single-upstream path). Empty array errors at config load.
Addresses item 1 of issue #102. Plan: docs/102_item1.md.
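A minimal sketch of the string-or-array normalization described above (the type and function names are illustrative, not the actual identifiers in the codebase):

```rust
// Hypothetical sketch: `[[forwarding]] upstream` accepts a string or an
// array of strings; both normalize to a non-empty pool, and an empty
// array is rejected at config load.
#[derive(Debug)]
enum UpstreamField {
    One(String),
    Many(Vec<String>),
}

fn build_pool(field: UpstreamField) -> Result<Vec<String>, String> {
    let addrs = match field {
        UpstreamField::One(a) => vec![a],
        UpstreamField::Many(v) => v,
    };
    if addrs.is_empty() {
        return Err("empty upstream array in [[forwarding]] rule".into());
    }
    Ok(addrs)
}

fn main() {
    // single string keeps its one-element-pool behavior
    assert_eq!(build_pool(UpstreamField::One("1.1.1.1:53".into())).unwrap().len(), 1);
    // empty array errors at load time
    assert!(build_pool(UpstreamField::Many(vec![])).is_err());
}
```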
Config-level forwarding rules were parsed with the UDP-only
`parse_upstream_addr` helper, silently rejecting the DoT/DoH schemes
that the rest of the forwarding pipeline already supports.
Widen `ForwardingRule.upstream` from `SocketAddr` to `Upstream` so
config rules reuse the same parser as `[upstream].address` and
`fallback`. Demote `parse_upstream_addr` to `pub(crate)` to prevent
the same mistake recurring.
Closes #100.
Test each pipeline stage in isolation through resolve_query:
- override takes precedence over all other paths
- localhost and *.localhost resolve to loopback
- local zone returns configured records
- .tld proxy resolves registered services to loopback
- blocklist sinkholes to 0.0.0.0
- cache hit returns stored response without upstream
resolve_query now returns (BytePacketBuffer, QueryPath) so callers
and tests can inspect the resolution path without reading the query
log. Production call sites (UDP, DoT, DoH) destructure and ignore it.
The forwarding test now uses a mock UDP upstream that replies with a
canned response, asserting QueryPath::Forwarded instead of != Local.
Explicit [[forwarding]] rules now take precedence over the RFC 6303
special-use domain intercept. Previously, PTR queries for private
ranges (e.g. 168.192.in-addr.arpa) always returned local NXDOMAIN
even when a forwarding rule pointed them at a corporate DNS server.
Add full-pipeline resolve_query test harness (test_ctx + resolve_in_test)
and two tests covering both the default behavior and the override.
Closes #94
Thread Transport enum through resolve pipeline, record per-query
transport in stats and query log. Dashboard gets bar chart panel
with encryption %, transport column in query log, and filter dropdown.
- Extract refresh_entry in ctx.rs — warm_domain in main.rs now delegates
to it instead of duplicating the resolve+cache logic (~40 lines removed)
- Eliminate unconditional .to_vec() of raw wire on every UDP/DoT query —
pass &buffer.buf[..len] directly (zero-cost for cache hits)
- Replace bare bool stale flag with Freshness enum (Fresh/NearExpiry/Stale)
making the three states self-documenting at every call site
Multiple stale queries for the same domain now spawn only one background
refresh. A HashSet<(String, QueryType)> on ServerCtx tracks in-flight
refreshes; subsequent stale hits for the same key skip the spawn.
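The single-flight gate can be sketched with std types (`QueryType` is modeled as a `u16` here for brevity; the real set and key types live on `ServerCtx`):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Sketch of the in-flight refresh gate: HashSet::insert returns false when
// the key is already present, so only the first stale hit spawns a refresh.
fn try_begin_refresh(inflight: &Mutex<HashSet<(String, u16)>>, key: (String, u16)) -> bool {
    inflight.lock().unwrap().insert(key)
}

fn finish_refresh(inflight: &Mutex<HashSet<(String, u16)>>, key: &(String, u16)) {
    inflight.lock().unwrap().remove(key);
}

fn main() {
    let inflight = Mutex::new(HashSet::new());
    let key = ("example.com".to_string(), 1u16);
    assert!(try_begin_refresh(&inflight, key.clone()));  // first stale hit spawns
    assert!(!try_begin_refresh(&inflight, key.clone())); // second skips the spawn
    finish_refresh(&inflight, &key);
    assert!(try_begin_refresh(&inflight, key));          // next stale period spawns again
}
```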
When a cached entry is expired but within the 1-hour stale window,
serve it immediately with TTL=1 AND spawn a background re-resolve.
The next query gets a fresh entry instead of another stale serve.
Without this, stale entries were served repeatedly for up to an hour
with no refresh — effectively ignoring TTL.
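The serve-stale decision reduces to a small policy function; this is a sketch of the behavior described above, not the actual cache code:

```rust
// Sketch of the serve-stale decision: within the 1-hour stale window the
// entry is served with TTL clamped to 1 and a background refresh is spawned.
const STALE_WINDOW_SECS: u64 = 3600;

// Returns (serve_ttl, spawn_refresh); None means treat as a full miss.
fn stale_policy(secs_past_expiry: u64) -> Option<(u32, bool)> {
    if secs_past_expiry <= STALE_WINDOW_SECS {
        Some((1, true)) // serve immediately, refresh in the background
    } else {
        None // too stale: fall through to a normal resolve
    }
}

fn main() {
    assert_eq!(stale_policy(30), Some((1, true)));
    assert_eq!(stale_policy(7200), None);
}
```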
Wire-level forwarding path skips DnsPacket parse/serialize on the hot
path. Cache stores raw wire bytes with pre-scanned TTL offsets — patches
ID + TTLs in-place on lookup instead of cloning parsed packets.
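The in-place patch amounts to two byte rewrites over the cached wire image; a minimal sketch (the real code also rescans offsets on store):

```rust
// Sketch of the lookup-time patch: rewrite the transaction ID (bytes 0..2
// of the DNS header) and every pre-scanned TTL offset, avoiding a
// parse/serialize round trip on the hot path.
fn patch_cached_response(wire: &mut [u8], query_id: u16, ttl_offsets: &[usize], ttl: u32) {
    wire[0..2].copy_from_slice(&query_id.to_be_bytes());
    for &off in ttl_offsets {
        wire[off..off + 4].copy_from_slice(&ttl.to_be_bytes());
    }
}

fn main() {
    // 12-byte header + a fake record with its TTL field at offset 12
    let mut wire = vec![0u8; 16];
    patch_cached_response(&mut wire, 0xABCD, &[12], 300);
    assert_eq!(&wire[0..2], &[0xAB, 0xCD]);
    assert_eq!(u32::from_be_bytes([wire[12], wire[13], wire[14], wire[15]]), 300);
}
```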
Request hedging (Dean & Barroso "Tail at Scale") fires a second
parallel request after a configurable delay (default 10ms) when
the primary upstream stalls. DoH keepalive loop prevents idle
HTTP/2 + TLS connection teardown.
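The hedging shape can be sketched with std threads and channels (the real implementation is async; names and the delay handling here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Thread-based sketch of request hedging: if the primary hasn't answered
// within the hedge delay, fire a second request and take whichever answer
// arrives first.
fn hedged<P, S>(primary: P, secondary: S, hedge_delay: Duration) -> String
where
    P: FnOnce() -> String + Send + 'static,
    S: FnOnce() -> String + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    let tx2 = tx.clone();
    thread::spawn(move || { let _ = tx.send(primary()); });
    match rx.recv_timeout(hedge_delay) {
        Ok(answer) => answer, // primary won within the delay; no hedge fired
        Err(_) => {
            thread::spawn(move || { let _ = tx2.send(secondary()); });
            rx.recv().unwrap() // first of the two to respond wins
        }
    }
}

fn main() {
    // a stalled primary is beaten by the hedged secondary
    let answer = hedged(
        || { thread::sleep(Duration::from_millis(200)); "primary".into() },
        || "secondary".into(),
        Duration::from_millis(10),
    );
    assert_eq!(answer, "secondary");
}
```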
Recursive resolver now hedges across multiple NS addresses and
caches NS delegation records to skip TLD re-queries.
Integration test harness polls /blocking/stats instead of fixed
sleep, eliminating the blocklist-download race condition.
* chore: document multi-forwarder and cache warming in config and README
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: DNS-over-HTTPS server endpoint (RFC 8484)
Serve DoH at POST /dns-query on the existing HTTPS proxy (port 443).
Automatically enabled when proxy TLS is active — no config needed.
Also fix zone map priority so local zones override RFC 6762 .local
special-use handling.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style: cargo fmt
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: remove GoatCounter analytics from site
GoatCounter domains (goatcounter.com, gc.zgo.at) are blocked by
Hagezi Pro, which is Numa's default blocklist. A DNS privacy tool
should not embed analytics that its own resolver blocks.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: enable DoT listener by default
DoT now starts automatically with `sudo numa`, matching the proxy and
DoH which are already on by default. The self-signed CA infrastructure
is shared with the proxy, so there is no additional setup. This makes
`numa setup-phone` work out of the box.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: multi-forwarder with SRTT-based failover
`address` accepts a string or an array, with an optional per-server port
override. A new fallback pool is tried only when all primaries fail.
Sequential failover with SRTT ranking ensures the fastest upstream is tried first.
Closes #34 (items 1, 2, 3)
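A sketch of the failover order under stated assumptions (the function and field names are illustrative; SRTT values are per-address estimates in milliseconds):

```rust
// Sequential failover with SRTT ranking: primaries are tried in ascending
// SRTT order; the fallback pool is reached only after every primary fails.
fn forward_with_failover(
    primaries: &[(&str, u64)], // (addr, srtt_ms)
    fallback: &[&str],
    send: impl Fn(&str) -> Option<String>,
) -> Option<String> {
    let mut ranked: Vec<&str> = {
        let mut p = primaries.to_vec();
        p.sort_by_key(|&(_, srtt)| srtt);
        p.into_iter().map(|(addr, _)| addr).collect()
    };
    ranked.extend_from_slice(fallback); // fallback only after all primaries
    ranked.into_iter().find_map(|addr| send(addr))
}

fn main() {
    let primaries = [("9.9.9.9:53", 40), ("1.1.1.1:53", 8)];
    // the lowest-SRTT primary answers; the fallback pool is never reached
    let got = forward_with_failover(&primaries, &["8.8.8.8:53"], |addr| {
        (addr == "1.1.1.1:53").then(|| format!("answer from {addr}"))
    });
    assert_eq!(got.as_deref(), Some("answer from 1.1.1.1:53"));
}
```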
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: simplify failover candidate list and deduplicate recursive pool
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: extract maybe_update_primary for testable upstream re-detection
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: rustfmt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: scope mobileconfig DNS to Wi-Fi only via OnDemandRules
Without OnDemandRules, iOS applies the DoT profile globally —
cellular DNS breaks when the phone leaves the LAN.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: phone setup QR code in dashboard header
- Add /qr endpoint serving SVG (uses existing qrcode crate, svg feature)
- Header popover: QR on desktop, direct download link on mobile viewports
- Only visible when [mobile] enabled = true in config
- Expose mobile.enabled and mobile.port in /stats response
- Lazy-load QR on first click, dismiss on outside click
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: add Cache-Control to /qr, re-fetch QR on each popover open
Cache-Control: no-store prevents stale QR after LAN IP change.
Remove qrLoaded flag so the QR always reflects the current IP.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style: rustfmt serve_qr response tuple
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: add iOS install steps to phone setup popover
iOS shows "Profile Downloaded" with no further guidance. The popover
now walks through the 3-step install flow, including the buried
Certificate Trust Settings toggle.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: numa setup-phone — QR-based mobile DoT onboarding
Adds a CLI subcommand that generates a one-time mobileconfig profile
containing both the Numa local CA (as a com.apple.security.root payload)
and the DoT DNS settings, then serves it via a temporary HTTP server
and prints a scannable QR code in the terminal.
Flow:
1. User runs `numa setup-phone` (no sudo needed)
2. Detects current LAN IP, reads CA from /usr/local/var/numa/ca.pem
3. Builds combined mobileconfig (CA trust + DoT)
4. Renders QR code with qrcode crate (Unicode block characters)
5. Serves the profile on port 8765, stays open until Ctrl+C
6. Counts successful downloads (multi-device households)
Important caveat documented in instructions: even with the CA bundled
in the profile, iOS still requires the user to manually enable trust
in Settings → General → About → Certificate Trust Settings. Verified
on a real iPhone.
Stable PayloadIdentifiers/UUIDs ensure re-running replaces the
existing profile on iOS rather than accumulating duplicates.
- New module: src/setup_phone.rs (~270 lines)
- New CLI subcommand: `numa setup-phone`
- New dependency: qrcode = "0.14" (default-features = false)
- tokio "signal" feature added for Ctrl+C handling
- 3 unit tests: PEM stripping, mobileconfig generation, QR rendering
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: mobile API, enriched /health, mobileconfig module
Adds a persistent read-only HTTP listener (default port 8765, LAN-bound)
serving a dedicated subset of Numa's API for iOS/Android companion apps
and as a replacement for the one-shot server that setup_phone used to spin up:
GET /health          — enriched JSON with version, hostname, LAN IP, SNI,
                       DoT config, mobile API port, CA fingerprint, and
                       features (shared handler with the main API on port 5380)
GET /ca.pem          — public CA certificate (shared handler)
GET /mobileconfig    — full iOS profile (CA trust + DNS settings pinned
                       to the current LAN IP)
GET /ca.mobileconfig — CA-only iOS profile (trust anchor without DNS
                       override — for the iOS companion app's programmatic
                       DNS flow via NEDNSSettingsManager)
All routes are idempotent GETs. The mobile API never serves the
state-mutating routes that live on the main API (overrides, blocking
toggle, service CRUD, cache flush), so it is safe to expose on the LAN
regardless of the main API's bind address. The CA private key is never
served by any route.
Opt-in via `[mobile] enabled = true`. Default is false so new installs
do not silently expose a LAN listener after upgrading; our committed
numa.toml template enables it explicitly for spike testing.
New modules:
- src/mobileconfig.rs — ProfileMode::{Full, CaOnly} enum with plist
builder lifted from setup_phone.rs. Full and CaOnly share the CA
payload UUID (same trust anchor) but have distinct top-level UUIDs
so they coexist as separate installable profiles on iOS.
- src/health.rs — HealthMeta cached metadata built once at startup
from config + CA fingerprint (SHA-256 of the PEM via ring), and the
HealthResponse JSON shape shared between the main and mobile APIs.
- src/mobile_api.rs — axum Router for the persistent listener. Reuses
api::health and api::serve_ca from the main API; owns the two
mobileconfig handlers.
Modified:
- src/api.rs — health() returns the enriched HealthResponse, now pub.
serve_ca is now pub so mobile_api can reuse it.
- src/config.rs — MobileConfig section (enabled, port, bind_addr).
- src/ctx.rs — health_meta: HealthMeta field on ServerCtx.
- src/main.rs — builds HealthMeta at startup, spawns mobile API
listener if enabled.
- src/lan.rs — build_announcement takes &HealthMeta and writes
enriched TXT records (version, api_port, proto, dot_port, ca_fp).
SRV port now reports the mobile API port; peer discovery still
reads TXT `services=` so this is backwards compatible. Always
announces even when no .numa services are registered, so the iOS
companion app can discover Numa via mDNS regardless of service
state.
- src/setup_phone.rs — reduced from 267 to 100 lines. The CLI is now
a thin QR wrapper over the persistent /mobileconfig endpoint; the
hand-rolled one-shot HTTP server (accept_loop, RUST_OK_HEADERS,
RUST_NOT_FOUND, download counter) is gone.
- src/dot.rs — test fixture updated with HealthMeta::test_fixture().
- numa.toml — commented [mobile] section, enabled = true for spike.
Tests: 136 unit tests passing (5 new in mobileconfig, 3 new in health).
cargo clippy clean. Integration sanity check: curl'd /health, /ca.pem,
/mobileconfig, /ca.mobileconfig against a running numa — all return
200 with correct content types and valid response bodies.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: setup-phone probe, unknown command error, query source in dashboard
- setup-phone now probes the mobile API before printing the QR code
and shows an actionable error if [mobile] is not enabled
- Unknown CLI subcommands print an error instead of silently
attempting to start a full server
- Dashboard query log shows source IP under timestamp (localhost
for loopback, full IP for LAN devices) with full addr on hover
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Address review findings on PR #25:
- Refactor resolve_query to take a pre-parsed DnsPacket. Parse-error
handling moves to the UDP caller, eliminating the double warn! line
on malformed UDP queries.
- Enforce MIN_MSG_LEN=12 (DNS header) in handle_dot_connection so
query_id extraction is always reading client-sent bytes, not the
zeroed buffer tail.
- Parse the DoT query before calling resolve_query and retain it, so
SERVFAIL responses can echo the original question section via
response_from(). Parse failures send FORMERR with the client id.
- Extract write_framed() helper for length-prefix + flush, reused by
success, SERVFAIL, and FORMERR paths.
- Back off 100ms on listener.accept() errors to avoid tight-looping
on fd exhaustion.
- Replace the hardcoded 127.0.0.1:53 upstream in dot_nxdomain_for_unknown
with a bound-but-unresponsive UDP socket owned by the test, making it
independent of the host's local resolver. Test now runs in ~220ms
(timeout lowered to 200ms) instead of 3s and asserts the question is
echoed in the SERVFAIL response.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Refactor handle_query into transport-agnostic resolve_query that returns
a BytePacketBuffer, keeping the UDP path zero-alloc. Add a TLS listener
on port 853 with persistent connections, idle timeout, connection limits,
and coalesced writes. Supports user-provided certs or self-signed CA
fallback. Includes 5 integration tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Conditional forwarding (Tailscale .ts.net, VPC private zones) was
only checked in the forward mode branch. In recursive mode, queries
for forwarding-rule domains went to root servers instead of the
configured upstream, returning NXDOMAIN for private domains.
Move the forwarding rule check before the recursive/forward branch
so it takes priority regardless of mode.
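The reordered dispatch can be sketched as follows (rule representation and names are assumptions for illustration):

```rust
// Sketch of the fix: forwarding rules are consulted before the
// recursive/forward branch, so they win in both modes.
#[derive(Clone, Copy)]
enum Mode { Recursive, Forward }

fn route(domain: &str, rules: &[(&str, &str)], mode: Mode) -> String {
    if let Some((_, upstream)) = rules.iter().copied().find(|(suffix, _)| domain.ends_with(suffix)) {
        return format!("forward to {upstream}"); // rule wins regardless of mode
    }
    match mode {
        Mode::Recursive => "recursive resolution".into(),
        Mode::Forward => "default upstream".into(),
    }
}

fn main() {
    let rules = [(".ts.net", "100.100.100.100:53")];
    // previously this went to the root servers in recursive mode and got NXDOMAIN
    assert_eq!(route("host.ts.net", &rules, Mode::Recursive), "forward to 100.100.100.100:53");
    assert_eq!(route("example.com", &rules, Mode::Recursive), "recursive resolution");
}
```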
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: resolve .numa services to LAN IP for remote clients
Remote DNS clients (e.g. phones on same WiFi) received 127.0.0.1 for
local .numa services, which is unreachable from their perspective.
Now returns the host's LAN IP when the query originates from a
non-loopback address. Also auto-widens proxy bind to 0.0.0.0 when
DNS is already public, and adds a startup warning when the proxy
remains localhost-only.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: respect proxy bind_addr config, don't auto-widen
The auto-widen silently overrode an explicit config value — the user's
config should be the source of truth. Now the proxy always uses the
configured bind_addr, and the warning fires whenever it's 127.0.0.1.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: update proxy bind_addr comment in example config
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add DnsPacket::query(id, domain, qtype) constructor; replace mock_query,
make_query, and 4 inline constructions across ctx/forward/recursive/api
- Add record_to_addr() in recursive.rs; replace 4 identical A/AAAA match
blocks with filter_map one-liners
- Add sinkhole_record() in ctx.rs; consolidate localhost and blocklist
A/AAAA branching into single calls
- Remove now-unused DnsQuestion imports
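A sketch of the `record_to_addr()` helper described above (`DnsRecord` is stubbed here; the real type lives in the codebase):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// Collapses the four identical A/AAAA match blocks into one function
// usable with filter_map.
enum DnsRecord {
    A(Ipv4Addr),
    Aaaa(Ipv6Addr),
    Other,
}

fn record_to_addr(record: &DnsRecord) -> Option<IpAddr> {
    match record {
        DnsRecord::A(v4) => Some(IpAddr::V4(*v4)),
        DnsRecord::Aaaa(v6) => Some(IpAddr::V6(*v6)),
        DnsRecord::Other => None,
    }
}

fn main() {
    let records = vec![
        DnsRecord::Other,
        DnsRecord::A(Ipv4Addr::new(93, 184, 216, 34)),
        DnsRecord::Aaaa(Ipv6Addr::LOCALHOST),
    ];
    // the filter_map one-liner the commit replaces match blocks with
    let addrs: Vec<IpAddr> = records.iter().filter_map(record_to_addr).collect();
    assert_eq!(addrs.len(), 2);
}
```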
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: extract resolve_coalesced, rewrite tests against real code
Extract Disposition enum, acquire_inflight(), and resolve_coalesced()
from handle_query so coalescing logic is independently testable. Rewrite
integration tests to call resolve_coalesced directly with mock futures
instead of fighting the iterative resolver's NS chain. All 12 coalescing
tests now exercise production code paths, not tokio primitives.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: SERVFAIL echoes question section, preserve error messages
resolve_coalesced now takes &DnsPacket instead of query_id so SERVFAIL
responses use response_from (echoing question section per RFC). Error
messages preserved via Option<String> return for upstream error logging.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: in-flight query coalescing for recursive resolver
When multiple queries for the same (domain, qtype) arrive concurrently
and all miss the cache, only the first triggers recursive resolution.
Subsequent queries wait on a broadcast channel for the result.
Prevents thundering herd where N concurrent cache misses each
independently walk the full NS chain, compounding timeouts.
Uses InflightGuard (Drop impl) to guarantee map cleanup on
panic/cancellation — prevents permanent SERVFAIL poisoning.
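The Drop-based cleanup can be sketched with std types (the map value and key types here are simplified stand-ins):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type Key = (String, u16);
type InflightMap = Arc<Mutex<HashMap<Key, ()>>>;

// Whatever happens to the resolving task (success, panic, cancellation),
// dropping the guard removes the key, so a crashed leader can't poison
// the map into permanent SERVFAIL.
struct InflightGuard {
    map: InflightMap,
    key: Key,
}

impl Drop for InflightGuard {
    fn drop(&mut self) {
        self.map.lock().unwrap().remove(&self.key);
    }
}

fn main() {
    let map: InflightMap = Arc::new(Mutex::new(HashMap::new()));
    let key: Key = ("example.com".into(), 1);
    map.lock().unwrap().insert(key.clone(), ());
    {
        let _guard = InflightGuard { map: Arc::clone(&map), key: key.clone() };
        assert!(map.lock().unwrap().contains_key(&key));
    } // guard dropped here — entry is cleaned up
    assert!(!map.lock().unwrap().contains_key(&key));
}
```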
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style: add InflightMap type alias for clippy
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add COALESCED query path and coalescing tests
Followers in the inflight coalescing path now log as COALESCED instead
of RECURSIVE, making it visible in the dashboard when queries were
deduplicated vs independently resolved. Adds 10 tests covering
InflightGuard cleanup, broadcast mechanics, and concurrent handle_query
coalescing through a mock TCP DNS server.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: cargo fmt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: extract acquire_inflight, rewrite tests against real code
Move Disposition enum and inflight acquisition logic into a standalone
acquire_inflight() function. Rewrite 4 tests that were exercising tokio
primitives to call the real coalescing code path instead.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: SRTT-based nameserver selection for recursive resolver
BIND-style Smoothed RTT (EWMA) tracking per NS IP address. The resolver
learns which nameservers respond fastest and prefers them, eliminating
cascading timeouts from slow/unreachable IPv6 servers.
- New src/srtt.rs: SrttCache with record_rtt, record_failure, sort_by_rtt
- EWMA formula: new = (old * 7 + sample) / 8, 5s failure penalty, 5min decay
- TCP penalty (+100ms) lets SRTT naturally deprioritize IPv6-over-TCP
- Enabled flag embedded in SrttCache (no-op when disabled)
- Batch eviction (64 entries) for O(1) amortized writes at capacity
- Configurable via [upstream] srtt = true/false (default: true)
- Benchmark script: scripts/benchmark.sh (full, cold, warm, compare-all)
- Benchmarks show 12x avg improvement, 0% queries >1s (was 58%)
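The EWMA from the bullet above is a one-liner; worked examples:

```rust
// BIND-style SRTT update: new = (old * 7 + sample) / 8. Each sample moves
// the estimate an eighth of the way toward the observed RTT, so
// persistently fast servers sort first without one outlier dominating.
fn ewma(old_srtt_ms: u64, sample_ms: u64) -> u64 {
    (old_srtt_ms * 7 + sample_ms) / 8
}

fn main() {
    assert_eq!(ewma(100, 100), 100); // steady state
    assert_eq!(ewma(800, 0), 700);   // fast sample pulls the estimate down
    assert_eq!(ewma(0, 80), 10);     // slow sample nudges it up
}
```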
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: show DNSSEC and SRTT status in dashboard + API
Add dnssec and srtt boolean fields to /stats API response.
Display on/off indicators in the dashboard footer.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: apply SRTT decay before EWMA so recovered servers rehabilitate
Without decay-before-EWMA, a server penalized at 5000ms stayed near
that value even after recovery — the stale raw penalty was used as the
EWMA base instead of the decayed estimate. Extract decayed_srtt()
helper and call it in record_rtt() before the smoothing step.
Also restores removed "why" comments in send_query / resolve_recursive.
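The ordering fix can be sketched as follows; the halving-per-interval decay factor is an illustrative assumption, not the actual 5-minute decay curve:

```rust
// Decay the stored estimate FIRST, then smooth in the new sample; using
// the raw stored value as the EWMA base is the bug being fixed.
fn decayed_srtt(stored_ms: u64, intervals_elapsed: u32) -> u64 {
    stored_ms >> intervals_elapsed.min(10) // illustrative halving decay
}

fn record_rtt(stored_ms: u64, intervals_elapsed: u32, sample_ms: u64) -> u64 {
    let base = decayed_srtt(stored_ms, intervals_elapsed); // decay before EWMA
    (base * 7 + sample_ms) / 8
}

fn main() {
    // a server penalized to 5000ms that has since recovered (20ms sample):
    // without decay the EWMA base would still be 5000; with two elapsed
    // decay intervals it starts from 1250 and rehabilitates quickly.
    assert_eq!(record_rtt(5000, 2, 20), 1096);
    // with no elapsed intervals the behavior matches plain EWMA
    assert_eq!(record_rtt(100, 0, 100), 100);
}
```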
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add install/upgrade instructions, smarter benchmark priming
README: document `numa install`, `numa service`, Homebrew upgrade,
and `make deploy` workflows. Benchmark: replace fixed `sleep 4` with
`wait_for_priming` that polls cache entry count for stability.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
.local is reserved for mDNS (RFC 6762) and cannot be resolved by
upstream DNS servers. Add it to is_special_use_domain() so queries
like _grpc_config.localhost.local get an immediate NXDOMAIN instead
of timing out and returning SERVFAIL.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: DNS-over-HTTPS upstream forwarding
Encrypt upstream queries via DoH — ISPs see HTTPS traffic on port 443,
not plaintext DNS on port 53. URL scheme determines transport:
https:// = DoH, bare IP = plain UDP. Falls back to Quad9 DoH when
system resolver cannot be detected.
- Upstream enum (Udp/Doh) with Display and PartialEq
- BytePacketBuffer::from_bytes constructor
- reqwest http2 feature for DoH server compatibility
- network_watch_loop guards against DoH→UDP silent downgrade
- 5 new tests (mock DoH server, HTTP errors, timeout)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: cargo fmt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: add DoH to README — Why Numa, comparison table, roadmap
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
HTTPS proxy certs were generated once at startup. Services added at
runtime via API or LAN discovery got "not secure" in the browser
because their SAN wasn't in the cert. Now the cert is regenerated
on every service add/remove and swapped atomically via ArcSwap.
In-flight connections are unaffected.
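The commit uses ArcSwap; this std-only sketch shows the same reader/writer semantics with `RwLock<Arc<_>>` (type names are illustrative):

```rust
use std::sync::{Arc, RwLock};

struct Cert {
    sans: Vec<String>,
}

// Swap in a regenerated cert: in-flight connections keep their old Arc,
// while new handshakes pick up the new one from the slot.
fn regenerate(slot: &RwLock<Arc<Cert>>, sans: Vec<String>) {
    *slot.write().unwrap() = Arc::new(Cert { sans });
}

fn main() {
    let slot = RwLock::new(Arc::new(Cert { sans: vec!["numa.numa".into()] }));
    let inflight = Arc::clone(&slot.read().unwrap()); // an existing connection
    regenerate(&slot, vec!["numa.numa".into(), "api.numa".into()]);
    assert_eq!(inflight.sans.len(), 1);             // old connection unaffected
    assert_eq!(slot.read().unwrap().sans.len(), 2); // new handshakes see the new SAN
}
```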
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add lan_enabled to ServerCtx
- Add lan field to /stats API (enabled, peer count)
- Dashboard shows "LAN off" (dim) or "LAN on · N peers" (green)
- Tooltip shows enable command or mDNS service type
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Always detect network changes (LAN IP, upstream, peers) regardless
of upstream config. LAN IP is now tracked in ServerCtx and updated
every 30s — multicast announcements use the current IP instead of
the startup IP. Upstream re-detection still only runs when
auto-detected. Peer flush triggers on any network change.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Upstream DNS was resolved once at startup and never updated. Switching
Wi-Fi networks made all queries fail until restart.
Now spawns a background task (every 30s) that re-runs system DNS
discovery and swaps the upstream atomically if it changed. Also flushes
stale LAN peers from the old network on change.
Only activates when upstream is auto-detected (not explicitly configured).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Numa instances on the same network auto-discover each other's .numa
services. No config, no cloud — just multicast on 239.255.70.78:5390.
- PeerStore with lazy expiry (90s timeout, 30s broadcast interval)
- DNS resolves remote .numa services to peer's LAN IP (not localhost)
- Proxy forwards to peer IP for remote services
- Graceful degradation if multicast bind fails
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Set TC (truncation) bit when response exceeds 4096-byte buffer
instead of dropping the response silently. Clients can retry via TCP.
- Log when upstream response is truncated in forward.rs.
- Dockerfile: bump to Rust 1.88, include site/service files, use
alpine runtime instead of scratch, add cmake/perl for aws-lc-sys.
- Makefile deploy: platform-aware — codesign on macOS, systemctl on Linux.
- README: trim roadmap to near-term items only.
- Verified: Docker build + smoke test passes on Linux (Alpine musl).
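The TC fix touches one flags byte; a sketch (the naive length cutoff here is illustrative — real truncation should stop at a record boundary):

```rust
// Set the TC (truncation) bit — bit 0x02 of DNS header byte 2 — so clients
// know to retry over TCP instead of receiving a silent drop.
const MAX_UDP_RESPONSE: usize = 4096;

fn maybe_truncate(wire: &mut Vec<u8>) {
    if wire.len() > MAX_UDP_RESPONSE {
        wire[2] |= 0x02;                 // flag the response as truncated
        wire.truncate(MAX_UDP_RESPONSE); // naive cutoff for illustration
    }
}

fn main() {
    let mut big = vec![0u8; 5000];
    maybe_truncate(&mut big);
    assert_eq!(big.len(), 4096);
    assert_eq!(big[2] & 0x02, 0x02); // TC set

    let mut small = vec![0u8; 100];
    maybe_truncate(&mut small);
    assert_eq!(small[2] & 0x02, 0); // untouched
}
```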
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
HTTP reverse proxy on port 80 lets developers use clean domain names
(frontend.numa, api.numa) instead of localhost:PORT. Includes WebSocket
upgrade support for HMR, TCP health checks, dashboard UI panel, and
REST API for service management. numa.numa is preconfigured for the
dashboard itself.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- DNS-level ad blocking: 385K+ domains via Hagezi Pro blocklist, subdomain
matching, one-click allowlist, pause/toggle, background refresh every 24h
- Live dashboard at :5380 with real-time stats, query log, override
management (create/edit/delete), blocking controls
- System DNS auto-discovery: parses scutil --dns on macOS to find
conditional forwarding rules (Tailscale, VPN split-DNS)
- REST API expanded to 18 endpoints (blocking, overrides, diagnostics)
- Startup banner with colored system info
- Performance benchmarks (bench/dns-bench.sh)
- Landing page updated with new positioning and comparison table
- CI, Dockerfile, LICENSE, development plan docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>