51 Commits

Author SHA1 Message Date
Razvan Dimescu
78711f516e chore: bump version to 0.8.0
Breaking: default mode changed from auto to forward.
New: memory footprint stats + dashboard panel.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 09:11:34 +03:00
Razvan Dimescu
64d85ce770 feat: add memory footprint to /stats and dashboard (#26)
* feat: add memory footprint to /stats and dashboard

Per-structure heap estimation (cache, blocklist, query log, SRTT,
overrides) with process RSS via mach_task_basic_info / sysconf.
Dashboard gets a 6th stat card and a sidebar breakdown panel with
stacked bar visualization.
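A minimal sketch of the capacity-aware estimation idea, with illustrative names (the real code walks each store's actual fields):

```rust
use std::collections::HashMap;

/// Rough, capacity-aware heap estimate for a string-keyed store.
/// Counts the map's allocated table (capacity, not len) plus the
/// owned bytes of each key and value string.
fn estimate_heap_bytes(map: &HashMap<String, String>) -> usize {
    // per-slot cost: two String headers plus control/hash overhead (approximate)
    let bucket_bytes = map.capacity()
        * (std::mem::size_of::<String>() * 2 + std::mem::size_of::<u64>());
    let owned_bytes: usize = map
        .iter()
        .map(|(k, v)| k.capacity() + v.capacity())
        .sum();
    bucket_bytes + owned_bytes
}
```

Counting capacity rather than length matters because a store that has evicted entries still holds its allocated table.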

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: use phys_footprint on macOS to match Activity Monitor

Switch from MACH_TASK_BASIC_INFO (resident_size) to TASK_VM_INFO
(phys_footprint) which matches Activity Monitor's Memory column.
Also: capacity-aware heap estimation, entry counts in memory payload,
heap_bytes tests for all stores.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: remove redundant fields and fix naming in memory stats

Remove duplicate entry counts from MemoryStats (already in parent
StatsResponse), rename process_rss_bytes to process_memory_bytes
to match macOS phys_footprint semantics, drop restating comments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 09:09:44 +03:00
Razvan Dimescu
8791198d10 feat: forward-by-default, auto recursive mode, Linux install fixes (#27)
* feat: auto recursive mode, fix Linux install

Auto mode (new default): probes a root server on startup; uses
recursive resolution if outbound DNS works, falls back to Quad9 DoH
if blocked. Dashboard shows mode indicator (green/yellow).

Linux install fixes:
- Add DNSStubListener=no to resolved drop-in (frees port 53)
- Configure DNS before starting service (correct ordering)
- Skip 127.0.0.53 in upstream detection
- `numa install` now does everything (service + DNS + CA)
- `numa uninstall` mirrors install (stop service + restore DNS)
- Extract is_loopback_or_stub() for consistent filtering

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: enable DNSSEC validation by default

With recursive as the default mode, DNSSEC validation completes the
trustless resolution chain. Strict mode remains off by default.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: forward search domains to VPC resolver on Linux

Parse search/domain lines from resolv.conf and create conditional
forwarding rules to the original nameserver or AWS VPC resolver
(169.254.169.253). Fixes internal hostname resolution on cloud VMs
where recursive mode can't resolve private DNS zones.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: single-pass resolv.conf parsing, eliminate redundancies

Parse resolv.conf once for both upstream and search domains instead
of 2-3 reads. Extract CLOUD_VPC_RESOLVER constant. Use &'static str
for mode in StatsResponse. Remove dead read_upstream_from_file.
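The single-pass shape can be sketched as below; names are illustrative and the real parser also applies loopback filtering:

```rust
/// One read of resolv.conf yields both nameservers and search domains.
/// `search` and `domain` lines both contribute search suffixes.
fn parse_resolv_conf(contents: &str) -> (Vec<String>, Vec<String>) {
    let mut nameservers = Vec::new();
    let mut search = Vec::new();
    for line in contents.lines() {
        let line = line.trim();
        if line.starts_with('#') || line.starts_with(';') {
            continue; // comment line
        }
        let mut parts = line.split_whitespace();
        match parts.next() {
            Some("nameserver") => {
                if let Some(ip) = parts.next() {
                    nameservers.push(ip.to_string());
                }
            }
            Some("search") | Some("domain") => {
                search.extend(parts.map(|d| d.to_string()));
            }
            _ => {}
        }
    }
    (nameservers, search)
}
```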

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: macOS install health check, harden recursive probe

Verify numa is listening (API port) before redirecting system DNS on
macOS — if the service fails to start (e.g. port 53 in use), unload
the service and abort instead of breaking DNS. Probe up to 3 root
hints before declaring recursive mode unavailable. Validate IPs from
resolvectl to avoid IPv6 fragment extraction. Extract DEFAULT_API_PORT
constant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: widen make_rule cfg gate to include Linux

make_rule was gated to macOS-only but discover_linux() calls it for
search domain forwarding rules. CI failed on Linux with E0425.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: forward mode as default, recursive opt-in

Forward mode (transparent proxy to system DNS) is now the default.
Recursive and auto modes are explicit opt-in via config. This avoids
bypassing corporate DNS policies, captive portals, VPC private zones,
and parental controls on first install.

- Move #[default] from Auto to Forward on UpstreamMode
- DNSSEC defaults to off (no-op in forward mode)
- 3-way match in main: Forward/Recursive/Auto with clean separation
- Post-install message suggests mode = "recursive" for sovereignty
- Update README, site, and launch drafts messaging

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 08:49:16 +03:00
Razvan Dimescu
f9b503ab96 fix: include recursive and coalesced queries in cache hit rate denominator (#24)
The cache hit rate was computed as cached/(cached+forwarded+local+overridden),
excluding recursive and coalesced queries from the denominator. This inflated
the displayed rate (e.g. 57.9%) far above the actual cache proportion (20.9%).
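The corrected denominator can be sketched as (counter names are illustrative):

```rust
/// Cache hit rate over ALL answered paths. The earlier bug omitted
/// `recursive` and `coalesced` from the denominator, inflating the rate.
fn cache_hit_rate(cached: u64, forwarded: u64, local: u64,
                  overridden: u64, recursive: u64, coalesced: u64) -> f64 {
    let total = cached + forwarded + local + overridden + recursive + coalesced;
    if total == 0 {
        return 0.0;
    }
    cached as f64 / total as f64
}
```

With mostly-recursive traffic the difference is large: 20 cached out of 100 total queries is 20%, but dividing by only cached+forwarded would report far more.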

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 00:17:40 +03:00
Razvan Dimescu
2b99b39bcc chore: update install methods 2026-03-29 23:33:45 +03:00
Razvan Dimescu
7ab97f4cdc chore: bump version to 0.7.3 2026-03-29 23:16:46 +03:00
Razvan Dimescu
65dcd9a9c5 feat: resolve .numa services to LAN IP for remote clients (#23)
* feat: resolve .numa services to LAN IP for remote clients

Remote DNS clients (e.g. phones on same WiFi) received 127.0.0.1 for
local .numa services, which is unreachable from their perspective.
Now returns the host's LAN IP when the query originates from a
non-loopback address. Also auto-widens proxy bind to 0.0.0.0 when
DNS is already public, and adds a startup warning when the proxy
remains localhost-only.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: respect proxy bind_addr config, don't auto-widen

The auto-widen silently overrode an explicit config value — the user's
config should be the source of truth. Now the proxy always uses the
configured bind_addr, and the warning fires whenever it's 127.0.0.1.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: update proxy bind_addr comment in example config

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 23:15:42 +03:00
Razvan Dimescu
32cd8624b4 refactor: deduplicate query builders, record extraction, sinkhole records (#22)
- Add DnsPacket::query(id, domain, qtype) constructor; replace mock_query,
  make_query, and 4 inline constructions across ctx/forward/recursive/api
- Add record_to_addr() in recursive.rs; replace 4 identical A/AAAA match
  blocks with filter_map one-liners
- Add sinkhole_record() in ctx.rs; consolidate localhost and blocklist
  A/AAAA branching into single calls
- Remove now-unused DnsQuestion imports

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 14:22:07 +03:00
Razvan Dimescu
bea0affdde chore: bump version to 0.7.2 2026-03-29 11:44:10 +03:00
Razvan Dimescu
bad4f25d7d docs: streamline README for clarity and scannability
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 11:42:08 +03:00
Razvan Dimescu
5f45e23f55 refactor: extract resolve_coalesced, test real code (#21)
* refactor: extract resolve_coalesced, rewrite tests against real code

Extract Disposition enum, acquire_inflight(), and resolve_coalesced()
from handle_query so coalescing logic is independently testable. Rewrite
integration tests to call resolve_coalesced directly with mock futures
instead of fighting the iterative resolver's NS chain. All 12 coalescing
tests now exercise production code paths, not tokio primitives.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: SERVFAIL echoes question section, preserve error messages

resolve_coalesced now takes &DnsPacket instead of query_id so SERVFAIL
responses use response_from (echoing question section per RFC). Error
messages preserved via Option<String> return for upstream error logging.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-29 11:14:25 +03:00
Razvan Dimescu
882508297e chore: bump version to 0.7.1 2026-03-29 10:39:17 +03:00
Razvan Dimescu
2b241c5755 blog: add DNSSEC chain-of-trust SVG diagram
Replace text-based chain trace with a visual diagram showing the
verification flow from cloudflare.com through .com TLD to root
trust anchor. Matches site color palette and typography.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 10:38:47 +03:00
Razvan Dimescu
7510c8e068 feat: in-flight query coalescing with COALESCED path (#20)
* feat: in-flight query coalescing for recursive resolver

When multiple queries for the same (domain, qtype) arrive concurrently
and all miss the cache, only the first triggers recursive resolution.
Subsequent queries wait on a broadcast channel for the result.

Prevents thundering herd where N concurrent cache misses each
independently walk the full NS chain, compounding timeouts.

Uses InflightGuard (Drop impl) to guarantee map cleanup on
panic/cancellation — prevents permanent SERVFAIL poisoning.
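A simplified, synchronous sketch of the leader/follower decision (the real code uses a tokio broadcast channel so followers can await the leader's result; names here are illustrative):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type Key = (String, u16); // (domain, qtype)

enum Disposition {
    Leader,
    Follower,
}

/// First query for a key becomes the leader; concurrent duplicates coalesce.
fn acquire_inflight(map: &Arc<Mutex<HashMap<Key, u32>>>, key: &Key) -> Disposition {
    let mut guard = map.lock().unwrap();
    match guard.get_mut(key) {
        Some(waiters) => {
            *waiters += 1; // same (domain, qtype) already resolving
            Disposition::Follower
        }
        None => {
            guard.insert(key.clone(), 0); // we resolve; later arrivals wait on us
            Disposition::Leader
        }
    }
}

/// Mirrors InflightGuard's Drop: always remove the entry, even on
/// panic/cancellation, so a crashed leader can't poison the key.
fn release_inflight(map: &Arc<Mutex<HashMap<Key, u32>>>, key: &Key) {
    map.lock().unwrap().remove(key);
}
```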

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: add InflightMap type alias for clippy

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add COALESCED query path and coalescing tests

Followers in the inflight coalescing path now log as COALESCED instead
of RECURSIVE, making it visible in the dashboard when queries were
deduplicated vs independently resolved. Adds 10 tests covering
InflightGuard cleanup, broadcast mechanics, and concurrent handle_query
coalescing through a mock TCP DNS server.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: cargo fmt

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: extract acquire_inflight, rewrite tests against real code

Move Disposition enum and inflight acquisition logic into a standalone
acquire_inflight() function. Rewrite 4 tests that were exercising tokio
primitives to call the real coalescing code path instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 10:36:02 +03:00
Razvan Dimescu
87c321f3d4 chore: add release script and make target
Usage: make release VERSION=0.8.0
Bumps Cargo.toml + Cargo.lock, commits, tags, pushes — triggers
the existing GitHub Actions release workflow.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 08:33:58 +03:00
Razvan Dimescu
edfccaa2b7 chore: update Cargo.lock for 0.7.0
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 08:22:32 +03:00
Razvan Dimescu
0c43240c01 chore: bump version to 0.7.0
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 08:16:26 +03:00
Razvan Dimescu
b615a56586 feat: SRTT-based nameserver selection (#19)
* feat: SRTT-based nameserver selection for recursive resolver

BIND-style Smoothed RTT (EWMA) tracking per NS IP address. The resolver
learns which nameservers respond fastest and prefers them, eliminating
cascading timeouts from slow/unreachable IPv6 servers.

- New src/srtt.rs: SrttCache with record_rtt, record_failure, sort_by_rtt
- EWMA formula: new = (old * 7 + sample) / 8, 5s failure penalty, 5min decay
- TCP penalty (+100ms) lets SRTT naturally deprioritize IPv6-over-TCP
- Enabled flag embedded in SrttCache (no-op when disabled)
- Batch eviction (64 entries) for O(1) amortized writes at capacity
- Configurable via [upstream] srtt = true/false (default: true)
- Benchmark script: scripts/benchmark.sh (full, cold, warm, compare-all)
- Benchmarks show 12x avg improvement, 0% queries >1s (was 58%)
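The smoothing step above can be sketched directly from the listed formula (function names are illustrative):

```rust
/// Flat penalty so IPv6-over-TCP paths sort behind healthy UDP servers.
const TCP_PENALTY_MS: u64 = 100;

/// BIND-style EWMA: new = (old * 7 + sample) / 8.
fn ewma_update(old_srtt_ms: u64, sample_ms: u64) -> u64 {
    (old_srtt_ms * 7 + sample_ms) / 8
}

/// Apply the TCP penalty before feeding the sample into the EWMA.
fn effective_sample(rtt_ms: u64, over_tcp: bool) -> u64 {
    if over_tcp { rtt_ms + TCP_PENALTY_MS } else { rtt_ms }
}
```

The 7/8 weighting means one fast reply nudges the estimate down by only an eighth of the gap, so a single lucky packet can't flip the ordering.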

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: show DNSSEC and SRTT status in dashboard + API

Add dnssec and srtt boolean fields to /stats API response.
Display on/off indicators in the dashboard footer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: apply SRTT decay before EWMA so recovered servers rehabilitate

Without decay-before-EWMA, a server penalized at 5000ms stayed near
that value even after recovery — the stale raw penalty was used as the
EWMA base instead of the decayed estimate. Extract decayed_srtt()
helper and call it in record_rtt() before the smoothing step.

Also restores removed "why" comments in send_query / resolve_recursive.
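The fix can be sketched as follows; the constants and the linear decay here are illustrative, not the commit's exact math:

```rust
const BASELINE_MS: u64 = 100;
const DECAY_WINDOW_SECS: u64 = 300; // full rehabilitation after 5 min

/// The stored estimate drifts back toward a neutral baseline as it ages.
fn decayed_srtt(stored_ms: u64, age_secs: u64) -> u64 {
    if age_secs >= DECAY_WINDOW_SECS {
        return BASELINE_MS;
    }
    let remain = DECAY_WINDOW_SECS - age_secs;
    BASELINE_MS + stored_ms.saturating_sub(BASELINE_MS) * remain / DECAY_WINDOW_SECS
}

/// Decay FIRST, then smooth — using the raw stored penalty as the EWMA
/// base is exactly the bug described above.
fn record_rtt(stored_ms: u64, age_secs: u64, sample_ms: u64) -> u64 {
    let base = decayed_srtt(stored_ms, age_secs);
    (base * 7 + sample_ms) / 8
}
```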

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add install/upgrade instructions, smarter benchmark priming

README: document `numa install`, `numa service`, Homebrew upgrade,
and `make deploy` workflows. Benchmark: replace fixed `sleep 4` with
`wait_for_priming` that polls cache entry count for stability.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-28 23:22:31 +02:00
Razvan Dimescu
7056766a84 fix: return NXDOMAIN for .local queries instead of SERVFAIL (#18)
.local is reserved for mDNS (RFC 6762) and cannot be resolved by
upstream DNS servers. Add it to is_special_use_domain() so queries
like _grpc_config.localhost.local get an immediate NXDOMAIN instead
of timing out and returning SERVFAIL.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-28 22:42:33 +02:00
Razvan Dimescu
ebfc31d793 chore: bump version to 0.6.0
Recursive DNS resolution, full DNSSEC validation, TCP fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 04:12:28 +02:00
Razvan Dimescu
b6703b4315 feat: recursive DNS + DNSSEC + TCP fallback (#17)
* feat: recursive resolution + full DNSSEC validation

Numa becomes a true DNS resolver — resolves from root nameservers
with complete DNSSEC chain-of-trust verification.

Recursive resolution:
- Iterative RFC 1034 from configurable root hints (13 default)
- CNAME chasing (depth 8), referral following (depth 10)
- A+AAAA glue extraction, IPv6 nameserver support
- TLD priming: NS + DS + DNSKEY for 34 gTLDs + EU ccTLDs
- Config: mode = "recursive" in [upstream], root_hints, prime_tlds

DNSSEC (all 4 phases):
- EDNS0 OPT pseudo-record (DO bit, 1232-byte payload per DNS Flag Day 2020)
- DNSKEY, DS, RRSIG, NSEC, NSEC3 record types with wire read/write
- Signature verification via ring: RSA/SHA-256, ECDSA P-256, Ed25519
- Chain-of-trust: zone DNSKEY → parent DS → root KSK (key tag 20326)
- DNSKEY RRset self-signature verification (RRSIG(DNSKEY) by KSK)
- RRSIG expiration/inception time validation
- NSEC: NXDOMAIN gap proofs, NODATA type absence, wildcard denial
- NSEC3: SHA-1 iterated hashing, closest encloser proof, hash range
- Authority RRSIG verification for denial proofs
- Config: [dnssec] enabled/strict (default false, opt-in)
- AD bit on Secure, SERVFAIL on Bogus+strict
- DnssecStatus cached per entry, ValidationStats logging

Performance:
- TLD chain pre-warmed on startup (root DNSKEY + TLD DS/DNSKEY)
- Referral DS piggybacking from authority sections
- DNSKEY prefetch before validation loop
- Cold-cache validation: ~1 DNSKEY fetch (down from 5)
- Benchmarks: RSA 10.9µs, ECDSA 174ns, DS verify 257ns

Also:
- write_qname fix for root domain "." (was producing malformed queries)
- write_record_header() dedup, write_bytes() bulk writes
- DnsRecord::domain() + query_type() accessors
- UpstreamMode enum, DEFAULT_EDNS_PAYLOAD const
- Real glue TTL (was hardcoded 3600)
- DNSSEC restricted to recursive mode only

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: TCP fallback, query minimization, UDP auto-disable

Transport resilience for restrictive networks (ISPs blocking UDP:53):
- DNS-over-TCP fallback: UDP fail/truncation → automatic TCP retry
- UDP auto-disable: after 3 consecutive failures, switch to TCP-first
- IPv6 → TCP directly (UDP socket binds 0.0.0.0, can't reach IPv6)
- Network change resets UDP detection for re-probing
- Root hint rotation in TLD priming
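The auto-disable counter can be sketched with atomics (a simplification; the Acquire/Release ordering matches the review fix later in this PR, and the threshold of 3 is the commit's):

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};

static UDP_FAILURES: AtomicU32 = AtomicU32::new(0);
static UDP_DISABLED: AtomicBool = AtomicBool::new(false);

/// After 3 consecutive UDP failures, switch to TCP-first;
/// any UDP success resets the streak and re-enables UDP.
fn record_udp_result(ok: bool) {
    if ok {
        UDP_FAILURES.store(0, Ordering::Release);
        UDP_DISABLED.store(false, Ordering::Release);
    } else if UDP_FAILURES.fetch_add(1, Ordering::AcqRel) + 1 >= 3 {
        UDP_DISABLED.store(true, Ordering::Release);
    }
}

fn udp_disabled() -> bool {
    UDP_DISABLED.load(Ordering::Acquire)
}
```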

Privacy:
- RFC 7816 query minimization: root servers see TLD only, not full name
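A sketch of the minimization rule — expose only one label more than the zone being asked (illustrative, without the real code's label-type edge cases):

```rust
/// RFC 7816 QNAME minimization: asking the root about "www.example.com"
/// sends just "com"; asking com's servers sends "example.com"; and so on.
fn minimized_qname(full: &str, zone: &str) -> String {
    let labels: Vec<&str> = full.split('.').collect();
    let zone_labels = if zone.is_empty() || zone == "." {
        0
    } else {
        zone.split('.').count()
    };
    // keep one label more than the zone already knows about
    let keep = (zone_labels + 1).min(labels.len());
    labels[labels.len() - keep..].join(".")
}
```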

Code quality:
- Merged find_starting_ns + find_starting_zone → find_closest_ns
- Extracted resolve_ns_addrs_from_glue shared helper
- Removed overall timeout wrapper (per-hop timeouts sufficient)
- forward_tcp for DNS-over-TCP (RFC 1035 §4.2.2)

Testing:
- Mock TCP-only DNS server for fallback tests (no network needed)
- tcp_fallback_resolves_when_udp_blocked
- tcp_only_iterative_resolution
- tcp_fallback_handles_nxdomain
- udp_auto_disable_resets
- Integration test suite (4 suites, 51 tests)
- Network probe script (tests/network-probe.sh)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: DNSSEC verified badge in dashboard query log

- Add dnssec field to QueryLogEntry, track validation status per query
- DnssecStatus::as_str() for API serialization
- Dashboard shows green checkmark next to DNSSEC-verified responses
- Blog post: add "How keys get there" section, transport resilience section,
  trim code blocks, update What's Next

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: use SVG shield for DNSSEC badge, update blog HTML

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: NS cache lookup from authorities, UDP re-probe, shield alignment

- find_closest_ns checks authorities (not just answers) for NS records,
  fixing TLD priming cache misses that caused redundant root queries
- Periodic UDP re-probe every 5min when disabled — re-enables UDP
  after switching from a restrictive network to an open one
- Dashboard DNSSEC shield uses fixed-width container for alignment
- Blog post: tuck key-tag into trust anchor paragraph

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: TCP single-write, mock server consistency, integration tests

- TCP single-write fix: combine length prefix + message to avoid split
  segments that Microsoft/Azure DNS servers reject
- Mock server (spawn_tcp_dns_server) updated to use single-write too
- Tests: forward_tcp_wire_format, forward_tcp_single_segment_write
- Integration: real-server checks for Microsoft/Office/Azure domains
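The framing fix amounts to building prefix and message into one buffer so they leave in a single write (a sketch; the function name is illustrative):

```rust
/// RFC 1035 §4.2.2 TCP framing: two-byte big-endian length prefix,
/// then the DNS message — combined so one write() emits one segment.
fn frame_tcp_message(msg: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(2 + msg.len());
    buf.extend_from_slice(&(msg.len() as u16).to_be_bytes());
    buf.extend_from_slice(msg);
    buf
}
```

Writing the prefix and body separately can produce two TCP segments, which, per the commit, some Microsoft/Azure resolvers reject.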

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: recursive bar in dashboard, special-use domain interception

Dashboard:
- Add Recursive bar to resolution paths chart (cyan, distinct from Override)
- Add RECURSIVE path tag style in query log

Special-use domains (RFC 6761/6303/8880/9462):
- .localhost → 127.0.0.1 (RFC 6761)
- Private reverse PTR (10.x, 192.168.x, 172.16-31.x) → NXDOMAIN
- _dns.resolver.arpa (DDR) → NXDOMAIN
- ipv4only.arpa (NAT64) → 192.0.0.170/171
- mDNS service discovery for private ranges → NXDOMAIN

Eliminates ~900ms SERVFAILs for macOS system queries that were
hitting root servers unnecessarily.
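A reduced sketch of the check (the real function also covers the private reverse-PTR ranges and DDR queries listed above):

```rust
/// Special-use names (RFC 6761/6762) are answered locally instead of
/// being sent to root servers.
fn is_special_use_domain(domain: &str) -> bool {
    let d = domain.trim_end_matches('.').to_ascii_lowercase();
    d == "localhost"
        || d.ends_with(".localhost")
        || d == "local"
        || d.ends_with(".local")      // mDNS, RFC 6762
        || d == "ipv4only.arpa"       // NAT64 probe
        || d == "_dns.resolver.arpa"  // DDR
}
```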

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: move generated blog HTML to site/blog/posts/, gitignore

- Generated HTML now in site/blog/posts/ (gitignored)
- CI workflow runs pandoc + make blog before deploy
- Updated all internal blog links to /blog/posts/ path
- blog/*.md remains the source of truth

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: review feedback — memory ordering, RRSIG time, NS resolution

- Ordering::Relaxed → Acquire/Release for UDP_DISABLED/UDP_FAILURES
  (ARM correctness for cross-thread coordination)
- RRSIG time validation: serial number arithmetic (RFC 4034 §3.1.5)
  + 300s clock skew fudge factor (matches BIND)
- resolve_ns_addrs_from_glue collects addresses from ALL NS names,
  not just the first with glue (improves failover)
- is_special_use_domain: eliminate 16 format! allocations per
  .in-addr.arpa query (parse octet instead)
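The serial-number comparison for RRSIG timestamps can be sketched as below (RFC 1982 semantics with the 300 s skew allowance from the commit; names are illustrative):

```rust
const CLOCK_SKEW_SECS: u32 = 300;

/// RFC 1982: a < b when the wrapping distance from a to b is under 2^31.
/// This keeps comparisons correct when 32-bit timestamps wrap.
fn serial_lt(a: u32, b: u32) -> bool {
    a != b && b.wrapping_sub(a) < (1u32 << 31)
}

/// Valid when inception - skew <= now <= expiration + skew, all wrapping.
fn rrsig_time_valid(now: u32, inception: u32, expiration: u32) -> bool {
    !serial_lt(now, inception.wrapping_sub(CLOCK_SKEW_SECS))
        && !serial_lt(expiration.wrapping_add(CLOCK_SKEW_SECS), now)
}
```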

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: API endpoint tests, coverage target

- 8 new axum handler tests: health, stats, query-log, overrides CRUD,
  cache, blocking stats, services CRUD, dashboard HTML
- Tests use tower::oneshot — no network, no server startup
- test_ctx() builds minimal ServerCtx for isolated testing
- `make coverage` target (cargo-tarpaulin), separate from `make all`
- 82 total tests (was 74)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-28 04:03:47 +02:00
Razvan Dimescu
cc8d3c7a83 add Dev.to cover image (dashboard screenshot 1000x420)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 03:20:28 +02:00
Razvan Dimescu
4dec0c89b5 docs: update README — add numa.rs link, benchmarks, Windows support
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 02:28:38 +02:00
Razvan Dimescu
ea840f5a07 Change artifact upload path for GitHub Pages 2026-03-27 02:22:43 +02:00
Razvan Dimescu
df2856b57f feat: self-host fonts, styled block page, wildcard TLS (#16)
* perf: optimize hot path — RwLock, inline filtering, pre-allocated strings

- Mutex → RwLock for cache, blocklist, and overrides (concurrent read access)
- Make cache.lookup() and overrides.lookup() take &self (read-only)
- Eliminate 3 Vec allocations per DnsPacket::write() via inline filtering
- Pre-allocate domain strings with capacity 64 in parse path
- Add criterion micro-benchmarks (hot_path + throughput)
- Add bench README documenting both benchmark suites

Measured improvement: ~14% faster parsing, ~9% pipeline throughput,
round-trip cached 733ns → 698ns (~2.3M queries/sec).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: simplify benchmark code after review

- Remove redundant DnsHeader::new() (already set by DnsPacket::new())
- Remove unused DnsHeader import
- Change simulate_cached_pipeline to take &DnsCache (lookup is &self now)
- Remove unnecessary mut on cache in cache_lookup_miss bench

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* site: landing page overhaul, blog, benchmarks, numa.rs domain

Landing page:
- Split features into 3-layer card layout (Block & Protect, Developer Tools, Self-Sovereign DNS)
- Add DoH and conditional forwarding to comparison table
- Fix performance claim (2.3M → 2.0M qps to match benchmarks)
- Add all 3 install methods (brew, cargo, curl)
- Add OG tags + canonical URL for numa.rs
- Fix code block whitespace rendering
- Update roadmap with .onion bridge phase

Blog:
- Add "Building a DNS Resolver from Scratch in Rust" post
- Blog index + template for future posts

Other:
- CNAME for GitHub Pages (numa.rs)
- Benchmark results (bench/results.json)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: self-host fonts, styled block page, wildcard TLS

Fonts:
- Replace Google Fonts CDN with self-hosted woff2 (73KB, 5 files)
- Serve fonts from API server via include_bytes! (dashboard works offline)
- Proxy error pages use system fonts (zero external deps when DNS is broken)
- Fix Instrument Serif font-weight: use 400 (only available weight) instead of synthetic bold 600/700

Proxy:
- Styled "Blocked by Numa" page when blocked domain hits the proxy (was confusing "not a .numa domain" error)
- Extract shared error_page() template for 403 + 404 pages (deduplicate ~160 lines of CSS)

TLS:
- Add wildcard SAN *.numa to cert — unregistered .numa domains get valid HTTPS (styled 404 without cert warning)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 02:19:54 +02:00
Razvan Dimescu
236ef7b4f5 perf: optimize DNS query hot path (#15)
* perf: optimize hot path — RwLock, inline filtering, pre-allocated strings

- Mutex → RwLock for cache, blocklist, and overrides (concurrent read access)
- Make cache.lookup() and overrides.lookup() take &self (read-only)
- Eliminate 3 Vec allocations per DnsPacket::write() via inline filtering
- Pre-allocate domain strings with capacity 64 in parse path
- Add criterion micro-benchmarks (hot_path + throughput)
- Add bench README documenting both benchmark suites

Measured improvement: ~14% faster parsing, ~9% pipeline throughput,
round-trip cached 733ns → 698ns (~2.3M queries/sec).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: simplify benchmark code after review

- Remove redundant DnsHeader::new() (already set by DnsPacket::new())
- Remove unused DnsHeader import
- Change simulate_cached_pipeline to take &DnsCache (lookup is &self now)
- Remove unnecessary mut on cache in cache_lookup_miss bench

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 02:01:08 +02:00
Razvan Dimescu
5d454cbed5 update crate metadata + add deploy.sh release script
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 00:45:15 +02:00
Razvan Dimescu
c1d425069f bump version to 0.5.0
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 00:41:07 +02:00
Razvan Dimescu
d274500308 feat: DNS-over-HTTPS (DoH) upstream forwarding (#14)
* feat: DNS-over-HTTPS upstream forwarding

Encrypt upstream queries via DoH — ISPs see HTTPS traffic on port 443,
not plaintext DNS on port 53. URL scheme determines transport:
https:// = DoH, bare IP = plain UDP. Falls back to Quad9 DoH when
system resolver cannot be detected.

- Upstream enum (Udp/Doh) with Display and PartialEq
- BytePacketBuffer::from_bytes constructor
- reqwest http2 feature for DoH server compatibility
- network_watch_loop guards against DoH→UDP silent downgrade
- 5 new tests (mock DoH server, HTTP errors, timeout)
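The scheme-based transport selection can be sketched as (illustrative parsing only; the real enum also carries Display and the resolved socket address):

```rust
#[derive(Debug, PartialEq)]
enum Upstream {
    Udp(String),
    Doh(String),
}

/// https:// means DoH on port 443; a bare IP means plain UDP on port 53.
fn parse_upstream(s: &str) -> Upstream {
    if s.starts_with("https://") {
        Upstream::Doh(s.to_string())
    } else {
        Upstream::Udp(format!("{s}:53"))
    }
}
```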

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: cargo fmt

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: add DoH to README — Why Numa, comparison table, roadmap

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-24 00:39:58 +02:00
Razvan Dimescu
9c313ef06a docs: reorder README for launch — lead with unique features, add install methods
Comparison table and "Why Numa" reordered so unique capabilities (service proxy,
path routing, LAN discovery) appear first. Added brew/cargo install to Quick Start.
Removed unshipped "Self-sovereign DNS" row from comparison table. Named hickory-dns
and trust-dns in "How It Works" to signal deliberate architectural choice.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:16:50 +02:00
Razvan Dimescu
0d25fae4cf Merge pull request #13 from razvandimescu/fix/tls-hot-reload
fix: TLS cert hot-reload when services change
2026-03-23 19:46:05 +02:00
Razvan Dimescu
1ae2e23bb6 fix: regenerate TLS cert when services change (hot-reload via ArcSwap)
HTTPS proxy certs were generated once at startup. Services added at
runtime via API or LAN discovery got "not secure" in the browser
because their SAN wasn't in the cert. Now the cert is regenerated
on every service add/remove and swapped atomically via ArcSwap.
In-flight connections are unaffected.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 16:14:06 +02:00
Razvan Dimescu
fe784addd2 release: auto-publish to crates.io on tag push
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 14:41:21 +02:00
Razvan Dimescu
a3a218ba5e numa.toml: add commented [blocking] section for discoverability
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 14:02:43 +02:00
Razvan Dimescu
e4594c7955 bump version to 0.4.0
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 13:57:53 +02:00
Razvan Dimescu
b85f599b8f Merge pull request #12 from razvandimescu/feat/community-feedback-improvements
LAN opt-in, mDNS, security hardening, path routing
2026-03-23 13:55:19 +02:00
Razvan Dimescu
03c164e339 dynamic banner width, hoist HTML escaper, cache CA, restore log path
- banner box width adapts to longest value (fixes overflow with long paths)
- hoist h() HTML escape function to script top, remove 3 local copies
- serve_ca: add Cache-Control: public, max-age=86400
- restore log path in dashboard footer alongside new config/data fields

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 12:29:18 +02:00
Razvan Dimescu
2fce82e36c config visibility, PR review fixes, XSS hardening
Config visibility:
- startup banner shows config path, data dir, services path
- config search: ./numa.toml → ~/.config/numa/ → /usr/local/var/numa/
- /stats API exposes config_path and data_dir, dashboard footer renders them
- GET /ca.pem endpoint serves CA cert for cross-device TLS trust
- load_config returns ConfigLoad with found flag, warns on not-found
- ServerCtx stores PathBuf for config_dir/data_dir, string conversion at boundaries

PR review fixes:
- add explicit parens in resolve_route operator precedence (service_store.rs)
- hostname portability: drop -s flag, trim domain with split('.') (lan.rs)
- serve_ca uses spawn_blocking instead of sync fs::read in async handler
- load_config: remove TOCTOU exists() check, read directly and handle NotFound

XSS hardening:
- HTML-escape all user-controlled interpolations in dashboard (service names,
  route paths, ports, URLs, block check domain/reason)
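The escaping applied to those interpolations is the standard five-entity substitution; a Rust rendering of the idea (the dashboard's actual escaper is the JS `h()` helper):

```rust
/// Escape the characters that let user-controlled text break out of
/// HTML text or attribute context.
fn escape_html(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#39;"),
            _ => out.push(c),
        }
    }
    out
}
```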

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 12:24:21 +02:00
Razvan Dimescu
53ae4d1404 address PR review: SRV port, drop spike, percent-encoded paths
- SRV record uses first service's port (was 0, confused dns-sd -L)
- Remove examples/mdns_coexist.rs (served its purpose as spike)
- Reject percent-encoding in route paths (defense-in-depth)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 11:21:09 +02:00
Razvan Dimescu
4748a4a4bb dashboard: show LAN status in Local Services panel header
- Add lan_enabled to ServerCtx
- Add lan field to /stats API (enabled, peer count)
- Dashboard shows "LAN off" (dim) or "LAN on · N peers" (green)
- Tooltip shows enable command or mDNS service type

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 11:16:52 +02:00
Razvan Dimescu
607470472d README: add numa lan on command to LAN discovery section
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 11:12:53 +02:00
Razvan Dimescu
0dd7700665 simplify set_lan_enabled: fix config path, TOCTOU, double iteration
- Accept config path parameter (consistent with main's resolution)
- Read first, match on NotFound (eliminates TOCTOU race)
- Single position() call replaces any() + position()
- Precise key matching via split_once('=')
- Preserve original indentation on replacement
- Extract print_lan_status helper

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 10:59:35 +02:00
Razvan Dimescu
dddc10336c add numa lan on/off CLI command, update README
- numa lan on/off toggles LAN discovery in numa.toml
- Writes [lan] section if missing, updates enabled if present
- Colored output with restart hint
- README: add lan on/off to help text

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 10:30:22 +02:00
Razvan Dimescu
4e723e8ee7 update README: mDNS, path routing, security defaults, opt-in LAN
- LAN discovery section: multicast → mDNS, add opt-in config example
- Add path-based routing to Why Numa, Local Service Proxy, comparison table, roadmap
- Update developer overrides: 25+ endpoints, mention /diagnose
- Comparison table: add path-based routing row
- Diagram: multicast → mDNS label

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 09:14:18 +02:00
Razvan Dimescu
03ca0bcb28 dashboard: route CRUD, source-aware service controls, XSS fix
- Add inline route management (+ route / x) per service in dashboard
- Expose service source (config vs api) in API response
- Only show service delete button for API-created services
- Pre-fill route port with service target_port
- Fix XSS in route path onclick handlers
- Skip renderServices refresh while route form is open (editingRoute guard)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 08:58:31 +02:00
Razvan Dimescu
c021d5a0c8 add unit tests for route matching, config defaults, and service store
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 07:49:22 +02:00
Razvan Dimescu
ed12659b26 fmt: fix proxy.rs formatting for CI rustfmt
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 07:13:58 +02:00
Razvan Dimescu
eaab406515 simplify: unify route structs, fix prefix collision, lint fixes
- Unify RouteConfig/RouteEntry/RouteResponse into single RouteEntry
- Fix prefix collision: /api no longer matches /apiary (segment boundary check)
- Add path traversal rejection in route API
- Extract MdnsAnnouncement struct (clippy type_complexity)
- cargo fmt

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 06:57:57 +02:00
Razvan Dimescu
9992418908 LAN opt-in, mDNS migration, security hardening, path-based routing
- LAN discovery disabled by default (opt-in via [lan] enabled = true)
- Replace custom JSON multicast (239.255.70.78:5390) with standard mDNS
  (_numa._tcp.local on 224.0.0.251:5353) using existing DNS parser
- Instance ID in TXT record for multi-instance self-filtering
- API and proxy bind to 127.0.0.1 by default (0.0.0.0 when LAN enabled)
- Path-based routing: longest prefix match with optional prefix stripping
  via [[services]] routes = [{path, port, strip?}]
- REST API: GET/POST/DELETE /services/{name}/routes
- Dashboard shows route lines per service when configured
- Segment-boundary route matching (prevents /api matching /apiary)
- Route path validation (rejects path traversal)

Closes #11

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 06:56:31 +02:00
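The segment-boundary matching mentioned in the commits above can be sketched as follows (a minimal illustration under assumed names, not the actual implementation):

```rust
/// Returns true if `path` matches route `prefix` on a path-segment
/// boundary: "/api" matches "/api" and "/api/users" but not "/apiary".
fn matches_route(path: &str, prefix: &str) -> bool {
    match path.strip_prefix(prefix) {
        // Exact match, or the next character starts a new segment.
        Some(rest) => rest.is_empty() || rest.starts_with('/'),
        None => false,
    }
}

fn main() {
    assert!(matches_route("/api", "/api"));
    assert!(matches_route("/api/users", "/api"));
    assert!(!matches_route("/apiary", "/api")); // no segment boundary
    println!("ok");
}
```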
Razvan Dimescu
0a43feaf1a Merge pull request #10 from razvandimescu/fix/fast-network-detect
Reduce network change detection to 5s
2026-03-22 21:47:25 +02:00
Razvan Dimescu
1bf11190d5 reduce network change detection to 5s with tiered polling
LAN IP checked every 5s (cheap UDP socket call). Full upstream
re-detection runs every 30s as safety net, or immediately when
LAN IP changes. Reduces worst-case network switch recovery from
30s to 5s.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-22 19:36:03 +02:00
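The tiered schedule above reduces to one decision per 5-second tick (illustrative only; the function name and parameters are hypothetical):

```rust
/// Cheap LAN-IP probe runs every 5s tick; a full upstream re-detection
/// runs immediately when the LAN IP changed, or every 30s as a safety net.
fn full_redetect_due(secs_since_full: u64, lan_ip_changed: bool) -> bool {
    lan_ip_changed || secs_since_full >= 30
}

fn main() {
    assert!(!full_redetect_due(5, false));  // only the cheap IP check
    assert!(full_redetect_due(5, true));    // IP changed: re-detect now
    assert!(full_redetect_due(30, false));  // safety-net interval reached
    println!("ok");
}
```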
58 changed files with 12119 additions and 857 deletions

@@ -79,8 +79,21 @@ jobs:
${{ matrix.name }}.zip
${{ matrix.name }}.zip.sha256
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- name: Publish to crates.io
run: cargo publish
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
release:
needs: build
needs: [build, publish]
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v4

.github/workflows/static.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
# Simple workflow for deploying static content to GitHub Pages
name: Deploy static content to Pages
on:
# Runs on pushes targeting the default branch
push:
branches: ["main"]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false
jobs:
# Single deploy job since we're just deploying
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install pandoc
run: sudo apt-get install -y pandoc
- name: Generate blog HTML
run: make blog
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
# Upload entire repository
path: './site'
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4

.gitignore (1 line changed)

@@ -1,3 +1,4 @@
/target
CLAUDE.md
docs/
site/blog/posts/

Cargo.lock (generated, 413 lines changed)

@@ -18,10 +18,16 @@ dependencies = [
]
[[package]]
name = "anstream"
version = "0.6.21"
name = "anes"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
checksum = "4b46cbb362ab8752921c97e041f5e366ee6297bd428a31275b9fcf1e380f7299"
[[package]]
name = "anstream"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "824a212faf96e9acacdbd09febd34438f8f711fb84e09a8916013cd7815ca28d"
dependencies = [
"anstyle",
"anstyle-parse",
@@ -34,15 +40,15 @@ dependencies = [
[[package]]
name = "anstyle"
version = "1.0.13"
version = "1.0.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
checksum = "940b3a0ca603d1eade50a4846a2afffd5ef57a9feac2c0e2ec2e14f9ead76000"
[[package]]
name = "anstyle-parse"
version = "0.2.7"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
checksum = "52ce7f38b242319f7cabaa6813055467063ecdc9d355bbb4ce0c68908cd8130e"
dependencies = [
"utf8parse",
]
@@ -67,6 +73,15 @@ dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "arc-swap"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a07d1f37ff60921c83bdfc7407723bdefe89b44b98a9b772f225c8f9d67141a6"
dependencies = [
"rustversion",
]
[[package]]
name = "asn1-rs"
version = "0.6.2"
@@ -142,9 +157,9 @@ dependencies = [
[[package]]
name = "aws-lc-sys"
version = "0.39.0"
version = "0.39.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fa7e52a4c5c547c741610a2c6f123f3881e409b714cd27e6798ef020c514f0a"
checksum = "83a25cf98105baa966497416dbd42565ce3a8cf8dbfd59803ec9ad46f3126399"
dependencies = [
"cc",
"cmake",
@@ -229,10 +244,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33"
[[package]]
name = "cc"
version = "1.2.57"
name = "cast"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a0dd1ca384932ff3641c8718a02769f1698e7563dc6974ffd03346116310423"
checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "cc"
version = "1.2.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e1e928d4b69e3077709075a938a05ffbedfa53a84c8f766efbf8220bb1ff60e1"
dependencies = [
"find-msvc-tools",
"jobserver",
@@ -253,19 +274,71 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "cmake"
version = "0.1.57"
name = "ciborium"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75443c44cd6b379beb8c5b45d85d0773baf31cce901fe7bb252f4eff3008ef7d"
checksum = "42e69ffd6f0917f5c029256a24d0161db17cea3997d185db0d35926308770f0e"
dependencies = [
"ciborium-io",
"ciborium-ll",
"serde",
]
[[package]]
name = "ciborium-io"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05afea1e0a06c9be33d539b876f1ce3692f4afea2cb41f740e7743225ed1c757"
[[package]]
name = "ciborium-ll"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57663b653d948a338bfb3eeba9bb2fd5fcfaecb9e199e87e1eda4d9e8b240fd9"
dependencies = [
"ciborium-io",
"half",
]
[[package]]
name = "clap"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b193af5b67834b676abd72466a96c1024e6a6ad978a1f484bd90b85c94041351"
dependencies = [
"clap_builder",
]
[[package]]
name = "clap_builder"
version = "4.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "714a53001bf66416adb0e2ef5ac857140e7dc3a0c48fb28b2f10762fc4b5069f"
dependencies = [
"anstyle",
"clap_lex",
]
[[package]]
name = "clap_lex"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8d4a3bb8b1e0c1050499d1815f5ab16d04f0959b233085fb31653fbfc9d98f9"
[[package]]
name = "cmake"
version = "0.1.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0f78a02292a74a88ac736019ab962ece0bc380e3f977bf72e376c5d78ff0678"
dependencies = [
"cc",
]
[[package]]
name = "colorchoice"
version = "1.0.4"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"
checksum = "1d07550c9036bf2ae0c684c4297d503f838287c83c53686d05370d0e139ae570"
[[package]]
name = "compression-codecs"
@@ -293,6 +366,73 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "criterion"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2b12d017a929603d80db1831cd3a24082f8137ce19c69e6447f54f5fc8d692f"
dependencies = [
"anes",
"cast",
"ciborium",
"clap",
"criterion-plot",
"is-terminal",
"itertools",
"num-traits",
"once_cell",
"oorandom",
"plotters",
"rayon",
"regex",
"serde",
"serde_derive",
"serde_json",
"tinytemplate",
"walkdir",
]
[[package]]
name = "criterion-plot"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b50826342786a51a89e2da3a28f1c32b06e387201bc2d19791f622c673706b1"
dependencies = [
"cast",
"itertools",
]
[[package]]
name = "crossbeam-deque"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51"
dependencies = [
"crossbeam-epoch",
"crossbeam-utils",
]
[[package]]
name = "crossbeam-epoch"
version = "0.9.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "crossbeam-utils"
version = "0.8.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28"
[[package]]
name = "crunchy"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "460fbee9c2c2f33933d720630a6a0bac33ba7053db5344fac858d4b8952d77d5"
[[package]]
name = "data-encoding"
version = "2.10.0"
@@ -340,10 +480,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813"
[[package]]
name = "env_filter"
version = "1.0.0"
name = "either"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a1c3cc8e57274ec99de65301228b537f1e4eedc1b8e0f9411c6caac8ae7308f"
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
[[package]]
name = "env_filter"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32e90c2accc4b07a8456ea0debdc2e7587bdd890680d71173a15d4ae604f6eef"
dependencies = [
"log",
"regex",
@@ -351,9 +497,9 @@ dependencies = [
[[package]]
name = "env_logger"
version = "0.11.9"
version = "0.11.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b2daee4ea451f429a58296525ddf28b45a3b64f1acf6587e2067437bb11e218d"
checksum = "0621c04f2196ac3f488dd583365b9c09be011a4ab8b9f37248ffcc8f6198b56a"
dependencies = [
"anstream",
"anstyle",
@@ -384,6 +530,12 @@ dependencies = [
"miniz_oxide",
]
[[package]]
name = "fnv"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1"
[[package]]
name = "form_urlencoded"
version = "1.2.2"
@@ -514,12 +666,48 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "h2"
version = "0.4.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54"
dependencies = [
"atomic-waker",
"bytes",
"fnv",
"futures-core",
"futures-sink",
"http",
"indexmap",
"slab",
"tokio",
"tokio-util",
"tracing",
]
[[package]]
name = "half"
version = "2.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ea2d84b969582b4b1864a92dc5d27cd2b77b622a8d79306834f1be5ba20d84b"
dependencies = [
"cfg-if",
"crunchy",
"zerocopy",
]
[[package]]
name = "hashbrown"
version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
[[package]]
name = "hermit-abi"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc0fef456e4baa96da950455cd02c081ca953b141298e41db3fc7e36b1da849c"
[[package]]
name = "http"
version = "1.4.0"
@@ -575,6 +763,7 @@ dependencies = [
"bytes",
"futures-channel",
"futures-core",
"h2",
"http",
"http-body",
"httparse",
@@ -747,14 +936,25 @@ checksum = "d98f6fed1fde3f8c21bc40a1abb88dd75e67924f9cffc3ef95607bad8017f8e2"
[[package]]
name = "iri-string"
version = "0.7.10"
version = "0.7.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c91338f0783edbd6195decb37bae672fd3b165faffb89bf7b9e6942f8b1a731a"
checksum = "d8e7418f59cc01c88316161279a7f665217ae316b388e58a0d10e29f54f1e5eb"
dependencies = [
"memchr",
"serde",
]
[[package]]
name = "is-terminal"
version = "0.4.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3640c1c38b8e4e43584d8df18be5fc6b0aa314ce6ebf51b53313d4306cca8e46"
dependencies = [
"hermit-abi",
"libc",
"windows-sys 0.61.2",
]
[[package]]
name = "is_terminal_polyfill"
version = "1.70.2"
@@ -762,10 +962,19 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
[[package]]
name = "itoa"
version = "1.0.17"
name = "itertools"
version = "0.10.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
checksum = "b0fd2260e829bddf4cb6ea802289de2f86d6a7a690192fbe91b3f46e0f2c8473"
dependencies = [
"either",
]
[[package]]
name = "itoa"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f42a60cbdf9a97f5d2305f08a87dc4e09308d1276d28c869c684d7777685682"
[[package]]
name = "jiff"
@@ -803,10 +1012,12 @@ dependencies = [
[[package]]
name = "js-sys"
version = "0.3.91"
version = "0.3.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b49715b7073f385ba4bc528e5747d02e66cb39c6146efb66b781f131f0fb399c"
checksum = "cc4c90f45aa2e6eacbe8645f77fdea542ac97a494bcd117a67df9ff4d611f995"
dependencies = [
"cfg-if",
"futures-util",
"once_cell",
"wasm-bindgen",
]
@@ -877,9 +1088,9 @@ dependencies = [
[[package]]
name = "mio"
version = "1.1.1"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc"
checksum = "50b7e5b27aa02a74bac8c3f23f448f8d87ff11f92d3aac1a6ed369ee08cc56c1"
dependencies = [
"libc",
"wasi",
@@ -908,9 +1119,9 @@ dependencies = [
[[package]]
name = "num-conv"
version = "0.2.0"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf97ec579c3c42f953ef76dbf8d55ac91fb219dde70e49aa4a6b7d74e9919050"
checksum = "c6673768db2d862beb9b39a78fdcb1a69439615d5794a1be50caa9bc92c81967"
[[package]]
name = "num-integer"
@@ -932,17 +1143,21 @@ dependencies = [
[[package]]
name = "numa"
version = "0.3.0"
version = "0.8.0"
dependencies = [
"arc-swap",
"axum",
"criterion",
"env_logger",
"futures",
"http",
"http-body-util",
"hyper",
"hyper-util",
"log",
"rcgen",
"reqwest",
"ring",
"rustls",
"serde",
"serde_json",
@@ -951,6 +1166,7 @@ dependencies = [
"tokio",
"tokio-rustls",
"toml",
"tower",
]
[[package]]
@@ -974,6 +1190,12 @@ version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
[[package]]
name = "oorandom"
version = "11.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d6790f58c7ff633d8771f42965289203411a5e5c68388703c06e14f24770b41e"
[[package]]
name = "pem"
version = "3.0.6"
@@ -1002,6 +1224,34 @@ version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "plotters"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5aeb6f403d7a4911efb1e33402027fc44f29b5bf6def3effcc22d7bb75f2b747"
dependencies = [
"num-traits",
"plotters-backend",
"plotters-svg",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "plotters-backend"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df42e13c12958a16b3f7f4386b9ab1f3e7933914ecea48da7139435263a4172a"
[[package]]
name = "plotters-svg"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51bae2ac328883f7acdfea3d66a7c35751187f870bc81f94563733a154d7a670"
dependencies = [
"plotters-backend",
]
[[package]]
name = "portable-atomic"
version = "1.13.1"
@@ -1010,9 +1260,9 @@ checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"
[[package]]
name = "portable-atomic-util"
version = "0.2.5"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a9db96d7fa8782dd8c15ce32ffe8680bbd1e978a43bf51a34d39483540495f5"
checksum = "091397be61a01d4be58e7841595bd4bfedb15f1cd54977d79b8271e94ed799a3"
dependencies = [
"portable-atomic",
]
@@ -1149,6 +1399,26 @@ dependencies = [
"getrandom 0.3.4",
]
[[package]]
name = "rayon"
version = "1.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "368f01d005bf8fd9b1206fb6fa653e6c4a81ceb1466406b81792d87c5677a58f"
dependencies = [
"either",
"rayon-core",
]
[[package]]
name = "rayon-core"
version = "1.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22e18b0f0062d30d4230b2e85ff77fdfe4326feb054b9783a3460d8435c8ab91"
dependencies = [
"crossbeam-deque",
"crossbeam-utils",
]
[[package]]
name = "rcgen"
version = "0.13.2"
@@ -1201,6 +1471,7 @@ dependencies = [
"base64",
"bytes",
"futures-core",
"h2",
"http",
"http-body",
"http-body-util",
@@ -1309,6 +1580,15 @@ version = "1.0.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9774ba4a74de5f7b1c1451ed6cd5285a32eddb5cccb8cc655a4e50009e06477f"
[[package]]
name = "same-file"
version = "1.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502"
dependencies = [
"winapi-util",
]
[[package]]
name = "serde"
version = "1.0.228"
@@ -1392,9 +1672,9 @@ checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "simd-adler32"
version = "0.3.8"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e320a6c5ad31d271ad523dcf3ad13e2767ad8b1cb8f047f75a8aeaf8da139da2"
checksum = "703d5c7ef118737c72f1af64ad2f6f8c5e1921f818cdcb97b8fe6fc69bf66214"
[[package]]
name = "slab"
@@ -1552,6 +1832,16 @@ dependencies = [
"zerovec",
]
[[package]]
name = "tinytemplate"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be4d6b5f19ff7664e8c98d03e2139cb510db9b0a60b55f8e8709b689d939b6bc"
dependencies = [
"serde",
"serde_json",
]
[[package]]
name = "tinyvec"
version = "1.11.0"
@@ -1770,6 +2060,16 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "walkdir"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "29790946404f91d9c5d06f9874efddea1dc06c5efe94541a7d6863108e3a5e4b"
dependencies = [
"same-file",
"winapi-util",
]
[[package]]
name = "want"
version = "0.3.1"
@@ -1796,9 +2096,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen"
version = "0.2.114"
version = "0.2.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6532f9a5c1ece3798cb1c2cfdba640b9b3ba884f5db45973a6f442510a87d38e"
checksum = "6523d69017b7633e396a89c5efab138161ed5aafcbc8d3e5c5a42ae38f50495a"
dependencies = [
"cfg-if",
"once_cell",
@@ -1809,23 +2109,19 @@ dependencies = [
[[package]]
name = "wasm-bindgen-futures"
version = "0.4.64"
version = "0.4.65"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9c5522b3a28661442748e09d40924dfb9ca614b21c00d3fd135720e48b67db8"
checksum = "2d1faf851e778dfa54db7cd438b70758eba9755cb47403f3496edd7c8fc212f0"
dependencies = [
"cfg-if",
"futures-util",
"js-sys",
"once_cell",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.114"
version = "0.2.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18a2d50fcf105fb33bb15f00e7a77b772945a2ee45dcf454961fd843e74c18e6"
checksum = "4e3a6c758eb2f701ed3d052ff5737f5bfe6614326ea7f3bbac7156192dc32e67"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -1833,9 +2129,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.114"
version = "0.2.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03ce4caeaac547cdf713d280eda22a730824dd11e6b8c3ca9e42247b25c631e3"
checksum = "921de2737904886b52bcbb237301552d05969a6f9c40d261eb0533c8b055fedf"
dependencies = [
"bumpalo",
"proc-macro2",
@@ -1846,18 +2142,18 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.114"
version = "0.2.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75a326b8c223ee17883a4251907455a2431acc2791c98c26279376490c378c16"
checksum = "a93e946af942b58934c604527337bad9ae33ba1d5c6900bbb41c2c07c2364a93"
dependencies = [
"unicode-ident",
]
[[package]]
name = "web-sys"
version = "0.3.91"
version = "0.3.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "854ba17bb104abfb26ba36da9729addc7ce7f06f5c0f90f3c391f8461cca21f9"
checksum = "84cde8507f4d7cfcb1185b8cb5890c494ffea65edbe1ba82cfd63661c805ed94"
dependencies = [
"js-sys",
"wasm-bindgen",
@@ -1882,6 +2178,15 @@ dependencies = [
"rustls-pki-types",
]
[[package]]
name = "winapi-util"
version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "windows-link"
version = "0.2.1"

Cargo.toml

@@ -1,23 +1,23 @@
[package]
name = "numa"
version = "0.3.1"
version = "0.8.0"
authors = ["razvandimescu <razvan@dimescu.com>"]
edition = "2021"
description = "Ephemeral DNS overrides for development and testing. Point any hostname to any endpoint. Auto-revert when you're done."
description = "Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS"
license = "MIT"
repository = "https://github.com/razvandimescu/numa"
keywords = ["dns", "proxy", "override", "development", "networking"]
keywords = ["dns", "dns-server", "ad-blocking", "reverse-proxy", "developer-tools"]
categories = ["network-programming", "development-tools"]
[dependencies]
tokio = { version = "1", features = ["rt-multi-thread", "macros", "net", "time"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros", "net", "time", "sync"] }
axum = "0.8"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
toml = "0.8"
log = "0.4"
env_logger = "0.11"
reqwest = { version = "0.12", features = ["rustls-tls", "gzip"], default-features = false }
reqwest = { version = "0.12", features = ["rustls-tls", "gzip", "http2"], default-features = false }
hyper = { version = "1", features = ["client", "http1", "server"] }
hyper-util = { version = "0.1", features = ["client-legacy", "http1", "tokio"] }
http-body-util = "0.1"
@@ -27,3 +27,22 @@ rcgen = { version = "0.13", features = ["pem", "x509-parser"] }
time = "0.3"
rustls = "0.23"
tokio-rustls = "0.26"
arc-swap = "1"
ring = "0.17"
[dev-dependencies]
criterion = { version = "0.5", features = ["html_reports"] }
tower = { version = "0.5", features = ["util"] }
http = "1"
[[bench]]
name = "hot_path"
harness = false
[[bench]]
name = "throughput"
harness = false
[[bench]]
name = "dnssec"
harness = false

Makefile

@@ -1,6 +1,6 @@
.PHONY: all build lint fmt check audit test clean deploy
.PHONY: all build lint fmt check audit test coverage bench clean deploy blog release
all: lint build
all: lint build test
build:
cargo build
@@ -19,6 +19,26 @@ audit:
test:
cargo test
coverage:
cargo tarpaulin --skip-clean --out stdout
bench:
cargo bench
blog:
@mkdir -p site/blog/posts
@for f in blog/*.md; do \
name=$$(basename "$$f" .md); \
pandoc "$$f" --template=site/blog-template.html -o "site/blog/posts/$$name.html"; \
echo " $$f → site/blog/posts/$$name.html"; \
done
release:
ifndef VERSION
$(error Usage: make release VERSION=0.8.0)
endif
./scripts/release.sh $(VERSION)
clean:
cargo clean

README.md (140 lines changed)

@@ -4,138 +4,100 @@
[![crates.io](https://img.shields.io/crates/v/numa.svg)](https://crates.io/crates/numa)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
**DNS you own. Everywhere you go.**
**DNS you own. Everywhere you go.** — [numa.rs](https://numa.rs)
A portable DNS resolver in a single binary. Block ads on any network, name your local services (`frontend.numa`), and override any hostname with auto-revert — all from your laptop, no cloud account or Raspberry Pi required.
Built from scratch in Rust. Zero DNS libraries. RFC 1035 wire protocol parsed by hand. One ~8MB binary, no PHP, no web server, no database — everything is embedded.
Built from scratch in Rust. Zero DNS libraries. RFC 1035 wire protocol parsed by hand. Caching, ad blocking, and local service domains out of the box. Optional recursive resolution from root nameservers with full DNSSEC chain-of-trust validation. One ~8MB binary, everything embedded.
![Numa dashboard](assets/hero-demo.gif)
## Quick Start
```bash
# Install
curl -fsSL https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh | sh
brew install razvandimescu/tap/numa
# or: cargo install numa
# or: curl -fsSL https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh | sh
# Run (port 53 requires root)
sudo numa
# Try it
dig @127.0.0.1 google.com # ✓ resolves normally
dig @127.0.0.1 ads.google.com # ✗ blocked → 0.0.0.0
sudo numa # port 53 requires root
```
Open the dashboard: **http://numa.numa** (or `http://localhost:5380`)
Or build from source:
```bash
git clone https://github.com/razvandimescu/numa.git && cd numa
cargo build --release
sudo ./target/release/numa
```
Set as system DNS: `sudo numa install`
## Why Numa
## Local Services
- **Ad blocking that travels with you** — 385K+ domains blocked via [Hagezi Pro](https://github.com/hagezi/dns-blocklists). Works on any network: coffee shops, hotels, airports.
- **Local service proxy** — `https://frontend.numa` instead of `localhost:5173`. Auto-generated TLS certs, WebSocket support for HMR. Like `/etc/hosts` but with a dashboard and auto-revert.
- **LAN service discovery** — Numa instances on the same network find each other automatically via multicast. Access a teammate's `api.numa` from your machine, zero config.
- **Developer overrides** — point any hostname to any IP, auto-reverts after N minutes. REST API with 22 endpoints.
- **Sub-millisecond caching** — cached lookups in 0ms. Faster than any public resolver.
- **Live dashboard** — real-time stats, query log, blocking controls, service management. LAN accessibility badges show which services are reachable from other devices.
- **macOS + Linux** — `numa install` configures system DNS, `numa service start` runs as launchd/systemd service.
## Local Service Proxy
Name your local dev services with `.numa` domains:
Name your dev services instead of remembering port numbers:
```bash
curl -X POST localhost:5380/services \
-H 'Content-Type: application/json' \
-d '{"name":"frontend","target_port":5173}'
open http://frontend.numa # → proxied to localhost:5173
```
- **HTTPS with green lock** — auto-generated local CA + per-service TLS certs
- **WebSocket** — Vite/webpack HMR works through the proxy
- **Health checks** — dashboard shows green/red status per service
- **LAN sharing** — services bound to `0.0.0.0` are automatically discoverable by other Numa instances on the network. Dashboard shows "LAN" or "local only" per service.
- **Persistent** — services survive restarts
- Or configure in `numa.toml`:
Now `https://frontend.numa` works in your browser — green lock, valid cert, WebSocket passthrough for HMR. No mkcert, no nginx, no `/etc/hosts`.
```toml
[[services]]
name = "frontend"
target_port = 5173
```
Add path-based routing (`app.numa/api → :5001`), share services across machines via LAN discovery, or configure everything in [`numa.toml`](numa.toml).
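A plausible `numa.toml` shape for the route syntax described in the changelog (`routes = [{path, port, strip?}]`); treat this as a sketch, since only those three field names are confirmed:

```toml
[[services]]
name = "app"
target_port = 5173

# Longest-prefix match on segment boundaries: app.numa/api/* goes to
# port 5001, with the /api prefix stripped before forwarding.
routes = [{ path = "/api", port = 5001, strip = true }]
```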
## LAN Service Discovery
## Ad Blocking & Privacy
Run Numa on multiple machines. They find each other automatically:
385K+ domains blocked via [Hagezi Pro](https://github.com/hagezi/dns-blocklists). Works on any network — coffee shops, hotels, airports. Travels with your laptop.
By default, Numa forwards to your existing system DNS — everything works as before, just with caching and ad blocking on top. For full privacy, set `mode = "recursive"` — Numa resolves directly from root nameservers. No upstream dependency, no single entity sees your full query pattern. DNSSEC validates the full chain of trust: RRSIG signatures, DNSKEY verification, DS delegation, NSEC/NSEC3 denial proofs. [Read how it works →](https://numa.rs/blog/posts/dnssec-from-scratch.html)
## LAN Discovery
Run Numa on multiple machines. They find each other automatically via mDNS:
```
Machine A (192.168.1.5)          Machine B (192.168.1.20)
┌──────────────────────┐         ┌──────────────────────┐
│ Numa                 │  mDNS   │ Numa                 │
│ - api (port 8000)    │◄───────►│ - grafana (3000)     │
│ - frontend (5173)    │discovery│                      │
└──────────────────────┘         └──────────────────────┘
```
From Machine B:
```bash
dig @127.0.0.1 api.numa # → 192.168.1.5
curl http://api.numa # → proxied to Machine A's port 8000
```
From Machine B: `curl http://api.numa` → proxied to Machine A's port 8000. Enable with `numa lan on`.
No configuration needed. Multicast announcements on `239.255.70.78:5390`, configurable via `[lan]` in `numa.toml`.
**Hub mode** — don't want to install Numa on every machine? Run one instance as a shared DNS server and point other devices to it:
```toml
# On the hub machine, bind to LAN interface
[server]
bind_addr = "0.0.0.0:53"
# On other devices, set DNS to the hub's IP
# They get .numa resolution, ad blocking, caching — zero install
```
**Hub mode**: run one instance with `bind_addr = "0.0.0.0:53"` and point other devices' DNS to it — they get ad blocking + `.numa` resolution without installing anything.
## How It Compares
| | Pi-hole | AdGuard Home | Unbound | Numa |
|---|---|---|---|---|
| Local service proxy + auto TLS | — | — | — | `.numa` domains, HTTPS, WebSocket |
| LAN service discovery | — | — | — | mDNS, zero config |
| Developer overrides (REST API) | — | — | — | Auto-revert, scriptable |
| Recursive resolver | — | — | Yes | Yes, with SRTT selection |
| DNSSEC validation | Optional | Optional | Yes | Yes (RSA, ECDSA, Ed25519) |
| Ad blocking | Yes | Yes | — | 385K+ domains |
| Web admin UI | Full | Full | — | Dashboard |
| Encrypted upstream (DoH) | Needs cloudflared | Yes | DoT only | Native |
| Portable (laptop) | No (appliance) | No (appliance) | Server | Single binary |
| Community maturity | 56K stars, 10 years | 33K stars | 20 years | New |
## How It Works
```
Query → Overrides → .numa TLD → Blocklist → Local Zones → Cache → Upstream
```
No DNS libraries. The wire protocol — headers, labels, compression pointers, record types — is parsed and serialized by hand. Runs on `tokio` + `axum`, spawning an async task per query.
## Performance
691ns cached round-trip. ~2.0M qps throughput. Zero heap allocations on the hot path. Recursive queries average 237ms after SRTT warmup (12x faster than round-robin nameserver selection). ECDSA P-256 DNSSEC verification: 174ns. [Benchmarks →](bench/)
## Learn More
- [Blog: Implementing DNSSEC from Scratch in Rust](https://numa.rs/blog/posts/dnssec-from-scratch.html)
- [Blog: I Built a DNS Resolver from Scratch](https://numa.rs/blog/posts/dns-from-scratch.html)
- [Configuration reference](numa.toml) — all options documented inline
- [REST API](src/api.rs) — 27 endpoints across overrides, cache, blocking, services, diagnostics
## Roadmap
- [x] DNS forwarding, caching, ad blocking, developer overrides
- [x] `.numa` local domains — auto TLS, path routing, WebSocket proxy
- [x] LAN service discovery — mDNS, cross-machine DNS + proxy
- [x] DNS-over-HTTPS — encrypted upstream
- [x] Recursive resolution + DNSSEC — chain-of-trust, NSEC/NSEC3
- [x] SRTT-based nameserver selection
- [ ] pkarr integration — self-sovereign DNS via Mainline DHT
- [ ] Global `.numa` names — DHT-backed, no registrar
## License

assets/devto-cover.png (binary, 65 KiB)

bench/README.md
# Benchmarks
Numa has two benchmark suites measuring different layers of performance.
## Micro-benchmarks (`benches/`, criterion)
Nanosecond-precision measurement of individual operations on the hot path.
No running server required — these are pure Rust unit-level benchmarks.
```sh
cargo bench # run all
cargo bench --bench hot_path # parse, serialize, cache, clone
cargo bench --bench throughput # pipeline QPS, buffer alloc
```
### What's measured
**hot_path** — individual operations:
| Benchmark | What it measures |
|-----------|-----------------|
| `buffer_parse` | Wire bytes → DnsPacket (typical response with 4 records) |
| `buffer_serialize` | DnsPacket → wire bytes |
| `packet_clone` | Full DnsPacket clone (what cache hit costs) |
| `cache_lookup_hit` | Cache lookup on a single-entry cache |
| `cache_lookup_hit_populated` | Cache lookup with 1000 entries |
| `cache_lookup_miss` | HashMap miss (baseline) |
| `cache_insert` | Insert into cache with packet clone |
| `round_trip_cached` | Full cached path: parse query → cache hit → serialize response |
**throughput** — pipeline capacity:
| Benchmark | What it measures |
|-----------|-----------------|
| `pipeline_throughput/N` | N cached queries end-to-end (parse → lookup → serialize) |
| `buffer_alloc` | BytePacketBuffer 4KB zero-init cost |
### Reading results
Criterion auto-compares against the previous run:
```
round_trip_cached time: [710.5 ns 715.2 ns 720.1 ns]
change: [-2.48% -1.85% -1.21%] (p = 0.00 < 0.05)
Performance has improved.
```
- The three values are [lower bound, estimate, upper bound] of the mean
- `change` shows the delta vs the last saved baseline
- HTML reports with charts: `target/criterion/report/index.html`
To save a named baseline for comparison:
```sh
cargo bench -- --save-baseline before
# ... make changes ...
cargo bench -- --baseline before
```
## End-to-end benchmark (`bench/dns-bench.sh`)
Real-world latency comparison using `dig` against a running Numa instance
and public resolvers. Measures millisecond-level latency including network I/O.
```sh
# Start Numa first (default port 15353 for testing)
./bench/dns-bench.sh [port] [rounds]
./bench/dns-bench.sh 15353 20   # default
```
### What's measured
- **Numa (cold)**: cache flushed before each query — measures upstream forwarding
- **Numa (cached)**: queries hit cache — measures local processing
- **System / Google / Cloudflare / Quad9**: public resolver comparison
Results saved to `bench/results.json`.
### When to use which
| Question | Use |
|----------|-----|
| Did my code change make parsing faster? | `cargo bench --bench hot_path` |
| Is the cached path still sub-microsecond? | `cargo bench --bench hot_path` (round_trip_cached) |
| How many queries/sec can we handle? | `cargo bench --bench throughput` |
| Is Numa still competitive with system resolver? | `bench/dns-bench.sh` |
| Did upstream forwarding regress? | `bench/dns-bench.sh` |

bench/results.json
{
"Numa(cold)": {
"avg": 9,
"p50": 9,
"p99": 18,
"min": 8,
"max": 18,
"count": 50
},
"Numa(cached)": {
"avg": 0,
"p50": 0,
"p99": 0,
"min": 0,
"max": 0,
"count": 50
},
"System": {
"avg": 9.1,
"p50": 8,
"p99": 44,
"min": 7,
"max": 44,
"count": 50
},
"Google": {
"avg": 22.4,
"p50": 17,
"p99": 37,
"min": 13,
"max": 37,
"count": 50
},
"Cloudflare": {
"avg": 18.7,
"p50": 14,
"p99": 132,
"min": 12,
"max": 132,
"count": 50
},
"Quad9": {
"avg": 14.5,
"p50": 13,
"p99": 43,
"min": 12,
"max": 43,
"count": 50
}
}

benches/dnssec.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use numa::dnssec;
use numa::question::QueryType;
use numa::record::DnsRecord;
// Realistic ECDSA P-256 key (64 bytes) and signature (64 bytes)
fn make_ecdsa_key() -> Vec<u8> {
vec![0xAB; 64]
}
fn make_ecdsa_sig() -> Vec<u8> {
vec![0xCD; 64]
}
// Realistic RSA-2048 key (RFC 3110 format: exp_len=3, exp=65537, mod=256 bytes)
fn make_rsa_key() -> Vec<u8> {
let mut key = vec![3u8]; // exponent length
key.extend(&[0x01, 0x00, 0x01]); // exponent = 65537
key.extend(vec![0xFF; 256]); // modulus (256 bytes = 2048 bits)
key
}
fn make_ed25519_key() -> Vec<u8> {
vec![0xEF; 32]
}
fn make_dnskey(algorithm: u8, public_key: Vec<u8>) -> DnsRecord {
DnsRecord::DNSKEY {
domain: "example.com".into(),
flags: 257,
protocol: 3,
algorithm,
public_key,
ttl: 3600,
}
}
fn make_rrsig(algorithm: u8, signature: Vec<u8>) -> DnsRecord {
DnsRecord::RRSIG {
domain: "example.com".into(),
type_covered: QueryType::A.to_num(),
algorithm,
labels: 2,
original_ttl: 300,
expiration: 2000000000,
inception: 1600000000,
key_tag: 12345,
signer_name: "example.com".into(),
signature,
ttl: 300,
}
}
fn make_rrset() -> Vec<DnsRecord> {
vec![
DnsRecord::A {
domain: "example.com".into(),
addr: "93.184.216.34".parse().unwrap(),
ttl: 300,
},
DnsRecord::A {
domain: "example.com".into(),
addr: "93.184.216.35".parse().unwrap(),
ttl: 300,
},
]
}
fn bench_key_tag(c: &mut Criterion) {
let key = make_rsa_key();
c.bench_function("key_tag_rsa2048", |b| {
b.iter(|| {
dnssec::compute_key_tag(black_box(257), black_box(3), black_box(8), black_box(&key))
})
});
let key = make_ecdsa_key();
c.bench_function("key_tag_ecdsa_p256", |b| {
b.iter(|| {
dnssec::compute_key_tag(black_box(257), black_box(3), black_box(13), black_box(&key))
})
});
}
fn bench_name_to_wire(c: &mut Criterion) {
c.bench_function("name_to_wire_short", |b| {
b.iter(|| dnssec::name_to_wire(black_box("example.com")))
});
c.bench_function("name_to_wire_long", |b| {
b.iter(|| dnssec::name_to_wire(black_box("sub.deep.nested.example.co.uk")))
});
}
fn bench_build_signed_data(c: &mut Criterion) {
let rrsig = make_rrsig(13, make_ecdsa_sig());
let rrset = make_rrset();
let rrset_refs: Vec<&DnsRecord> = rrset.iter().collect();
c.bench_function("build_signed_data_2_A_records", |b| {
b.iter(|| dnssec::build_signed_data(black_box(&rrsig), black_box(&rrset_refs)))
});
}
fn bench_verify_signature(c: &mut Criterion) {
// These will fail verification (keys/sigs are random), but we measure the
// crypto overhead — ring still does the full algorithm before returning error.
let data = vec![0u8; 128]; // typical signed data size
let rsa_key = make_rsa_key();
let rsa_sig = vec![0xAA; 256]; // RSA-2048 signature
c.bench_function("verify_rsa_sha256_2048", |b| {
b.iter(|| {
dnssec::verify_signature(
black_box(8),
black_box(&rsa_key),
black_box(&data),
black_box(&rsa_sig),
)
})
});
let ecdsa_key = make_ecdsa_key();
let ecdsa_sig = make_ecdsa_sig();
c.bench_function("verify_ecdsa_p256", |b| {
b.iter(|| {
dnssec::verify_signature(
black_box(13),
black_box(&ecdsa_key),
black_box(&data),
black_box(&ecdsa_sig),
)
})
});
let ed_key = make_ed25519_key();
let ed_sig = vec![0xBB; 64];
c.bench_function("verify_ed25519", |b| {
b.iter(|| {
dnssec::verify_signature(
black_box(15),
black_box(&ed_key),
black_box(&data),
black_box(&ed_sig),
)
})
});
}
fn bench_ds_verification(c: &mut Criterion) {
let dk = make_dnskey(8, make_rsa_key());
// Compute correct DS digest
let owner_wire = dnssec::name_to_wire("example.com");
let mut dnskey_rdata = vec![1u8, 1, 3, 8]; // flags=257, proto=3, algo=8
dnskey_rdata.extend(&make_rsa_key());
let mut input = Vec::new();
input.extend(&owner_wire);
input.extend(&dnskey_rdata);
let digest = ring::digest::digest(&ring::digest::SHA256, &input);
let ds = DnsRecord::DS {
domain: "example.com".into(),
key_tag: dnssec::compute_key_tag(257, 3, 8, &make_rsa_key()),
algorithm: 8,
digest_type: 2,
digest: digest.as_ref().to_vec(),
ttl: 86400,
};
c.bench_function("verify_ds_sha256", |b| {
b.iter(|| dnssec::verify_ds(black_box(&ds), black_box(&dk), black_box("example.com")))
});
}
criterion_group!(
dnssec_benches,
bench_key_tag,
bench_name_to_wire,
bench_build_signed_data,
bench_verify_signature,
bench_ds_verification,
);
criterion_main!(dnssec_benches);

benches/hot_path.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::net::Ipv4Addr;
use numa::buffer::BytePacketBuffer;
use numa::cache::DnsCache;
use numa::header::ResultCode;
use numa::packet::DnsPacket;
use numa::question::{DnsQuestion, QueryType};
use numa::record::DnsRecord;
fn make_response(domain: &str) -> DnsPacket {
let mut pkt = DnsPacket::new();
pkt.header.id = 0x1234;
pkt.header.response = true;
pkt.header.recursion_desired = true;
pkt.header.recursion_available = true;
pkt.header.rescode = ResultCode::NOERROR;
pkt.questions
.push(DnsQuestion::new(domain.to_string(), QueryType::A));
pkt.answers.push(DnsRecord::A {
domain: domain.to_string(),
addr: Ipv4Addr::new(93, 184, 216, 34),
ttl: 300,
});
// Typical response includes authority + additional records
pkt.authorities.push(DnsRecord::NS {
domain: domain.to_string(),
host: format!("ns1.{domain}"),
ttl: 172800,
});
pkt.authorities.push(DnsRecord::NS {
domain: domain.to_string(),
host: format!("ns2.{domain}"),
ttl: 172800,
});
pkt.resources.push(DnsRecord::A {
domain: format!("ns1.{domain}"),
addr: Ipv4Addr::new(198, 51, 100, 1),
ttl: 172800,
});
pkt
}
fn to_wire(pkt: &DnsPacket) -> Vec<u8> {
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
buf.filled().to_vec()
}
fn bench_buffer_parse(c: &mut Criterion) {
let pkt = make_response("example.com");
let wire = to_wire(&pkt);
c.bench_function("buffer_parse", |b| {
b.iter(|| {
let mut buf = BytePacketBuffer::from_bytes(black_box(&wire));
DnsPacket::from_buffer(&mut buf).unwrap()
})
});
}
fn bench_buffer_serialize(c: &mut Criterion) {
let pkt = make_response("example.com");
c.bench_function("buffer_serialize", |b| {
b.iter(|| {
let mut buf = BytePacketBuffer::new();
black_box(&pkt).write(&mut buf).unwrap();
black_box(buf.pos());
})
});
}
fn bench_packet_clone(c: &mut Criterion) {
let pkt = make_response("example.com");
c.bench_function("packet_clone", |b| b.iter(|| black_box(&pkt).clone()));
}
fn bench_cache_lookup_hit(c: &mut Criterion) {
let mut cache = DnsCache::new(10_000, 60, 86400);
let pkt = make_response("example.com");
cache.insert("example.com", QueryType::A, &pkt);
c.bench_function("cache_lookup_hit", |b| {
b.iter(|| {
cache
.lookup(black_box("example.com"), QueryType::A)
.unwrap()
})
});
}
fn bench_cache_lookup_miss(c: &mut Criterion) {
let cache = DnsCache::new(10_000, 60, 86400);
c.bench_function("cache_lookup_miss", |b| {
b.iter(|| cache.lookup(black_box("nonexistent.com"), QueryType::A))
});
}
fn bench_cache_insert(c: &mut Criterion) {
let pkt = make_response("example.com");
c.bench_function("cache_insert", |b| {
let mut cache = DnsCache::new(10_000, 60, 86400);
let mut i = 0u64;
b.iter(|| {
let domain = format!("bench-{i}.example.com");
cache.insert(&domain, QueryType::A, black_box(&pkt));
i += 1;
// Reset cache periodically to avoid filling up
if i % 5000 == 0 {
cache.clear();
}
})
});
}
fn bench_round_trip(c: &mut Criterion) {
// Simulates the cached hot path: parse query → cache hit → serialize response
let query_pkt = {
let mut q = DnsPacket::new();
q.header.id = 0xABCD;
q.header.recursion_desired = true;
q.questions
.push(DnsQuestion::new("example.com".to_string(), QueryType::A));
q
};
let query_wire = to_wire(&query_pkt);
let response = make_response("example.com");
let mut cache = DnsCache::new(10_000, 60, 86400);
cache.insert("example.com", QueryType::A, &response);
c.bench_function("round_trip_cached", |b| {
b.iter(|| {
// 1. Parse incoming query
let mut buf = BytePacketBuffer::from_bytes(black_box(&query_wire));
let query = DnsPacket::from_buffer(&mut buf).unwrap();
let qname = &query.questions[0].name;
let qtype = query.questions[0].qtype;
// 2. Cache lookup
let mut resp = cache.lookup(qname, qtype).unwrap();
resp.header.id = query.header.id;
// 3. Serialize response
let mut resp_buf = BytePacketBuffer::new();
resp.write(&mut resp_buf).unwrap();
black_box(resp_buf.pos());
})
});
}
fn bench_cache_populated_lookup(c: &mut Criterion) {
// Benchmark with a realistically populated cache (1000 entries)
let mut cache = DnsCache::new(10_000, 60, 86400);
for i in 0..1000 {
let domain = format!("domain-{i}.example.com");
let pkt = make_response(&domain);
cache.insert(&domain, QueryType::A, &pkt);
}
c.bench_function("cache_lookup_hit_populated", |b| {
b.iter(|| {
cache
.lookup(black_box("domain-500.example.com"), QueryType::A)
.unwrap()
})
});
}
criterion_group!(
benches,
bench_buffer_parse,
bench_buffer_serialize,
bench_packet_clone,
bench_cache_lookup_hit,
bench_cache_lookup_miss,
bench_cache_insert,
bench_round_trip,
bench_cache_populated_lookup,
);
criterion_main!(benches);

benches/throughput.rs
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use std::net::Ipv4Addr;
use numa::buffer::BytePacketBuffer;
use numa::header::ResultCode;
use numa::packet::DnsPacket;
use numa::question::{DnsQuestion, QueryType};
use numa::record::DnsRecord;
fn make_query_wire(domain: &str) -> Vec<u8> {
let mut q = DnsPacket::new();
q.header.id = 0xABCD;
q.header.recursion_desired = true;
q.questions
.push(DnsQuestion::new(domain.to_string(), QueryType::A));
let mut buf = BytePacketBuffer::new();
q.write(&mut buf).unwrap();
buf.filled().to_vec()
}
fn make_response(domain: &str) -> DnsPacket {
let mut pkt = DnsPacket::new();
pkt.header.id = 0xABCD;
pkt.header.response = true;
pkt.header.recursion_desired = true;
pkt.header.recursion_available = true;
pkt.header.rescode = ResultCode::NOERROR;
pkt.questions
.push(DnsQuestion::new(domain.to_string(), QueryType::A));
pkt.answers.push(DnsRecord::A {
domain: domain.to_string(),
addr: Ipv4Addr::new(93, 184, 216, 34),
ttl: 300,
});
pkt
}
/// Simulates the complete cached query pipeline (sans network I/O):
/// parse → cache lookup → TTL adjust → serialize response
fn simulate_cached_pipeline(query_wire: &[u8], cache: &numa::cache::DnsCache) -> usize {
let mut buf = BytePacketBuffer::from_bytes(query_wire);
let query = DnsPacket::from_buffer(&mut buf).unwrap();
let q = &query.questions[0];
let mut resp = cache.lookup(&q.name, q.qtype).unwrap();
resp.header.id = query.header.id;
let mut resp_buf = BytePacketBuffer::new();
resp.write(&mut resp_buf).unwrap();
resp_buf.pos()
}
fn bench_pipeline_throughput(c: &mut Criterion) {
let domains: Vec<String> = (0..100)
.map(|i| format!("domain-{i}.example.com"))
.collect();
let mut cache = numa::cache::DnsCache::new(10_000, 60, 86400);
for d in &domains {
cache.insert(d, QueryType::A, &make_response(d));
}
let query_wires: Vec<Vec<u8>> = domains.iter().map(|d| make_query_wire(d)).collect();
let mut group = c.benchmark_group("pipeline_throughput");
for count in [1, 10, 100] {
group.throughput(Throughput::Elements(count));
group.bench_with_input(BenchmarkId::from_parameter(count), &count, |b, &count| {
let mut idx = 0usize;
b.iter(|| {
for _ in 0..count {
let wire = &query_wires[idx % query_wires.len()];
simulate_cached_pipeline(wire, &cache);
idx += 1;
}
});
});
}
group.finish();
}
/// Measures the overhead of BytePacketBuffer allocation + zero-init
fn bench_buffer_alloc(c: &mut Criterion) {
c.bench_function("buffer_alloc", |b| {
b.iter(|| {
let buf = BytePacketBuffer::new();
criterion::black_box(buf.pos());
})
});
}
criterion_group!(benches, bench_pipeline_throughput, bench_buffer_alloc,);
criterion_main!(benches);

blog/dns-from-scratch.md
---
title: I Built a DNS Resolver from Scratch in Rust
description: How DNS actually works at the wire level — label compression, TTL tricks, DoH, and what surprised me building a resolver with zero DNS libraries.
date: March 2026
---
I wanted to understand how DNS actually works. Not the "it translates domain names to IP addresses" explanation — the actual bytes on the wire. What does a DNS packet look like? How does label compression work? Why is everything crammed into 512 bytes?
So I built one from scratch in Rust. No `hickory-dns`, no `trust-dns`, no `simple-dns`. The entire RFC 1035 wire protocol — headers, labels, compression pointers, record types — parsed and serialized by hand. It started as a weekend learning project, became a side project I kept coming back to over 6 years, and eventually turned into [Numa](https://github.com/razvandimescu/numa) — which I now use as my actual system DNS.
A note on terminology: Numa supports two resolution modes. *Forward* mode relays queries to an upstream (Quad9, Cloudflare, or any DoH provider). *Recursive* mode walks the delegation chain from root servers itself — iterative queries to root, TLD, and authoritative nameservers, with full DNSSEC validation. In both modes, Numa does useful things with your DNS traffic locally (caching, ad blocking, overrides, local service domains) before resolving what it can't answer. This post covers the wire protocol and forwarding path; [the next post](/blog/posts/dnssec-from-scratch.html) covers recursive resolution and DNSSEC.
Here's what surprised me along the way.
## What does a DNS packet actually look like?
You can see a real one yourself. Run this:
```bash
dig @127.0.0.1 example.com A +noedns
```
```
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15242
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 53 IN A 104.18.27.120
example.com. 53 IN A 104.18.26.120
```
That's the human-readable version. But what's actually on the wire? A DNS query for `example.com A` is just 29 bytes:
```
ID Flags QCount ACount NSCount ARCount
┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐
Header: AB CD 01 00 00 01 00 00 00 00 00 00
└────┘ └────┘ └────┘ └────┘ └────┘ └────┘
↑ ↑ ↑
│ │ └─ 1 question, 0 answers, 0 authority, 0 additional
│ └─ Standard query, recursion desired
└─ Random ID (we'll match this in the response)
Question: 07 65 78 61 6D 70 6C 65 03 63 6F 6D 00 00 01 00 01
── ───────────────────── ── ───────── ── ───── ─────
7 e x a m p l e 3 c o m end A IN
↑ ↑ ↑
└─ length prefix └─ length └─ root label (end of name)
```
12 bytes of header + 17 bytes of question = 29 bytes to ask "what's the IP for example.com?" Compare that to an HTTP request for the same information — you'd need hundreds of bytes just for headers.
We can send exactly those bytes and capture what comes back:
```python
python3 -c "
import socket
# Hand-craft a DNS query: header (12 bytes) + question (17 bytes)
q = b'\xab\xcd\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00' # header
q += b'\x07example\x03com\x00\x00\x01\x00\x01' # question
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(q, ('127.0.0.1', 53))
resp = s.recv(512)
for i in range(0, len(resp), 16):
h = ' '.join(f'{b:02x}' for b in resp[i:i+16])
a = ''.join(chr(b) if 32<=b<127 else '.' for b in resp[i:i+16])
print(f'{i:08x} {h:<48s} {a}')
"
```
```
00000000 ab cd 81 80 00 01 00 02 00 00 00 00 07 65 78 61 .............exa
00000010 6d 70 6c 65 03 63 6f 6d 00 00 01 00 01 07 65 78 mple.com......ex
00000020 61 6d 70 6c 65 03 63 6f 6d 00 00 01 00 01 00 00 ample.com.......
00000030 00 19 00 04 68 12 1b 78 07 65 78 61 6d 70 6c 65 ....h..x.example
00000040 03 63 6f 6d 00 00 01 00 01 00 00 00 19 00 04 68 .com...........h
00000050 12 1a 78 ..x
```
83 bytes back. Let's annotate the response:
```
ID Flags QCount ACount NSCount ARCount
┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐
Header: AB CD 81 80 00 01 00 02 00 00 00 00
└────┘ └────┘ └────┘ └────┘ └────┘ └────┘
↑ ↑ ↑ ↑
│ │ │ └─ 2 answers
│ │ └─ 1 question (echoed back)
│ └─ Response flag set, recursion available
└─ Same ID as our query
Question: 07 65 78 61 6D 70 6C 65 03 63 6F 6D 00 00 01 00 01
(same as our query — echoed back)
Answer 1: 07 65 78 61 6D 70 6C 65 03 63 6F 6D 00 00 01 00 01
───────────────────────────────────── ── ───── ─────
e x a m p l e . c o m end A IN
00 00 00 19 00 04 68 12 1B 78
─────────── ───── ───────────
TTL: 25s len:4 104.18.27.120
Answer 2: (same domain repeated) 00 01 00 01 00 00 00 19 00 04 68 12 1A 78
───────────
104.18.26.120
```
Notice something wasteful? The domain `example.com` appears *three times* — once in the question, twice in the answers. That's 39 bytes of repeated names in an 83-byte packet. DNS has a solution for this — but first, the overall structure.
The whole thing fits in a single UDP datagram. The structure is:
```
+--+--+--+--+--+--+--+--+
| Header | 12 bytes: ID, flags, counts
+--+--+--+--+--+--+--+--+
| Questions | What you're asking
+--+--+--+--+--+--+--+--+
| Answers | The response records
+--+--+--+--+--+--+--+--+
| Authorities | NS records for the zone
+--+--+--+--+--+--+--+--+
| Additional | Extra helpful records
+--+--+--+--+--+--+--+--+
```
In Rust, parsing the header is just reading 12 bytes and unpacking the flags:
```rust
pub fn read(buffer: &mut BytePacketBuffer) -> Result<DnsHeader> {
let id = buffer.read_u16()?;
let flags = buffer.read_u16()?;
// Flags pack 9 fields into 16 bits
let recursion_desired = (flags & (1 << 8)) > 0;
let truncated_message = (flags & (1 << 9)) > 0;
let authoritative_answer = (flags & (1 << 10)) > 0;
let opcode = (flags >> 11) & 0x0F;
let response = (flags & (1 << 15)) > 0;
// ... and so on
}
```
No padding, no alignment, no JSON overhead. DNS was designed in 1987 when every byte counted, and honestly? The wire format is kind of beautiful in its efficiency.
## Label compression is the clever part
Remember how `example.com` appeared three times in that 83-byte response? Domain names in DNS are stored as a sequence of **labels** — length-prefixed segments:
```
example.com → [7]example[3]com[0]
```
The `[7]` means "the next 7 bytes are a label." The `[0]` is the root label (end of name). That's 13 bytes per occurrence, 39 bytes for three repetitions. In a response with authority and additional records, domain names can account for half the packet.
DNS solves this with **compression pointers** — if the top two bits of a length byte are `11`, the remaining 14 bits are an offset back into the packet where the rest of the name can be found. A well-compressed version of our response would replace the answer names with `C0 0C` — a 2-byte pointer to offset 12 where `example.com` first appears in the question section. That turns 39 bytes of names into 17 (13 + 2 + 2). Our upstream didn't bother compressing, but many do — especially when related domains appear:
```
Offset 0x20: [6]google[3]com[0] ← full name
Offset 0x40: [4]mail[0xC0][0x20] ← "mail" + pointer to offset 0x20
Offset 0x50: [3]www[0xC0][0x20] ← "www" + pointer to offset 0x20
```
Pointers can chain — a pointer can point to another pointer. Parsing this correctly requires tracking your position in the buffer and handling jumps:
```rust
pub fn read_qname(&mut self, outstr: &mut String) -> Result<()> {
let mut pos = self.pos();
let mut jumped = false;
let mut delim = "";
loop {
let len = self.get(pos)?;
// Top two bits set = compression pointer
if (len & 0xC0) == 0xC0 {
if !jumped {
self.seek(pos + 2)?; // advance past the pointer
}
let offset = (((len as u16) ^ 0xC0) << 8) | self.get(pos + 1)? as u16;
pos = offset as usize;
jumped = true;
continue;
}
pos += 1;
if len == 0 { break; } // root label
outstr.push_str(delim);
outstr.push_str(&self.get_range(pos, len as usize)?
.iter().map(|&b| b as char).collect::<String>());
delim = ".";
pos += len as usize;
}
if !jumped {
self.seek(pos)?;
}
Ok(())
}
```
This one bit me: when you follow a pointer, you must *not* advance the buffer's read position past where you jumped from. The pointer is 2 bytes, so you advance by 2, but the actual label data lives elsewhere in the packet. If you follow the pointer and also advance past it, you'll skip over the next record entirely. I spent a fun evening debugging that one.
## TTL adjustment on read, not write
This is my favorite trick in the whole codebase. I initially stored the remaining TTL and decremented it, which meant I needed a background thread to sweep expired entries. It worked, but it felt wrong — too much machinery for something simple.
The cleaner approach: store the original TTL and the timestamp when the record was cached. On read, compute `remaining = original_ttl - elapsed`. If it's zero or negative, the entry is stale — evict it lazily.
```rust
pub fn lookup(&mut self, domain: &str, qtype: QueryType) -> Option<DnsPacket> {
let key = (domain.to_lowercase(), qtype);
let entry = self.entries.get(&key)?;
let elapsed = entry.cached_at.elapsed().as_secs() as u32;
if elapsed >= entry.original_ttl {
self.entries.remove(&key);
return None;
}
// Adjust TTLs in the response to reflect remaining time
let mut packet = entry.packet.clone();
for answer in &mut packet.answers {
answer.set_ttl(entry.original_ttl.saturating_sub(elapsed));
}
Some(packet)
}
```
No background thread. No timer. Entries expire lazily. The cache stays consistent because every consumer sees the adjusted TTL.
## The resolution pipeline
Each incoming UDP packet spawns a tokio task. Each task walks a deterministic pipeline — every step either answers or passes to the next:
```
┌─────────────────────────────────────────────────────┐
│ Numa Resolution Pipeline │
└─────────────────────────────────────────────────────┘
Query ──→ Overrides ──→ .numa TLD ──→ Blocklist ──→ Zones ──→ Cache ──→ DoH
│ │ │ │ │ │ │
│ │ match? │ match? │ blocked? │ match? │ hit? │
│ ↓ ↓ ↓ ↓ ↓ ↓
│ respond respond 0.0.0.0 respond respond forward
│ (auto-reverts (reverse (ad gone) (static (TTL to upstream
│ after N min) proxy+TLS) records) adjusted) (encrypted)
└──→ Each step either answers or passes to the next.
```
This is where "from scratch" pays off. Want conditional forwarding for Tailscale? Insert a step before the upstream. Want to override `api.example.com` for 5 minutes while debugging? Add an entry in the overrides step — it auto-expires. A DNS library would have hidden this pipeline behind an opaque `resolve()` call.
## DNS-over-HTTPS: the "wait, that's it?" moment
The most recent addition, and honestly the one that surprised me with how little code it needed. DoH (RFC 8484) is conceptually simple: take the exact same DNS wire-format packet you'd send over UDP, POST it to an HTTPS endpoint with `Content-Type: application/dns-message`, and parse the response the same way. Same bytes, different transport.
```rust
async fn forward_doh(
query: &DnsPacket,
url: &str,
client: &reqwest::Client,
timeout_duration: Duration,
) -> Result<DnsPacket> {
let mut send_buffer = BytePacketBuffer::new();
query.write(&mut send_buffer)?;
let resp = timeout(timeout_duration, client
.post(url)
.header("content-type", "application/dns-message")
.header("accept", "application/dns-message")
.body(send_buffer.filled().to_vec())
.send())
.await??.error_for_status()?;
let bytes = resp.bytes().await?;
let mut recv_buffer = BytePacketBuffer::from_bytes(&bytes);
DnsPacket::from_buffer(&mut recv_buffer)
}
```
The one gotcha that cost me an hour: Quad9 and other DoH providers require HTTP/2. My first attempt used HTTP/1.1 and got a cryptic 400 Bad Request. Adding the `http2` feature to reqwest fixed it. The upside of HTTP/2? Connection multiplexing means subsequent queries reuse the TLS session — ~16ms vs ~50ms for the first query. Free performance.
The `Upstream` enum dispatches between UDP and DoH based on the URL scheme:
```rust
pub enum Upstream {
Udp(SocketAddr),
Doh { url: String, client: reqwest::Client },
}
```
If the configured address starts with `https://`, it's DoH. Otherwise, plain UDP. Simple, no toggles.
## "Why not just use dnsmasq + nginx + mkcert?"
You absolutely can — those are mature, battle-tested tools. The difference is integration: with dnsmasq + nginx + mkcert, you're configuring three tools with three config formats. Numa puts the DNS record, reverse proxy, and TLS cert behind one API call:
```bash
curl -X POST localhost:5380/services -d '{"name":"frontend","target_port":5173}'
```
That creates the DNS entry, generates a TLS certificate, and starts proxying — including WebSocket upgrade for Vite HMR. One command, no config files. Having full control over the resolution pipeline is what makes auto-revert overrides and LAN discovery possible.
## What I learned
**DNS is a 40-year-old protocol that works remarkably well.** The wire format is tight, the caching model is elegant, and the hierarchical delegation system has scaled to billions of queries per day. The things people complain about (DNSSEC complexity, lack of encryption) are extensions bolted on decades later, not flaws in the original design.
**The hard parts aren't where you'd expect.** Parsing the wire protocol was straightforward (RFC 1035 is well-written). The hard parts were: browsers rejecting wildcard certs under single-label TLDs, macOS resolver quirks (`scutil` vs `/etc/resolv.conf`), and getting multiple processes to bind the same multicast port (`SO_REUSEPORT` on macOS, `SO_REUSEADDR` on Linux).
**Learn the vocabulary before you show up.** I initially called Numa a "DNS resolver" and got corrected — it's a forwarding resolver. The distinction matters to people who work with DNS professionally, and being sloppy about it cost me credibility in my first community posts.
## What's next
**Update (March 2026):** Recursive resolution and DNSSEC validation are now shipped. Numa resolves from root nameservers with full chain-of-trust verification (RSA/SHA-256, ECDSA P-256, Ed25519) and NSEC/NSEC3 authenticated denial of existence.
**[Read the follow-up: Implementing DNSSEC from Scratch in Rust →](/blog/posts/dnssec-from-scratch.html)**
Still on the roadmap:
- **DoT (DNS-over-TLS)** — DoH was first because it passes through captive portals and corporate firewalls (port 443 vs 853). DoT has less framing overhead, so it's faster. Both will be available.
- **[pkarr](https://github.com/pubky/pkarr) integration** — self-sovereign DNS via the Mainline BitTorrent DHT. Publish DNS records signed with your Ed25519 key, no registrar needed.
[github.com/razvandimescu/numa](https://github.com/razvandimescu/numa)

blog/dnssec-from-scratch.md
---
title: Implementing DNSSEC from Scratch in Rust
description: Recursive resolution from root hints, chain-of-trust validation, NSEC/NSEC3 denial proofs, and what I learned implementing DNSSEC with zero DNS libraries.
date: March 2026
---
In the [previous post](/blog/posts/dns-from-scratch.html) I covered how DNS works at the wire level — packet format, label compression, TTL caching, DoH. Numa was a forwarding resolver: it parsed packets, did useful things locally, and relayed the rest to Cloudflare or Quad9.
That post ended with "recursive resolution and DNSSEC are on the roadmap." This post is about building both.
The short version: Numa now resolves from root nameservers with iterative queries, validates the full DNSSEC chain of trust, and cryptographically proves that non-existent domains don't exist. No upstream dependency. No DNS libraries. Just `ring` for the crypto primitives and a lot of RFC reading.
## Why recursive?
A forwarding resolver trusts its upstream. When you ask Quad9 for `cloudflare.com`, you trust that Quad9 returns the real answer. If Quad9 lies, gets compromised, or is legally compelled to redirect you — you have no way to know.
A recursive resolver doesn't trust anyone. It starts at the root nameservers (operated by 12 independent organizations) and follows the delegation chain: root → `.com` TLD → `cloudflare.com` authoritative servers. Each server only answers for its own zone. No single entity sees your full query pattern.
DNSSEC adds cryptographic proof to each step. The root signs `.com`'s key. `.com` signs `cloudflare.com`'s key. `cloudflare.com` signs its own records. If any step is tampered with, the chain breaks and Numa rejects the response.
## The iterative resolution loop
Recursive resolution is a misnomer — the resolver actually uses *iterative* queries. It asks root "where is `cloudflare.com`?", root says "I don't know, but here are the `.com` nameservers." It asks `.com`, which says "here are cloudflare's nameservers." It asks those, and gets the answer.
```
resolve("cloudflare.com", A)
→ ask 198.41.0.4 (a.root-servers.net)
← "try .com: ns1.gtld-servers.net (192.5.6.30)" [referral + glue]
→ ask 192.5.6.30 (ns1.gtld-servers.net)
← "try cloudflare: ns1.cloudflare.com (173.245.58.51)" [referral + glue]
→ ask 173.245.58.51 (ns1.cloudflare.com)
← "104.16.132.229" [answer]
```
The implementation (`src/recursive.rs`) is a loop with three possible outcomes per query:
1. **Answer** — the server knows the record. Cache it, return it.
2. **Referral** — the server delegates to another zone. Extract NS records and glue (A/AAAA records for the nameservers, included in the additional section to avoid a chicken-and-egg problem), then query the next server.
3. **NXDOMAIN/REFUSED** — the name doesn't exist or the server refuses. Cache the negative result.
CNAME chasing adds complexity: if you ask for `www.cloudflare.com` and get a CNAME to `cloudflare.com`, you need to restart resolution for the new name. I cap this at 8 levels.
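The three-way branch can be sketched as a classification over just the fields the decision needs (simplified types, not the real `src/recursive.rs` structures):

```rust
// Outcome of one query in the iterative resolution loop.
#[derive(Debug, PartialEq)]
enum Step {
    Answer,   // server knows the record: cache it, return it
    Referral, // NS + glue for a child zone: query the next server
    Negative, // NXDOMAIN/REFUSED: cache the negative result
}

// The handful of response fields the classification looks at.
struct Response {
    rcode: u8,           // 0 = NOERROR, 3 = NXDOMAIN, 5 = REFUSED
    answers: usize,      // records in the answer section
    authority_ns: usize, // NS records in the authority section
}

fn classify(r: &Response) -> Step {
    if r.rcode == 3 || r.rcode == 5 {
        Step::Negative
    } else if r.answers > 0 {
        Step::Answer
    } else if r.authority_ns > 0 {
        Step::Referral
    } else {
        // NOERROR with no data and no referral: treat as negative (NODATA)
        Step::Negative
    }
}
```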
### TLD priming
Cold-cache resolution is slow. Every query needs root → TLD → authoritative, each with its own network round-trip. For the first query to `example.com`, that's three serial UDP round-trips before you get an answer.
TLD priming solves this. On startup, Numa queries root for NS records of 34 common TLDs (`.com`, `.org`, `.net`, `.io`, `.dev`, plus EU ccTLDs), caching NS records, glue addresses, DS records, and DNSKEY records. After priming, the first query to any `.com` domain skips root entirely — it already knows where `.com`'s nameservers are, and already has the DNSSEC keys needed to validate the response.
## DNSSEC chain of trust
DNSSEC doesn't encrypt DNS traffic. It *signs* it. Every DNS record can have an accompanying RRSIG (signature) record. The resolver verifies the signature against the zone's DNSKEY, then verifies that DNSKEY against the parent zone's DS (delegation signer) record, walking up until it reaches the root trust anchor — a hardcoded public key that IANA publishes and the entire internet agrees on.
<img src="../dnssec-chain.svg" alt="DNSSEC chain of trust diagram — verifying cloudflare.com from answer through .com TLD to root trust anchor">
### How keys get there
The domain owner generates the DNSKEY keypair — typically their DNS provider (Cloudflare, etc.) does this. The owner then submits the DS record (a hash of their DNSKEY) to their registrar (Namecheap, GoDaddy), who passes it to the registry (Verisign for `.com`). The registry signs it into the TLD zone, and IANA signs the TLD's DS into the root. Trust flows up; keys flow down.
The irony: you "own" your DNSSEC keys, but your registrar controls whether the DS record gets published. If they remove it — by mistake, by policy, or by court order — your DNSSEC chain breaks silently.
### The trust anchor
IANA's root KSK (Key Signing Key) has key tag 20326, algorithm 8 (RSA/SHA-256), and a 2048-bit RSA public key. It was last rolled in 2018. I hardcode it as a `const` array — this is the one thing in the entire system that requires out-of-band trust.
```rust
const ROOT_KSK_PUBLIC_KEY: &[u8] = &[
0x03, 0x01, 0x00, 0x01, 0xac, 0xff, 0xb4, 0x09,
// ... 260 bytes total: exponent length, exponent, 2048-bit modulus
];
```
When IANA rolls this key (rare — the previous key lasted from 2010 to 2018), every DNSSEC validator on the internet needs updating. For Numa, that means a binary update. Something to watch. Every DNSKEY also has a key tag — a 16-bit checksum over its RDATA. The first test I wrote: compute the root KSK's key tag and assert it equals 20326. Instant confidence that the encoding is correct.
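The checksum itself is tiny. This is the RFC 4034 Appendix B algorithm, transcribed (it applies to every algorithm except the obsolete RSA/MD5):

```rust
/// RFC 4034 Appendix B: 16-bit key tag over the DNSKEY RDATA
/// (flags, protocol, algorithm, public key), summed as big-endian
/// 16-bit words with the carry folded back in.
fn key_tag(rdata: &[u8]) -> u16 {
    let mut acc: u32 = 0;
    for (i, b) in rdata.iter().enumerate() {
        acc += if i % 2 == 0 {
            (*b as u32) << 8 // even offsets are the high byte of a word
        } else {
            *b as u32
        };
    }
    acc += (acc >> 16) & 0xFFFF; // fold the carry
    (acc & 0xFFFF) as u16
}
```

Feed it the root KSK's RDATA and the result should be 20326; that is exactly the first test described above.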
## The crypto
Numa uses `ring` for all cryptographic operations. Three algorithms cover the vast majority of signed zones:
| Algorithm | ID | Usage | Verify time |
|---|---|---|---|
| RSA/SHA-256 | 8 | Root, most TLDs | 10.9 µs |
| ECDSA P-256 | 13 | Cloudflare, many modern zones | 174 ns |
| Ed25519 | 15 | Newer zones | ~200 ns |
### RSA key format conversion
DNS stores RSA public keys in RFC 3110 format (exponent length, exponent, modulus). `ring` expects PKCS#1 DER (ASN.1 encoded). Converting between them means writing a minimal ASN.1 encoder with leading-zero stripping and sign-bit padding. Getting this wrong produces keys that `ring` silently rejects — one of the harder bugs to track down.
### ECDSA is simpler
ECDSA P-256 keys in DNS are 64 bytes (x + y coordinates). `ring` expects uncompressed point format: `0x04` prefix + 64 bytes. One line:
```rust
let mut uncompressed = Vec::with_capacity(65);
uncompressed.push(0x04);
uncompressed.extend_from_slice(public_key); // 64 bytes from DNS
```
Signatures are also 64 bytes (r + s), used directly. No format conversion needed.
### Building the signed data
RRSIG verification doesn't sign the DNS packet — it signs a canonical form of the records. Building this correctly is the most detail-sensitive part of DNSSEC. The signed data is:
1. RRSIG RDATA fields (type covered, algorithm, labels, original TTL, expiration, inception, key tag, signer name) — *without* the signature itself
2. For each record in the RRset: owner name (lowercased, uncompressed) + type + class + original TTL (from the RRSIG, not the record's current TTL) + RDATA length + canonical RDATA
The records must be sorted by their canonical wire-format representation. Owner names must be lowercased. The TTL must be the *original* TTL from the RRSIG, not the decremented TTL from caching.
Getting any of these details wrong — wrong TTL, wrong case, wrong sort order, wrong RDATA encoding — produces a valid-looking but incorrect signed data blob, and `ring` returns a signature mismatch with no diagnostic information. I spent more time debugging signed data construction than any other part of DNSSEC.
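A sketch of the per-record serialization (step 2 above). The helper name is hypothetical, and it assumes the caller already lowercased and decompressed the owner name:

```rust
// Append one RR's contribution to the signed-data blob.
// owner: uncompressed, lowercased wire-format name.
// original_ttl: taken from the RRSIG, not the cached record.
fn push_rr(
    buf: &mut Vec<u8>,
    owner: &[u8],
    rtype: u16,
    class: u16,
    original_ttl: u32,
    rdata: &[u8],
) {
    buf.extend_from_slice(owner);
    buf.extend_from_slice(&rtype.to_be_bytes());
    buf.extend_from_slice(&class.to_be_bytes());
    buf.extend_from_slice(&original_ttl.to_be_bytes());
    buf.extend_from_slice(&(rdata.len() as u16).to_be_bytes());
    buf.extend_from_slice(rdata); // canonical RDATA encoding
}
```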
## Proving a name doesn't exist
Verifying that `cloudflare.com` has a valid A record is one thing. Proving that `doesnotexist.cloudflare.com` *doesn't* exist — cryptographically, in a way that can't be forged — is harder.
### NSEC
NSEC records form a chain. Each NSEC says "the next name in this zone after me is X, and at my name these record types exist." If you query `beta.example.com` and the zone has `alpha.example.com → NSEC → gamma.example.com`, the gap proves `beta` doesn't exist — there's nothing between `alpha` and `gamma`.
For NXDOMAIN proofs, RFC 4035 §5.4 requires two things:
1. An NSEC record whose gap covers the queried name
2. An NSEC record proving no wildcard exists at the closest encloser
The canonical DNS name ordering (RFC 4034 §6.1) compares labels right-to-left, case-insensitive. `a.example.com` < `b.example.com` because at the `example.com` level they're equal, then `a` < `b`. But `z.example.com` < `a.example.org` because `.com` < `.org` at the TLD level.
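That ordering can be sketched as a comparison over label sequences (a hypothetical helper that assumes the name is already split into labels):

```rust
use std::cmp::Ordering;

// RFC 4034 §6.1: compare names label by label, rightmost label first,
// case-insensitively; a name that is a proper prefix sorts first.
fn canonical_cmp(a: &[&str], b: &[&str]) -> Ordering {
    let la = a.iter().rev().map(|l| l.to_ascii_lowercase());
    let lb = b.iter().rev().map(|l| l.to_ascii_lowercase());
    la.cmp(lb) // lexicographic over the reversed label sequences
}
```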
### NSEC3
NSEC3 solves NSEC's zone enumeration problem — with NSEC, you can walk the chain and discover every name in the zone. NSEC3 hashes the names first (iterated SHA-1 with a salt), so the NSEC3 chain reveals hashes, not names.
The proof is a 3-part closest encloser proof (RFC 5155 §8.4): find an ancestor whose hash matches an NSEC3 owner, prove the next-closer name falls within a hash range gap, and prove the wildcard at the closest encloser also falls within a gap. All three must hold, or the denial is rejected.
I cap NSEC3 iterations at 500 (RFC 9276 recommends 0). Higher iteration counts are a DoS vector — each verification requires `iterations + 1` SHA-1 hashes.
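The hash iteration structure (RFC 5155 §5) looks roughly like this. The hash function is injected so the sketch stays dependency-free; the real code would use a SHA-1 primitive:

```rust
// H(name) = SHA-1(name || salt), then iterated: H(h || salt).
// Total hash invocations: iterations + 1, which is why high iteration
// counts are a DoS vector. name must be lowercased wire format.
fn nsec3_hash(
    name_wire: &[u8],
    salt: &[u8],
    iterations: u16,
    sha1: impl Fn(&[u8]) -> Vec<u8>,
) -> Vec<u8> {
    let mut input = Vec::with_capacity(name_wire.len() + salt.len());
    input.extend_from_slice(name_wire);
    input.extend_from_slice(salt);
    let mut h = sha1(&input);
    for _ in 0..iterations {
        let mut next = h.clone();
        next.extend_from_slice(salt);
        h = sha1(&next);
    }
    h // base32hex-encoded before comparison with NSEC3 owner names
}
```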
## Making it fast
Cold-cache DNSSEC validation initially required ~5 network fetches per query (DNSKEY for each zone in the chain, plus DS records). Three optimizations brought this down to ~1:
**TLD priming** (startup) — fetch root DNSKEY + each TLD's NS/DS/DNSKEY. After priming, the trust chain from root to any `.com` zone is fully cached.
**Referral DS piggybacking** — when a TLD server refers you to `cloudflare.com`'s nameservers, the authority section often includes DS records for the child zone. Cache them during resolution instead of fetching separately during validation.
**DNSKEY prefetch** — before the validation loop, scan all RRSIGs for signer zones and batch-fetch any missing DNSKEYs. This avoids serial DNSKEY fetches inside the per-RRset verification loop.
Result: a cold-cache query for `cloudflare.com` with full DNSSEC validation takes ~90ms. The TLD chain is already warm; only one DNSKEY fetch is needed (for `cloudflare.com` itself).
| Operation | Time |
|---|---|
| ECDSA P-256 verify | 174 ns |
| Ed25519 verify | ~200 ns |
| RSA/SHA-256 verify | 10.9 µs |
| DS digest (SHA-256) | 257 ns |
| Key tag computation | 2063 ns |
| Cold-cache validation (1 fetch) | ~90 ms |
The network fetch dominates. The crypto is noise.
## Surviving hostile networks
I deployed Numa as my system DNS and switched networks. Everything broke — every query SERVFAIL, 3-second timeout. The ISP blocks outbound UDP port 53 to everything except whitelisted public resolvers. Root servers, TLD servers, authoritative servers — all unreachable over UDP.
But TCP port 53 worked. Every DNS server is required to support TCP (RFC 1035 section 4.2.2). The ISP only filters UDP.
The fix has three parts:
**TCP fallback.** Every outbound query tries UDP first (800ms timeout). If UDP fails or the response is truncated, retry immediately over TCP. TCP uses a 2-byte length prefix before the DNS message — trivial to implement, and it handles DNSSEC responses that exceed the UDP payload limit.
**UDP auto-disable.** After 3 consecutive UDP failures, flip a global `AtomicBool` and skip UDP entirely — go TCP-first for all queries. This avoids burning 800ms per hop on a network where UDP will never work. The flag resets when the network changes (detected via LAN IP monitoring).
**Query minimization (RFC 7816).** When querying root servers, send only the TLD — `com` instead of `secret-project.example.com`. Root servers handle trillions of queries and are operated by 12 organizations. Minimization reduces what they learn from yours.
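The two-byte TCP framing really is trivial; a sketch (function names hypothetical):

```rust
// RFC 1035 §4.2.2: over TCP, each DNS message is preceded by a
// two-byte big-endian length.
fn frame_tcp(msg: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(msg.len() + 2);
    out.extend_from_slice(&(msg.len() as u16).to_be_bytes());
    out.extend_from_slice(msg);
    out
}

// Returns the message if the buffer holds a complete frame.
fn deframe_tcp(buf: &[u8]) -> Option<&[u8]> {
    let len = u16::from_be_bytes([*buf.first()?, *buf.get(1)?]) as usize;
    buf.get(2..2 + len)
}
```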
The result: on a network that blocks UDP:53, Numa detects the block within the first 3 queries, switches to TCP, and resolves normally at 300-500ms per cold query. Cached queries remain 0ms. No manual config change needed — switch networks and it adapts.
I wouldn't have found this without dogfooding. The code worked perfectly on my home network. It took a real hostile network to expose the assumption that UDP always works.
## What I learned
**DNSSEC is a verification system, not an encryption system.** It proves authenticity — this record was signed by the zone owner. It doesn't hide what you're querying. For privacy, you still need encrypted transport (DoH/DoT) or recursive resolution (no single upstream).
**The hardest bugs are in data serialization, not crypto.** `ring` either verifies or it doesn't — a binary answer. But getting the signed data blob exactly right (correct TTL, correct case, correct sort, correct RDATA encoding for each record type) requires extreme precision. A single wrong byte means verification fails with no hint about what's wrong.
**Negative proofs are harder than positive proofs.** Verifying a record exists: verify one RRSIG. Proving a record doesn't exist: find the right NSEC/NSEC3 records, verify their RRSIGs, check gap coverage, check wildcard denial, compute hashes. The NSEC3 closest encloser proof alone has three sub-proofs, each requiring hash computation and range checking.
**Performance optimization is about avoiding network, not avoiding CPU.** The crypto takes nanoseconds to microseconds. The network fetch takes tens of milliseconds. Every optimization that matters — TLD priming, DS piggybacking, DNSKEY prefetch — is about eliminating a round trip, not speeding up a hash.
## What's next
- **[pkarr](https://github.com/pubky/pkarr) integration** — self-sovereign DNS via the Mainline BitTorrent DHT. Your Ed25519 key is your domain. No registrar, no ICANN.
- **DoT (DNS-over-TLS)** — the last encrypted transport we don't support
The code is at [github.com/razvandimescu/numa](https://github.com/razvandimescu/numa) — the DNSSEC validation is in [`src/dnssec.rs`](https://github.com/razvandimescu/numa/blob/main/src/dnssec.rs) and the recursive resolver in [`src/recursive.rs`](https://github.com/razvandimescu/numa/blob/main/src/recursive.rs). MIT license.

deploy.sh
#!/usr/bin/env bash
set -euo pipefail
VERSION="${1:-}"
if [ -z "$VERSION" ]; then
echo "Usage: ./deploy.sh v0.5.1"
exit 1
fi
# Strip leading 'v' for Cargo.toml (accepts both "v0.5.1" and "0.5.1")
SEMVER="${VERSION#v}"
TAG="v${SEMVER}"
# Validate semver format
if ! [[ "$SEMVER" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Error: '$SEMVER' is not a valid semver (expected: X.Y.Z)"
exit 1
fi
# Check we're on main
BRANCH=$(git branch --show-current)
if [ "$BRANCH" != "main" ]; then
echo "Error: must be on main branch (currently on '$BRANCH')"
exit 1
fi
# Check working tree is clean
if [ -n "$(git status --porcelain -- ':!deploy.sh' ':!Cargo.toml' ':!Cargo.lock')" ]; then
echo "Error: working tree has uncommitted changes"
git status --short
exit 1
fi
# Check tag doesn't already exist
if git rev-parse "$TAG" >/dev/null 2>&1; then
echo "Error: tag '$TAG' already exists"
exit 1
fi
CURRENT=$(grep '^version = ' Cargo.toml | head -1 | sed 's/version = "\(.*\)"/\1/')
echo "Bumping $CURRENT → $SEMVER"
# Update Cargo.toml version (BSD sed needs -i ''; GNU sed takes -i alone)
if sed --version >/dev/null 2>&1; then
sed -i "s/^version = \"$CURRENT\"/version = \"$SEMVER\"/" Cargo.toml
else
sed -i '' "s/^version = \"$CURRENT\"/version = \"$SEMVER\"/" Cargo.toml
fi
# Update Cargo.lock (let failures print; set -e aborts the release)
cargo check --quiet
# Commit, tag, push
git add Cargo.toml Cargo.lock
git commit -m "bump version to $SEMVER"
git tag "$TAG"
git push
git push origin "$TAG"
echo ""
echo "✓ Tagged $TAG and pushed"
echo " → GitHub Actions: release binaries + crates.io publish"
echo " → Watch: gh run list --limit 1"

# Launch Drafts
## Lessons Learned
**r/selfhosted** (0 upvotes, hostile) — "replaces Pi-hole" framing triggered
defensive comparisons. Audience protects their stack.
**r/programare** (26 upvotes, 22 comments, 12K views, 90.6% ratio) — worked
because it led with technical achievement. But: "what does this offer over
/etc/hosts?" and "mature solutions exist (dnsmasq, nginx)" were the top
objections. Tool-replacement angle falls flat with generalist audiences.
**r/webdev** — removed by moderators (self-promotion rules).
Key takeaways:
- Lead with what's *unique*, not what it *replaces*
- Write like explaining to a colleague, not marketing copy
- Pick ONE hook per community — don't try to be everything
- Triple-check the GitHub link works before posting
- Authentic tone > polished bullets
- Agree with "just use X" — then show what X can't do
- Don't oversell the pkarr/token vision — one sentence max
- Benchmark request from r/programare (Mydocalm) — warm follow-up content
---
## Launch Order
~~0. **r/programare** — done (2026-03-21). 12K views, 26 upvotes, 22 comments.~~
~~1. **r/webdev** — removed by moderators.~~
~~2. **r/degoogle** — done~~
~~3. **r/node** — done~~
4. **r/coolgithubprojects** — zero friction, just post the repo
~~5. **r/sideproject** — done (2026-03-29)~~
6. **r/dns** — technical DNS audience, recursive + DNSSEC angle
7. **Show HN** — Tuesday-Thursday, 9-10 AM ET
8. **r/rust** — same day as HN, technical deep-dive
9. **r/commandline** — 24h after HN
10. **r/selfhosted** — only if HN hits front page, lead with recursive + LAN discovery
11. **r/programare follow-up** — benchmark post + recursive/DNSSEC update
---
## Community Drafts
### Show HN
**Title (71 chars):**
Show HN: I built a DNS resolver from scratch in Rust (no DNS libraries)
**Body:**
I wanted to understand how DNS actually works at the wire level, so I built
a resolver from scratch. No DNS libraries — the RFC 1035 protocol (headers,
labels, compression pointers, record types) is all hand-parsed. It started
as a learning project and turned into something I use daily as my system DNS.
What it does today:
- **Forward mode by default** — transparent proxy to your existing DNS with
caching and ad blocking. Changes nothing about your network.
- **Full recursive resolver** — set `mode = "recursive"` and it resolves from
root nameservers. No upstream dependency. CNAME chasing, TLD priming, SRTT.
- **DNSSEC validation** — chain-of-trust verification from root KSK.
RSA/SHA-256, ECDSA P-256, Ed25519. Sets the AD bit on verified responses.
- **Ad blocking** — ~385K domains via Hagezi Pro, works on any network
- **DNS-over-HTTPS** — encrypted upstream (Quad9, Cloudflare, or any
provider) as an alternative to recursive mode
- **`.numa` local domains** — register `frontend.numa → localhost:5173` and
it creates both the DNS record and an HTTP/HTTPS reverse proxy with
auto-generated TLS certs. WebSocket passthrough works (Vite HMR).
- **LAN service discovery** — run Numa on two machines, they find each other
via UDP multicast. Zero config.
- **Developer overrides** — point any hostname to any IP, auto-reverts
after N minutes. REST API for scripting.
Single binary, macOS + Linux. `sudo numa install` and it's your system DNS —
forward mode by default, recursive when you're ready.
The interesting technical bits: the recursive resolver walks root → TLD →
authoritative with iterative queries, caching NS/DS/DNSKEY records at each
hop. DNSSEC validation verifies RRSIG signatures against DNSKEY, walks the
chain via DS records up to the hardcoded root trust anchor. ECDSA P-256
verification takes 174ns (benchmarked with criterion). Cold-cache validation
for a new domain is ~90ms, with only 1 network fetch needed (TLD chain is
pre-warmed on startup). SRTT-based nameserver selection learns which
servers respond fastest — average recursive query drops from 2.8s to
237ms after warmup (12x).
It also handles hostile networks: if your ISP blocks UDP port 53,
Numa detects this after 3 failures and switches all
queries to TCP automatically. Resets when you change networks. RFC 7816
query minimization means root servers only see the TLD, not your full
query.
The DNS cache adjusts TTLs on read (remaining time, not original). Each
query is an async tokio task. EDNS0 with DO bit and 1232-byte payload
(DNS Flag Day 2020).
Longer term I want to add pkarr/DHT resolution for self-sovereign DNS,
but that's future work.
https://github.com/razvandimescu/numa
---
### r/rust
**Title:** I built a recursive DNS resolver from scratch in Rust — DNSSEC, no DNS libraries
**Body:**
I've been building a DNS resolver in Rust as a learning project that became
my daily driver. The entire DNS wire protocol is implemented by hand —
no `trust-dns`, no `hickory-dns`, no `simple-dns`. Headers, label sequences,
compression pointers, EDNS, all of it.
Some things I found interesting while building this:
**Recursive resolution** — iterative queries from root hints, walking
root → TLD → authoritative. CNAME chasing, A+AAAA glue extraction from
additional sections, referral depth limits. TLD priming pre-warms NS + DS +
DNSKEY for 34 gTLDs + EU ccTLDs on startup.
**DNSSEC chain-of-trust** — the most involved part. Verify RRSIG signatures
against DNSKEY, walk DS records up to the hardcoded root KSK (key tag 20326).
Uses `ring` for crypto: RSA/SHA-256, ECDSA P-256 (174ns per verify), Ed25519.
RFC 3110 RSA keys need converting to PKCS#1 DER for ring — wrote an ASN.1
encoder for that. RRSIG time validity checks per RFC 4035 §5.3.1.
**NSEC/NSEC3 denial proofs** — proving a name *doesn't* exist is harder than
proving it does. NSEC uses canonical DNS name ordering to prove gap coverage.
NSEC3 uses iterated SHA-1 hashing + base32hex + a 3-part closest encloser
proof (RFC 5155 §8.4). Both require authority-section RRSIG verification.
**Wire protocol parsing** — DNS uses a binary format with label compression
(pointers back into the packet via 2-byte offsets). Parsing this correctly
is surprisingly tricky because pointers can chain. I use a `BytePacketBuffer`
that tracks position and handles jumps.
**Performance** — TLD chain pre-warming means cold-cache DNSSEC validation
needs ~1 DNSKEY fetch (down from 5). Referral DS piggybacking caches DS
from authority sections during resolution. ECDSA P-256 verify: 174ns.
RSA/SHA-256: 10.9µs. DS verify: 257ns.
**LAN service discovery** — Numa instances on the same network find each
other via UDP multicast. The tricky part was self-filtering: I initially
filtered by IP, but two instances on the same host share an IP. Switched to
a per-process instance ID (`pid ^ nanos`).
**Auto TLS** — generates a local CA + per-service certs using `rcgen`.
`numa install` trusts the CA in the OS keychain. HTTPS proxy via `rustls` +
`tokio-rustls`.
Single binary, no runtime dependencies. Uses `tokio`, `axum` (REST
API/dashboard), `hyper` (reverse proxy), `ring` (DNSSEC crypto), `reqwest`
(DoH), `socket2` (multicast), `rcgen` + `rustls` (TLS).
Happy to discuss any of the implementation decisions.
https://github.com/razvandimescu/numa
---
### r/degoogle
**Title:** I replaced cloud DNS with a recursive resolver — resolves from root, no upstream, DNSSEC
**Body:**
I wanted a DNS setup with zero cloud dependency. No NextDNS account,
no Cloudflare dashboard, no Pi-hole appliance, no upstream resolver seeing
my queries. Just a single binary on my laptop that resolves everything
itself.
Built one in Rust. What it does:
- **Forward mode by default** — transparent proxy to your existing DNS with
caching and ad blocking. Changes nothing about your network.
- **Recursive resolution** — set `mode = "recursive"` and it resolves directly
from root nameservers. No Quad9, no Cloudflare, no upstream dependency.
Each authoritative server only sees the query for its zone — no single
entity sees your full browsing pattern.
- **DNSSEC validation** — verifies the chain of trust from root KSK.
Responses are cryptographically verified — no one can tamper with them
in transit.
- **System-level ad blocking** — Hagezi Pro list (~385K domains),
works on any network. Coffee shop WiFi, airport, hotel.
- **ISP resistant** — in recursive mode, if UDP is blocked Numa switches
to TCP automatically. Or set `mode = "auto"` to probe on startup and
fall back to encrypted DoH if needed.
- **Query minimization** — root servers only see the TLD (.com), not
your full domain. RFC 7816.
- **Zero telemetry, zero cloud** — all data stays on your machine. No
account, no login, no analytics. Config is a single TOML file.
- **Local service naming** — bonus for developers: `https://app.numa`
instead of `localhost:3000`, with auto-generated TLS certs
Single binary, macOS + Linux. `sudo numa install` and it's your system
DNS — forward mode by default, recursive when you're ready. No Docker,
no PHP, no external dependencies.
The DNS wire protocol is parsed from scratch — no DNS libraries. You can
read every line of code.
```
brew install razvandimescu/tap/numa
# or
cargo install numa
```
MIT license. https://github.com/razvandimescu/numa
---
### r/node
**Title:** I replaced localhost:5173 with frontend.numa — auto HTTPS, HMR works, no nginx
**Body:**
Running a Vite frontend on :5173, Express API on :3000, maybe docs on
:4000 — I could never remember which port was which. And CORS between
`localhost:5173` and `localhost:3000` is its own special hell.
How do you get named domains with HTTPS locally?
1. /etc/hosts + mkcert + nginx
2. dnsmasq + mkcert + Caddy
3. `sudo numa`
What it actually does:
```
curl -X POST localhost:5380/services \
-d '{"name":"frontend","target_port":5173}'
```
Now `https://frontend.numa` works in my browser. Green lock, valid cert.
- **HMR works** — Vite, webpack, socket.io all pass through the proxy.
No special config.
- **CORS solved** — `frontend.numa` and `api.numa` share the `.numa`
cookie domain. Cross-service auth just works.
- **Path routing** — `app.numa/api → :3000`, `app.numa/auth → :3001`.
Like nginx location blocks, zero config files.
No mkcert, no nginx.conf, no Caddyfile, no editing /etc/hosts.
Single binary, one command.
```
brew install razvandimescu/tap/numa
# or
cargo install numa
```
https://github.com/razvandimescu/numa
---
### r/dns
**Title:** Numa — recursive DNS resolver from scratch in Rust, DNSSEC, no DNS libraries
**Body:**
I built a recursive DNS resolver where the entire wire protocol (RFC 1035 —
headers, label compression, EDNS0) is hand-parsed. No `hickory-dns`,
no `trust-dns`.
What it does:
- Full recursive resolver from root hints (iterative queries, no upstream needed)
- DNSSEC chain-of-trust validation (RSA/SHA-256, ECDSA P-256, Ed25519)
- EDNS0 with DO bit, 1232-byte payload (DNS Flag Day 2020 compliant)
- DNS-over-HTTPS as an alternative upstream mode
- Ad blocking (~385K domains via Hagezi Pro)
- Conditional forwarding (auto-detects Tailscale/VPN split-DNS)
- Local zones, ephemeral overrides with auto-revert via REST API
DNSSEC implementation: DNSKEY/DS/RRSIG record parsing, canonical wire format
for signed data, key tag computation (RFC 4034), DS digest verification.
Chain walks from zone → TLD → root trust anchor. ECDSA P-256 signature
verification in 174ns. TLD chain pre-warmed on startup. Referral DS records
piggybacked from authority sections during resolution.
NSEC/NSEC3 authenticated denial of existence: NXDOMAIN gap proofs, NSEC3
closest encloser proofs (3-part per RFC 5155), NODATA type absence proofs,
authority-section RRSIG verification. Iteration cap at 500 for NSEC3 DoS
prevention.
What it doesn't do (yet): no authoritative zone serving (AXFR/NOTIFY).
Single binary, macOS + Linux. MIT license.
https://github.com/razvandimescu/numa
---
### Lobsters (invite-only)
**Title:** Numa — DNS resolver from scratch in Rust, no DNS libraries
**Body:**
I built a DNS resolver in Rust — RFC 1035 wire protocol parsed by hand,
no `trust-dns` or `hickory-dns`. Started as a learning project, became
my daily system DNS.
Beyond resolving, it does local `.numa` domains with auto HTTPS reverse
proxy (register `frontend.numa → localhost:5173`, get a green lock and
WebSocket passthrough), and LAN service discovery via UDP multicast —
two machines running Numa find each other's services automatically.
Implementation bits I found interesting: DNS label compression (chained
2-byte pointers back into the packet), browsers rejecting wildcard certs
under single-label TLDs (`*.numa` fails — need per-service SANs), and
`SO_REUSEPORT` on macOS for multiple processes binding the same multicast
port.
Set `mode = "recursive"` for DNSSEC-validated resolution from root
nameservers — no upstream, no middleman.
Single binary, macOS + Linux.
https://github.com/razvandimescu/numa
---
### r/coolgithubprojects
**Post type:** Image post with `hero-demo.gif`, GitHub link in first comment.
**Title:** Numa — portable DNS resolver built from scratch in Rust. Ad blocking, local HTTPS domains, LAN discovery, recursive resolution with DNSSEC. Single binary.
**First comment (post immediately):**
https://github.com/razvandimescu/numa
```
brew install razvandimescu/tap/numa && sudo numa
```
No DNS libraries — RFC 1035 wire protocol parsed by hand.
Recursive resolution from root nameservers with full DNSSEC
chain-of-trust validation. 385K+ blocked ad domains.
.numa local domains with auto TLS and WebSocket proxy.
---
### r/sideproject
**Title:** I built a DNS resolver from scratch in Rust — it's now my daily system DNS
**Body:**
Last year I wanted to understand how DNS actually works at the wire
level, so I started parsing RFC 1035 packets by hand. No DNS libraries,
no trust-dns, no hickory-dns — just bytes and the spec.
It turned into something I use every day. What it does now:
- **Ad blocking** on any network (coffee shops, airports) — 385K+
domains blocked, travels with my laptop
- **Local service naming** — `https://frontend.numa` instead of
`localhost:5173`, with auto-generated TLS certs and WebSocket
passthrough for HMR
- **Recursive resolution** from root nameservers with DNSSEC
chain-of-trust validation — set `mode = "recursive"` for full
privacy, no upstream dependency, no single entity sees my query
pattern
- **LAN discovery** — two machines running Numa find each other's
services automatically via UDP multicast
Single Rust binary, ~8MB, MIT license. `sudo numa install` and it's your
system DNS — caching, ad blocking, .numa domains, zero config changes.
I wrote about the technical journey here:
- [I Built a DNS Resolver from Scratch](https://numa.rs/blog/posts/dns-from-scratch.html)
- [Implementing DNSSEC from Scratch](https://numa.rs/blog/posts/dnssec-from-scratch.html)
https://github.com/razvandimescu/numa
---
### r/webdev (Showoff Saturday — posted 2026-03-28)
**Title:** I replaced localhost:5173 with frontend.numa — shared cookie domain, auto HTTPS, no nginx
**Body:**
The port numbers weren't the real problem. It was CORS between
`localhost:5173` and `localhost:3000`, Secure cookies not setting over
HTTP, and service workers requiring a secure context.
I built a DNS resolver that gives local services named domains under a
shared TLD:
```
curl -X POST localhost:5380/services \
-d '{"name":"frontend","target_port":5173}'
```
Now `https://frontend.numa` and `https://api.numa` share the `.numa`
cookie domain. Cross-service auth just works. Secure cookies set.
Service workers run.
What's under the hood:
- **Auto HTTPS** — generates a local CA + per-service TLS certs. Green
lock, no mkcert.
- **WebSocket passthrough** — Vite/webpack HMR goes through the proxy.
No special config.
- **Path routing** — `app.numa/api → :3000`, `app.numa/auth → :3001`.
Like nginx location blocks.
- **Also a full DNS resolver** — forward mode with caching and ad
blocking by default. Set `mode = "recursive"` for full DNSSEC-validated
resolution from root nameservers.
Single Rust binary. `sudo numa install` and it's your system DNS — caching,
ad blocking, .numa domains. No nginx, no Caddy, no /etc/hosts.
```
brew install razvandimescu/tap/numa
# or
cargo install numa
```
https://github.com/razvandimescu/numa
**Lessons from r/node (2026-03-24):** "Can't remember 3 ports?" got
pushback — the CORS/cookie angle resonated more. Lead with what you
can't do without it, not what's annoying.
---
### r/commandline
**Title:** numa — local dev DNS with auto HTTPS and LAN service discovery, single Rust binary
**Body:**
I run 5-6 local services and wanted named domains with HTTPS instead of
remembering port numbers. Built a DNS resolver that handles `.numa`
domains:
```
curl -X POST localhost:5380/services \
-d '{"name":"api","target_port":8000}'
```
Now `https://api.numa` resolves, proxies to localhost:8000, and has a
valid TLS cert. WebSocket passthrough works — Vite HMR goes through
the proxy fine.
The part I didn't expect to be useful: LAN service discovery. Two
machines running numa find each other via UDP multicast. I register
`api.numa` on my laptop, my teammate's numa instance picks it up
automatically. Zero config.
Also blocks 385K+ ad domains since it's already your DNS resolver.
Portable — works on any network (coffee shops, airports). Set
`mode = "recursive"` for full DNSSEC-validated resolution from root
nameservers — no upstream dependency.
```
brew install razvandimescu/tap/numa
sudo numa
```
Single binary, DNS wire protocol parsed from scratch (no DNS libraries).
https://github.com/razvandimescu/numa
---
### r/selfhosted (only if Show HN hits front page)
**Title:** Numa — recursive resolver + ad blocking + LAN service discovery in one binary
**Body:**
I built a DNS resolver in Rust that I've been running as my system DNS.
Two features I'm most proud of:
**Recursive resolution + DNSSEC** — set `mode = "recursive"` and it resolves
from root nameservers, no upstream dependency. Chain-of-trust verification
(RSA, ECDSA, Ed25519), NSEC/NSEC3 denial proofs. No single entity sees your
full query pattern — each authoritative server only sees its zone's queries.
**LAN service discovery** — I register `api.numa → localhost:8000` on my
laptop. My colleague's machine, also running Numa, picks it up via UDP
multicast — `api.numa` resolves to my IP on his machine. Zero config.
The rest of what it does:
- **Ad blocking** — 385K+ domains (Hagezi Pro), portable. Works on any
network including coffee shops and airports.
- **DNS-over-HTTPS** — encrypted upstream as an alternative to recursive mode.
- **Auto HTTPS for local services** — generates a local CA + per-service
TLS certs. `https://frontend.numa` with a green lock, WebSocket passthrough.
- **Hub mode** — point other devices' DNS to it, they get ad blocking +
`.numa` resolution without installing anything.
Replaces Pi-hole + Unbound in one binary. No Raspberry Pi, no Docker, no PHP.
Single binary, macOS + Linux. Config is one optional TOML file.
**What it doesn't do (yet):** No web-based config editor — configuration is a
TOML file plus a REST API. DoT listener is in progress.
`brew install razvandimescu/tap/numa` or `cargo install numa`
https://github.com/razvandimescu/numa
---
## Preparation Checklist
- [ ] Verify GitHub repo is PUBLIC before any post
- [ ] Build some comment history on posting account first
- [ ] Post HN Tuesday-Thursday, 9-10 AM Eastern
- [ ] Respond to every comment within 2 hours for the first 6 hours
- [ ] Have fixes ready to ship within 24h for reported issues
- [ ] Don't oversell the pkarr/token vision — one sentence max
## Rules
- Verify GitHub repo is PUBLIC before every post
- Use an account with comment history, not a fresh one
- Respond to every comment within 2 hours
- Never be defensive — acknowledge valid criticism, redirect
- If someone says "just use X" — agree it works, explain what's *uniquely different*
- Lead with unique capabilities, not tool replacement
---
## Prepared Responses
**"What does this offer over /etc/hosts?"** *(actual r/programare objection)*
/etc/hosts is static and per-machine. Numa gives you: auto-revert after N
minutes (great for testing), a REST API so scripts can create/remove entries,
HTTPS reverse proxy with auto TLS, and LAN discovery so you don't have to
edit hosts on every device. Different tools for different problems.
**"Mature solutions already exist (dnsmasq, nginx, etc.)"** *(actual r/programare objection)*
Absolutely — and they're great. The thing they don't do: register a service
on machine A and have it automatically appear on machine B via multicast.
Numa integrates DNS + reverse proxy + TLS + discovery into one binary so
those pieces work together. If you only need DNS forwarding, dnsmasq is the
right tool.
**"Why not Pi-hole / AdGuard Home?"**
They're network appliances — need dedicated hardware or Docker. Numa is a
single binary on your laptop. When you move to a coffee shop, your ad
blocking comes with you. Plus the reverse proxy + LAN discovery.
**"Why from scratch / no DNS libraries?"**
Started as a learning project to understand the wire protocol. Turned out
having full control over the pipeline makes features like conditional
forwarding and override injection trivial — they're just steps in the
resolution chain.
**"Vibe coded / AI generated?"**
I use AI as a coding partner — same as using Stack Overflow or pair
programming. I make the architecture decisions, direct what gets built,
and review everything. The DNS wire protocol parser was the original
learning project I wrote by hand. Later features were built collaboratively
with AI assistance. You can read every line — nothing is opaque generated
slop.
**"Why sudo / why port 53?"**
Binding port 53 requires root on Unix; that socket is the only thing Numa needs
elevated privileges for. You can also bind to a high port for testing:
`bind_addr = "127.0.0.1:5353"`.
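For reference, a minimal sketch of that unprivileged setup — key names taken from the sample `numa.toml`; the `dig` flags are standard:

```toml
[server]
bind_addr = "127.0.0.1:5353"   # unprivileged port, no sudo needed
api_port = 5380

# then test it with: dig @127.0.0.1 -p 5353 example.com A
```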
**"What about .numa TLD conflicts?"**
The TLD is configurable in `numa.toml`. If `.numa` ever becomes official,
change it to anything else.
**"Does it support DoH/DoT?"**
DoH is built in — set `address = "https://9.9.9.9/dns-query"` in
`[upstream]` and your queries are encrypted. Or set `mode = "auto"` to
probe root servers and fall back to DoH if blocked. DoT listener support
is in progress (PR #25).
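A minimal `[upstream]` sketch for the DoH setup described above, using the keys from the sample config (swap in the Cloudflare URL if preferred):

```toml
[upstream]
address = "https://dns.quad9.net/dns-query"   # DNS-over-HTTPS, encrypted
timeout_ms = 3000
# mode = "auto"   # probe root servers; fall back to DoH if outbound DNS is blocked
```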
**"But Quad9/Cloudflare still sees my queries"**
In forward mode (the default), yes — your upstream resolver sees your queries.
Set `mode = "recursive"` and Numa resolves directly from root nameservers —
no single upstream sees your full query pattern. Each authoritative server
only sees the query relevant to its zone. Add `[dnssec] enabled = true` to
cryptographically verify responses.
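The corresponding config sketch, the same shape the A/B benchmark script uses:

```toml
[upstream]
mode = "recursive"   # resolve from root hints, no upstream needed

[dnssec]
enabled = true       # verify the chain of trust from the root KSK
strict = false       # true = SERVFAIL on bogus signatures
```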
**"Show me benchmarks / performance numbers"** *(actual r/programare request)*
Benchmark suite is in `benches/` (criterion). Cached round-trip: 691ns.
Pipeline throughput: ~2.0M qps. DNSSEC: ECDSA P-256 verify 174ns, RSA/SHA-256
10.9µs, DS verify 257ns. Cold-cache DNSSEC validation ~90ms (1 network fetch,
TLD chain pre-warmed). Full comparison against system resolver, Quad9,
Cloudflare, Google on the site.
**"Why not just use Unbound?"**
Numa supports recursive resolution with DNSSEC validation, same as Unbound
(`mode = "recursive"`). The difference:
Numa also has built-in ad blocking, a dashboard, `.numa` local domains with
auto HTTPS, LAN service discovery, and developer overrides. Unbound does
one thing well; Numa integrates six features into one binary.
**"Why not Technitium?"**
Technitium is the closest in features — recursive, DNSSEC, ad blocking,
dashboard. Good tool. Two differences: (1) Numa is a single static binary,
Technitium requires the .NET runtime; (2) Numa has developer tooling that
Technitium doesn't — `.numa` local domains with auto TLS reverse proxy,
path-based routing, LAN service discovery, ephemeral overrides with
auto-revert. Different audiences: Technitium targets server admins, Numa
targets developers on laptops.
**"Does it support Windows?"**
macOS and Linux are the primary targets. Windows has scaffolding in the code
but is not tested. If there's demand, it's on the list.


@@ -70,8 +70,10 @@ echo ""
 echo " \033[38;2;107;124;78mInstalled:\033[0m $INSTALL_DIR/numa ($TAG)"
 echo ""
 echo " Get started:"
-echo " sudo numa # start the DNS server"
-echo " sudo numa install # set as system DNS"
-echo " sudo numa service start # run as persistent service"
-echo " open http://localhost:5380 # dashboard"
+echo " sudo numa install # install service + set as system DNS"
+echo " open http://localhost:5380 # dashboard"
+echo ""
+echo " Other commands:"
+echo " sudo numa # run in foreground (no service)"
+echo " sudo numa uninstall # restore original DNS"
 echo ""


@@ -1,12 +1,48 @@
[server]
bind_addr = "0.0.0.0:53"
api_port = 5380
# api_bind_addr = "127.0.0.1" # default; set to "0.0.0.0" for LAN dashboard access
# [upstream]
# address = "" # auto-detect from system resolver (default)
# address = "9.9.9.9" # or set explicitly
# port = 53
# mode = "forward" # "forward" (default) — relay to upstream
# # "recursive" — resolve from root hints (no address needed)
# address = "https://dns.quad9.net/dns-query" # DNS-over-HTTPS (encrypted)
# address = "https://cloudflare-dns.com/dns-query" # Cloudflare DoH
# address = "9.9.9.9" # plain UDP
# port = 53 # only for forward mode, plain UDP
# timeout_ms = 3000
# root_hints = [ # only used in recursive mode
# "198.41.0.4", # a.root-servers.net (Verisign)
# "199.9.14.201", # b.root-servers.net (USC-ISI)
# "192.33.4.12", # c.root-servers.net (Cogent)
# "199.7.91.13", # d.root-servers.net (UMD)
# "192.203.230.10", # e.root-servers.net (NASA)
# "192.5.5.241", # f.root-servers.net (ISC)
# "192.112.36.4", # g.root-servers.net (US DoD)
# "198.97.190.53", # h.root-servers.net (US Army)
# "192.36.148.17", # i.root-servers.net (Netnod)
# "192.58.128.30", # j.root-servers.net (Verisign)
# "193.0.14.129", # k.root-servers.net (RIPE NCC)
# "199.7.83.42", # l.root-servers.net (ICANN)
# "202.12.27.33", # m.root-servers.net (WIDE)
# ]
# prime_tlds = [ # TLDs to pre-warm on startup (recursive mode)
# "com", "net", "org", "info", # gTLDs
# "io", "dev", "app", "xyz", "me",
# "eu", "uk", "de", "fr", "nl", # EU + European ccTLDs
# "it", "es", "pl", "se", "no",
# "dk", "fi", "at", "be", "ie",
# "pt", "cz", "ro", "gr", "hu",
# "bg", "hr", "sk", "si", "lt",
# "lv", "ee", "ch", "is",
# "co", "br", "au", "ca", "jp", # other major ccTLDs
# ]
# [blocking]
# enabled = true # set to false to disable ad blocking
# refresh_hours = 24
# lists = ["https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/hosts/pro.txt"]
# allowlist = ["example.com"] # domains to never block
[cache]
max_entries = 10000
@@ -18,6 +54,7 @@ enabled = true
port = 80
tls_port = 443
tld = "numa"
# bind_addr = "127.0.0.1" # default; set to "0.0.0.0" for LAN access to .numa services
# Pre-configured services (numa.numa is always added automatically)
# [[services]]
@@ -40,3 +77,14 @@ tld = "numa"
# record_type = "A"
# value = "127.0.0.1"
# ttl = 60
# DNSSEC signature validation (requires mode = "recursive")
# [dnssec]
# enabled = false # opt-in: verify chain of trust from root KSK
# strict = false # true = SERVFAIL on bogus signatures
# LAN service discovery via mDNS (disabled by default — no network traffic unless enabled)
# [lan]
# enabled = true # discover other Numa instances via mDNS (_numa._tcp.local)
# broadcast_interval_secs = 30
# peer_timeout_secs = 90

scripts/benchmark.sh Executable file

@@ -0,0 +1,306 @@
#!/usr/bin/env bash
set -euo pipefail
API="${NUMA_API:-http://127.0.0.1:5380}"
DNS="${NUMA_DNS:-127.0.0.1}"
NUMA_BIN="${NUMA_BIN:-/usr/local/bin/numa}"
LAUNCHD_PLIST="/Library/LaunchDaemons/com.numa.dns.plist"
DOMAINS=(
paypal.com ebay.com zoom.us slack.com discord.com
microsoft.com apple.com meta.com oracle.com ibm.com
docker.com kubernetes.io prometheus.io grafana.com terraform.io
python.org nodejs.org golang.org wikipedia.org reddit.com
stackoverflow.com stripe.com linear.app nytimes.com bbc.co.uk
rust-lang.org fastly.com hetzner.com uber.com airbnb.com
notion.so figma.com netflix.com spotify.com dropbox.com
gitlab.com twitch.tv shopify.com vercel.app mozilla.org
)
stats() {
  curl -s "$API/query-log" | python3 -c "
import sys, json
data = json.load(sys.stdin)
rec = [q for q in data if q['path'] == 'RECURSIVE']
if not rec:
    print('No recursive queries in log.')
    sys.exit()
vals = sorted([q['latency_ms'] for q in rec])
n = len(vals)
print(f'Recursive queries: {n}')
print(f' Avg: {sum(vals)/n:.1f}ms')
print(f' Median: {vals[n//2]:.1f}ms')
print(f' P95: {vals[int(n*0.95)]:.1f}ms')
print(f' P99: {vals[int(n*0.99)]:.1f}ms')
print(f' Min: {min(vals):.1f}ms')
print(f' Max: {max(vals):.1f}ms')
print(f' <100ms: {sum(1 for v in vals if v < 100)}')
print(f' <200ms: {sum(1 for v in vals if v < 200)}')
print(f' <500ms: {sum(1 for v in vals if v < 500)}')
print(f' >1s: {sum(1 for v in vals if v >= 1000)}')
print()
print('Slowest 5:')
for q in sorted(rec, key=lambda q: q['latency_ms'], reverse=True)[:5]:
    print(f' {q[\"latency_ms\"]:>8.1f}ms {q[\"query_type\"]:5s} {q[\"domain\"]:35s} {q[\"rescode\"]}')
print()
print('Fastest 5:')
for q in sorted(rec, key=lambda q: q['latency_ms'])[:5]:
    print(f' {q[\"latency_ms\"]:>8.1f}ms {q[\"query_type\"]:5s} {q[\"domain\"]:35s} {q[\"rescode\"]}')
"
}
query_all() {
local label="$1"
echo "=== $label ==="
for d in "${DOMAINS[@]}"; do
printf " %-25s " "$d"
dig "@$DNS" "$d" A +noall +stats 2>/dev/null | grep "Query time"
done
echo
}
flush_cache() {
curl -s -X DELETE "$API/cache" > /dev/null
echo "Cache flushed ($(curl -s "$API/stats" | python3 -c "import sys,json; print(json.load(sys.stdin)['cache']['entries'])" 2>/dev/null || echo '?') entries)."
}
wait_for_api() {
local attempts=0
while ! curl -sf "$API/health" > /dev/null 2>&1; do
attempts=$((attempts + 1))
if [ $attempts -ge 20 ]; then
echo "ERROR: API not reachable at $API after 10s" >&2
exit 1
fi
sleep 0.5
done
}
wait_for_priming() {
echo -n "Waiting for TLD priming..."
local prev=0
local stable=0
for _ in $(seq 1 60); do
local entries
entries=$(curl -s "$API/stats" | python3 -c "import sys,json; print(json.load(sys.stdin)['cache']['entries'])" 2>/dev/null || echo 0)
if [ "$entries" -gt 0 ] && [ "$entries" = "$prev" ]; then
stable=$((stable + 1))
if [ $stable -ge 3 ]; then
echo " done ($entries cache entries)."
return
fi
else
stable=0
fi
prev="$entries"
sleep 1
done
echo " timeout (cache: $prev entries)."
}
# restart_numa <config_toml_body>
# Writes config to a temp file, stops numa (launchd or manual), starts with that config.
restart_numa() {
local config_body="$1"
local tmpconf
tmpconf=$(mktemp /tmp/numa-bench-XXXXXX)
mv "$tmpconf" "${tmpconf}.toml"
tmpconf="${tmpconf}.toml"
echo "$config_body" > "$tmpconf"
# Stop launchd-managed numa if active
if sudo launchctl list com.numa.dns &>/dev/null; then
sudo launchctl unload "$LAUNCHD_PLIST" 2>/dev/null || true
sleep 1
fi
# Kill any remaining
sudo killall numa 2>/dev/null || true
sleep 2
sudo "$NUMA_BIN" "$tmpconf" &
wait_for_api
wait_for_priming
echo "numa ready (pid $(pgrep numa | head -1), config: $tmpconf)."
}
# Restore the launchd service
restore_launchd() {
sudo killall numa 2>/dev/null || true
sleep 1
if [ -f "$LAUNCHD_PLIST" ]; then
sudo launchctl load "$LAUNCHD_PLIST" 2>/dev/null || true
echo "Restored launchd service."
fi
}
run_pass() {
local label="$1"
flush_cache
sleep 0.5
query_all "$label"
echo "=== $label — stats ==="
stats
}
case "${1:-full}" in
cold)
echo "--- Cold cache benchmark ---"
run_pass "Cold SRTT + Cold cache"
;;
warm)
echo "--- Warm SRTT benchmark ---"
echo "Priming SRTT..."
for d in "${DOMAINS[@]}"; do dig "@$DNS" "$d" A +short > /dev/null 2>&1; done
run_pass "Warm SRTT + Cold cache"
;;
stats)
stats
;;
compare-srtt)
echo "============================================"
echo " A/B: SRTT OFF vs ON (dnssec off)"
echo "============================================"
echo
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = false
TOML
)"
echo
run_pass "SRTT OFF"
echo
echo "--------------------------------------------"
echo
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = true
TOML
)"
echo
run_pass "SRTT ON"
echo
restore_launchd
;;
compare-dnssec)
echo "============================================"
echo " A/B: DNSSEC OFF vs ON (srtt on)"
echo "============================================"
echo
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = true
[dnssec]
enabled = false
TOML
)"
echo
run_pass "DNSSEC OFF"
echo
echo "--------------------------------------------"
echo
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = true
[dnssec]
enabled = true
TOML
)"
echo
run_pass "DNSSEC ON"
echo
restore_launchd
;;
compare-all)
echo "============================================"
echo " Full A/B matrix"
echo " 1. SRTT OFF + DNSSEC OFF (baseline)"
echo " 2. SRTT ON + DNSSEC OFF"
echo " 3. SRTT ON + DNSSEC ON"
echo "============================================"
echo
# --- 1. Baseline ---
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = false
[dnssec]
enabled = false
TOML
)"
echo
run_pass "SRTT OFF + DNSSEC OFF"
echo
echo "--------------------------------------------"
echo
# --- 2. SRTT only ---
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = true
[dnssec]
enabled = false
TOML
)"
echo
run_pass "SRTT ON + DNSSEC OFF"
echo
echo "--------------------------------------------"
echo
# --- 3. Both ---
restart_numa "$(cat <<'TOML'
[upstream]
mode = "recursive"
srtt = true
[dnssec]
enabled = true
TOML
)"
echo
run_pass "SRTT ON + DNSSEC ON"
echo
restore_launchd
;;
full|*)
echo "--- Full benchmark (cold → warm → SRTT-only) ---"
echo
wait_for_priming
flush_cache
sleep 0.5
query_all "Pass 1: Cold SRTT + Cold cache"
flush_cache
sleep 0.5
query_all "Pass 2: Warm SRTT + Cold cache"
echo "=== Pass 2 stats (SRTT-warm) ==="
stats
;;
esac

scripts/release.sh Executable file

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail
if [ $# -ne 1 ]; then
echo "Usage: $0 <version> (e.g. 0.7.0)" >&2
exit 1
fi
VERSION="$1"
TAG="v$VERSION"
# Sanity checks
if ! git diff --quiet || ! git diff --cached --quiet; then
echo "ERROR: working tree is dirty — commit or stash first" >&2
exit 1
fi
if [ "$(git branch --show-current)" != "main" ]; then
echo "ERROR: must be on main branch" >&2
exit 1
fi
if git tag -l "$TAG" | grep -q .; then
echo "ERROR: tag $TAG already exists" >&2
exit 1
fi
CURRENT=$(grep '^version = ' Cargo.toml | head -1 | sed 's/version = "\(.*\)"/\1/')
echo "Bumping $CURRENT -> $VERSION"
# Bump version
sed -i.bak "s/^version = \"$CURRENT\"/version = \"$VERSION\"/" Cargo.toml
rm -f Cargo.toml.bak
cargo update --workspace
# Commit, tag, push
git add Cargo.toml Cargo.lock
git commit -m "chore: bump version to $VERSION"
git tag "$TAG"
git push origin main --tags
echo
echo "Released $TAG — GitHub Actions will build, publish to crates.io, and create the release."

site/CNAME Normal file

@@ -0,0 +1 @@
numa.rs

site/blog-template.html Normal file

@@ -0,0 +1,301 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>$title$ — Numa</title>
<meta name="description" content="$description$">
<link rel="stylesheet" href="/fonts/fonts.css">
<style>
*, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
:root {
--bg-deep: #f5f0e8;
--bg-surface: #ece5da;
--bg-elevated: #e3dbce;
--bg-card: #faf7f2;
--amber: #c0623a;
--amber-dim: #9e4e2d;
--teal: #6b7c4e;
--teal-dim: #566540;
--violet: #64748b;
--text-primary: #2c2418;
--text-secondary: #6b5e4f;
--text-dim: #a39888;
--border: rgba(0, 0, 0, 0.08);
--border-amber: rgba(192, 98, 58, 0.22);
--font-display: 'Instrument Serif', Georgia, serif;
--font-body: 'DM Sans', system-ui, sans-serif;
--font-mono: 'JetBrains Mono', monospace;
}
html { scroll-behavior: smooth; }
body {
background: var(--bg-deep);
color: var(--text-primary);
font-family: var(--font-body);
font-weight: 400;
line-height: 1.7;
-webkit-font-smoothing: antialiased;
}
body::before {
content: '';
position: fixed;
inset: 0;
background-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 256 256' xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='n'%3E%3CfeTurbulence type='fractalNoise' baseFrequency='0.9' numOctaves='4' stitchTiles='stitch'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23n)' opacity='0.025'/%3E%3C/svg%3E");
pointer-events: none;
z-index: 9999;
}
/* --- Blog nav --- */
.blog-nav {
padding: 1.5rem 2rem;
display: flex;
align-items: center;
gap: 1.5rem;
}
.blog-nav a {
font-family: var(--font-mono);
font-size: 0.75rem;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-dim);
text-decoration: none;
transition: color 0.2s;
}
.blog-nav a:hover { color: var(--amber); }
.blog-nav .wordmark {
font-family: var(--font-display);
font-size: 1.4rem;
font-weight: 400;
color: var(--text-primary);
text-decoration: none;
letter-spacing: -0.02em;
}
.blog-nav .wordmark:hover { color: var(--amber); }
.blog-nav .sep {
color: var(--text-dim);
font-family: var(--font-mono);
font-size: 0.75rem;
}
/* --- Article --- */
.article {
max-width: 720px;
margin: 0 auto;
padding: 3rem 2rem 6rem;
}
.article-header {
margin-bottom: 3rem;
padding-bottom: 2rem;
border-bottom: 1px solid var(--border);
}
.article-header h1 {
font-family: var(--font-display);
font-weight: 400;
font-size: clamp(2rem, 5vw, 3rem);
line-height: 1.15;
margin-bottom: 1rem;
color: var(--text-primary);
}
.article-meta {
font-family: var(--font-mono);
font-size: 0.75rem;
color: var(--text-dim);
letter-spacing: 0.04em;
}
.article-meta a {
color: var(--amber);
text-decoration: none;
}
.article-meta a:hover { text-decoration: underline; }
/* --- Prose --- */
.article h2 {
font-family: var(--font-display);
font-weight: 600;
font-size: 1.8rem;
line-height: 1.2;
margin: 3rem 0 1rem;
color: var(--text-primary);
}
.article h3 {
font-family: var(--font-body);
font-weight: 600;
font-size: 1.2rem;
margin: 2rem 0 0.75rem;
color: var(--text-primary);
}
.article p {
margin-bottom: 1.25rem;
color: var(--text-secondary);
font-size: 1.05rem;
}
.article a {
color: var(--amber);
text-decoration: underline;
text-decoration-color: rgba(192, 98, 58, 0.3);
text-underline-offset: 2px;
transition: text-decoration-color 0.2s;
}
.article a:hover {
text-decoration-color: var(--amber);
}
.article strong {
color: var(--text-primary);
font-weight: 600;
}
.article ul, .article ol {
margin-bottom: 1.25rem;
padding-left: 1.5rem;
color: var(--text-secondary);
}
.article li {
margin-bottom: 0.4rem;
font-size: 1.05rem;
}
.article blockquote {
border-left: 3px solid var(--amber);
padding: 0.75rem 1.25rem;
margin: 1.5rem 0;
background: rgba(192, 98, 58, 0.04);
border-radius: 0 4px 4px 0;
}
.article blockquote p {
color: var(--text-secondary);
font-style: italic;
margin-bottom: 0;
}
/* --- Code --- */
.article code {
font-family: var(--font-mono);
font-size: 0.88em;
background: var(--bg-elevated);
padding: 0.15em 0.4em;
border-radius: 3px;
color: var(--amber-dim);
}
.article pre {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 6px;
padding: 1.25rem 1.5rem;
margin: 1.5rem 0;
overflow-x: auto;
line-height: 1.55;
}
.article pre code {
background: none;
padding: 0;
border-radius: 0;
color: var(--text-primary);
font-size: 0.85rem;
}
/* --- Images --- */
.article img {
max-width: 100%;
border-radius: 6px;
border: 1px solid var(--border);
margin: 1.5rem 0;
}
/* --- Tables --- */
.article table {
width: 100%;
border-collapse: collapse;
margin: 1.5rem 0;
font-size: 0.95rem;
}
.article th {
font-family: var(--font-mono);
font-size: 0.75rem;
letter-spacing: 0.06em;
text-transform: uppercase;
color: var(--text-dim);
text-align: left;
padding: 0.6rem 1rem;
border-bottom: 2px solid var(--border);
}
.article td {
padding: 0.6rem 1rem;
border-bottom: 1px solid var(--border);
color: var(--text-secondary);
}
/* --- Footer --- */
.blog-footer {
text-align: center;
padding: 3rem 2rem;
border-top: 1px solid var(--border);
max-width: 720px;
margin: 0 auto;
}
.blog-footer a {
font-family: var(--font-mono);
font-size: 0.75rem;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-dim);
text-decoration: none;
margin: 0 1rem;
}
.blog-footer a:hover { color: var(--amber); }
/* --- Responsive --- */
@media (max-width: 640px) {
.article { padding: 2rem 1.25rem 4rem; }
.article pre { padding: 1rem; margin-left: -0.5rem; margin-right: -0.5rem; border-radius: 0; border-left: none; border-right: none; }
}
</style>
</head>
<body>
<nav class="blog-nav">
<a href="/" class="wordmark">Numa</a>
<span class="sep">/</span>
<a href="/blog/">Blog</a>
</nav>
<article class="article">
<header class="article-header">
<h1>$title$</h1>
<div class="article-meta">
$date$ · <a href="https://dimescu.ro">Razvan Dimescu</a>
</div>
</header>
$body$
</article>
<footer class="blog-footer">
<a href="https://github.com/razvandimescu/numa">GitHub</a>
<a href="/">Home</a>
<a href="/blog/">Blog</a>
</footer>
</body>
</html>

site/blog/dnssec-chain.svg Normal file

@@ -0,0 +1,136 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 720 680" font-family="'DM Sans', system-ui, sans-serif" font-size="13">
<defs>
<marker id="arr" viewBox="0 0 10 10" refX="10" refY="5" markerWidth="7" markerHeight="7" orient="auto-start-reverse">
<path d="M 0 0 L 10 5 L 0 10 z" fill="#64748b"/>
</marker>
<marker id="arr-amber" viewBox="0 0 10 10" refX="10" refY="5" markerWidth="7" markerHeight="7" orient="auto-start-reverse">
<path d="M 0 0 L 10 5 L 0 10 z" fill="#c0623a"/>
</marker>
<marker id="arr-teal" viewBox="0 0 10 10" refX="10" refY="5" markerWidth="7" markerHeight="7" orient="auto-start-reverse">
<path d="M 0 0 L 10 5 L 0 10 z" fill="#6b7c4e"/>
</marker>
<filter id="s" x="-3%" y="-3%" width="106%" height="106%">
<feDropShadow dx="0" dy="1" stdDeviation="2" flood-opacity="0.06"/>
</filter>
</defs>
<!-- Background -->
<rect width="720" height="680" rx="8" fill="#faf7f2"/>
<!-- Title -->
<text x="360" y="36" text-anchor="middle" font-size="15" font-weight="600" fill="#2c2418" font-family="'Instrument Serif', Georgia, serif" letter-spacing="-0.02em">DNSSEC Chain of Trust</text>
<text x="360" y="54" text-anchor="middle" font-size="11" fill="#a39888">Verifying cloudflare.com — from answer to root trust anchor</text>
<!-- Legend -->
<g transform="translate(28, 72)">
<rect width="14" height="14" rx="3" fill="#c0623a" opacity="0.15" stroke="#c0623a" stroke-width="1"/>
<text x="20" y="12" font-size="11" fill="#6b5e4f">Verify signature (RRSIG → DNSKEY)</text>
<rect x="230" width="14" height="14" rx="3" fill="#6b7c4e" opacity="0.15" stroke="#6b7c4e" stroke-width="1"/>
<text x="250" y="12" font-size="11" fill="#6b5e4f">Vouch for key (DS → parent DNSKEY)</text>
<rect x="478" width="14" height="14" rx="3" fill="#2c2418" opacity="0.08" stroke="#2c2418" stroke-opacity="0.15" stroke-width="1"/>
<text x="498" y="12" font-size="11" fill="#6b5e4f">DNS record / key</text>
</g>
<!-- ═══ ZONE: cloudflare.com ═══ -->
<rect x="40" y="104" width="640" height="152" rx="8" fill="none" stroke="rgba(0,0,0,0.06)" stroke-dasharray="4,3"/>
<text x="56" y="122" font-size="10" font-weight="600" fill="#a39888" letter-spacing="0.08em" font-family="'JetBrains Mono', monospace">CLOUDFLARE.COM ZONE</text>
<!-- A record -->
<rect x="80" y="138" width="320" height="38" rx="6" fill="white" stroke="rgba(0,0,0,0.08)" filter="url(#s)"/>
<text x="96" y="157" font-size="12" font-weight="600" fill="#2c2418" font-family="'JetBrains Mono', monospace">cloudflare.com A 104.16.132.229</text>
<text x="96" y="170" font-size="10" fill="#a39888">The answer we want to verify</text>
<!-- RRSIG -->
<line x1="400" y1="157" x2="440" y2="157" stroke="#c0623a" stroke-width="1.5" marker-end="url(#arr-amber)"/>
<text x="412" y="149" font-size="9" fill="#c0623a" font-weight="600">signed by</text>
<rect x="445" y="138" width="220" height="38" rx="6" fill="rgba(192,98,58,0.06)" stroke="rgba(192,98,58,0.2)" filter="url(#s)"/>
<text x="461" y="155" font-size="11" font-weight="600" fill="#9e4e2d" font-family="'JetBrains Mono', monospace">RRSIG</text>
<text x="505" y="155" font-size="11" fill="#6b5e4f">tag=34505, algo=13</text>
<text x="461" y="170" font-size="10" fill="#a39888">signer: cloudflare.com</text>
<!-- DNSKEY -->
<rect x="80" y="192" width="320" height="50" rx="6" fill="white" stroke="rgba(0,0,0,0.08)" filter="url(#s)"/>
<text x="96" y="211" font-size="11" font-weight="600" fill="#2c2418" font-family="'JetBrains Mono', monospace">DNSKEY</text>
<text x="156" y="211" font-size="11" fill="#6b5e4f">cloudflare.com, tag=34505</text>
<text x="96" y="228" font-size="11" fill="#6b7c4e" font-weight="500">ECDSA P-256</text>
<text x="194" y="228" font-size="10" fill="#a39888">— 174ns to verify</text>
<!-- RRSIG → DNSKEY arrow -->
<path d="M 555 176 L 555 192 L 400 192 L 400 200" stroke="#c0623a" stroke-width="1.5" fill="none" marker-end="url(#arr-amber)"/>
<text x="460" y="189" font-size="9" fill="#c0623a" font-weight="600">verified with</text>
<!-- ═══ ZONE: .com ═══ -->
<rect x="40" y="270" width="640" height="132" rx="8" fill="none" stroke="rgba(0,0,0,0.06)" stroke-dasharray="4,3"/>
<text x="56" y="288" font-size="10" font-weight="600" fill="#a39888" letter-spacing="0.08em" font-family="'JetBrains Mono', monospace">.COM TLD ZONE</text>
<!-- DS connecting zones -->
<line x1="240" y1="242" x2="240" y2="302" stroke="#6b7c4e" stroke-width="1.5" marker-end="url(#arr-teal)"/>
<text x="252" y="276" font-size="9" fill="#6b7c4e" font-weight="600">vouched for by</text>
<!-- DS record at .com -->
<rect x="80" y="304" width="320" height="38" rx="6" fill="rgba(107,124,78,0.06)" stroke="rgba(107,124,78,0.2)" filter="url(#s)"/>
<text x="96" y="321" font-size="11" font-weight="600" fill="#566540" font-family="'JetBrains Mono', monospace">DS</text>
<text x="118" y="321" font-size="11" fill="#6b5e4f">tag=2371, digest=SHA-256</text>
<text x="96" y="336" font-size="10" fill="#a39888">hash of cloudflare.com DNSKEY</text>
<!-- DS signed by RRSIG -->
<line x1="400" y1="323" x2="440" y2="323" stroke="#c0623a" stroke-width="1.5" marker-end="url(#arr-amber)"/>
<text x="412" y="315" font-size="9" fill="#c0623a" font-weight="600">signed by</text>
<rect x="445" y="304" width="220" height="38" rx="6" fill="rgba(192,98,58,0.06)" stroke="rgba(192,98,58,0.2)" filter="url(#s)"/>
<text x="461" y="321" font-size="11" font-weight="600" fill="#9e4e2d" font-family="'JetBrains Mono', monospace">RRSIG</text>
<text x="505" y="321" font-size="11" fill="#6b5e4f">tag=19718, signer=com</text>
<!-- .com DNSKEY -->
<rect x="80" y="356" width="320" height="32" rx="6" fill="white" stroke="rgba(0,0,0,0.08)" filter="url(#s)"/>
<text x="96" y="377" font-size="11" font-weight="600" fill="#2c2418" font-family="'JetBrains Mono', monospace">DNSKEY</text>
<text x="156" y="377" font-size="11" fill="#6b5e4f">com, tag=19718</text>
<!-- RRSIG → .com DNSKEY -->
<path d="M 555 342 L 555 356 L 400 356 L 400 366" stroke="#c0623a" stroke-width="1.5" fill="none" marker-end="url(#arr-amber)"/>
<text x="460" y="353" font-size="9" fill="#c0623a" font-weight="600">verified with</text>
<!-- ═══ ZONE: root ═══ -->
<rect x="40" y="404" width="640" height="132" rx="8" fill="none" stroke="rgba(0,0,0,0.06)" stroke-dasharray="4,3"/>
<text x="56" y="422" font-size="10" font-weight="600" fill="#a39888" letter-spacing="0.08em" font-family="'JetBrains Mono', monospace">ROOT ZONE (.)</text>
<!-- DS connecting .com → root -->
<line x1="240" y1="388" x2="240" y2="436" stroke="#6b7c4e" stroke-width="1.5" marker-end="url(#arr-teal)"/>
<text x="252" y="416" font-size="9" fill="#6b7c4e" font-weight="600">vouched for by</text>
<!-- DS at root -->
<rect x="80" y="438" width="320" height="38" rx="6" fill="rgba(107,124,78,0.06)" stroke="rgba(107,124,78,0.2)" filter="url(#s)"/>
<text x="96" y="455" font-size="11" font-weight="600" fill="#566540" font-family="'JetBrains Mono', monospace">DS</text>
<text x="118" y="455" font-size="11" fill="#6b5e4f">tag=30909, digest=SHA-256</text>
<text x="96" y="470" font-size="10" fill="#a39888">hash of com DNSKEY</text>
<!-- DS signed by root RRSIG -->
<line x1="400" y1="457" x2="440" y2="457" stroke="#c0623a" stroke-width="1.5" marker-end="url(#arr-amber)"/>
<text x="412" y="449" font-size="9" fill="#c0623a" font-weight="600">signed by</text>
<rect x="445" y="438" width="220" height="38" rx="6" fill="rgba(192,98,58,0.06)" stroke="rgba(192,98,58,0.2)" filter="url(#s)"/>
<text x="461" y="455" font-size="11" font-weight="600" fill="#9e4e2d" font-family="'JetBrains Mono', monospace">RRSIG</text>
<text x="505" y="455" font-size="11" fill="#6b5e4f">signer=.</text>
<!-- Root DNSKEY -->
<rect x="80" y="490" width="320" height="32" rx="6" fill="white" stroke="rgba(0,0,0,0.08)" filter="url(#s)"/>
<text x="96" y="511" font-size="11" font-weight="600" fill="#2c2418" font-family="'JetBrains Mono', monospace">DNSKEY</text>
<text x="156" y="511" font-size="11" fill="#6b5e4f">root (.), tag=20326, RSA/SHA-256</text>
<!-- RRSIG → root DNSKEY -->
<path d="M 555 476 L 555 490 L 400 490 L 400 500" stroke="#c0623a" stroke-width="1.5" fill="none" marker-end="url(#arr-amber)"/>
<text x="460" y="487" font-size="9" fill="#c0623a" font-weight="600">verified with</text>
<!-- ═══ TRUST ANCHOR ═══ -->
<line x1="240" y1="522" x2="240" y2="558" stroke="#2c2418" stroke-width="2" stroke-dasharray="4,3"/>
<rect x="120" y="560" width="480" height="52" rx="8" fill="#2c2418" filter="url(#s)"/>
<text x="360" y="582" text-anchor="middle" font-size="12" font-weight="600" fill="#faf7f2" font-family="'JetBrains Mono', monospace">ROOT TRUST ANCHOR</text>
<text x="360" y="600" text-anchor="middle" font-size="11" fill="#a39888">IANA KSK, key_tag=20326 — hardcoded in Numa as const [u8; 256]</text>
<!-- Flow summary -->
<text x="360" y="646" text-anchor="middle" font-size="12" fill="#6b5e4f" font-style="italic">Trust flows up (DS records). Keys flow down (DNSKEY → RRSIG).</text>
<text x="360" y="664" text-anchor="middle" font-size="11" fill="#a39888">If any link breaks — wrong signature, missing DS, expired RRSIG — Numa rejects the response.</text>
</svg>


site/blog/index.html Normal file

@@ -0,0 +1,193 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Blog — Numa</title>
<meta name="description" content="Technical writing about DNS, Rust, and building infrastructure from scratch.">
<link rel="stylesheet" href="/fonts/fonts.css">
<style>
*, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
:root {
--bg-deep: #f5f0e8;
--bg-surface: #ece5da;
--bg-card: #faf7f2;
--amber: #c0623a;
--amber-dim: #9e4e2d;
--teal: #6b7c4e;
--text-primary: #2c2418;
--text-secondary: #6b5e4f;
--text-dim: #a39888;
--border: rgba(0, 0, 0, 0.08);
--font-display: 'Instrument Serif', Georgia, serif;
--font-body: 'DM Sans', system-ui, sans-serif;
--font-mono: 'JetBrains Mono', monospace;
}
body {
background: var(--bg-deep);
color: var(--text-primary);
font-family: var(--font-body);
font-weight: 400;
line-height: 1.7;
-webkit-font-smoothing: antialiased;
}
body::before {
content: '';
position: fixed;
inset: 0;
background-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 256 256' xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='n'%3E%3CfeTurbulence type='fractalNoise' baseFrequency='0.9' numOctaves='4' stitchTiles='stitch'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23n)' opacity='0.025'/%3E%3C/svg%3E");
pointer-events: none;
z-index: 9999;
}
.blog-nav {
padding: 1.5rem 2rem;
display: flex;
align-items: center;
gap: 1.5rem;
}
.blog-nav a {
font-family: var(--font-mono);
font-size: 0.75rem;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-dim);
text-decoration: none;
transition: color 0.2s;
}
.blog-nav a:hover { color: var(--amber); }
.blog-nav .wordmark {
font-family: var(--font-display);
font-size: 1.4rem;
font-weight: 400;
color: var(--text-primary);
text-decoration: none;
letter-spacing: -0.02em;
}
.blog-nav .wordmark:hover { color: var(--amber); }
.blog-nav .sep {
color: var(--text-dim);
font-family: var(--font-mono);
font-size: 0.75rem;
}
.blog-index {
max-width: 720px;
margin: 0 auto;
padding: 3rem 2rem 6rem;
}
.blog-index h1 {
font-family: var(--font-display);
font-weight: 400;
font-size: 2.5rem;
margin-bottom: 3rem;
}
.post-list {
list-style: none;
}
.post-list li {
padding: 1.5rem 0;
border-bottom: 1px solid var(--border);
}
.post-list li:first-child {
border-top: 1px solid var(--border);
}
.post-list a {
text-decoration: none;
display: block;
}
.post-list .post-title {
font-family: var(--font-display);
font-size: 1.4rem;
font-weight: 600;
color: var(--text-primary);
line-height: 1.3;
margin-bottom: 0.4rem;
transition: color 0.2s;
}
.post-list a:hover .post-title {
color: var(--amber);
}
.post-list .post-desc {
font-size: 0.95rem;
color: var(--text-secondary);
line-height: 1.5;
margin-bottom: 0.4rem;
}
.post-list .post-date {
font-family: var(--font-mono);
font-size: 0.72rem;
color: var(--text-dim);
letter-spacing: 0.04em;
}
.blog-footer {
text-align: center;
padding: 3rem 2rem;
border-top: 1px solid var(--border);
max-width: 720px;
margin: 0 auto;
}
.blog-footer a {
font-family: var(--font-mono);
font-size: 0.75rem;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-dim);
text-decoration: none;
margin: 0 1rem;
}
.blog-footer a:hover { color: var(--amber); }
</style>
</head>
<body>
<nav class="blog-nav">
<a href="/" class="wordmark">Numa</a>
<span class="sep">/</span>
<a href="/blog/">Blog</a>
</nav>
<main class="blog-index">
<h1>Blog</h1>
<ul class="post-list">
<li>
<a href="/blog/posts/dnssec-from-scratch.html">
<div class="post-title">Implementing DNSSEC from Scratch in Rust</div>
<div class="post-desc">Recursive resolution from root hints, chain-of-trust validation, NSEC/NSEC3 denial proofs, and what I learned implementing DNSSEC with zero DNS libraries.</div>
<div class="post-date">March 2026</div>
</a>
</li>
<li>
<a href="/blog/posts/dns-from-scratch.html">
<div class="post-title">I Built a DNS Resolver from Scratch in Rust</div>
<div class="post-desc">How DNS actually works at the wire level — label compression, TTL tricks, DoH implementation, and what I learned building a resolver with zero DNS libraries.</div>
<div class="post-date">March 2026</div>
</a>
</li>
</ul>
</main>
<footer class="blog-footer">
<a href="https://github.com/razvandimescu/numa">GitHub</a>
<a href="/">Home</a>
</footer>
</body>
</html>


@@ -4,9 +4,7 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Numa — Dashboard</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Instrument+Serif:ital@0;1&family=DM+Sans:opsz,wght@9..40,400;9..40,500;9..40,600&family=JetBrains+Mono:wght@400;500&display=swap" rel="stylesheet">
<link rel="stylesheet" href="/fonts/fonts.css">
<style>
*, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
@@ -103,7 +101,7 @@ body {
/* Stat cards row */
.stats-row {
display: grid;
grid-template-columns: repeat(5, 1fr);
grid-template-columns: repeat(6, 1fr);
gap: 1rem;
}
.stat-card {
@@ -127,6 +125,8 @@ body {
.stat-card.blocked::before { background: var(--rose); }
.stat-card.overrides::before { background: var(--violet); }
.stat-card.uptime::before { background: var(--cyan); }
.stat-card.memory::before { background: var(--text-dim); }
.stat-card.memory .stat-value { color: var(--text-secondary); }
.stat-label {
font-size: 0.7rem;
@@ -217,6 +217,7 @@ body {
min-width: 2px;
}
.path-bar-fill.forward { background: var(--amber); }
.path-bar-fill.recursive { background: var(--cyan); }
.path-bar-fill.cached { background: var(--teal); }
.path-bar-fill.local { background: var(--violet); }
.path-bar-fill.override { background: var(--emerald); }
@@ -280,11 +281,13 @@ body {
font-weight: 500;
}
.path-tag.FORWARD { background: rgba(192, 98, 58, 0.12); color: var(--amber-dim); }
.path-tag.RECURSIVE { background: rgba(74, 124, 138, 0.12); color: var(--cyan); }
.path-tag.CACHED { background: rgba(107, 124, 78, 0.12); color: var(--teal-dim); }
.path-tag.LOCAL { background: rgba(100, 116, 139, 0.12); color: var(--violet-dim); }
.path-tag.OVERRIDE { background: rgba(82, 122, 82, 0.12); color: var(--emerald); }
.path-tag.SERVFAIL { background: rgba(181, 68, 58, 0.12); color: var(--rose); }
.path-tag.BLOCKED { background: rgba(163, 152, 136, 0.15); color: var(--text-dim); }
.path-tag.COALESCED { background: rgba(138, 104, 158, 0.12); color: var(--violet-dim); }
/* Sidebar panels */
.sidebar {
@@ -467,10 +470,74 @@ body {
display: none;
}
/* Memory sidebar panel */
.memory-bar {
display: flex;
height: 18px;
border-radius: 4px;
overflow: hidden;
background: var(--bg-surface);
margin-bottom: 0.8rem;
}
.memory-bar-seg {
height: 100%;
min-width: 2px;
transition: width 0.6s ease;
}
.memory-bar-seg.cache { background: var(--teal); }
.memory-bar-seg.blocklist { background: var(--rose); }
.memory-bar-seg.querylog { background: var(--amber); }
.memory-bar-seg.srtt { background: var(--cyan); }
.memory-bar-seg.overrides { background: var(--violet); }
.memory-row {
display: flex;
align-items: center;
padding: 0.3rem 0;
border-bottom: 1px solid var(--border);
font-family: var(--font-mono);
font-size: 0.72rem;
}
.memory-row:last-child { border-bottom: none; }
.memory-row-dot {
width: 8px;
height: 8px;
border-radius: 2px;
flex-shrink: 0;
margin-right: 0.5rem;
}
.memory-row-label {
flex: 1;
color: var(--text-secondary);
}
.memory-row-size {
width: 65px;
text-align: right;
color: var(--text-primary);
font-weight: 500;
}
.memory-row-entries {
width: 90px;
text-align: right;
color: var(--text-dim);
}
.memory-rss {
margin-top: 0.5rem;
padding-top: 0.5rem;
border-top: 1px solid var(--border);
display: flex;
justify-content: space-between;
font-family: var(--font-mono);
font-size: 0.72rem;
color: var(--text-dim);
}
/* Responsive */
@media (max-width: 1100px) {
.main-grid { grid-template-columns: 1fr; }
}
@media (max-width: 900px) {
.stats-row { grid-template-columns: repeat(3, 1fr); }
}
@media (max-width: 700px) {
.stats-row { grid-template-columns: repeat(2, 1fr); }
.dashboard { padding: 1rem; }
@@ -523,6 +590,11 @@ body {
<div class="stat-value" id="uptime"></div>
<div class="stat-sub" id="uptimeSub">&nbsp;</div>
</div>
<div class="stat-card memory">
<div class="stat-label">Memory</div>
<div class="stat-value" id="memoryRss"></div>
<div class="stat-sub" id="memorySub">&nbsp;</div>
</div>
</div>
<!-- Resolution paths -->
@@ -547,6 +619,8 @@ body {
<select id="logFilterPath" onchange="applyLogFilter()"
style="font-family:var(--font-mono);font-size:0.7rem;padding:0.25rem 0.4rem;border:1px solid var(--border);border-radius:4px;background:var(--bg-surface);color:var(--text-secondary);outline:none;">
<option value="">all paths</option>
<option value="RECURSIVE">recursive</option>
<option value="COALESCED">coalesced</option>
<option value="FORWARD">forward</option>
<option value="CACHED">cached</option>
<option value="BLOCKED">blocked</option>
@@ -580,10 +654,11 @@ body {
<!-- Local services -->
<div class="panel">
<div class="panel-header">
<div>
<div style="flex:1;">
<span class="panel-title">Local Services</span>
<div style="font-size:0.68rem;color:var(--text-dim);margin-top:0.15rem;">Give localhost apps clean .numa URLs. Persistent, with HTTP proxy.</div>
</div>
<span id="lanToggle" style="font-family:var(--font-mono);font-size:0.68rem;cursor:default;user-select:none;" title=""></span>
</div>
<div class="panel-body">
<form class="override-form" id="serviceForm" onsubmit="return addService(event)">
@@ -644,6 +719,17 @@ body {
</div>
</div>
<!-- Memory breakdown -->
<div class="panel" id="memoryPanel">
<div class="panel-header">
<span class="panel-title">Memory</span>
<span class="panel-title" id="memoryTotal" style="color: var(--text-dim)"></span>
</div>
<div class="panel-body" id="memoryBody">
<div class="empty-state">No memory data</div>
</div>
</div>
<!-- Cache entries -->
<div class="panel">
<div class="panel-header">
@@ -660,6 +746,7 @@ body {
<script>
const API = '';
const h = s => String(s).replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;').replace(/"/g,'&quot;').replace(/'/g,'&#39;');
let prevTotal = null;
let lastLogEntries = [];
let prevTime = null;
@@ -707,8 +794,72 @@ function formatRemaining(secs) {
return `${Math.floor(secs / 3600)}h ${Math.floor((secs % 3600) / 60)}m left`;
}
function formatBytes(bytes) {
if (bytes === 0) return '0 B';
if (bytes < 1024) return bytes + ' B';
if (bytes < 1048576) return (bytes / 1024).toFixed(1) + ' KB';
if (bytes < 1073741824) return (bytes / 1048576).toFixed(1) + ' MB';
return (bytes / 1073741824).toFixed(1) + ' GB';
}
const MEMORY_COMPONENTS = [
{ key: 'cache', label: 'Cache', cls: 'cache', color: 'var(--teal)' },
{ key: 'blocklist', label: 'Blocklist', cls: 'blocklist', color: 'var(--rose)' },
{ key: 'query_log', label: 'Query Log', cls: 'querylog', color: 'var(--amber)' },
{ key: 'srtt', label: 'SRTT', cls: 'srtt', color: 'var(--cyan)' },
{ key: 'overrides', label: 'Overrides', cls: 'overrides', color: 'var(--violet)' },
];
function renderMemory(mem, stats) {
if (!mem) return;
// Stat card
document.getElementById('memoryRss').textContent = formatBytes(mem.process_memory_bytes);
document.getElementById('memorySub').textContent = 'est. ' + formatBytes(mem.total_estimated_bytes);
const entryCounts = {
cache: stats.cache.entries,
blocklist: stats.blocking.domains_loaded,
query_log: mem.query_log_entries,
srtt: mem.srtt_entries,
overrides: stats.overrides.active,
};
// Sidebar panel
const total = mem.total_estimated_bytes || 1;
document.getElementById('memoryTotal').textContent = formatBytes(total);
const barSegments = MEMORY_COMPONENTS.map(c => {
const bytes = mem[c.key + '_bytes'] || 0;
const pct = ((bytes / total) * 100).toFixed(1);
return `<div class="memory-bar-seg ${c.cls}" style="width:${pct}%" title="${c.label}: ${formatBytes(bytes)} (${pct}%)"></div>`;
}).join('');
const rows = MEMORY_COMPONENTS.map(c => {
const bytes = mem[c.key + '_bytes'] || 0;
const entries = entryCounts[c.key] || 0;
return `
<div class="memory-row">
<div class="memory-row-dot" style="background:${c.color}"></div>
<span class="memory-row-label">${c.label}</span>
<span class="memory-row-size">${formatBytes(bytes)}</span>
<span class="memory-row-entries">${formatNumber(entries)} entries</span>
</div>`;
}).join('');
document.getElementById('memoryBody').innerHTML = `
<div class="memory-bar">${barSegments}</div>
${rows}
<div class="memory-rss">
<span>Process Footprint</span>
<span>${formatBytes(mem.process_memory_bytes)}</span>
</div>
`;
}
const PATH_DEFS = [
{ key: 'forwarded', label: 'Forward', cls: 'forward' },
{ key: 'recursive', label: 'Recursive', cls: 'recursive' },
{ key: 'cached', label: 'Cached', cls: 'cached' },
{ key: 'local', label: 'Local', cls: 'local' },
{ key: 'overridden', label: 'Override', cls: 'override' },
@@ -766,7 +917,7 @@ function applyLogFilter() {
<td>${e.query_type}</td>
<td class="domain-cell" title="${e.domain}">${e.domain}${allowBtn}</td>
<td><span class="path-tag ${e.path}">${e.path}</span></td>
<td>${e.rescode}</td>
<td style="white-space:nowrap;"><span style="display:inline-block;width:15px;text-align:center;">${e.dnssec === 'secure' ? '<svg title="DNSSEC verified" width="12" height="12" viewBox="0 0 24 24" fill="none" stroke="var(--emerald)" stroke-width="2.5" stroke-linecap="round" stroke-linejoin="round" style="vertical-align:-1px;"><path d="M12 22s8-4 8-10V5l-8-3-8 3v7c0 6 8 10 8 10z"/><path d="m9 12 2 2 4-4"/></svg>' : ''}</span>${e.rescode}</td>
<td>${e.latency_ms.toFixed(1)}ms</td>
</tr>`;
}).join('');
@@ -874,6 +1025,31 @@ async function refresh() {
document.getElementById('uptime').textContent = formatUptime(stats.uptime_secs);
document.getElementById('uptimeSub').textContent = formatUptimeSub(stats.uptime_secs);
document.getElementById('footerUpstream').textContent = stats.upstream || '';
document.getElementById('footerConfig').textContent = stats.config_path || '';
document.getElementById('footerData').textContent = stats.data_dir || '';
const modeEl = document.getElementById('footerMode');
modeEl.textContent = stats.mode || '—';
modeEl.style.color = stats.mode === 'recursive' ? 'var(--emerald)' : 'var(--amber)';
document.getElementById('footerDnssec').textContent = stats.dnssec ? 'on' : 'off';
document.getElementById('footerDnssec').style.color = stats.dnssec ? 'var(--emerald)' : 'var(--text-dim)';
document.getElementById('footerSrtt').textContent = stats.srtt ? 'on' : 'off';
document.getElementById('footerSrtt').style.color = stats.srtt ? 'var(--emerald)' : 'var(--text-dim)';
// LAN status indicator
const lanEl = document.getElementById('lanToggle');
if (stats.lan) {
if (!stats.lan.enabled) {
lanEl.style.color = 'var(--text-dim)';
lanEl.textContent = 'LAN off';
lanEl.title = 'Enable with: numa lan on';
} else {
const pc = stats.lan.peers || 0;
lanEl.style.color = pc > 0 ? 'var(--emerald)' : 'var(--teal)';
lanEl.textContent = `LAN on · ${pc} peer${pc !== 1 ? 's' : ''}`;
lanEl.title = 'mDNS discovery active (_numa._tcp.local)';
}
}
document.getElementById('overrideCount').textContent = stats.overrides.active;
document.getElementById('blockedCount').textContent = formatNumber(q.blocked);
const bl = stats.blocking;
@@ -917,7 +1093,7 @@ async function refresh() {
prevTime = now;
// Cache hit rate
const answered = q.cached + q.forwarded + q.local + q.overridden;
const answered = q.cached + q.forwarded + q.recursive + q.coalesced + q.local + q.overridden;
const hitRate = answered > 0 ? ((q.cached / answered) * 100).toFixed(1) : '0.0';
document.getElementById('cacheRate').textContent = hitRate + '%';
@@ -929,6 +1105,7 @@ async function refresh() {
renderServices(services);
renderBlockingInfo(blockingInfo);
renderAllowlist(allowlist);
renderMemory(stats.memory, stats);
} catch (err) {
document.getElementById('statusDot').className = 'status-dot error';
@@ -989,14 +1166,14 @@ async function checkDomain(event) {
if (result.blocked) {
el.style.background = 'rgba(181, 68, 58, 0.1)';
el.style.color = 'var(--rose)';
el.innerHTML = `<strong>Blocked</strong> — ${result.reason}` +
(result.matched_rule ? `<br>Rule: <code>${result.matched_rule}</code>` : '') +
` <button class="btn-delete" onclick="allowDomain('${domain}')" style="color:var(--emerald);font-size:0.7rem;margin-left:0.4rem;">allow</button>`;
el.innerHTML = `<strong>Blocked</strong> — ${h(result.reason)}` +
(result.matched_rule ? `<br>Rule: <code>${h(result.matched_rule)}</code>` : '') +
` <button class="btn-delete" onclick="allowDomain('${h(domain)}')" style="color:var(--emerald);font-size:0.7rem;margin-left:0.4rem;">allow</button>`;
} else {
el.style.background = 'rgba(82, 122, 82, 0.1)';
el.style.color = 'var(--emerald)';
el.innerHTML = `<strong>Allowed</strong> — ${result.reason}` +
(result.matched_rule ? `<br>Rule: <code>${result.matched_rule}</code>` : '');
el.innerHTML = `<strong>Allowed</strong> — ${h(result.reason)}` +
(result.matched_rule ? `<br>Rule: <code>${h(result.matched_rule)}</code>` : '');
}
} catch (err) {
el.style.display = 'block';
@@ -1086,7 +1263,10 @@ async function removeAllowlistDomain(domain) {
} catch (err) {}
}
let editingRoute = false;
function renderServices(entries) {
if (editingRoute) return;
const el = document.getElementById('servicesList');
if (!entries.length) {
el.innerHTML = '<div class="empty-state">No services configured</div>';
@@ -1098,18 +1278,69 @@ function renderServices(entries) {
? '<span class="lan-badge shared" title="Reachable from other devices on the network">LAN</span>'
: '<span class="lan-badge local-only" title="Bound to localhost — not reachable from other devices. Start with 0.0.0.0 to share on LAN.">local only</span>')
: '';
const routeLines = (e.routes || []).map(r =>
`<div class="service-port" style="color:var(--text-dim);display:flex;align-items:center;gap:0.3rem;">` +
`<span style="display:inline-block;min-width:60px;">${h(r.path)}</span> ` +
`&rarr; :${parseInt(r.port)||0}` +
(r.strip ? ` <span style="opacity:0.6;">(strip)</span>` : '') +
(e.name === 'numa' ? '' : ` <button class="btn-delete" onclick="deleteRoute('${h(e.name)}','${h(r.path)}')" title="Remove route" style="font-size:0.65rem;padding:0 0.25rem;min-width:auto;opacity:0.5;">&times;</button>`) +
`</div>`
).join('');
const deletable = e.source !== 'config' && e.name !== 'numa';
const name = h(e.name);
return `
<div class="service-item">
<span class="health-dot ${e.healthy ? 'up' : 'down'}" title="${e.healthy ? 'running' : 'not reachable'}"></span>
<div class="service-info">
<div class="service-name"><a href="${e.url}" target="_blank">${e.name}.numa</a>${lanBadge}</div>
<div class="service-port">localhost:${e.target_port} &rarr; proxied</div>
<div class="service-name"><a href="${h(e.url)}" target="_blank">${name}.numa</a>${lanBadge}</div>
<div class="service-port">localhost:${parseInt(e.target_port)||0} &rarr; proxied</div>
${routeLines}
${e.name === 'numa' ? '' : `<div style="margin-top:0.3rem;"><button onclick="toggleRouteForm('${name}')" style="font-size:0.7rem;padding:0.1rem 0.4rem;background:var(--emerald);color:var(--bg);border:none;border-radius:4px;cursor:pointer;">+ route</button><div id="routeForm-${name}" style="display:none;margin-top:0.3rem;"><div style="display:flex;gap:0.3rem;align-items:center;"><input type="text" id="routePath-${name}" placeholder="/path" style="flex:2;padding:0.25rem 0.4rem;font-size:0.75rem;"><input type="number" id="routePort-${name}" value="${parseInt(e.target_port)||0}" min="1" max="65535" style="flex:1;padding:0.25rem 0.4rem;font-size:0.75rem;"><label style="font-size:0.7rem;color:var(--text-dim);display:flex;align-items:center;gap:0.2rem;"><input type="checkbox" id="routeStrip-${name}">strip</label><button onclick="addRoute('${name}')" style="font-size:0.7rem;padding:0.2rem 0.5rem;background:var(--emerald);color:var(--bg);border:none;border-radius:4px;cursor:pointer;">add</button></div><div class="override-error" id="routeError-${name}" style="display:none;font-size:0.7rem;"></div></div></div>`}
</div>
${e.name === 'numa' ? '' : `<button class="btn-delete" onclick="deleteService('${e.name}')" title="Remove service">&times;</button>`}
${deletable ? `<button class="btn-delete" onclick="deleteService('${name}')" title="Remove service">&times;</button>` : ''}
</div>
`}).join('');
}
function toggleRouteForm(name) {
const el = document.getElementById('routeForm-' + name);
const opening = el.style.display === 'none';
el.style.display = opening ? 'block' : 'none';
editingRoute = opening;
}
async function addRoute(name) {
const errEl = document.getElementById('routeError-' + name);
errEl.style.display = 'none';
try {
const path = document.getElementById('routePath-' + name).value.trim();
const port = parseInt(document.getElementById('routePort-' + name).value) || 0;
const strip = document.getElementById('routeStrip-' + name).checked;
const res = await fetch(API + '/services/' + encodeURIComponent(name) + '/routes', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ path, port, strip }),
});
if (!res.ok) throw new Error(await res.text());
editingRoute = false;
refresh();
} catch (err) {
errEl.textContent = err.message;
errEl.style.display = 'block';
}
}
async function deleteRoute(name, path) {
try {
await fetch(API + '/services/' + encodeURIComponent(name) + '/routes', {
method: 'DELETE',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ path }),
});
refresh();
} catch (err) { /* next refresh will update */ }
}
async function addService(event) {
event.preventDefault();
const errEl = document.getElementById('serviceError');
@@ -1151,8 +1382,13 @@ setInterval(refresh, 2000);
</script>
<div style="text-align:center;padding:0.8rem;font-family:var(--font-mono);font-size:0.68rem;color:var(--text-dim);">
Upstream: <span id="footerUpstream" style="user-select:all;color:var(--emerald);"></span>
· Logs: <span id="logPath" style="user-select:all;">macOS: /usr/local/var/log/numa.log · Linux: journalctl -u numa -f</span>
Config: <span id="footerConfig" style="user-select:all;color:var(--emerald);"></span>
· Data: <span id="footerData" style="user-select:all;color:var(--emerald);"></span>
· Upstream: <span id="footerUpstream" style="user-select:all;color:var(--emerald);"></span>
· Mode: <span id="footerMode" style="color:var(--text-dim);"></span>
· DNSSEC: <span id="footerDnssec" style="color:var(--text-dim);"></span>
· SRTT: <span id="footerSrtt" style="color:var(--text-dim);"></span>
· Logs: <span style="user-select:all;color:var(--emerald);">macOS: /usr/local/var/log/numa.log · Linux: journalctl -u numa -f</span>
· <a href="https://github.com/razvandimescu/numa" target="_blank" rel="noopener" style="color:var(--amber);text-decoration:none;">GitHub</a>
</div>
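A recurring fix in the dashboard hunks above is routing interpolated values through the `h()` escaper before they reach `innerHTML` (e.g. `${h(result.reason)}` replacing `${result.reason}`). A standalone sketch of why that matters — this copy of the helper mirrors the one in the diff, and the payload string is just an illustration:

```javascript
// HTML-escape untrusted text before interpolating it into innerHTML.
const h = s => String(s)
  .replace(/&/g, '&amp;')
  .replace(/</g, '&lt;')
  .replace(/>/g, '&gt;')
  .replace(/"/g, '&quot;')
  .replace(/'/g, '&#39;');

// A blocklist reason or matched rule can carry attacker-influenced text
// (a hostile domain name, for instance). Raw interpolation turns it into
// live markup; escaped, it renders as inert text.
const reason = '<img src=x onerror=alert(1)>';
const unsafe = `<strong>Blocked</strong> — ${reason}`;
const safe   = `<strong>Blocked</strong> — ${h(reason)}`;
console.log(safe.includes('<img')); // false: the payload is neutralized
```

Note the ordering: `&` must be escaped first, or the entities produced by the later replacements would themselves be double-escaped.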

site/fonts/fonts.css Normal file

@@ -0,0 +1,36 @@
/* Self-hosted fonts — no external requests to Google */
@font-face {
font-family: 'Instrument Serif';
font-style: normal;
font-weight: 400;
font-display: swap;
src: url(/fonts/instrument-serif-latin.woff2) format('woff2');
}
@font-face {
font-family: 'Instrument Serif';
font-style: italic;
font-weight: 400;
font-display: swap;
src: url(/fonts/instrument-serif-italic-latin.woff2) format('woff2');
}
@font-face {
font-family: 'DM Sans';
font-style: normal;
font-weight: 400 600;
font-display: swap;
src: url(/fonts/dm-sans-latin.woff2) format('woff2');
}
@font-face {
font-family: 'DM Sans';
font-style: italic;
font-weight: 400;
font-display: swap;
src: url(/fonts/dm-sans-italic-latin.woff2) format('woff2');
}
@font-face {
font-family: 'JetBrains Mono';
font-style: normal;
font-weight: 400 500;
font-display: swap;
src: url(/fonts/jetbrains-mono-latin.woff2) format('woff2');
}


@@ -3,11 +3,14 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Numa — DNS that governs itself</title>
<meta name="description" content="DNS you own. Block ads, override DNS for development, name your local services with .numa domains, cache for speed. A single portable binary built from scratch in Rust.">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Instrument+Serif:ital@0;1&family=DM+Sans:ital,opsz,wght@0,9..40,400;0,9..40,500;0,9..40,600;1,9..40,400&family=JetBrains+Mono:wght@400;500&display=swap" rel="stylesheet">
<title>Numa — DNS you own. Everywhere you go.</title>
<meta name="description" content="DNS you own. Portable DNS resolver with caching, ad blocking, .numa local domains, developer overrides. Optional recursive resolution with full DNSSEC validation. Built from scratch in Rust.">
<link rel="canonical" href="https://numa.rs">
<meta property="og:title" content="Numa — DNS you own. Everywhere you go.">
<meta property="og:description" content="Portable DNS resolver with caching, ad blocking, .numa local domains, and developer overrides. Optional recursive resolution with full DNSSEC validation. Built from scratch in Rust.">
<meta property="og:type" content="website">
<meta property="og:url" content="https://numa.rs">
<link rel="stylesheet" href="/fonts/fonts.css">
<style>
*, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
@@ -163,7 +166,7 @@ section {
h2 {
font-family: var(--font-display);
font-weight: 600;
font-weight: 400;
font-size: clamp(2rem, 4vw, 3rem);
line-height: 1.2;
margin-bottom: 1.5rem;
@@ -226,7 +229,7 @@ p.lead {
.hero .wordmark {
font-family: var(--font-display);
font-weight: 700;
font-weight: 400;
font-size: clamp(4.5rem, 12vw, 9rem);
line-height: 0.9;
letter-spacing: -0.03em;
@@ -508,7 +511,7 @@ p.lead {
.layer-card h3 {
font-family: var(--font-display);
font-size: 1.4rem;
font-weight: 600;
font-weight: 400;
margin-bottom: 1.25rem;
}
@@ -552,7 +555,7 @@ p.lead {
.arch-subsection h3 {
font-family: var(--font-display);
font-size: 1.5rem;
font-weight: 600;
font-weight: 400;
margin-bottom: 2rem;
}
@@ -785,6 +788,169 @@ p.lead {
background: rgba(82, 122, 82, 0.04);
}
/* ===========================
PERFORMANCE
=========================== */
.perf-section {
background: var(--bg-surface);
}
.perf-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 3rem;
margin-top: 3rem;
align-items: start;
}
.perf-table-wrapper {
overflow-x: auto;
border: 1px solid var(--border);
}
.perf-table {
width: 100%;
border-collapse: collapse;
font-size: 0.85rem;
min-width: 380px;
}
.perf-table thead th {
font-family: var(--font-mono);
font-size: 0.7rem;
letter-spacing: 0.08em;
text-transform: uppercase;
color: var(--text-dim);
padding: 0.8rem 1rem;
text-align: right;
border-bottom: 1px solid var(--border);
background: var(--bg-elevated);
font-weight: 500;
}
.perf-table thead th:first-child {
text-align: left;
}
.perf-table tbody td {
padding: 0.65rem 1rem;
border-bottom: 1px solid var(--border);
color: var(--text-secondary);
text-align: right;
font-family: var(--font-mono);
font-size: 0.82rem;
}
.perf-table tbody td:first-child {
font-family: var(--font-body);
font-size: 0.85rem;
color: var(--text-primary);
text-align: left;
font-weight: 400;
}
.perf-table tbody tr:hover {
background: var(--bg-elevated);
}
.perf-table tbody tr.perf-highlight td {
color: var(--emerald);
font-weight: 500;
}
.perf-table tbody tr.perf-highlight td:first-child {
color: var(--emerald);
}
.perf-sidebar {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.perf-stat {
background: var(--bg-card);
border: 1px solid var(--border);
padding: 1.5rem;
box-shadow: 0 1px 4px rgba(0,0,0,0.04);
}
.perf-stat-value {
font-family: var(--font-display);
font-size: 2.2rem;
font-weight: 400;
line-height: 1.1;
}
.perf-stat-value.emerald { color: var(--emerald); }
.perf-stat-value.teal { color: var(--teal); }
.perf-stat-value.amber { color: var(--amber); }
.perf-stat-label {
font-size: 0.82rem;
color: var(--text-secondary);
margin-top: 0.4rem;
}
.perf-bar-group {
margin-top: 1.5rem;
}
.perf-bar-row {
display: flex;
align-items: center;
gap: 0.75rem;
margin-bottom: 0.6rem;
}
.perf-bar-label {
font-size: 0.75rem;
color: var(--text-secondary);
width: 80px;
flex-shrink: 0;
text-align: right;
}
.perf-bar-track {
flex: 1;
height: 18px;
background: var(--bg-elevated);
border-radius: 2px;
overflow: hidden;
position: relative;
}
.perf-bar-fill {
height: 100%;
border-radius: 2px;
transition: width 0.6s ease;
}
.perf-bar-fill.emerald { background: var(--emerald); }
.perf-bar-fill.teal { background: var(--teal); }
.perf-bar-fill.dim { background: var(--text-dim); }
.perf-bar-ms {
font-family: var(--font-mono);
font-size: 0.7rem;
color: var(--text-dim);
width: 42px;
flex-shrink: 0;
}
.perf-note {
font-size: 0.78rem;
color: var(--text-dim);
margin-top: 2rem;
line-height: 1.6;
}
.perf-note a {
color: var(--teal-dim);
text-decoration: none;
border-bottom: 1px solid var(--border-teal);
}
/* ===========================
TECHNICAL
=========================== */
@@ -824,6 +990,8 @@ p.lead {
color: var(--text-secondary);
overflow-x: auto;
position: relative;
white-space: pre-wrap;
word-break: break-all;
}
.code-block::before {
@@ -980,6 +1148,7 @@ footer .closing {
.problem-grid { grid-template-columns: 1fr; gap: 2rem; }
.layers-grid { grid-template-columns: 1fr; }
.tech-grid { grid-template-columns: 1fr; }
.perf-grid { grid-template-columns: 1fr; }
.network-grid { grid-template-columns: repeat(2, 1fr); }
.network-connections { display: none; }
.hero-line { display: none; }
@@ -1036,9 +1205,9 @@ footer .closing {
</div>
<div class="problem-grid">
<div class="problem-text reveal reveal-delay-1">
<p>Every time you visit a website, you ask a DNS resolver where to go. That resolver sees every domain you visit, when, and how often.</p>
<p>Today, a handful of operators control this infrastructure. ICANN governs the root. Registrars can seize domains. Governments compel censorship. Your ISP logs your queries by default.</p>
<p>The protocol that underpins the entire internet has no built-in privacy, no cryptographic ownership, and no way for users to choose who they trust.</p>
<p>Every time you visit a website, you ask a DNS resolver where to go. That resolver sees every domain you visit, when, and how often. Your ISP logs these queries by default.</p>
<p>Ad blockers work in one browser. Pi-hole needs a Raspberry Pi. Your local dev services live at <code>localhost:5173</code> and you can never remember which port is which.</p>
<p>DNS is the foundation of everything you do on the internet, but the tools for controlling it locally are either too complex (dnsmasq + nginx + mkcert) or too limited (cloud-only, appliance-only).</p>
</div>
<div class="dns-diagram reveal reveal-delay-2">
<div class="dns-node"><span class="node-dot dim"></span>Your browser</div>
@@ -1062,44 +1231,44 @@ footer .closing {
<div class="container">
<div class="reveal">
<div class="section-label">How It Works</div>
<h2>Three layers, built incrementally</h2>
<p class="lead">Numa starts as a practical developer tool and evolves toward a decentralized network. Each layer stands on its own.</p>
<h2>What it does today</h2>
<p class="lead">A DNS resolver with caching, ad blocking, local service domains, and a REST API. Optional recursive resolution with DNSSEC. Everything runs in a single binary.</p>
</div>
<div class="layers-grid">
<div class="layer-card reveal reveal-delay-1">
<div class="layer-badge">Today</div>
<h3>DNS You Control</h3>
<div class="layer-badge">Layer 1</div>
<h3>Resolve &amp; Protect</h3>
<ul>
<li>Forward mode by default &mdash; transparent proxy to your existing DNS, with caching</li>
<li>Ad &amp; tracker blocking &mdash; 385K+ domains, zero config</li>
<li>Ephemeral DNS overrides with auto-revert</li>
<li>Local service proxy &mdash; <code>frontend.numa</code> instead of <code>localhost:5173</code></li>
<li>Live dashboard with real-time stats and controls</li>
<li>REST API &mdash; 22 endpoints for programmatic control</li>
<li>Recursive resolution &mdash; opt-in, resolve from root nameservers, no upstream needed</li>
<li>DNSSEC validation &mdash; chain-of-trust + NSEC/NSEC3 denial proofs (RSA, ECDSA, Ed25519)</li>
<li>TTL-aware caching (sub-ms lookups)</li>
<li>Single binary, portable &mdash; your ad blocker travels with you</li>
<li>Single binary, portable &mdash; macOS, Linux, and Windows</li>
</ul>
</div>
<div class="layer-card reveal reveal-delay-2">
<div class="layer-badge">Next</div>
<h3>Self-Sovereign DNS</h3>
<div class="layer-badge">Layer 2</div>
<h3>Developer Tools</h3>
<ul>
<li>pkarr integration: Ed25519 keys as domains</li>
<li>Resolve via Mainline BitTorrent DHT (10M+ nodes)</li>
<li>No registrar, no blockchain, no ICANN</li>
<li>Cryptographic verification built-in</li>
<li>Human-readable aliases for pkarr domains</li>
<li>Local service proxy &mdash; <code>frontend.numa</code> instead of <code>localhost:5173</code></li>
<li>Path-based routing &mdash; <code>app.numa/api</code> &rarr; <code>:5001</code></li>
<li>Ephemeral DNS overrides with auto-revert</li>
<li>LAN service discovery via mDNS</li>
<li>Conditional forwarding &mdash; plays nice with Tailscale/VPN split-DNS</li>
<li>REST API &mdash; script everything, automate anything</li>
<li>Live dashboard with real-time stats and controls</li>
</ul>
</div>
<div class="layer-card reveal reveal-delay-3">
<div class="layer-badge">Vision</div>
<h3>Decentralized Resolver Network</h3>
<div class="layer-badge">Coming Next</div>
<h3>Self-Sovereign DNS</h3>
<ul>
<li>Operators run Numa nodes and stake tokens</li>
<li>Earn rewards for uptime, correctness, latency</li>
<li>Independent auditors send challenge queries</li>
<li>Slashing for NXDOMAIN hijacking or poisoned records</li>
<li>Geographic diversity bonuses</li>
<li>Privacy-preserving resolution (DoH/DoT)</li>
<li>pkarr integration &mdash; DNS via Mainline DHT, no registrar needed</li>
<li>Global <code>.numa</code> names &mdash; self-publish, DHT-backed</li>
<li>.onion bridge &mdash; human-readable names for Tor hidden services</li>
<li>Ed25519 same-key binding &mdash; zero new trust assumptions</li>
<li>No blockchain required for core naming</li>
</ul>
</div>
</div>
@@ -1131,66 +1300,14 @@ footer .closing {
<span class="pipeline-arrow">&rarr;</span>
<div class="pipeline-node"><div class="pipeline-box">Cache</div></div>
<span class="pipeline-arrow">&rarr;</span>
<div class="pipeline-node"><div class="pipeline-box hl-violet">pkarr / DHT</div></div>
<div class="pipeline-node"><div class="pipeline-box hl-violet">Recursive / Forward (DoH)</div></div>
<span class="pipeline-arrow">&rarr;</span>
<div class="pipeline-node"><div class="pipeline-box">Upstream</div></div>
<div class="pipeline-node"><div class="pipeline-box highlight">DNSSEC Validate</div></div>
<span class="pipeline-arrow">&rarr;</span>
<div class="pipeline-node"><div class="pipeline-box hl-emerald">Respond</div></div>
</div>
</div>
<div class="arch-subsection reveal">
<h3>Layered resilience</h3>
<div class="layer-stack">
<div class="stack-row">
<div class="stack-label" style="color: var(--violet)">L4 Permanence</div>
<div class="stack-value">Arweave immutable zone snapshots (future)</div>
</div>
<div class="stack-row">
<div class="stack-label" style="color: var(--violet-dim)">L3 Distribution</div>
<div class="stack-value">Mainline DHT via pkarr &mdash; 10M+ nodes</div>
</div>
<div class="stack-row">
<div class="stack-label" style="color: var(--amber)">L2 Serving</div>
<div class="stack-value">Numa instances worldwide</div>
</div>
<div class="stack-row">
<div class="stack-label" style="color: var(--teal)">L1 Compatibility</div>
<div class="stack-value">Standard DNS wire protocol &mdash; RFC 1035</div>
</div>
</div>
</div>
<div class="arch-subsection reveal">
<h3>Network actors</h3>
<div class="network-grid">
<div class="network-actor">
<span class="actor-icon" style="color: var(--teal)" aria-hidden="true">&compfn;</span>
<h4 style="color: var(--teal)">Users</h4>
<p>Choose resolvers from a decentralized marketplace based on latency, privacy, and reputation</p>
</div>
<div class="network-actor">
<span class="actor-icon" style="color: var(--amber)" aria-hidden="true">&diamond;</span>
<h4 style="color: var(--amber)">Operators</h4>
<p>Stake tokens, run Numa nodes, earn rewards proportional to verified service quality</p>
</div>
<div class="network-actor">
<span class="actor-icon" style="color: var(--rose)" aria-hidden="true">&target;</span>
<h4 style="color: var(--rose)">Auditors</h4>
<p>Send challenge queries from diverse locations, verify correctness and latency</p>
</div>
<div class="network-actor">
<span class="actor-icon" style="color: var(--violet)" aria-hidden="true">&equiv;</span>
<h4 style="color: var(--violet)">Chain</h4>
<p>Accounting, reputation scores, reward distribution, slashing proofs</p>
</div>
</div>
<div class="network-connections" aria-hidden="true">
<div class="network-conn-line"></div>
<div class="network-conn-line"></div>
<div class="network-conn-line"></div>
</div>
</div>
</div>
</section>
@@ -1217,6 +1334,14 @@ footer .closing {
</tr>
</thead>
<tbody>
<tr>
<td>Recursive resolver</td>
<td class="cross">No (needs Unbound)</td>
<td class="cross">Cloud only</td>
<td class="cross">Cloud only</td>
<td class="cross">No</td>
<td class="check">Root hints + full DNSSEC</td>
</tr>
<tr>
<td>Ad &amp; tracker blocking</td>
<td class="check">Yes</td>
@@ -1265,6 +1390,22 @@ footer .closing {
<td class="check">Yes</td>
<td class="check">Real-time + controls</td>
</tr>
<tr>
<td>DNS-over-HTTPS upstream</td>
<td class="cross">No</td>
<td class="check">Yes</td>
<td class="check">Yes</td>
<td class="cross">No</td>
<td class="check">Built in (HTTP/2 + rustls)</td>
</tr>
<tr>
<td>Conditional forwarding</td>
<td class="cross">No</td>
<td class="cross">No</td>
<td class="cross">No</td>
<td class="muted">Manual</td>
<td class="check">Auto-detects Tailscale/VPN</td>
</tr>
<tr>
<td>Zero config needed</td>
<td class="cross">Complex setup</td>
@@ -1273,14 +1414,6 @@ footer .closing {
<td class="cross">Docker/setup</td>
<td class="check">Works out of the box</td>
</tr>
<tr>
<td>Self-sovereign DNS roadmap</td>
<td class="cross">No</td>
<td class="cross">No</td>
<td class="cross">No</td>
<td class="cross">No</td>
<td class="check">pkarr / DHT</td>
</tr>
</tbody>
</table>
</div>
@@ -1289,6 +1422,133 @@ footer .closing {
<div class="section-road" aria-hidden="true"><div class="roman-bricks"></div></div>
<!-- ==================== PERFORMANCE ==================== -->
<section class="perf-section" id="performance">
<div class="container">
<div class="reveal">
<div class="section-label" style="color: var(--emerald)">Performance</div>
<h2>Measured, not claimed</h2>
<p class="lead">Benchmarked with <code style="font-size:0.85em">dig</code> against public resolvers on the same machine. Cached queries resolve in under a microsecond.</p>
</div>
<div class="perf-grid">
<div class="reveal reveal-delay-1">
<div class="perf-table-wrapper">
<table class="perf-table">
<caption class="sr-only">DNS resolver latency comparison</caption>
<thead>
<tr>
<th>Resolver</th>
<th>Avg</th>
<th>P50</th>
<th>P99</th>
</tr>
</thead>
<tbody>
<tr class="perf-highlight">
<td>Numa (cached)</td>
<td>&lt;1ms</td>
<td>&lt;1ms</td>
<td>&lt;1ms</td>
</tr>
<tr>
<td>Numa (cold)</td>
<td>9ms</td>
<td>9ms</td>
<td>18ms</td>
</tr>
<tr>
<td>System resolver</td>
<td>9ms</td>
<td>8ms</td>
<td>44ms</td>
</tr>
<tr>
<td>Quad9</td>
<td>15ms</td>
<td>13ms</td>
<td>43ms</td>
</tr>
<tr>
<td>Cloudflare</td>
<td>19ms</td>
<td>14ms</td>
<td>132ms</td>
</tr>
<tr>
<td>Google</td>
<td>22ms</td>
<td>17ms</td>
<td>37ms</td>
</tr>
</tbody>
</table>
</div>
<div class="perf-bar-group">
<div class="perf-bar-row">
<span class="perf-bar-label">Numa</span>
<div class="perf-bar-track"><div class="perf-bar-fill emerald" style="width: 2%"></div></div>
<span class="perf-bar-ms">&lt;1ms</span>
</div>
<div class="perf-bar-row">
<span class="perf-bar-label">System</span>
<div class="perf-bar-track"><div class="perf-bar-fill dim" style="width: 20%"></div></div>
<span class="perf-bar-ms">9ms</span>
</div>
<div class="perf-bar-row">
<span class="perf-bar-label">Quad9</span>
<div class="perf-bar-track"><div class="perf-bar-fill dim" style="width: 33%"></div></div>
<span class="perf-bar-ms">15ms</span>
</div>
<div class="perf-bar-row">
<span class="perf-bar-label">Cloudflare</span>
<div class="perf-bar-track"><div class="perf-bar-fill dim" style="width: 42%"></div></div>
<span class="perf-bar-ms">19ms</span>
</div>
<div class="perf-bar-row">
<span class="perf-bar-label">Google</span>
<div class="perf-bar-track"><div class="perf-bar-fill dim" style="width: 49%"></div></div>
<span class="perf-bar-ms">22ms</span>
</div>
</div>
</div>
<div class="perf-sidebar reveal reveal-delay-2">
<div class="perf-stat">
<div class="perf-stat-value emerald">689 ns</div>
<div class="perf-stat-label">Cached round-trip &mdash; parse query, cache lookup, serialize response</div>
</div>
<div class="perf-stat">
<div class="perf-stat-value teal">2.0M</div>
<div class="perf-stat-label">Queries per second (single-threaded pipeline throughput, batched)</div>
</div>
<div class="perf-stat">
<div class="perf-stat-value amber">0 allocations</div>
<div class="perf-stat-label">Heap allocations in the I/O path &mdash; 4KB stack buffers, inline serialization</div>
</div>
<div class="perf-stat">
<div class="perf-stat-value teal">174 ns</div>
<div class="perf-stat-label">ECDSA P-256 signature verification (DNSSEC). RSA/SHA-256: 10.9&micro;s. DS digest: 257ns.</div>
</div>
<div class="perf-stat">
<div class="perf-stat-value emerald">~90 ms</div>
<div class="perf-stat-label">Cold-cache DNSSEC validation &mdash; only 1 network fetch needed (TLD chain pre-warmed on startup)</div>
</div>
<p class="perf-note">
Cold queries match system resolver speed &mdash; the bottleneck is upstream RTT, not Numa. We don't claim to be faster when the network is the limit.
<br><br>
Benchmarks are reproducible: <code style="font-size:0.85em">cargo bench</code> for micro-benchmarks, <code style="font-size:0.85em">bash bench/dns-bench.sh</code> for end-to-end.
<a href="https://github.com/razvandimescu/numa/tree/main/bench">Methodology &rarr;</a>
</p>
</div>
</div>
</div>
</section>
<div class="section-road on-surface" aria-hidden="true"><div class="roman-bricks"></div></div>
<!-- ==================== TECHNICAL ==================== -->
<section id="technical">
<div class="container">
@@ -1304,26 +1564,34 @@ footer .closing {
<dt>DNS Libraries</dt>
<dd>Zero &mdash; wire protocol parsed from scratch</dd>
<dt>Resolution Modes</dt>
<dd>Recursive (iterative from root hints, CNAME chasing, glue extraction) or Forward (DoH / plain UDP)</dd>
<dt>DNSSEC</dt>
<dd>Chain-of-trust via ring &mdash; RSA/SHA-256, ECDSA P-256, Ed25519. NSEC/NSEC3 denial proofs. EDNS0 DO bit, 1232-byte payload (DNS Flag Day 2020).</dd>
<dt>Dependencies</dt>
<dd>8 runtime crates (tokio, axum, hyper, serde, serde_json, toml, log, futures)</dd>
<dd>19 runtime crates &mdash; tokio, axum, hyper, ring (DNSSEC), reqwest (DoH), rcgen + rustls (TLS), socket2 (multicast), serde, and more</dd>
<dt>Packet Format</dt>
<dd>RFC 1035 compliant, 4096-byte UDP (EDNS)</dd>
<dd>RFC 1035 compliant. EDNS0 OPT pseudo-record. Parses A, AAAA, NS, CNAME, MX, SOA, SRV, HTTPS, DNSKEY, DS, RRSIG, NSEC, NSEC3.</dd>
<dt>Concurrency</dt>
<dd>Arc&lt;ServerCtx&gt; + std::sync::Mutex (sub-&micro;s holds, never across .await)</dd>
<dt>Signatures</dt>
<dd>Ed25519 via pkarr for self-sovereign domains</dd>
<dd>Arc&lt;ServerCtx&gt; + RwLock for reads, Mutex for writes (never across .await)</dd>
</dl>
<div class="code-block reveal reveal-delay-2">
<span class="comment"># Install (pick one)</span>
<span class="prompt">$</span> <span class="cmd">brew install</span> razvandimescu/tap/numa
<span class="prompt">$</span> <span class="cmd">cargo install</span> numa
<span class="prompt">$</span> <span class="cmd">curl</span> <span class="flag">-fsSL</span> https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh <span class="flag">|</span> <span class="cmd">sh</span>
<span class="comment"># Run</span>
<span class="prompt">$</span> <span class="cmd">sudo numa</span> <span class="comment"># bind to :53, :80, :5380</span>
<span class="prompt">$</span> <span class="cmd">dig</span> <span class="flag">@127.0.0.1</span> google.com <span class="comment"># test resolution</span>
<span class="prompt">$</span> <span class="cmd">open</span> http://numa.numa <span class="comment"># dashboard</span>
<span class="prompt">$</span> <span class="cmd">open</span> http://localhost:5380 <span class="comment"># dashboard</span>
<span class="prompt">$</span> <span class="cmd">curl</span> <span class="flag">-X POST</span> localhost:5380/services \
<span class="flag">-d</span> <span class="str">'{"name":"frontend",
"target_port":5173}'</span> <span class="comment"># http://frontend.numa</span>
"target_port":5173}'</span> <span class="comment"># https://frontend.numa</span>
</div>
</div>
</div>
@@ -1345,7 +1613,7 @@ footer .closing {
</div>
<div class="roadmap-item done">
<span class="phase">Phase 1</span>
<span class="phase-desc">Override layer + REST API with 18 endpoints</span>
<span class="phase-desc">Override layer + REST API for programmatic DNS control</span>
</div>
<div class="roadmap-item done">
<span class="phase">Phase 2</span>
@@ -1359,25 +1627,29 @@ footer .closing {
<span class="phase">Phase 4</span>
<span class="phase-desc">Local service proxy &mdash; .numa domains, HTTP/HTTPS reverse proxy, auto TLS, WebSocket</span>
</div>
<div class="roadmap-item phase-teal">
<div class="roadmap-item done">
<span class="phase">Phase 5</span>
<span class="phase-desc">pkarr integration &mdash; resolve Ed25519 keys via Mainline DHT (15M nodes)</span>
<span class="phase-desc">DNS-over-HTTPS &mdash; encrypted upstream, HTTP/2 connection pooling</span>
</div>
<div class="roadmap-item done">
<span class="phase">Phase 6</span>
<span class="phase-desc">Recursive resolution &mdash; resolve from root nameservers, no upstream dependency</span>
</div>
<div class="roadmap-item done">
<span class="phase">Phase 7</span>
<span class="phase-desc">DNSSEC validation &mdash; chain-of-trust, NSEC/NSEC3 denial proofs, RSA + ECDSA + Ed25519</span>
</div>
<div class="roadmap-item phase-teal">
<span class="phase">Phase 6</span>
<span class="phase">Phase 8</span>
<span class="phase-desc">pkarr integration &mdash; self-sovereign DNS via Mainline DHT, no registrar needed</span>
</div>
<div class="roadmap-item phase-teal">
<span class="phase">Phase 9</span>
<span class="phase-desc">Global .numa names &mdash; self-publish, DHT-backed, first-come-first-served</span>
</div>
<div class="roadmap-item phase-amber">
<span class="phase">Phase 7</span>
<span class="phase-desc">Audit protocol &mdash; challenge-based verification of resolver honesty</span>
</div>
<div class="roadmap-item phase-violet">
<span class="phase">Phase 8</span>
<span class="phase-desc">Numa Network &mdash; proof-of-service consensus, NUMA token, paid .numa domains</span>
</div>
<div class="roadmap-item phase-violet">
<span class="phase">Phase 9</span>
<span class="phase-desc">.onion bridge &mdash; human-readable .numa names for Tor hidden services</span>
<div class="roadmap-item phase-teal">
<span class="phase">Phase 10</span>
<span class="phase-desc">.onion bridge &mdash; human-readable Tor naming via Ed25519 same-key binding</span>
</div>
</div>
</div>
@@ -1391,6 +1663,7 @@ footer .closing {
</p>
<div class="footer-links reveal reveal-delay-1">
<a href="https://github.com/razvandimescu/numa" target="_blank" rel="noopener">GitHub</a>
<a href="/blog/">Blog</a>
<a href="https://github.com/razvandimescu/numa/blob/main/LICENSE" target="_blank" rel="noopener">MIT License</a>
</div>
<p class="closing reveal reveal-delay-2">Built from scratch in Rust. No dependencies on trust.</p>


@@ -9,12 +9,19 @@ use axum::{Json, Router};
use serde::{Deserialize, Serialize};
use crate::ctx::ServerCtx;
use crate::forward::forward_query;
use crate::forward::{forward_query, Upstream};
use crate::query_log::QueryLogFilter;
use crate::question::QueryType;
use crate::stats::QueryPath;
const DASHBOARD_HTML: &str = include_str!("../site/dashboard.html");
const FONTS_CSS: &str = include_str!("../site/fonts/fonts.css");
const FONT_DM_SANS: &[u8] = include_bytes!("../site/fonts/dm-sans-latin.woff2");
const FONT_DM_SANS_ITALIC: &[u8] = include_bytes!("../site/fonts/dm-sans-italic-latin.woff2");
const FONT_INSTRUMENT: &[u8] = include_bytes!("../site/fonts/instrument-serif-latin.woff2");
const FONT_INSTRUMENT_ITALIC: &[u8] =
include_bytes!("../site/fonts/instrument-serif-italic-latin.woff2");
const FONT_JETBRAINS: &[u8] = include_bytes!("../site/fonts/jetbrains-mono-latin.woff2");
pub fn router(ctx: Arc<ServerCtx>) -> Router {
Router::new()
@@ -46,6 +53,31 @@ pub fn router(ctx: Arc<ServerCtx>) -> Router {
.route("/services", get(list_services))
.route("/services", post(create_service))
.route("/services/{name}", delete(remove_service))
.route("/services/{name}/routes", get(list_routes))
.route("/services/{name}/routes", post(add_route))
.route("/services/{name}/routes", delete(remove_route))
.route("/ca.pem", get(serve_ca))
.route("/fonts/fonts.css", get(serve_fonts_css))
.route(
"/fonts/dm-sans-latin.woff2",
get(|| async { serve_font(FONT_DM_SANS) }),
)
.route(
"/fonts/dm-sans-italic-latin.woff2",
get(|| async { serve_font(FONT_DM_SANS_ITALIC) }),
)
.route(
"/fonts/instrument-serif-latin.woff2",
get(|| async { serve_font(FONT_INSTRUMENT) }),
)
.route(
"/fonts/instrument-serif-italic-latin.woff2",
get(|| async { serve_font(FONT_INSTRUMENT_ITALIC) }),
)
.route(
"/fonts/jetbrains-mono-latin.woff2",
get(|| async { serve_font(FONT_JETBRAINS) }),
)
.with_state(ctx)
}
@@ -121,22 +153,38 @@ struct QueryLogResponse {
path: String,
rescode: String,
latency_ms: f64,
dnssec: String,
}
#[derive(Serialize)]
struct StatsResponse {
uptime_secs: u64,
upstream: String,
mode: &'static str, // "recursive" or "forward" — never "auto" at runtime
config_path: String,
data_dir: String,
dnssec: bool,
srtt: bool,
queries: QueriesStats,
cache: CacheStats,
overrides: OverrideStats,
blocking: BlockingStatsResponse,
lan: LanStatsResponse,
memory: MemoryStats,
}
#[derive(Serialize)]
struct LanStatsResponse {
enabled: bool,
peers: usize,
}
#[derive(Serialize)]
struct QueriesStats {
total: u64,
forwarded: u64,
recursive: u64,
coalesced: u64,
cached: u64,
local: u64,
overridden: u64,
@@ -163,6 +211,19 @@ struct BlockingStatsResponse {
allowlist_size: usize,
}
#[derive(Serialize)]
struct MemoryStats {
cache_bytes: usize,
blocklist_bytes: usize,
query_log_bytes: usize,
query_log_entries: usize,
srtt_bytes: usize,
srtt_entries: usize,
overrides_bytes: usize,
total_estimated_bytes: usize,
process_memory_bytes: usize,
}
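The `*_bytes` fields above are filled by per-structure `heap_bytes()` estimators, which the commit message describes as capacity-aware. A minimal sketch of what such an estimator could look like for a string-keyed map; the bucket model and the extra control byte per slot are illustrative assumptions, not Numa's actual accounting:

```rust
use std::collections::HashMap;
use std::mem::size_of;

/// Rough, capacity-aware heap estimate for a string-keyed map.
/// Counts the bucket array (sized by capacity, not length) plus the
/// owned allocations behind each entry (key and value buffers).
fn map_heap_bytes(map: &HashMap<String, Vec<u8>>) -> usize {
    // Bucket array: one slot per unit of capacity, each holding the
    // inline key/value structs plus an assumed 1-byte control tag.
    let buckets = map.capacity() * (size_of::<String>() + size_of::<Vec<u8>>() + 1);
    // Heap buffers owned by the entries themselves.
    let owned: usize = map.iter().map(|(k, v)| k.capacity() + v.capacity()).sum();
    buckets + owned
}
```

An estimator like this is cheap enough to call on every `/stats` request, since it only walks live entries rather than the allocator's internal state.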
#[derive(Serialize)]
struct DiagnoseResponse {
domain: String,
@@ -207,7 +268,7 @@ async fn create_overrides(
})
.collect::<Result<Vec<_>, (StatusCode, String)>>()?;
let mut store = ctx.overrides.lock().unwrap();
let mut store = ctx.overrides.write().unwrap();
let mut responses = Vec::with_capacity(parsed.len());
for (domain, target, ttl, duration_secs) in parsed {
@@ -228,7 +289,7 @@ async fn create_overrides(
}
async fn list_overrides(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<OverrideResponse>> {
let store = ctx.overrides.lock().unwrap();
let store = ctx.overrides.read().unwrap();
let entries: Vec<OverrideResponse> = store
.list()
.into_iter()
@@ -241,7 +302,7 @@ async fn get_override(
State(ctx): State<Arc<ServerCtx>>,
Path(domain): Path<String>,
) -> Result<Json<OverrideResponse>, StatusCode> {
let store = ctx.overrides.lock().unwrap();
let store = ctx.overrides.read().unwrap();
let entry = store.get(&domain).ok_or(StatusCode::NOT_FOUND)?;
Ok(Json(OverrideResponse::from(entry)))
}
@@ -250,7 +311,7 @@ async fn remove_override(
State(ctx): State<Arc<ServerCtx>>,
Path(domain): Path<String>,
) -> StatusCode {
let mut store = ctx.overrides.lock().unwrap();
let mut store = ctx.overrides.write().unwrap();
if store.remove(&domain) {
StatusCode::NO_CONTENT
} else {
@@ -259,7 +320,7 @@ async fn remove_override(
}
async fn clear_overrides(State(ctx): State<Arc<ServerCtx>>) -> StatusCode {
ctx.overrides.lock().unwrap().clear();
ctx.overrides.write().unwrap().clear();
StatusCode::NO_CONTENT
}
@@ -267,7 +328,7 @@ async fn load_environment(
State(ctx): State<Arc<ServerCtx>>,
Json(req): Json<EnvironmentRequest>,
) -> Result<(StatusCode, Json<EnvironmentResponse>), (StatusCode, String)> {
let mut store = ctx.overrides.lock().unwrap();
let mut store = ctx.overrides.write().unwrap();
for entry in &req.overrides {
let duration = entry.duration_secs.or(req.duration_secs);
@@ -294,7 +355,7 @@ async fn diagnose(
// Check overrides
{
let store = ctx.overrides.lock().unwrap();
let store = ctx.overrides.read().unwrap();
let entry = store.get(&domain_lower);
steps.push(DiagnoseStep {
source: "override".to_string(),
@@ -306,7 +367,7 @@ async fn diagnose(
// Check blocklist
{
let bl = ctx.blocklist.lock().unwrap();
let bl = ctx.blocklist.read().unwrap();
let blocked = bl.is_blocked(&domain_lower);
steps.push(DiagnoseStep {
source: "blocklist".to_string(),
@@ -332,7 +393,7 @@ async fn diagnose(
// Check cache
{
let mut cache = ctx.cache.lock().unwrap();
let cache = ctx.cache.read().unwrap();
let cached = cache.lookup(&domain_lower, qtype);
steps.push(DiagnoseStep {
source: "cache".to_string(),
@@ -342,9 +403,9 @@ async fn diagnose(
}
// Check upstream (async, no locks held)
let upstream = *ctx.upstream.lock().unwrap();
let upstream = ctx.upstream.lock().unwrap().clone();
let (upstream_matched, upstream_detail) =
forward_query_for_diagnose(&domain_lower, upstream, ctx.timeout).await;
forward_query_for_diagnose(&domain_lower, &upstream, ctx.timeout).await;
steps.push(DiagnoseStep {
source: "upstream".to_string(),
matched: upstream_matched,
@@ -360,18 +421,12 @@ async fn diagnose(
async fn forward_query_for_diagnose(
domain: &str,
upstream: std::net::SocketAddr,
upstream: &Upstream,
timeout: std::time::Duration,
) -> (bool, String) {
use crate::packet::DnsPacket;
use crate::question::DnsQuestion;
let mut query = DnsPacket::new();
query.header.id = 0xBEEF;
query.header.recursion_desired = true;
query
.questions
.push(DnsQuestion::new(domain.to_string(), QueryType::A));
let query = DnsPacket::query(0xBEEF, domain, QueryType::A);
match forward_query(&query, upstream, timeout).await {
Ok(resp) => (
@@ -419,6 +474,7 @@ async fn query_log(
path: e.path.as_str().to_string(),
rescode: e.rescode.as_str().to_string(),
latency_ms: e.latency_us as f64 / 1000.0,
dnssec: e.dnssec.as_str().to_string(),
}
})
.collect()
@@ -429,21 +485,49 @@ async fn query_log(
async fn stats(State(ctx): State<Arc<ServerCtx>>) -> Json<StatsResponse> {
let snap = ctx.stats.lock().unwrap().snapshot();
let (cache_len, cache_max) = {
let cache = ctx.cache.lock().unwrap();
(cache.len(), cache.max_entries())
let (cache_len, cache_max, cache_bytes) = {
let cache = ctx.cache.read().unwrap();
(cache.len(), cache.max_entries(), cache.heap_bytes())
};
let (override_count, overrides_bytes) = {
let ov = ctx.overrides.read().unwrap();
(ov.active_count(), ov.heap_bytes())
};
let (bl_stats, blocklist_bytes) = {
let bl = ctx.blocklist.read().unwrap();
(bl.stats(), bl.heap_bytes())
};
let (query_log_bytes, query_log_entries) = {
let log = ctx.query_log.lock().unwrap();
(log.heap_bytes(), log.len())
};
let (srtt_bytes, srtt_entries, srtt_enabled) = {
let s = ctx.srtt.read().unwrap();
(s.heap_bytes(), s.len(), s.is_enabled())
};
let override_count = ctx.overrides.lock().unwrap().active_count();
let bl_stats = ctx.blocklist.lock().unwrap().stats();
let upstream = ctx.upstream.lock().unwrap().to_string();
let total_estimated =
cache_bytes + blocklist_bytes + query_log_bytes + srtt_bytes + overrides_bytes;
let upstream = if ctx.upstream_mode == crate::config::UpstreamMode::Recursive {
"recursive (root hints)".to_string()
} else {
ctx.upstream.lock().unwrap().to_string()
};
Json(StatsResponse {
uptime_secs: snap.uptime_secs,
upstream,
mode: ctx.upstream_mode.as_str(),
config_path: ctx.config_path.clone(),
data_dir: ctx.data_dir.to_string_lossy().to_string(),
dnssec: ctx.dnssec_enabled,
srtt: srtt_enabled,
queries: QueriesStats {
total: snap.total,
forwarded: snap.forwarded,
recursive: snap.recursive,
coalesced: snap.coalesced,
cached: snap.cached,
local: snap.local,
overridden: snap.overridden,
@@ -463,11 +547,26 @@ async fn stats(State(ctx): State<Arc<ServerCtx>>) -> Json<StatsResponse> {
domains_loaded: bl_stats.domains_loaded,
allowlist_size: bl_stats.allowlist_size,
},
lan: LanStatsResponse {
enabled: ctx.lan_enabled,
peers: ctx.lan_peers.lock().unwrap().list().len(),
},
memory: MemoryStats {
cache_bytes,
blocklist_bytes,
query_log_bytes,
query_log_entries,
srtt_bytes,
srtt_entries,
overrides_bytes,
total_estimated_bytes: total_estimated,
process_memory_bytes: crate::stats::process_memory_bytes(),
},
})
}
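`crate::stats::process_memory_bytes()` is called above but its body is not in this hunk. Per the commit messages it reads `phys_footprint` via `TASK_VM_INFO` on macOS and uses `sysconf` on Linux; the following is a hedged sketch of only the Linux path, with a hardcoded 4096-byte page size standing in for `sysconf(_SC_PAGESIZE)`:

```rust
/// Sketch of the Linux side of process_memory_bytes(): read the resident
/// page count from /proc/self/statm and convert pages to bytes.
fn process_memory_bytes() -> usize {
    let statm = match std::fs::read_to_string("/proc/self/statm") {
        Ok(s) => s,
        // Non-Linux: the real code takes the Mach TASK_VM_INFO path here.
        Err(_) => return 0,
    };
    let resident_pages: usize = statm
        .split_whitespace()
        .nth(1) // second field of statm is the resident set size, in pages
        .and_then(|v| v.parse().ok())
        .unwrap_or(0);
    // Assumed page size; a real implementation would ask sysconf.
    resident_pages * 4096
}
```

The actual implementation would query the page size at runtime and, on macOS, report `phys_footprint` to match Activity Monitor's Memory column, as noted in the commit history.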
async fn list_cache(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<CacheEntryResponse>> {
let cache = ctx.cache.lock().unwrap();
let cache = ctx.cache.read().unwrap();
let entries: Vec<CacheEntryResponse> = cache
.list()
.into_iter()
@@ -481,7 +580,7 @@ async fn list_cache(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<CacheEntryRes
}
async fn flush_cache(State(ctx): State<Arc<ServerCtx>>) -> StatusCode {
ctx.cache.lock().unwrap().clear();
ctx.cache.write().unwrap().clear();
StatusCode::NO_CONTENT
}
@@ -489,7 +588,7 @@ async fn flush_cache_domain(
State(ctx): State<Arc<ServerCtx>>,
Path(domain): Path<String>,
) -> StatusCode {
ctx.cache.lock().unwrap().remove(&domain);
ctx.cache.write().unwrap().remove(&domain);
StatusCode::NO_CONTENT
}
@@ -500,7 +599,7 @@ async fn health() -> Json<serde_json::Value> {
// --- Blocking handlers ---
async fn blocking_stats(State(ctx): State<Arc<ServerCtx>>) -> Json<serde_json::Value> {
let stats = ctx.blocklist.lock().unwrap().stats();
let stats = ctx.blocklist.read().unwrap().stats();
Json(serde_json::json!({
"enabled": stats.enabled,
"paused": stats.paused,
@@ -520,7 +619,7 @@ async fn blocking_toggle(
State(ctx): State<Arc<ServerCtx>>,
Json(req): Json<BlockingToggleRequest>,
) -> Json<serde_json::Value> {
ctx.blocklist.lock().unwrap().set_enabled(req.enabled);
ctx.blocklist.write().unwrap().set_enabled(req.enabled);
Json(serde_json::json!({ "enabled": req.enabled }))
}
@@ -538,12 +637,12 @@ async fn blocking_pause(
State(ctx): State<Arc<ServerCtx>>,
Json(req): Json<BlockingPauseRequest>,
) -> Json<serde_json::Value> {
ctx.blocklist.lock().unwrap().pause(req.minutes * 60);
ctx.blocklist.write().unwrap().pause(req.minutes * 60);
Json(serde_json::json!({ "paused_minutes": req.minutes }))
}
async fn blocking_unpause(State(ctx): State<Arc<ServerCtx>>) -> Json<serde_json::Value> {
ctx.blocklist.lock().unwrap().unpause();
ctx.blocklist.write().unwrap().unpause();
Json(serde_json::json!({ "paused": false }))
}
@@ -551,12 +650,12 @@ async fn blocking_check(
State(ctx): State<Arc<ServerCtx>>,
Path(domain): Path<String>,
) -> Json<crate::blocklist::BlockCheckResult> {
let result = ctx.blocklist.lock().unwrap().check(&domain);
let result = ctx.blocklist.read().unwrap().check(&domain);
Json(result)
}
async fn blocking_allowlist(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<String>> {
let list = ctx.blocklist.lock().unwrap().allowlist();
let list = ctx.blocklist.read().unwrap().allowlist();
Json(list)
}
@@ -569,7 +668,7 @@ async fn blocking_allowlist_add(
State(ctx): State<Arc<ServerCtx>>,
Json(req): Json<AllowlistRequest>,
) -> (StatusCode, Json<serde_json::Value>) {
ctx.blocklist.lock().unwrap().add_to_allowlist(&req.domain);
ctx.blocklist.write().unwrap().add_to_allowlist(&req.domain);
(
StatusCode::CREATED,
Json(serde_json::json!({ "allowed": req.domain })),
@@ -580,7 +679,12 @@ async fn blocking_allowlist_remove(
State(ctx): State<Arc<ServerCtx>>,
Path(domain): Path<String>,
) -> StatusCode {
if ctx.blocklist.lock().unwrap().remove_from_allowlist(&domain) {
if ctx
.blocklist
.write()
.unwrap()
.remove_from_allowlist(&domain)
{
StatusCode::NO_CONTENT
} else {
StatusCode::NOT_FOUND
@@ -596,6 +700,9 @@ struct ServiceResponse {
url: String,
healthy: bool,
lan_accessible: bool,
#[serde(skip_serializing_if = "Vec::is_empty")]
routes: Vec<crate::service_store::RouteEntry>,
source: String,
}
#[derive(Deserialize)]
@@ -610,7 +717,19 @@ async fn list_services(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<ServiceRes
store
.list()
.into_iter()
.map(|e| (e.name.clone(), e.target_port))
.map(|e| {
let source = if store.is_config_service(&e.name) {
"config"
} else {
"api"
};
(
e.name.clone(),
e.target_port,
e.routes.clone(),
source.to_string(),
)
})
.collect()
};
let tld = &ctx.proxy_tld;
@@ -619,7 +738,7 @@ async fn list_services(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<ServiceRes
let check_futures: Vec<_> = entries
.iter()
.map(|(_, port)| {
.map(|(_, port, _, _)| {
let port = *port;
let localhost = std::net::SocketAddr::from(([127, 0, 0, 1], port));
let lan_addr = lan_ip.map(|ip| std::net::SocketAddr::new(ip.into(), port));
@@ -639,12 +758,14 @@ async fn list_services(State(ctx): State<Arc<ServerCtx>>) -> Json<Vec<ServiceRes
.into_iter()
.zip(check_results)
.map(
|((name, port), (healthy, lan_accessible))| ServiceResponse {
|((name, port, routes, source), (healthy, lan_accessible))| ServiceResponse {
url: format!("http://{}.{}", name, tld),
name,
target_port: port,
healthy,
lan_accessible,
routes,
source,
},
)
.collect();
@@ -675,7 +796,11 @@ async fn create_service(
}
let tld = &ctx.proxy_tld;
let is_new = !ctx.services.lock().unwrap().has_name(&name);
ctx.services.lock().unwrap().insert(&name, req.target_port);
if is_new {
crate::tls::regenerate_tls(&ctx);
}
let localhost = std::net::SocketAddr::from(([127, 0, 0, 1], req.target_port));
let lan_addr =
@@ -694,6 +819,8 @@ async fn create_service(
target_port: req.target_port,
healthy,
lan_accessible,
routes: Vec::new(),
source: "api".to_string(),
}),
))
}
@@ -702,14 +829,121 @@ async fn remove_service(State(ctx): State<Arc<ServerCtx>>, Path(name): Path<Stri
if name.eq_ignore_ascii_case("numa") {
return StatusCode::FORBIDDEN;
}
let mut store = ctx.services.lock().unwrap();
if store.remove(&name) {
let removed = ctx.services.lock().unwrap().remove(&name);
if removed {
crate::tls::regenerate_tls(&ctx);
StatusCode::NO_CONTENT
} else {
StatusCode::NOT_FOUND
}
}
// --- Route handlers ---
#[derive(Deserialize)]
struct AddRouteRequest {
path: String,
port: u16,
#[serde(default)]
strip: bool,
}
#[derive(Deserialize)]
struct RemoveRouteRequest {
path: String,
}
async fn list_routes(
State(ctx): State<Arc<ServerCtx>>,
Path(name): Path<String>,
) -> Result<Json<Vec<crate::service_store::RouteEntry>>, StatusCode> {
let store = ctx.services.lock().unwrap();
match store.lookup(&name) {
Some(entry) => Ok(Json(entry.routes.clone())),
None => Err(StatusCode::NOT_FOUND),
}
}
async fn add_route(
State(ctx): State<Arc<ServerCtx>>,
Path(name): Path<String>,
Json(req): Json<AddRouteRequest>,
) -> Result<StatusCode, (StatusCode, String)> {
if req.path.is_empty() || !req.path.starts_with('/') {
return Err((StatusCode::BAD_REQUEST, "path must start with /".into()));
}
if req.path.contains("/../") || req.path.ends_with("/..") || req.path.contains("%") {
return Err((
StatusCode::BAD_REQUEST,
"path must not contain '..' or percent-encoding".into(),
));
}
if req.port == 0 {
return Err((StatusCode::BAD_REQUEST, "port must be > 0".into()));
}
let mut store = ctx.services.lock().unwrap();
if store.add_route(&name, req.path, req.port, req.strip) {
Ok(StatusCode::CREATED)
} else {
Err((
StatusCode::NOT_FOUND,
format!("service '{}' not found", name),
))
}
}
async fn remove_route(
State(ctx): State<Arc<ServerCtx>>,
Path(name): Path<String>,
Json(req): Json<RemoveRouteRequest>,
) -> StatusCode {
let mut store = ctx.services.lock().unwrap();
if store.remove_route(&name, &req.path) {
StatusCode::NO_CONTENT
} else {
StatusCode::NOT_FOUND
}
}
async fn serve_ca(State(ctx): State<Arc<ServerCtx>>) -> Result<impl IntoResponse, StatusCode> {
let ca_path = ctx.data_dir.join("ca.pem");
let bytes = tokio::task::spawn_blocking(move || std::fs::read(ca_path))
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
.map_err(|_| StatusCode::NOT_FOUND)?;
Ok((
[
(header::CONTENT_TYPE, "application/x-pem-file"),
(
header::CONTENT_DISPOSITION,
"attachment; filename=\"numa-ca.pem\"",
),
(header::CACHE_CONTROL, "public, max-age=86400"),
],
bytes,
))
}
async fn serve_fonts_css() -> impl IntoResponse {
(
[
(header::CONTENT_TYPE, "text/css"),
(header::CACHE_CONTROL, "public, max-age=31536000"),
],
FONTS_CSS,
)
}
fn serve_font(data: &'static [u8]) -> impl IntoResponse {
(
[
(header::CONTENT_TYPE, "font/woff2"),
(header::CACHE_CONTROL, "public, max-age=31536000"),
],
data,
)
}
async fn check_tcp(addr: std::net::SocketAddr) -> bool {
tokio::time::timeout(
std::time::Duration::from_millis(100),
@@ -719,3 +953,254 @@ async fn check_tcp(addr: std::net::SocketAddr) -> bool {
.map(|r| r.is_ok())
.unwrap_or(false)
}
#[cfg(test)]
mod tests {
use super::*;
use axum::body::Body;
use http::Request;
use std::sync::{Mutex, RwLock};
use tower::ServiceExt;
async fn test_ctx() -> Arc<ServerCtx> {
let socket = tokio::net::UdpSocket::bind("127.0.0.1:0").await.unwrap();
Arc::new(ServerCtx {
socket,
zone_map: std::collections::HashMap::new(),
cache: RwLock::new(crate::cache::DnsCache::new(100, 60, 86400)),
stats: Mutex::new(crate::stats::ServerStats::new()),
overrides: RwLock::new(crate::override_store::OverrideStore::new()),
blocklist: RwLock::new(crate::blocklist::BlocklistStore::new()),
query_log: Mutex::new(crate::query_log::QueryLog::new(100)),
services: Mutex::new(crate::service_store::ServiceStore::new()),
lan_peers: Mutex::new(crate::lan::PeerStore::new(90)),
forwarding_rules: Vec::new(),
upstream: Mutex::new(crate::forward::Upstream::Udp(
"127.0.0.1:53".parse().unwrap(),
)),
upstream_auto: false,
upstream_port: 53,
lan_ip: Mutex::new(std::net::Ipv4Addr::LOCALHOST),
timeout: std::time::Duration::from_secs(3),
proxy_tld: "numa".to_string(),
proxy_tld_suffix: ".numa".to_string(),
lan_enabled: false,
config_path: "/tmp/test-numa.toml".to_string(),
config_found: false,
config_dir: std::path::PathBuf::from("/tmp"),
data_dir: std::path::PathBuf::from("/tmp"),
tls_config: None,
upstream_mode: crate::config::UpstreamMode::Forward,
root_hints: Vec::new(),
srtt: RwLock::new(crate::srtt::SrttCache::new(true)),
inflight: Mutex::new(std::collections::HashMap::new()),
dnssec_enabled: false,
dnssec_strict: false,
})
}
#[tokio::test]
async fn health_returns_ok() {
let ctx = test_ctx().await;
let resp = router(ctx)
.oneshot(Request::get("/health").body(Body::empty()).unwrap())
.await
.unwrap();
assert_eq!(resp.status(), 200);
let body = axum::body::to_bytes(resp.into_body(), 1000).await.unwrap();
let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert_eq!(json["status"], "ok");
}
#[tokio::test]
async fn stats_returns_json() {
let ctx = test_ctx().await;
let resp = router(ctx)
.oneshot(Request::get("/stats").body(Body::empty()).unwrap())
.await
.unwrap();
assert_eq!(resp.status(), 200);
let body = axum::body::to_bytes(resp.into_body(), 10000).await.unwrap();
let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert!(json["uptime_secs"].is_number());
assert!(json["queries"]["total"].is_number());
}
#[tokio::test]
async fn query_log_empty() {
let ctx = test_ctx().await;
let resp = router(ctx)
.oneshot(
Request::get("/query-log?limit=10")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(resp.status(), 200);
let body = axum::body::to_bytes(resp.into_body(), 10000).await.unwrap();
let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert!(json.as_array().unwrap().is_empty());
}
#[tokio::test]
async fn overrides_crud() {
let ctx = test_ctx().await;
let a = router(ctx.clone());
// Create
let resp = a
.clone()
.oneshot(
Request::post("/overrides")
.header("content-type", "application/json")
.body(Body::from(
r#"{"domain":"test.dev","target":"1.2.3.4","duration_secs":60}"#,
))
.unwrap(),
)
.await
.unwrap();
assert!(resp.status().is_success());
// List
let resp = a
.clone()
.oneshot(Request::get("/overrides").body(Body::empty()).unwrap())
.await
.unwrap();
let body = axum::body::to_bytes(resp.into_body(), 10000).await.unwrap();
assert!(String::from_utf8_lossy(&body).contains("test.dev"));
// Get
let resp = a
.clone()
.oneshot(
Request::get("/overrides/test.dev")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(resp.status(), 200);
// Delete
let resp = a
.clone()
.oneshot(
Request::delete("/overrides/test.dev")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert!(resp.status().is_success());
// Verify deleted
let resp = a
.oneshot(
Request::get("/overrides/test.dev")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(resp.status(), 404);
}
#[tokio::test]
async fn cache_list_and_flush() {
let ctx = test_ctx().await;
let a = router(ctx.clone());
// List (empty)
let resp = a
.clone()
.oneshot(Request::get("/cache").body(Body::empty()).unwrap())
.await
.unwrap();
assert_eq!(resp.status(), 200);
// Flush
let resp = a
.oneshot(Request::delete("/cache").body(Body::empty()).unwrap())
.await
.unwrap();
assert!(resp.status().is_success());
}
#[tokio::test]
async fn blocking_stats_returns_json() {
let ctx = test_ctx().await;
let resp = router(ctx)
.oneshot(Request::get("/blocking/stats").body(Body::empty()).unwrap())
.await
.unwrap();
assert_eq!(resp.status(), 200);
let body = axum::body::to_bytes(resp.into_body(), 10000).await.unwrap();
let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert!(json["enabled"].is_boolean());
}
#[tokio::test]
async fn services_crud() {
let ctx = test_ctx().await;
let a = router(ctx);
// Add service
let resp = a
.clone()
.oneshot(
Request::post("/services")
.header("content-type", "application/json")
.body(Body::from(r#"{"name":"testapp","target_port":3000}"#))
.unwrap(),
)
.await
.unwrap();
assert!(resp.status().is_success());
// List
let resp = a
.clone()
.oneshot(Request::get("/services").body(Body::empty()).unwrap())
.await
.unwrap();
let body = axum::body::to_bytes(resp.into_body(), 10000).await.unwrap();
assert!(String::from_utf8_lossy(&body).contains("testapp"));
// Delete
let resp = a
.clone()
.oneshot(
Request::delete("/services/testapp")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert!(resp.status().is_success());
// Verify deleted
let resp = a
.oneshot(Request::get("/services").body(Body::empty()).unwrap())
.await
.unwrap();
let body = axum::body::to_bytes(resp.into_body(), 10000).await.unwrap();
assert!(!String::from_utf8_lossy(&body).contains("testapp"));
}
#[tokio::test]
async fn dashboard_returns_html() {
let ctx = test_ctx().await;
let resp = router(ctx)
.oneshot(Request::get("/").body(Body::empty()).unwrap())
.await
.unwrap();
assert_eq!(resp.status(), 200);
let body = axum::body::to_bytes(resp.into_body(), 100000)
.await
.unwrap();
assert!(String::from_utf8_lossy(&body).contains("Numa"));
}
}

View File

@@ -183,6 +183,15 @@ impl BlocklistStore {
self.allowlist.iter().cloned().collect()
}
pub fn heap_bytes(&self) -> usize {
let per_slot_overhead = std::mem::size_of::<u64>() + std::mem::size_of::<String>() + 1;
let domains_table = self.domains.capacity() * per_slot_overhead;
let domains_heap: usize = self.domains.iter().map(|d| d.capacity()).sum();
let allow_table = self.allowlist.capacity() * per_slot_overhead;
let allow_heap: usize = self.allowlist.iter().map(|d| d.capacity()).sum();
domains_table + domains_heap + allow_table + allow_heap
}
pub fn stats(&self) -> BlocklistStats {
BlocklistStats {
enabled: self.is_enabled(),
@@ -234,6 +243,23 @@ pub fn parse_blocklist(text: &str) -> HashSet<String> {
domains
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn heap_bytes_grows_with_domains() {
let mut store = BlocklistStore::new();
let empty = store.heap_bytes();
let domains: HashSet<String> = ["example.com", "example.org", "test.net"]
.iter()
.map(|s| s.to_string())
.collect();
store.swap_domains(domains, vec![]);
assert!(store.heap_bytes() > empty);
}
}
pub async fn download_blocklists(lists: &[String]) -> Vec<(String, String)> {
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(30))

View File

@@ -21,6 +21,13 @@ impl BytePacketBuffer {
}
}
pub fn from_bytes(data: &[u8]) -> Self {
let mut buf = Self::new();
let len = data.len().min(BUF_SIZE);
buf.buf[..len].copy_from_slice(&data[..len]);
buf
}
pub fn pos(&self) -> usize {
self.pos
}
@@ -157,8 +164,16 @@ impl BytePacketBuffer {
}
pub fn write_qname(&mut self, qname: &str) -> Result<()> {
if qname.is_empty() || qname == "." {
self.write_u8(0)?;
return Ok(());
}
for label in qname.split('.') {
let len = label.len();
if len == 0 {
continue; // skip empty labels from trailing dot
}
if len > 0x3f {
return Err("Label exceeds the 63-byte limit".into());
}
@@ -173,6 +188,16 @@ impl BytePacketBuffer {
Ok(())
}
pub fn write_bytes(&mut self, data: &[u8]) -> Result<()> {
let end = self.pos + data.len();
if end > BUF_SIZE {
return Err("End of buffer".into());
}
self.buf[self.pos..end].copy_from_slice(data);
self.pos = end;
Ok(())
}
pub fn set(&mut self, pos: usize, val: u8) -> Result<()> {
if pos >= BUF_SIZE {
return Err("End of buffer".into());

View File

@@ -5,10 +5,31 @@ use crate::packet::DnsPacket;
use crate::question::QueryType;
use crate::record::DnsRecord;
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
pub enum DnssecStatus {
Secure,
Insecure,
Bogus,
#[default]
Indeterminate,
}
impl DnssecStatus {
pub fn as_str(&self) -> &'static str {
match self {
DnssecStatus::Secure => "secure",
DnssecStatus::Insecure => "insecure",
DnssecStatus::Bogus => "bogus",
DnssecStatus::Indeterminate => "indeterminate",
}
}
}
struct CacheEntry {
packet: DnsPacket,
inserted_at: Instant,
ttl: Duration,
dnssec_status: DnssecStatus,
}
/// DNS cache using a two-level map (domain -> query_type -> entry) so that
@@ -19,7 +40,6 @@ pub struct DnsCache {
max_entries: usize,
min_ttl: u32,
max_ttl: u32,
query_count: u64,
}
impl DnsCache {
@@ -30,29 +50,24 @@ impl DnsCache {
max_entries,
min_ttl,
max_ttl,
query_count: 0,
}
}
pub fn lookup(&mut self, domain: &str, qtype: QueryType) -> Option<DnsPacket> {
self.query_count += 1;
if self.query_count.is_multiple_of(1000) {
self.evict_expired();
}
/// Read-only lookup — expired entries are left in place (cleaned up on insert).
pub fn lookup(&self, domain: &str, qtype: QueryType) -> Option<DnsPacket> {
self.lookup_with_status(domain, qtype).map(|(pkt, _)| pkt)
}
pub fn lookup_with_status(
&self,
domain: &str,
qtype: QueryType,
) -> Option<(DnsPacket, DnssecStatus)> {
let type_map = self.entries.get(domain)?;
let entry = type_map.get(&qtype)?;
let elapsed = entry.inserted_at.elapsed();
if elapsed >= entry.ttl {
// Expired: remove this entry
let type_map = self.entries.get_mut(domain).unwrap();
type_map.remove(&qtype);
self.entry_count -= 1;
if type_map.is_empty() {
self.entries.remove(domain);
}
return None;
}
@@ -64,10 +79,20 @@ impl DnsCache {
adjust_ttls(&mut packet.authorities, remaining);
adjust_ttls(&mut packet.resources, remaining);
Some(packet)
Some((packet, entry.dnssec_status))
}
pub fn insert(&mut self, domain: &str, qtype: QueryType, packet: &DnsPacket) {
self.insert_with_status(domain, qtype, packet, DnssecStatus::Indeterminate);
}
pub fn insert_with_status(
&mut self,
domain: &str,
qtype: QueryType,
packet: &DnsPacket,
dnssec_status: DnssecStatus,
) {
if self.entry_count >= self.max_entries {
self.evict_expired();
if self.entry_count >= self.max_entries {
@@ -95,6 +120,7 @@ impl DnsCache {
packet: packet.clone(),
inserted_at: Instant::now(),
ttl: Duration::from_secs(min_ttl as u64),
dnssec_status,
},
);
}
@@ -116,6 +142,26 @@ impl DnsCache {
self.entry_count = 0;
}
pub fn heap_bytes(&self) -> usize {
let outer_slot = std::mem::size_of::<u64>()
+ std::mem::size_of::<String>()
+ std::mem::size_of::<HashMap<QueryType, CacheEntry>>()
+ 1;
let mut total = self.entries.capacity() * outer_slot;
for (domain, type_map) in &self.entries {
total += domain.capacity();
let inner_slot = std::mem::size_of::<u64>()
+ std::mem::size_of::<QueryType>()
+ std::mem::size_of::<CacheEntry>()
+ 1;
total += type_map.capacity() * inner_slot;
for entry in type_map.values() {
total += entry.packet.heap_bytes();
}
}
total
}
pub fn remove(&mut self, domain: &str) {
let domain_lower = domain.to_lowercase();
if let Some(type_map) = self.entries.remove(&domain_lower) {
@@ -168,3 +214,23 @@ fn adjust_ttls(records: &mut [DnsRecord], new_ttl: u32) {
record.set_ttl(new_ttl);
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::packet::DnsPacket;
#[test]
fn heap_bytes_grows_with_entries() {
let mut cache = DnsCache::new(100, 1, 3600);
let empty = cache.heap_bytes();
let mut pkt = DnsPacket::new();
pkt.answers.push(DnsRecord::A {
domain: "example.com".into(),
addr: "1.2.3.4".parse().unwrap(),
ttl: 300,
});
cache.insert("example.com", QueryType::A, &pkt);
assert!(cache.heap_bytes() > empty);
}
}

View File

@@ -27,6 +27,8 @@ pub struct Config {
pub services: Vec<ServiceConfig>,
#[serde(default)]
pub lan: LanConfig,
#[serde(default)]
pub dnssec: DnssecConfig,
}
#[derive(Deserialize)]
@@ -35,6 +37,8 @@ pub struct ServerConfig {
pub bind_addr: String,
#[serde(default = "default_api_port")]
pub api_port: u16,
#[serde(default = "default_api_bind_addr")]
pub api_bind_addr: String,
}
impl Default for ServerConfig {
@@ -42,38 +46,153 @@ impl Default for ServerConfig {
ServerConfig {
bind_addr: default_bind_addr(),
api_port: default_api_port(),
api_bind_addr: default_api_bind_addr(),
}
}
}
fn default_api_bind_addr() -> String {
"127.0.0.1".to_string()
}
fn default_bind_addr() -> String {
"0.0.0.0:53".to_string()
}
pub const DEFAULT_API_PORT: u16 = 5380;
fn default_api_port() -> u16 {
5380
DEFAULT_API_PORT
}
#[derive(Deserialize, Default, PartialEq, Eq, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub enum UpstreamMode {
Auto,
#[default]
Forward,
Recursive,
}
impl UpstreamMode {
pub fn as_str(&self) -> &'static str {
match self {
UpstreamMode::Auto => "auto",
UpstreamMode::Forward => "forward",
UpstreamMode::Recursive => "recursive",
}
}
}
#[derive(Deserialize)]
pub struct UpstreamConfig {
#[serde(default)]
pub mode: UpstreamMode,
#[serde(default = "default_upstream_addr")]
pub address: String,
#[serde(default = "default_upstream_port")]
pub port: u16,
#[serde(default = "default_timeout_ms")]
pub timeout_ms: u64,
#[serde(default = "default_root_hints")]
pub root_hints: Vec<String>,
#[serde(default = "default_prime_tlds")]
pub prime_tlds: Vec<String>,
#[serde(default = "default_srtt")]
pub srtt: bool,
}
impl Default for UpstreamConfig {
fn default() -> Self {
UpstreamConfig {
mode: UpstreamMode::default(),
address: default_upstream_addr(),
port: default_upstream_port(),
timeout_ms: default_timeout_ms(),
root_hints: default_root_hints(),
prime_tlds: default_prime_tlds(),
srtt: default_srtt(),
}
}
}
fn default_true() -> bool {
true
}
fn default_srtt() -> bool {
default_true()
}
fn default_prime_tlds() -> Vec<String> {
vec![
// gTLDs
"com".into(),
"net".into(),
"org".into(),
"info".into(),
"io".into(),
"dev".into(),
"app".into(),
"xyz".into(),
"me".into(),
// EU + European ccTLDs
"eu".into(),
"uk".into(),
"de".into(),
"fr".into(),
"nl".into(),
"it".into(),
"es".into(),
"pl".into(),
"se".into(),
"no".into(),
"dk".into(),
"fi".into(),
"at".into(),
"be".into(),
"ie".into(),
"pt".into(),
"cz".into(),
"ro".into(),
"gr".into(),
"hu".into(),
"bg".into(),
"hr".into(),
"sk".into(),
"si".into(),
"lt".into(),
"lv".into(),
"ee".into(),
"ch".into(),
"is".into(),
// Other major ccTLDs
"co".into(),
"br".into(),
"au".into(),
"ca".into(),
"jp".into(),
]
}
fn default_root_hints() -> Vec<String> {
vec![
"198.41.0.4".into(), // a.root-servers.net
"199.9.14.201".into(), // b.root-servers.net
"192.33.4.12".into(), // c.root-servers.net
"199.7.91.13".into(), // d.root-servers.net
"192.203.230.10".into(), // e.root-servers.net
"192.5.5.241".into(), // f.root-servers.net
"192.112.36.4".into(), // g.root-servers.net
"198.97.190.53".into(), // h.root-servers.net
"192.36.148.17".into(), // i.root-servers.net
"192.58.128.30".into(), // j.root-servers.net
"193.0.14.129".into(), // k.root-servers.net
"199.7.83.42".into(), // l.root-servers.net
"202.12.27.33".into(), // m.root-servers.net
]
}
fn default_upstream_addr() -> String {
String::new() // empty = auto-detect from system resolver
}
@@ -81,7 +200,7 @@ fn default_upstream_port() -> u16 {
53
}
fn default_timeout_ms() -> u64 {
3000
5000
}
#[derive(Deserialize)]
@@ -172,6 +291,8 @@ pub struct ProxyConfig {
pub tls_port: u16,
#[serde(default = "default_proxy_tld")]
pub tld: String,
#[serde(default = "default_proxy_bind_addr")]
pub bind_addr: String,
}
impl Default for ProxyConfig {
@@ -181,10 +302,15 @@ impl Default for ProxyConfig {
port: default_proxy_port(),
tls_port: default_proxy_tls_port(),
tld: default_proxy_tld(),
bind_addr: default_proxy_bind_addr(),
}
}
}
fn default_proxy_bind_addr() -> String {
"127.0.0.1".to_string()
}
fn default_proxy_enabled() -> bool {
true
}
@@ -202,16 +328,14 @@ fn default_proxy_tld() -> String {
pub struct ServiceConfig {
pub name: String,
pub target_port: u16,
#[serde(default)]
pub routes: Vec<crate::service_store::RouteEntry>,
}
#[derive(Deserialize, Clone)]
pub struct LanConfig {
#[serde(default = "default_lan_enabled")]
pub enabled: bool,
#[serde(default = "default_lan_multicast_group")]
pub multicast_group: String,
#[serde(default = "default_lan_port")]
pub port: u16,
#[serde(default = "default_lan_broadcast_interval")]
pub broadcast_interval_secs: u64,
#[serde(default = "default_lan_peer_timeout")]
@@ -222,8 +346,6 @@ impl Default for LanConfig {
fn default() -> Self {
LanConfig {
enabled: default_lan_enabled(),
multicast_group: default_lan_multicast_group(),
port: default_lan_port(),
broadcast_interval_secs: default_lan_broadcast_interval(),
peer_timeout_secs: default_lan_peer_timeout(),
}
@@ -231,13 +353,7 @@ impl Default for LanConfig {
}
fn default_lan_enabled() -> bool {
true
}
fn default_lan_multicast_group() -> String {
"239.255.70.78".to_string()
}
fn default_lan_port() -> u16 {
5390
false
}
fn default_lan_broadcast_interval() -> u64 {
30
@@ -246,13 +362,136 @@ fn default_lan_peer_timeout() -> u64 {
90
}
pub fn load_config(path: &str) -> Result<Config> {
if !Path::new(path).exists() {
return Ok(Config::default());
#[derive(Deserialize, Clone, Default)]
pub struct DnssecConfig {
#[serde(default)]
pub enabled: bool,
#[serde(default)]
pub strict: bool,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn lan_disabled_by_default() {
assert!(!LanConfig::default().enabled);
}
let contents = std::fs::read_to_string(path)?;
let config: Config = toml::from_str(&contents)?;
Ok(config)
#[test]
fn api_binds_localhost_by_default() {
assert_eq!(ServerConfig::default().api_bind_addr, "127.0.0.1");
}
#[test]
fn proxy_binds_localhost_by_default() {
assert_eq!(ProxyConfig::default().bind_addr, "127.0.0.1");
}
#[test]
fn empty_toml_gives_defaults() {
let config: Config = toml::from_str("").unwrap();
assert!(!config.lan.enabled);
assert_eq!(config.server.api_bind_addr, "127.0.0.1");
assert_eq!(config.proxy.bind_addr, "127.0.0.1");
assert_eq!(config.server.api_port, ServerConfig::default().api_port);
}
#[test]
fn lan_enabled_parses() {
let config: Config = toml::from_str("[lan]\nenabled = true").unwrap();
assert!(config.lan.enabled);
}
#[test]
fn custom_bind_addrs_parse() {
let toml = r#"
[server]
api_bind_addr = "0.0.0.0"
[proxy]
bind_addr = "0.0.0.0"
"#;
let config: Config = toml::from_str(toml).unwrap();
assert_eq!(config.server.api_bind_addr, "0.0.0.0");
assert_eq!(config.proxy.bind_addr, "0.0.0.0");
}
#[test]
fn service_routes_parse_from_toml() {
let toml = r#"
[[services]]
name = "app"
target_port = 3000
routes = [
{ path = "/api", port = 4000, strip = true },
{ path = "/static", port = 5000 },
]
"#;
let config: Config = toml::from_str(toml).unwrap();
assert_eq!(config.services.len(), 1);
assert_eq!(config.services[0].routes.len(), 2);
assert!(config.services[0].routes[0].strip);
assert!(!config.services[0].routes[1].strip); // default false
}
}
pub struct ConfigLoad {
pub config: Config,
pub path: String,
pub found: bool,
}
fn resolve_path(path: &str) -> String {
// canonicalize gives the real absolute path for existing files;
// for non-existent files, build an absolute path manually
std::fs::canonicalize(path)
.or_else(|_| std::env::current_dir().map(|cwd| cwd.join(path)))
.unwrap_or_else(|_| Path::new(path).to_path_buf())
.to_string_lossy()
.to_string()
}
pub fn load_config(path: &str) -> Result<ConfigLoad> {
// Try the given path first, then well-known locations (for service mode where cwd is /)
let candidates: Vec<std::path::PathBuf> = {
let p = Path::new(path);
let mut v = vec![p.to_path_buf()];
if p.is_relative() {
let filename = p.file_name().unwrap_or(p.as_os_str());
v.push(crate::config_dir().join(filename));
v.push(crate::data_dir().join(filename));
}
v
};
for candidate in &candidates {
match std::fs::read_to_string(candidate) {
Ok(contents) => {
let resolved = resolve_path(&candidate.to_string_lossy());
let config: Config = toml::from_str(&contents)?;
return Ok(ConfigLoad {
config,
path: resolved,
found: true,
});
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e.into()),
}
}
// Show config_dir candidate as the "expected" path — it's actionable
let display_path = candidates
.get(1)
.map(|p| p.to_string_lossy().to_string())
.unwrap_or_else(|| resolve_path(path));
log::info!("config not found, using defaults (create {})", display_path);
Ok(ConfigLoad {
config: Config::default(),
path: display_path,
found: false,
})
}
pub type ZoneMap = HashMap<String, HashMap<QueryType, Vec<DnsRecord>>>;
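Pulling the config surface from this diff together, a hypothetical `numa.toml` exercising the new fields (names match the `Deserialize` structs above; the values are illustrative, not recommendations):

```toml
[server]
bind_addr = "0.0.0.0:53"
api_port = 5380
api_bind_addr = "127.0.0.1"   # API stays loopback-only by default

[upstream]
mode = "auto"                 # auto | forward | recursive
timeout_ms = 5000
srtt = true

[dnssec]
enabled = true
strict = false

[proxy]
bind_addr = "127.0.0.1"

[lan]
enabled = false               # off by default as of this change

[[services]]
name = "app"
target_port = 3000
routes = [
  { path = "/api", port = 4000, strip = true },
]
```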

View File

@@ -1,15 +1,22 @@
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::Mutex;
use std::path::PathBuf;
use std::sync::{Mutex, RwLock};
use std::time::{Duration, Instant, SystemTime};
use arc_swap::ArcSwap;
use log::{debug, error, info, warn};
use rustls::ServerConfig;
use tokio::net::UdpSocket;
use tokio::sync::broadcast;
type InflightMap = HashMap<(String, QueryType), broadcast::Sender<Option<DnsPacket>>>;
use crate::blocklist::BlocklistStore;
use crate::buffer::BytePacketBuffer;
use crate::cache::DnsCache;
use crate::config::ZoneMap;
use crate::forward::forward_query;
use crate::cache::{DnsCache, DnssecStatus};
use crate::config::{UpstreamMode, ZoneMap};
use crate::forward::{forward_query, Upstream};
use crate::header::ResultCode;
use crate::lan::PeerStore;
use crate::override_store::OverrideStore;
@@ -18,27 +25,41 @@ use crate::query_log::{QueryLog, QueryLogEntry};
use crate::question::QueryType;
use crate::record::DnsRecord;
use crate::service_store::ServiceStore;
use crate::srtt::SrttCache;
use crate::stats::{QueryPath, ServerStats};
use crate::system_dns::ForwardingRule;
pub struct ServerCtx {
pub socket: UdpSocket,
pub zone_map: ZoneMap,
pub cache: Mutex<DnsCache>,
/// std::sync::RwLock (not tokio) — locks must never be held across .await points.
pub cache: RwLock<DnsCache>,
pub stats: Mutex<ServerStats>,
pub overrides: Mutex<OverrideStore>,
pub blocklist: Mutex<BlocklistStore>,
pub overrides: RwLock<OverrideStore>,
pub blocklist: RwLock<BlocklistStore>,
pub query_log: Mutex<QueryLog>,
pub services: Mutex<ServiceStore>,
pub lan_peers: Mutex<PeerStore>,
pub forwarding_rules: Vec<ForwardingRule>,
pub upstream: Mutex<SocketAddr>,
pub upstream: Mutex<Upstream>,
pub upstream_auto: bool,
pub upstream_port: u16,
pub lan_ip: Mutex<std::net::Ipv4Addr>,
pub timeout: Duration,
pub proxy_tld: String,
pub proxy_tld_suffix: String, // pre-computed ".{tld}" to avoid per-query allocation
pub lan_enabled: bool,
pub config_path: String,
pub config_found: bool,
pub config_dir: PathBuf,
pub data_dir: PathBuf,
pub tls_config: Option<ArcSwap<ServerConfig>>,
pub upstream_mode: UpstreamMode,
pub root_hints: Vec<SocketAddr>,
pub srtt: RwLock<SrttCache>,
pub inflight: Mutex<InflightMap>,
pub dnssec_enabled: bool,
pub dnssec_strict: bool,
}
pub async fn handle_query(
@@ -63,21 +84,41 @@ pub async fn handle_query(
// Pipeline: overrides -> .tld interception -> blocklist -> local zones -> cache -> upstream
// Each lock is scoped so no guard is held across an await point.
let (response, path) = {
let override_record = ctx.overrides.lock().unwrap().lookup(&qname);
let (response, path, dnssec) = {
let override_record = ctx.overrides.read().unwrap().lookup(&qname);
if let Some(record) = override_record {
let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
resp.answers.push(record);
(resp, QueryPath::Overridden)
(resp, QueryPath::Overridden, DnssecStatus::Indeterminate)
} else if qname == "localhost" || qname.ends_with(".localhost") {
// RFC 6761: .localhost always resolves to loopback
let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
resp.answers.push(sinkhole_record(
&qname,
qtype,
std::net::Ipv4Addr::LOCALHOST,
std::net::Ipv6Addr::LOCALHOST,
300,
));
(resp, QueryPath::Local, DnssecStatus::Indeterminate)
} else if is_special_use_domain(&qname) {
// RFC 6761/8880: private PTR, DDR, NAT64 — answer locally
let resp = special_use_response(&query, &qname, qtype);
(resp, QueryPath::Local, DnssecStatus::Indeterminate)
} else if !ctx.proxy_tld_suffix.is_empty()
&& (qname.ends_with(&ctx.proxy_tld_suffix) || qname == ctx.proxy_tld)
{
// Resolve .numa: local services → 127.0.0.1, LAN peers → peer IP
// Resolve .numa: remote clients get LAN IP (can't reach 127.0.0.1), local get loopback
let service_name = qname.strip_suffix(&ctx.proxy_tld_suffix).unwrap_or(&qname);
let is_remote = !src_addr.ip().is_loopback();
let resolve_ip = {
let local = ctx.services.lock().unwrap();
if local.lookup(service_name).is_some() {
std::net::Ipv4Addr::LOCALHOST
if is_remote {
*ctx.lan_ip.lock().unwrap()
} else {
std::net::Ipv4Addr::LOCALHOST
}
} else {
let mut peers = ctx.lan_peers.lock().unwrap();
peers
@@ -89,57 +130,73 @@ pub async fn handle_query(
.unwrap_or(std::net::Ipv4Addr::LOCALHOST)
}
};
let v6 = if resolve_ip == std::net::Ipv4Addr::LOCALHOST {
std::net::Ipv6Addr::LOCALHOST
} else {
resolve_ip.to_ipv6_mapped()
};
let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
match qtype {
QueryType::AAAA => resp.answers.push(DnsRecord::AAAA {
domain: qname.clone(),
addr: if resolve_ip == std::net::Ipv4Addr::LOCALHOST {
std::net::Ipv6Addr::LOCALHOST
} else {
resolve_ip.to_ipv6_mapped()
},
ttl: 300,
}),
_ => resp.answers.push(DnsRecord::A {
domain: qname.clone(),
addr: resolve_ip,
ttl: 300,
}),
}
(resp, QueryPath::Local)
} else if ctx.blocklist.lock().unwrap().is_blocked(&qname) {
resp.answers
.push(sinkhole_record(&qname, qtype, resolve_ip, v6, 300));
(resp, QueryPath::Local, DnssecStatus::Indeterminate)
} else if ctx.blocklist.read().unwrap().is_blocked(&qname) {
let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
match qtype {
QueryType::AAAA => resp.answers.push(DnsRecord::AAAA {
domain: qname.clone(),
addr: std::net::Ipv6Addr::UNSPECIFIED,
ttl: 60,
}),
_ => resp.answers.push(DnsRecord::A {
domain: qname.clone(),
addr: std::net::Ipv4Addr::UNSPECIFIED,
ttl: 60,
}),
}
(resp, QueryPath::Blocked)
resp.answers.push(sinkhole_record(
&qname,
qtype,
std::net::Ipv4Addr::UNSPECIFIED,
std::net::Ipv6Addr::UNSPECIFIED,
60,
));
(resp, QueryPath::Blocked, DnssecStatus::Indeterminate)
} else if let Some(records) = ctx.zone_map.get(qname.as_str()).and_then(|m| m.get(&qtype)) {
let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
resp.answers = records.clone();
(resp, QueryPath::Local)
(resp, QueryPath::Local, DnssecStatus::Indeterminate)
} else {
let cached = ctx.cache.lock().unwrap().lookup(&qname, qtype);
if let Some(cached) = cached {
let cached = ctx.cache.read().unwrap().lookup_with_status(&qname, qtype);
if let Some((cached, cached_dnssec)) = cached {
let mut resp = cached;
resp.header.id = query.header.id;
(resp, QueryPath::Cached)
if cached_dnssec == DnssecStatus::Secure {
resp.header.authed_data = true;
}
(resp, QueryPath::Cached, cached_dnssec)
} else if ctx.upstream_mode == UpstreamMode::Recursive {
let key = (qname.clone(), qtype);
let (resp, path, err) = resolve_coalesced(&ctx.inflight, key, &query, || {
crate::recursive::resolve_recursive(
&qname,
qtype,
&ctx.cache,
&query,
&ctx.root_hints,
&ctx.srtt,
)
})
.await;
if path == QueryPath::Coalesced {
debug!("{} | {:?} {} | COALESCED", src_addr, qtype, qname);
} else if path == QueryPath::UpstreamError {
error!(
"{} | {:?} {} | RECURSIVE ERROR | {}",
src_addr,
qtype,
qname,
err.as_deref().unwrap_or("leader failed")
);
}
(resp, path, DnssecStatus::Indeterminate)
} else {
let upstream =
crate::system_dns::match_forwarding_rule(&qname, &ctx.forwarding_rules)
.unwrap_or_else(|| *ctx.upstream.lock().unwrap());
match forward_query(&query, upstream, ctx.timeout).await {
match crate::system_dns::match_forwarding_rule(&qname, &ctx.forwarding_rules) {
Some(addr) => Upstream::Udp(addr),
None => ctx.upstream.lock().unwrap().clone(),
};
match forward_query(&query, &upstream, ctx.timeout).await {
Ok(resp) => {
ctx.cache.write().unwrap().insert(&qname, qtype, &resp);
(resp, QueryPath::Forwarded, DnssecStatus::Indeterminate)
}
Err(e) => {
error!(
@@ -149,6 +206,7 @@ pub async fn handle_query(
(
DnsPacket::response_from(&query, ResultCode::SERVFAIL),
QueryPath::UpstreamError,
DnssecStatus::Indeterminate,
)
}
}
@@ -156,6 +214,56 @@ pub async fn handle_query(
}
};
let client_do = query.edns.as_ref().is_some_and(|e| e.do_bit);
let mut response = response;
// DNSSEC validation (recursive responses only)
let mut dnssec = dnssec;
if ctx.dnssec_enabled && path == QueryPath::Recursive {
let (status, vstats) =
crate::dnssec::validate_response(&response, &ctx.cache, &ctx.root_hints, &ctx.srtt)
.await;
debug!(
"DNSSEC | {} | {:?} | {}ms | dnskey_hit={} dnskey_fetch={} ds_hit={} ds_fetch={}",
qname,
status,
vstats.elapsed_ms,
vstats.dnskey_cache_hits,
vstats.dnskey_fetches,
vstats.ds_cache_hits,
vstats.ds_fetches,
);
dnssec = status;
if status == DnssecStatus::Secure {
response.header.authed_data = true;
}
if status == DnssecStatus::Bogus && ctx.dnssec_strict {
response = DnsPacket::response_from(&query, ResultCode::SERVFAIL);
}
ctx.cache
.write()
.unwrap()
.insert_with_status(&qname, qtype, &response, status);
}
// Strip DNSSEC records if client didn't set DO bit
if !client_do {
strip_dnssec_records(&mut response);
}
// Echo EDNS back if client sent it
if query.edns.is_some() {
response.edns = Some(crate::packet::EdnsOpt {
do_bit: client_do,
..Default::default()
});
}
let elapsed = start.elapsed();
info!(
@@ -205,7 +313,579 @@ pub async fn handle_query(
path,
rescode: response.header.rescode,
latency_us: elapsed.as_micros() as u64,
dnssec,
});
Ok(())
}
fn is_dnssec_record(r: &DnsRecord) -> bool {
matches!(
r.query_type(),
QueryType::RRSIG | QueryType::DNSKEY | QueryType::DS | QueryType::NSEC | QueryType::NSEC3
)
}
fn strip_dnssec_records(pkt: &mut DnsPacket) {
pkt.answers.retain(|r| !is_dnssec_record(r));
pkt.authorities.retain(|r| !is_dnssec_record(r));
pkt.resources.retain(|r| !is_dnssec_record(r));
}
fn is_special_use_domain(qname: &str) -> bool {
if qname.ends_with(".in-addr.arpa") {
// RFC 6303: private + loopback + link-local reverse DNS
if qname.ends_with(".10.in-addr.arpa")
|| qname.ends_with(".168.192.in-addr.arpa")
|| qname.ends_with(".127.in-addr.arpa")
|| qname.ends_with(".254.169.in-addr.arpa")
|| qname.ends_with(".0.in-addr.arpa")
|| qname.contains("_dns-sd._udp")
{
return true;
}
// 172.16-31.x.x (RFC 1918) — extract second octet from reverse name
if qname.ends_with(".172.in-addr.arpa") {
if let Some(octet_str) = qname
.strip_suffix(".172.in-addr.arpa")
.and_then(|s| s.rsplit('.').next())
{
if let Ok(octet) = octet_str.parse::<u8>() {
return (16..=31).contains(&octet);
}
}
}
return false;
}
// DDR (RFC 9462)
if qname == "_dns.resolver.arpa" || qname.ends_with("._dns.resolver.arpa") {
return true;
}
// NAT64 (RFC 8880)
if qname == "ipv4only.arpa" {
return true;
}
// RFC 6762: .local is reserved for mDNS — never forward to upstream
qname == "local" || qname.ends_with(".local")
}
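The 172.16/12 branch above parses the second octet out of the reverse name. Pulled out as a standalone helper (hypothetical name `is_rfc1918_172`, mirroring the logic in `is_special_use_domain`) it looks like this:

```rust
// In "d.c.b.a.in-addr.arpa" the labels are reversed, so for 172.b.c.d
// the second octet is the label just before ".172.in-addr.arpa".
fn is_rfc1918_172(qname: &str) -> bool {
    qname
        .strip_suffix(".172.in-addr.arpa")
        .and_then(|s| s.rsplit('.').next())
        .and_then(|octet| octet.parse::<u8>().ok())
        .map(|octet| (16..=31).contains(&octet))
        .unwrap_or(false)
}

fn main() {
    assert!(is_rfc1918_172("1.0.16.172.in-addr.arpa")); // 172.16.0.1 → private
    assert!(!is_rfc1918_172("1.0.15.172.in-addr.arpa")); // 172.15.0.1 → public
    assert!(!is_rfc1918_172("example.com"));
    println!("ok");
}
```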
fn sinkhole_record(
domain: &str,
qtype: QueryType,
v4: std::net::Ipv4Addr,
v6: std::net::Ipv6Addr,
ttl: u32,
) -> DnsRecord {
match qtype {
QueryType::AAAA => DnsRecord::AAAA {
domain: domain.to_string(),
addr: v6,
ttl,
},
_ => DnsRecord::A {
domain: domain.to_string(),
addr: v4,
ttl,
},
}
}
enum Disposition {
Leader(broadcast::Sender<Option<DnsPacket>>),
Follower(broadcast::Receiver<Option<DnsPacket>>),
}
fn acquire_inflight(inflight: &Mutex<InflightMap>, key: (String, QueryType)) -> Disposition {
let mut map = inflight.lock().unwrap();
if let Some(tx) = map.get(&key) {
Disposition::Follower(tx.subscribe())
} else {
let (tx, _) = broadcast::channel::<Option<DnsPacket>>(1);
map.insert(key, tx.clone());
Disposition::Leader(tx)
}
}
/// Run a resolve function with in-flight coalescing. Multiple concurrent calls
/// for the same key share a single resolution — the first caller (leader)
/// executes `resolve_fn`, and followers wait for the broadcast result.
async fn resolve_coalesced<F, Fut>(
inflight: &Mutex<InflightMap>,
key: (String, QueryType),
query: &DnsPacket,
resolve_fn: F,
) -> (DnsPacket, QueryPath, Option<String>)
where
F: FnOnce() -> Fut,
Fut: std::future::Future<Output = crate::Result<DnsPacket>>,
{
let disposition = acquire_inflight(inflight, key.clone());
match disposition {
Disposition::Follower(mut rx) => match rx.recv().await {
Ok(Some(mut resp)) => {
resp.header.id = query.header.id;
(resp, QueryPath::Coalesced, None)
}
_ => (
DnsPacket::response_from(query, ResultCode::SERVFAIL),
QueryPath::UpstreamError,
None,
),
},
Disposition::Leader(tx) => {
let guard = InflightGuard { inflight, key };
let result = resolve_fn().await;
drop(guard);
match result {
Ok(resp) => {
let _ = tx.send(Some(resp.clone()));
(resp, QueryPath::Recursive, None)
}
Err(e) => {
let _ = tx.send(None);
let err_msg = e.to_string();
(
DnsPacket::response_from(query, ResultCode::SERVFAIL),
QueryPath::UpstreamError,
Some(err_msg),
)
}
}
}
}
}
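The leader/follower mechanics of `resolve_coalesced` can be sketched without tokio. This std-only single-flight version (hypothetical `single_flight`, synchronous, with `u32` results standing in for `DnsPacket`) shows the same claim–resolve–broadcast–cleanup cycle:

```rust
// Std-only sketch of single-flight coalescing: the first caller to claim
// a key resolves; concurrent callers for the same key wait on a shared
// slot and reuse the result.
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};

type Slot = Arc<(Mutex<Option<u32>>, Condvar)>;

fn single_flight(
    inflight: &Mutex<HashMap<String, Slot>>,
    key: &str,
    resolve: impl FnOnce() -> u32,
) -> u32 {
    // Claim or join: the first caller for a key becomes the leader.
    let (slot, leader) = {
        let mut map = inflight.lock().unwrap();
        match map.get(key) {
            Some(s) => (Arc::clone(s), false),
            None => {
                let s: Slot = Arc::new((Mutex::new(None), Condvar::new()));
                map.insert(key.to_string(), Arc::clone(&s));
                (s, true)
            }
        }
    };
    if leader {
        let v = resolve();
        *slot.0.lock().unwrap() = Some(v);
        slot.1.notify_all();
        inflight.lock().unwrap().remove(key); // mirrors InflightGuard's Drop
        v
    } else {
        let mut got = slot.0.lock().unwrap();
        while got.is_none() {
            got = slot.1.wait(got).unwrap();
        }
        got.unwrap()
    }
}

fn main() {
    let inflight = Arc::new(Mutex::new(HashMap::new()));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let inf = Arc::clone(&inflight);
        handles.push(std::thread::spawn(move || {
            single_flight(&inf, "example.com/A", || 42)
        }));
    }
    for h in handles {
        assert_eq!(h.join().unwrap(), 42); // every caller sees the same answer
    }
    assert!(inflight.lock().unwrap().is_empty()); // key cleaned up
    println!("ok");
}
```

As in `resolve_coalesced`, a caller arriving after the leader has removed the key simply becomes a new leader; that is safe but may re-resolve.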
struct InflightGuard<'a> {
inflight: &'a Mutex<InflightMap>,
key: (String, QueryType),
}
impl Drop for InflightGuard<'_> {
fn drop(&mut self) {
self.inflight.lock().unwrap().remove(&self.key);
}
}
fn special_use_response(query: &DnsPacket, qname: &str, qtype: QueryType) -> DnsPacket {
use std::net::{Ipv4Addr, Ipv6Addr};
if qname == "ipv4only.arpa" {
// RFC 8880: well-known NAT64 addresses
let mut resp = DnsPacket::response_from(query, ResultCode::NOERROR);
let domain = qname.to_string();
match qtype {
QueryType::A => {
resp.answers.push(DnsRecord::A {
domain: domain.clone(),
addr: Ipv4Addr::new(192, 0, 0, 170),
ttl: 300,
});
resp.answers.push(DnsRecord::A {
domain,
addr: Ipv4Addr::new(192, 0, 0, 171),
ttl: 300,
});
}
QueryType::AAAA => {
resp.answers.push(DnsRecord::AAAA {
domain,
addr: Ipv6Addr::new(0x0064, 0xff9b, 0, 0, 0, 0, 0xc000, 0x00aa),
ttl: 300,
});
}
_ => {}
}
resp
} else {
DnsPacket::response_from(query, ResultCode::NXDOMAIN)
}
}
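The hardcoded AAAA answer above is the RFC 6052 well-known prefix 64:ff9b::/96 with the IPv4 address embedded in the low 32 bits. A standalone sketch of that mapping (`nat64_map` is a hypothetical helper, not part of this crate):

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

// Embed an IPv4 address in the NAT64 well-known prefix 64:ff9b::/96
// (RFC 6052); RFC 8880 uses this mapping for ipv4only.arpa answers.
fn nat64_map(v4: Ipv4Addr) -> Ipv6Addr {
    let [a, b, c, d] = v4.octets();
    Ipv6Addr::new(
        0x0064,
        0xff9b,
        0,
        0,
        0,
        0,
        u16::from_be_bytes([a, b]),
        u16::from_be_bytes([c, d]),
    )
}

fn main() {
    // 192.0.0.170 → the literal used in the AAAA branch above
    let mapped = nat64_map(Ipv4Addr::new(192, 0, 0, 170));
    assert_eq!(mapped.to_string(), "64:ff9b::c000:aa");
    println!("{}", mapped);
}
```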
#[cfg(test)]
mod tests {
use super::*;
use std::collections::HashMap;
use std::net::Ipv4Addr;
use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;
// ---- InflightGuard unit tests ----
#[test]
fn inflight_guard_removes_key_on_drop() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key = ("example.com".to_string(), QueryType::A);
let (tx, _) = broadcast::channel::<Option<DnsPacket>>(1);
map.lock().unwrap().insert(key.clone(), tx);
assert_eq!(map.lock().unwrap().len(), 1);
{
let _guard = InflightGuard {
inflight: &map,
key: key.clone(),
};
} // guard dropped here
assert!(map.lock().unwrap().is_empty());
}
#[test]
fn inflight_guard_only_removes_own_key() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key_a = ("a.com".to_string(), QueryType::A);
let key_b = ("b.com".to_string(), QueryType::A);
let (tx_a, _) = broadcast::channel::<Option<DnsPacket>>(1);
let (tx_b, _) = broadcast::channel::<Option<DnsPacket>>(1);
map.lock().unwrap().insert(key_a.clone(), tx_a);
map.lock().unwrap().insert(key_b.clone(), tx_b);
{
let _guard = InflightGuard {
inflight: &map,
key: key_a,
};
}
let m = map.lock().unwrap();
assert_eq!(m.len(), 1);
assert!(m.contains_key(&key_b));
}
#[test]
fn inflight_guard_same_domain_different_qtype_independent() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key_a = ("example.com".to_string(), QueryType::A);
let key_aaaa = ("example.com".to_string(), QueryType::AAAA);
let (tx_a, _) = broadcast::channel::<Option<DnsPacket>>(1);
let (tx_aaaa, _) = broadcast::channel::<Option<DnsPacket>>(1);
map.lock().unwrap().insert(key_a.clone(), tx_a);
map.lock().unwrap().insert(key_aaaa.clone(), tx_aaaa);
{
let _guard = InflightGuard {
inflight: &map,
key: key_a,
};
}
let m = map.lock().unwrap();
assert_eq!(m.len(), 1);
assert!(m.contains_key(&key_aaaa));
}
// ---- Coalescing disposition tests (via acquire_inflight) ----
#[test]
fn first_caller_becomes_leader() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key = ("test.com".to_string(), QueryType::A);
let d = acquire_inflight(&map, key.clone());
assert!(matches!(d, Disposition::Leader(_)));
assert_eq!(map.lock().unwrap().len(), 1);
}
#[test]
fn second_caller_becomes_follower() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key = ("test.com".to_string(), QueryType::A);
let _leader = acquire_inflight(&map, key.clone());
let follower = acquire_inflight(&map, key);
assert!(matches!(follower, Disposition::Follower(_)));
// Map still has exactly 1 entry — follower subscribes, doesn't insert
assert_eq!(map.lock().unwrap().len(), 1);
}
#[tokio::test]
async fn leader_broadcast_reaches_follower() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key = ("test.com".to_string(), QueryType::A);
let leader = acquire_inflight(&map, key.clone());
let follower = acquire_inflight(&map, key);
let tx = match leader {
Disposition::Leader(tx) => tx,
_ => panic!("expected leader"),
};
let mut rx = match follower {
Disposition::Follower(rx) => rx,
_ => panic!("expected follower"),
};
let mut resp = DnsPacket::new();
resp.header.id = 42;
resp.answers.push(DnsRecord::A {
domain: "test.com".into(),
addr: Ipv4Addr::new(1, 2, 3, 4),
ttl: 300,
});
let _ = tx.send(Some(resp));
let received = rx.recv().await.unwrap().unwrap();
assert_eq!(received.header.id, 42);
assert_eq!(received.answers.len(), 1);
}
#[tokio::test]
async fn leader_none_signals_failure_to_follower() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key = ("test.com".to_string(), QueryType::A);
let leader = acquire_inflight(&map, key.clone());
let follower = acquire_inflight(&map, key);
let tx = match leader {
Disposition::Leader(tx) => tx,
_ => panic!("expected leader"),
};
let mut rx = match follower {
Disposition::Follower(rx) => rx,
_ => panic!("expected follower"),
};
let _ = tx.send(None);
assert!(rx.recv().await.unwrap().is_none());
}
#[tokio::test]
async fn multiple_followers_all_receive_via_acquire() {
let map: Mutex<InflightMap> = Mutex::new(HashMap::new());
let key = ("multi.com".to_string(), QueryType::A);
let leader = acquire_inflight(&map, key.clone());
let f1 = acquire_inflight(&map, key.clone());
let f2 = acquire_inflight(&map, key.clone());
let f3 = acquire_inflight(&map, key);
let tx = match leader {
Disposition::Leader(tx) => tx,
_ => panic!("expected leader"),
};
let mut resp = DnsPacket::new();
resp.answers.push(DnsRecord::A {
domain: "multi.com".into(),
addr: Ipv4Addr::new(10, 0, 0, 1),
ttl: 60,
});
let _ = tx.send(Some(resp));
for f in [f1, f2, f3] {
let mut rx = match f {
Disposition::Follower(rx) => rx,
_ => panic!("expected follower"),
};
let r = rx.recv().await.unwrap().unwrap();
assert_eq!(r.answers.len(), 1);
}
}
// ---- Integration: resolve_coalesced with mock futures ----
fn mock_response(domain: &str) -> DnsPacket {
let mut resp = DnsPacket::new();
resp.header.response = true;
resp.header.rescode = ResultCode::NOERROR;
resp.answers.push(DnsRecord::A {
domain: domain.to_string(),
addr: Ipv4Addr::new(10, 0, 0, 1),
ttl: 300,
});
resp
}
#[tokio::test]
async fn concurrent_queries_coalesce_to_single_resolution() {
let inflight = Arc::new(Mutex::new(HashMap::new()));
let resolve_count = Arc::new(std::sync::atomic::AtomicU32::new(0));
let mut handles = Vec::new();
for i in 0..5u16 {
let count = resolve_count.clone();
let inf = inflight.clone();
let key = ("coalesce.test".to_string(), QueryType::A);
let query = DnsPacket::query(100 + i, "coalesce.test", QueryType::A);
handles.push(tokio::spawn(async move {
resolve_coalesced(&inf, key, &query, || async {
count.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
tokio::time::sleep(Duration::from_millis(200)).await;
Ok(mock_response("coalesce.test"))
})
.await
}));
}
let mut paths = Vec::new();
for h in handles {
let (_, path, _) = h.await.unwrap();
paths.push(path);
}
let actual = resolve_count.load(std::sync::atomic::Ordering::Relaxed);
assert_eq!(actual, 1, "expected 1 resolution, got {}", actual);
let recursive = paths.iter().filter(|p| **p == QueryPath::Recursive).count();
let coalesced = paths.iter().filter(|p| **p == QueryPath::Coalesced).count();
assert_eq!(recursive, 1, "expected 1 RECURSIVE, got {}", recursive);
assert_eq!(coalesced, 4, "expected 4 COALESCED, got {}", coalesced);
assert!(inflight.lock().unwrap().is_empty());
}
#[tokio::test]
async fn different_qtypes_not_coalesced() {
let inflight = Arc::new(Mutex::new(HashMap::new()));
let resolve_count = Arc::new(std::sync::atomic::AtomicU32::new(0));
let inf1 = inflight.clone();
let inf2 = inflight.clone();
let count1 = resolve_count.clone();
let count2 = resolve_count.clone();
let query_a = DnsPacket::query(200, "same.domain", QueryType::A);
let query_aaaa = DnsPacket::query(201, "same.domain", QueryType::AAAA);
let h1 = tokio::spawn(async move {
resolve_coalesced(
&inf1,
("same.domain".to_string(), QueryType::A),
&query_a,
|| async {
count1.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
tokio::time::sleep(Duration::from_millis(100)).await;
Ok(mock_response("same.domain"))
},
)
.await
});
let h2 = tokio::spawn(async move {
resolve_coalesced(
&inf2,
("same.domain".to_string(), QueryType::AAAA),
&query_aaaa,
|| async {
count2.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
tokio::time::sleep(Duration::from_millis(100)).await;
Ok(mock_response("same.domain"))
},
)
.await
});
let (_, path1, _) = h1.await.unwrap();
let (_, path2, _) = h2.await.unwrap();
let actual = resolve_count.load(std::sync::atomic::Ordering::Relaxed);
assert_eq!(actual, 2, "A and AAAA should each resolve, got {}", actual);
assert_eq!(path1, QueryPath::Recursive);
assert_eq!(path2, QueryPath::Recursive);
assert!(inflight.lock().unwrap().is_empty());
}
#[tokio::test]
async fn inflight_map_cleaned_after_error() {
let inflight: Mutex<InflightMap> = Mutex::new(HashMap::new());
let query = DnsPacket::query(300, "will-fail.test", QueryType::A);
let (_, path, _) = resolve_coalesced(
&inflight,
("will-fail.test".to_string(), QueryType::A),
&query,
|| async { Err::<DnsPacket, _>("upstream timeout".into()) },
)
.await;
assert_eq!(path, QueryPath::UpstreamError);
assert!(inflight.lock().unwrap().is_empty());
}
#[tokio::test]
async fn follower_gets_servfail_when_leader_fails() {
let inflight = Arc::new(Mutex::new(HashMap::new()));
let mut handles = Vec::new();
for i in 0..3u16 {
let inf = inflight.clone();
let query = DnsPacket::query(400 + i, "fail.test", QueryType::A);
handles.push(tokio::spawn(async move {
resolve_coalesced(
&inf,
("fail.test".to_string(), QueryType::A),
&query,
|| async {
tokio::time::sleep(Duration::from_millis(200)).await;
Err::<DnsPacket, _>("upstream error".into())
},
)
.await
}));
}
let mut paths = Vec::new();
for h in handles {
let (resp, path, _) = h.await.unwrap();
assert_eq!(resp.header.rescode, ResultCode::SERVFAIL);
assert_eq!(
resp.questions.len(),
1,
"SERVFAIL must echo question section"
);
assert_eq!(resp.questions[0].name, "fail.test");
paths.push(path);
}
let errors = paths
.iter()
.filter(|p| **p == QueryPath::UpstreamError)
.count();
assert_eq!(errors, 3, "all 3 should be UpstreamError, got {}", errors);
assert!(inflight.lock().unwrap().is_empty());
}
#[tokio::test]
async fn servfail_leader_includes_question_section() {
let inflight: Mutex<InflightMap> = Mutex::new(HashMap::new());
let query = DnsPacket::query(500, "question.test", QueryType::A);
let (resp, _, _) = resolve_coalesced(
&inflight,
("question.test".to_string(), QueryType::A),
&query,
|| async { Err::<DnsPacket, _>("fail".into()) },
)
.await;
assert_eq!(resp.header.rescode, ResultCode::SERVFAIL);
assert_eq!(
resp.questions.len(),
1,
"SERVFAIL must echo question section"
);
assert_eq!(resp.questions[0].name, "question.test");
assert_eq!(resp.questions[0].qtype, QueryType::A);
assert_eq!(resp.header.id, 500);
}
#[tokio::test]
async fn leader_error_preserves_message() {
let inflight: Mutex<InflightMap> = Mutex::new(HashMap::new());
let query = DnsPacket::query(700, "err-msg.test", QueryType::A);
let (_, path, err) = resolve_coalesced(
&inflight,
("err-msg.test".to_string(), QueryType::A),
&query,
|| async { Err::<DnsPacket, _>("connection refused by upstream".into()) },
)
.await;
assert_eq!(path, QueryPath::UpstreamError);
assert_eq!(
err.as_deref(),
Some("connection refused by upstream"),
"error message must be preserved for logging"
);
}
}

src/dnssec.rs (new file, 1690 lines)

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
use std::fmt;
use std::net::SocketAddr;
use std::time::Duration;
@@ -8,7 +9,46 @@ use crate::buffer::BytePacketBuffer;
use crate::packet::DnsPacket;
use crate::Result;
#[derive(Clone)]
pub enum Upstream {
Udp(SocketAddr),
Doh {
url: String,
client: reqwest::Client,
},
}
impl PartialEq for Upstream {
fn eq(&self, other: &Self) -> bool {
match (self, other) {
(Self::Udp(a), Self::Udp(b)) => a == b,
(Self::Doh { url: a, .. }, Self::Doh { url: b, .. }) => a == b,
_ => false,
}
}
}
impl fmt::Display for Upstream {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Upstream::Udp(addr) => write!(f, "{}", addr),
Upstream::Doh { url, .. } => f.write_str(url),
}
}
}
pub async fn forward_query(
query: &DnsPacket,
upstream: &Upstream,
timeout_duration: Duration,
) -> Result<DnsPacket> {
match upstream {
Upstream::Udp(addr) => forward_udp(query, *addr, timeout_duration).await,
Upstream::Doh { url, client } => forward_doh(query, url, client, timeout_duration).await,
}
}
pub(crate) async fn forward_udp(
query: &DnsPacket,
upstream: SocketAddr,
timeout_duration: Duration,
@@ -33,3 +73,202 @@ pub async fn forward_query(
DnsPacket::from_buffer(&mut recv_buffer)
}
/// DNS over TCP (RFC 1035 §4.2.2): 2-byte length prefix, then the DNS message.
pub(crate) async fn forward_tcp(
query: &DnsPacket,
upstream: SocketAddr,
timeout_duration: Duration,
) -> Result<DnsPacket> {
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
let mut send_buffer = BytePacketBuffer::new();
query.write(&mut send_buffer)?;
let msg = send_buffer.filled();
let mut stream = timeout(timeout_duration, TcpStream::connect(upstream)).await??;
// Single write: Microsoft/Azure DNS servers close TCP connections on split segments
let mut outbuf = Vec::with_capacity(2 + msg.len());
outbuf.extend_from_slice(&(msg.len() as u16).to_be_bytes());
outbuf.extend_from_slice(msg);
stream.write_all(&outbuf).await?;
// Read length-prefixed response
let mut len_buf = [0u8; 2];
timeout(timeout_duration, stream.read_exact(&mut len_buf)).await??;
let resp_len = u16::from_be_bytes(len_buf) as usize;
let mut data = vec![0u8; resp_len];
timeout(timeout_duration, stream.read_exact(&mut data)).await??;
let mut recv_buffer = BytePacketBuffer::from_bytes(&data);
DnsPacket::from_buffer(&mut recv_buffer)
}
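The length-prefix handling in `forward_tcp` can be shown as a pair of pure helpers; `frame` and `unframe` are illustrative names, not functions from this crate:

```rust
// RFC 1035 §4.2.2: over TCP, each DNS message is preceded by a
// two-byte big-endian length field.
fn frame(msg: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(2 + msg.len());
    out.extend_from_slice(&(msg.len() as u16).to_be_bytes());
    out.extend_from_slice(msg);
    out
}

fn unframe(buf: &[u8]) -> Option<&[u8]> {
    let len = u16::from_be_bytes([*buf.first()?, *buf.get(1)?]) as usize;
    buf.get(2..2 + len)
}

fn main() {
    let framed = frame(b"abc");
    assert_eq!(framed, vec![0, 3, b'a', b'b', b'c']);
    assert_eq!(unframe(&framed), Some(&framed[2..]));
    println!("ok");
}
```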
async fn forward_doh(
query: &DnsPacket,
url: &str,
client: &reqwest::Client,
timeout_duration: Duration,
) -> Result<DnsPacket> {
let mut send_buffer = BytePacketBuffer::new();
query.write(&mut send_buffer)?;
let resp = timeout(
timeout_duration,
client
.post(url)
.header("content-type", "application/dns-message")
.header("accept", "application/dns-message")
.body(send_buffer.filled().to_vec())
.send(),
)
.await??
.error_for_status()?;
let bytes = resp.bytes().await?;
log::debug!("DoH response: {} bytes", bytes.len());
let mut recv_buffer = BytePacketBuffer::from_bytes(&bytes);
DnsPacket::from_buffer(&mut recv_buffer)
}
#[cfg(test)]
mod tests {
use super::*;
use std::future::IntoFuture;
use crate::header::ResultCode;
use crate::question::QueryType;
use crate::record::DnsRecord;
#[test]
fn upstream_display_udp() {
let u = Upstream::Udp("9.9.9.9:53".parse().unwrap());
assert_eq!(u.to_string(), "9.9.9.9:53");
}
#[test]
fn upstream_display_doh() {
let u = Upstream::Doh {
url: "https://dns.quad9.net/dns-query".to_string(),
client: reqwest::Client::new(),
};
assert_eq!(u.to_string(), "https://dns.quad9.net/dns-query");
}
fn make_query() -> DnsPacket {
DnsPacket::query(0xABCD, "example.com", QueryType::A)
}
fn make_response(query: &DnsPacket) -> DnsPacket {
let mut resp = DnsPacket::response_from(query, ResultCode::NOERROR);
resp.answers.push(DnsRecord::A {
domain: "example.com".to_string(),
addr: "93.184.216.34".parse().unwrap(),
ttl: 300,
});
resp
}
fn to_wire(pkt: &DnsPacket) -> Vec<u8> {
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
buf.filled().to_vec()
}
#[tokio::test]
async fn doh_mock_server_resolves() {
let query = make_query();
let response_bytes = to_wire(&make_response(&query));
let app = axum::Router::new().route(
"/dns-query",
axum::routing::post(move || {
let body = response_bytes.clone();
async move {
(
[(axum::http::header::CONTENT_TYPE, "application/dns-message")],
body,
)
}
}),
);
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
tokio::spawn(axum::serve(listener, app).into_future());
let upstream = Upstream::Doh {
url: format!("http://{}/dns-query", addr),
client: reqwest::Client::new(),
};
let result = forward_query(&query, &upstream, Duration::from_secs(2))
.await
.expect("DoH forward should succeed");
assert_eq!(result.header.id, 0xABCD);
assert!(result.header.response);
assert_eq!(result.header.rescode, ResultCode::NOERROR);
assert_eq!(result.answers.len(), 1);
match &result.answers[0] {
DnsRecord::A { domain, addr, ttl } => {
assert_eq!(domain, "example.com");
assert_eq!(
*addr,
"93.184.216.34".parse::<std::net::Ipv4Addr>().unwrap()
);
assert_eq!(*ttl, 300);
}
other => panic!("expected A record, got {:?}", other),
}
}
#[tokio::test]
async fn doh_http_error_propagates() {
let app = axum::Router::new().route(
"/dns-query",
axum::routing::post(|| async {
(axum::http::StatusCode::INTERNAL_SERVER_ERROR, "bad")
}),
);
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
tokio::spawn(axum::serve(listener, app).into_future());
let upstream = Upstream::Doh {
url: format!("http://{}/dns-query", addr),
client: reqwest::Client::new(),
};
let result = forward_query(&make_query(), &upstream, Duration::from_secs(2)).await;
assert!(result.is_err());
}
#[tokio::test]
async fn doh_timeout() {
let app = axum::Router::new().route(
"/dns-query",
axum::routing::post(|| async {
tokio::time::sleep(Duration::from_secs(10)).await;
"never"
}),
);
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
tokio::spawn(axum::serve(listener, app).into_future());
let upstream = Upstream::Doh {
url: format!("http://{}/dns-query", addr),
client: reqwest::Client::new(),
};
let result = forward_query(&make_query(), &upstream, Duration::from_millis(100)).await;
assert!(result.is_err());
}
}


@@ -1,13 +1,22 @@
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, SocketAddrV4};
use std::sync::Arc;
use std::time::{Duration, Instant};
use log::{debug, info, warn};
use serde::{Deserialize, Serialize};
use crate::buffer::BytePacketBuffer;
use crate::config::LanConfig;
use crate::ctx::ServerCtx;
use crate::header::DnsHeader;
use crate::question::{DnsQuestion, QueryType};
// --- Constants ---
const MDNS_ADDR: Ipv4Addr = Ipv4Addr::new(224, 0, 0, 251);
const MDNS_PORT: u16 = 5353;
const SERVICE_TYPE: &str = "_numa._tcp.local";
const MDNS_TTL: u32 = 120;
// --- Peer Store ---
@@ -24,11 +33,18 @@ impl PeerStore {
}
}
/// Returns true if a previously-unseen name was inserted.
pub fn update(&mut self, host: IpAddr, services: &[(String, u16)]) -> bool {
let now = Instant::now();
let mut changed = false;
for (name, port) in services {
let key = name.to_lowercase();
if !self.peers.contains_key(&key) {
changed = true;
}
self.peers.insert(key, (host, *port, now));
}
changed
}
pub fn lookup(&mut self, name: &str) -> Option<(IpAddr, u16)> {
@@ -58,25 +74,19 @@ impl PeerStore {
.collect()
}
pub fn names(&mut self) -> Vec<String> {
let now = Instant::now();
self.peers
.retain(|_, (_, _, seen)| now.duration_since(*seen) < self.timeout);
self.peers.keys().cloned().collect()
}
pub fn clear(&mut self) {
self.peers.clear();
}
}
// --- mDNS Discovery ---
pub fn detect_lan_ip() -> Option<Ipv4Addr> {
let socket = std::net::UdpSocket::bind("0.0.0.0:0").ok()?;
@@ -87,44 +97,45 @@ pub fn detect_lan_ip() -> Option<Ipv4Addr> {
}
}
fn get_hostname() -> String {
std::process::Command::new("hostname")
.output()
.ok()
.and_then(|o| String::from_utf8(o.stdout).ok())
.map(|h| h.trim().split('.').next().unwrap_or("numa").to_string())
.filter(|h| !h.is_empty())
.unwrap_or_else(|| "numa".to_string())
}
/// Generate a per-process instance ID for self-filtering on multi-instance hosts
fn instance_id() -> String {
format!(
"{}:{}",
std::process::id(),
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_nanos()
% 1_000_000
)
}
pub async fn start_lan_discovery(ctx: Arc<ServerCtx>, config: &LanConfig) {
let interval = Duration::from_secs(config.broadcast_interval_secs);
let local_ip = *ctx.lan_ip.lock().unwrap();
let hostname = get_hostname();
let our_instance_id = instance_id();
info!(
"LAN discovery on {}:{}, local IP {}, instance {:016x}",
multicast_group, port, local_ip, instance_id
"LAN discovery via mDNS on {}:{}, local IP {}, instance {}._numa._tcp.local",
MDNS_ADDR, MDNS_PORT, local_ip, hostname
);
// Create socket with SO_REUSEADDR for multicast
let std_socket = match create_mdns_socket() {
Ok(s) => s,
Err(e) => {
warn!(
"LAN: could not bind multicast socket: {} — LAN discovery disabled",
"LAN: could not bind mDNS socket: {} — LAN discovery disabled",
e
);
return;
@@ -138,81 +149,312 @@ pub async fn start_lan_discovery(ctx: Arc<ServerCtx>, config: &LanConfig) {
}
};
let socket = Arc::new(socket);
let dest = SocketAddr::new(IpAddr::V4(MDNS_ADDR), MDNS_PORT);
// Spawn sender: announce our services periodically
let sender_ctx = Arc::clone(&ctx);
let sender_socket = Arc::clone(&socket);
let sender_hostname = hostname.clone();
let sender_instance_id = our_instance_id.clone();
tokio::spawn(async move {
let mut ticker = tokio::time::interval(interval);
loop {
ticker.tick().await;
let services: Vec<(String, u16)> = {
let store = sender_ctx.services.lock().unwrap();
store
.list()
.iter()
.map(|e| (e.name.clone(), e.target_port))
.collect()
};
if services.is_empty() {
continue;
}
let current_ip = *sender_ctx.lan_ip.lock().unwrap();
if let Ok(pkt) =
build_announcement(&sender_hostname, current_ip, &services, &sender_instance_id)
{
let _ = sender_socket.send_to(pkt.filled(), dest).await;
}
}
});
// Send initial browse query
if let Ok(pkt) = build_browse_query() {
let _ = socket.send_to(pkt.filled(), dest).await;
}
// Receiver loop: parse mDNS responses for _numa._tcp
let mut buf = vec![0u8; 4096];
loop {
let (len, _src) = match socket.recv_from(&mut buf).await {
Ok(r) => r,
Err(e) => {
debug!("LAN recv error: {}", e);
debug!("mDNS recv error: {}", e);
continue;
}
};
let data = &buf[..len];
if let Some(ann) = parse_mdns_response(data) {
// Skip our own announcements via instance ID (works on multi-instance same-host)
if ann.instance_id.as_deref() == Some(our_instance_id.as_str()) {
continue;
}
if !ann.services.is_empty() {
let changed = ctx
.lan_peers
.lock()
.unwrap()
.update(ann.peer_ip, &ann.services);
if changed {
crate::tls::regenerate_tls(&ctx);
}
debug!(
"LAN: {} services from {} (mDNS)",
ann.services.len(),
ann.peer_ip
);
}
}
}
}
// --- mDNS Packet Building ---
fn build_browse_query() -> crate::Result<BytePacketBuffer> {
let mut buf = BytePacketBuffer::new();
let mut header = DnsHeader::new();
header.questions = 1;
header.write(&mut buf)?;
DnsQuestion::new(SERVICE_TYPE.to_string(), QueryType::PTR).write(&mut buf)?;
Ok(buf)
}
fn build_announcement(
hostname: &str,
ip: Ipv4Addr,
services: &[(String, u16)],
inst_id: &str,
) -> crate::Result<BytePacketBuffer> {
let mut buf = BytePacketBuffer::new();
let instance_name = format!("{}._numa._tcp.local", hostname);
let host_local = format!("{}.local", hostname);
let mut header = DnsHeader::new();
header.response = true;
header.authoritative_answer = true;
header.answers = 4; // PTR + SRV + TXT + A
header.write(&mut buf)?;
// PTR: _numa._tcp.local → <hostname>._numa._tcp.local
write_record_header(&mut buf, SERVICE_TYPE, QueryType::PTR.to_num(), 1, MDNS_TTL)?;
let rdlen_pos = buf.pos();
buf.write_u16(0)?;
let rdata_start = buf.pos();
buf.write_qname(&instance_name)?;
patch_rdlen(&mut buf, rdlen_pos, rdata_start)?;
// SRV: <instance>._numa._tcp.local → <hostname>.local
// Port in SRV is informational; actual service ports are in TXT
write_record_header(
&mut buf,
&instance_name,
QueryType::SRV.to_num(),
0x8001,
MDNS_TTL,
)?;
let rdlen_pos = buf.pos();
buf.write_u16(0)?;
let rdata_start = buf.pos();
buf.write_u16(0)?; // priority
buf.write_u16(0)?; // weight
buf.write_u16(services.first().map(|(_, p)| *p).unwrap_or(0))?; // first service port for SRV display
buf.write_qname(&host_local)?;
patch_rdlen(&mut buf, rdlen_pos, rdata_start)?;
// TXT: services + instance ID for self-filtering
write_record_header(
&mut buf,
&instance_name,
QueryType::TXT.to_num(),
0x8001,
MDNS_TTL,
)?;
let rdlen_pos = buf.pos();
buf.write_u16(0)?;
let rdata_start = buf.pos();
let svc_str = services
.iter()
.map(|(name, port)| format!("{}:{}", name, port))
.collect::<Vec<_>>()
.join(",");
write_txt_string(&mut buf, &format!("services={}", svc_str))?;
write_txt_string(&mut buf, &format!("id={}", inst_id))?;
patch_rdlen(&mut buf, rdlen_pos, rdata_start)?;
// A: <hostname>.local → IP
write_record_header(
&mut buf,
&host_local,
QueryType::A.to_num(),
0x8001,
MDNS_TTL,
)?;
buf.write_u16(4)?;
for &b in &ip.octets() {
buf.write_u8(b)?;
}
Ok(buf)
}
fn write_record_header(
buf: &mut BytePacketBuffer,
name: &str,
rtype: u16,
class: u16,
ttl: u32,
) -> crate::Result<()> {
buf.write_qname(name)?;
buf.write_u16(rtype)?;
buf.write_u16(class)?;
buf.write_u32(ttl)?;
Ok(())
}
fn patch_rdlen(
buf: &mut BytePacketBuffer,
rdlen_pos: usize,
rdata_start: usize,
) -> crate::Result<()> {
let rdlen = (buf.pos() - rdata_start) as u16;
buf.set_u16(rdlen_pos, rdlen)
}
fn write_txt_string(buf: &mut BytePacketBuffer, s: &str) -> crate::Result<()> {
let bytes = s.as_bytes();
for chunk in bytes.chunks(255) {
buf.write_u8(chunk.len() as u8)?;
for &b in chunk {
buf.write_u8(b)?;
}
}
Ok(())
}
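`write_txt_string` encodes a DNS TXT character-string (RFC 1035 §3.3.14): a 1-byte length prefix followed by at most 255 bytes, so longer values are split into multiple character-strings. A self-contained sketch of the same chunking:

```rust
// Sketch of DNS TXT character-string encoding (RFC 1035 §3.3.14): each string
// is a 1-byte length followed by at most 255 bytes; longer values are chunked.
fn encode_txt(s: &str) -> Vec<u8> {
    let mut out = Vec::new();
    for chunk in s.as_bytes().chunks(255) {
        out.push(chunk.len() as u8);
        out.extend_from_slice(chunk);
    }
    out
}

fn main() {
    let wire = encode_txt(&"x".repeat(300));
    // 255-byte chunk + 45-byte chunk, each with a length prefix
    assert_eq!(wire.len(), 302);
    assert_eq!(wire[0], 255);
    assert_eq!(wire[256], 45);
}
```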
// --- mDNS Packet Parsing ---
struct MdnsAnnouncement {
services: Vec<(String, u16)>,
peer_ip: IpAddr,
instance_id: Option<String>,
}
fn parse_mdns_response(data: &[u8]) -> Option<MdnsAnnouncement> {
if data.len() < 12 {
return None;
}
let mut buf = BytePacketBuffer::new();
buf.buf[..data.len()].copy_from_slice(data);
let mut header = DnsHeader::new();
header.read(&mut buf).ok()?;
if !header.response || header.answers == 0 {
return None;
}
// Skip questions
for _ in 0..header.questions {
let mut q = DnsQuestion::new(String::new(), QueryType::UNKNOWN(0));
q.read(&mut buf).ok()?;
}
let total = header.answers + header.authoritative_entries + header.resource_entries;
let mut txt_services: Option<Vec<(String, u16)>> = None;
let mut peer_instance_id: Option<String> = None;
let mut a_ip: Option<IpAddr> = None;
let mut name = String::with_capacity(64);
for _ in 0..total {
if buf.pos() >= data.len() {
break;
}
name.clear();
if buf.read_qname(&mut name).is_err() {
break;
}
let rtype = buf.read_u16().unwrap_or(0);
let _rclass = buf.read_u16().unwrap_or(0);
let _ttl = buf.read_u32().unwrap_or(0);
let rdlength = buf.read_u16().unwrap_or(0) as usize;
let rdata_start = buf.pos();
match rtype {
t if t == QueryType::TXT.to_num() && name.contains("_numa._tcp") => {
let mut pos = rdata_start;
while pos < rdata_start + rdlength && pos < data.len() {
let txt_len = data[pos] as usize;
pos += 1;
if pos + txt_len > data.len() {
break;
}
if let Ok(txt) = std::str::from_utf8(&data[pos..pos + txt_len]) {
if let Some(val) = txt.strip_prefix("services=") {
let svcs: Vec<(String, u16)> = val
.split(',')
.filter_map(|s| {
let mut parts = s.splitn(2, ':');
let svc_name = parts.next()?.to_string();
let port = parts.next()?.parse().ok()?;
Some((svc_name, port))
})
.collect();
if !svcs.is_empty() {
txt_services = Some(svcs);
}
} else if let Some(id) = txt.strip_prefix("id=") {
peer_instance_id = Some(id.to_string());
}
}
pos += txt_len;
}
}
t if t == QueryType::A.to_num() && rdlength == 4 && rdata_start + 4 <= data.len() => {
a_ip = Some(IpAddr::V4(Ipv4Addr::new(
data[rdata_start],
data[rdata_start + 1],
data[rdata_start + 2],
data[rdata_start + 3],
)));
}
_ => {}
}
buf.seek(rdata_start + rdlength).ok();
}
let services = txt_services?;
// Trust the A record IP if present, otherwise this isn't a complete announcement
let peer_ip = a_ip?;
Some(MdnsAnnouncement {
services,
peer_ip,
instance_id: peer_instance_id,
})
}
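The TXT branch above parses a `services=` value as comma-separated `name:port` pairs, silently dropping malformed entries via `filter_map`. The same logic, extracted as a standalone sketch:

```rust
// Standalone sketch of the "services=" TXT value parser from parse_mdns_response:
// comma-separated name:port pairs; malformed entries are silently skipped.
fn parse_services(val: &str) -> Vec<(String, u16)> {
    val.split(',')
        .filter_map(|s| {
            let mut parts = s.splitn(2, ':');
            let name = parts.next()?.to_string();
            let port: u16 = parts.next()?.parse().ok()?;
            Some((name, port))
        })
        .collect()
}

fn main() {
    let svcs = parse_services("web:8080,api:9000,bogus");
    assert_eq!(
        svcs,
        vec![("web".to_string(), 8080), ("api".to_string(), 9000)]
    );
}
```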
fn create_mdns_socket() -> std::io::Result<std::net::UdpSocket> {
let addr = SocketAddrV4::new(Ipv4Addr::UNSPECIFIED, MDNS_PORT);
let socket = socket2::Socket::new(
socket2::Domain::IPV4,
socket2::Type::DGRAM,
@@ -223,6 +465,6 @@ fn create_multicast_socket(group: Ipv4Addr, port: u16) -> std::io::Result<std::n
socket.set_reuse_port(true)?;
socket.set_nonblocking(true)?;
socket.bind(&socket2::SockAddr::from(addr))?;
socket.join_multicast_v4(&group, &Ipv4Addr::UNSPECIFIED)?;
socket.join_multicast_v4(&MDNS_ADDR, &Ipv4Addr::UNSPECIFIED)?;
Ok(socket.into())
}


@@ -4,6 +4,7 @@ pub mod buffer;
pub mod cache;
pub mod config;
pub mod ctx;
pub mod dnssec;
pub mod forward;
pub mod header;
pub mod lan;
@@ -13,7 +14,9 @@ pub mod proxy;
pub mod query_log;
pub mod question;
pub mod record;
pub mod recursive;
pub mod service_store;
pub mod srtt;
pub mod stats;
pub mod system_dns;
pub mod tls;


@@ -1,22 +1,23 @@
use std::net::SocketAddr;
use std::sync::{Arc, Mutex};
use std::sync::{Arc, Mutex, RwLock};
use std::time::Duration;
use arc_swap::ArcSwap;
use log::{error, info};
use tokio::net::UdpSocket;
use numa::blocklist::{download_blocklists, parse_blocklist, BlocklistStore};
use numa::buffer::BytePacketBuffer;
use numa::cache::DnsCache;
use numa::config::{build_zone_map, load_config};
use numa::config::{build_zone_map, load_config, ConfigLoad};
use numa::ctx::{handle_query, ServerCtx};
use numa::forward::Upstream;
use numa::override_store::OverrideStore;
use numa::query_log::QueryLog;
use numa::service_store::ServiceStore;
use numa::stats::ServerStats;
use numa::system_dns::{
discover_system_dns, install_service, install_system_dns, restart_service, service_status,
uninstall_service, uninstall_system_dns,
discover_system_dns, install_service, restart_service, service_status, uninstall_service,
};
#[tokio::main]
@@ -29,12 +30,12 @@ async fn main() -> numa::Result<()> {
let arg1 = std::env::args().nth(1).unwrap_or_default();
match arg1.as_str() {
"install" => {
eprintln!("\x1b[1;38;2;192;98;58mNuma\x1b[0m — configuring system DNS\n");
return install_system_dns().map_err(|e| e.into());
eprintln!("\x1b[1;38;2;192;98;58mNuma\x1b[0m — installing\n");
return install_service().map_err(|e| e.into());
}
"uninstall" => {
eprintln!("\x1b[1;38;2;192;98;58mNuma\x1b[0m — restoring system DNS\n");
return uninstall_system_dns().map_err(|e| e.into());
eprintln!("\x1b[1;38;2;192;98;58mNuma\x1b[0m — uninstalling\n");
return uninstall_service().map_err(|e| e.into());
}
"service" => {
let sub = std::env::args().nth(2).unwrap_or_default();
@@ -50,6 +51,20 @@ async fn main() -> numa::Result<()> {
}
};
}
"lan" => {
let sub = std::env::args().nth(2).unwrap_or_default();
let config_path = std::env::args()
.nth(3)
.unwrap_or_else(|| "numa.toml".to_string());
return match sub.as_str() {
"on" => set_lan_enabled(true, &config_path),
"off" => set_lan_enabled(false, &config_path),
_ => {
eprintln!("Usage: numa lan <on|off> [config-path]");
Ok(())
}
};
}
"version" | "--version" | "-V" => {
eprintln!("numa {}", env!("CARGO_PKG_VERSION"));
return Ok(());
@@ -65,6 +80,8 @@ async fn main() -> numa::Result<()> {
eprintln!(" service stop Uninstall the system service");
eprintln!(" service restart Restart the service with updated binary");
eprintln!(" service status Check if the service is running");
eprintln!(" lan on Enable LAN service discovery (mDNS)");
eprintln!(" lan off Disable LAN service discovery");
eprintln!(" help Show this help");
eprintln!();
eprintln!("Config path defaults to numa.toml");
@@ -80,23 +97,90 @@ async fn main() -> numa::Result<()> {
} else {
arg1 // treat as config path for backwards compatibility
};
let config = load_config(&config_path)?;
let ConfigLoad {
config,
path: resolved_config_path,
found: config_found,
} = load_config(&config_path)?;
// Discover system DNS in a single pass (upstream + forwarding rules)
let system_dns = discover_system_dns();
let upstream_addr = if config.upstream.address.is_empty() {
system_dns
.default_upstream
.or_else(numa::system_dns::detect_dhcp_dns)
.unwrap_or_else(|| {
info!("could not detect system DNS, falling back to 9.9.9.9 (Quad9)");
"9.9.9.9".to_string()
})
} else {
config.upstream.address.clone()
let root_hints = numa::recursive::parse_root_hints(&config.upstream.root_hints);
let (resolved_mode, upstream_auto, upstream, upstream_label) = match config.upstream.mode {
numa::config::UpstreamMode::Auto => {
info!("auto mode: probing recursive resolution...");
if numa::recursive::probe_recursive(&root_hints).await {
info!("recursive probe succeeded — self-sovereign mode");
let dummy = Upstream::Udp("0.0.0.0:0".parse().unwrap());
(
numa::config::UpstreamMode::Recursive,
false,
dummy,
"recursive (root hints)".to_string(),
)
} else {
log::warn!("recursive probe failed — falling back to Quad9 DoH");
let client = reqwest::Client::builder()
.use_rustls_tls()
.build()
.unwrap_or_default();
let url = "https://dns.quad9.net/dns-query".to_string();
let label = url.clone();
(
numa::config::UpstreamMode::Forward,
false,
Upstream::Doh { url, client },
label,
)
}
}
numa::config::UpstreamMode::Recursive => {
let dummy = Upstream::Udp("0.0.0.0:0".parse().unwrap());
(
numa::config::UpstreamMode::Recursive,
false,
dummy,
"recursive (root hints)".to_string(),
)
}
numa::config::UpstreamMode::Forward => {
let upstream_addr = if config.upstream.address.is_empty() {
system_dns
.default_upstream
.or_else(numa::system_dns::detect_dhcp_dns)
.unwrap_or_else(|| {
info!("could not detect system DNS, falling back to Quad9 DoH");
"https://dns.quad9.net/dns-query".to_string()
})
} else {
config.upstream.address.clone()
};
let upstream: Upstream = if upstream_addr.starts_with("https://") {
let client = reqwest::Client::builder()
.use_rustls_tls()
.build()
.unwrap_or_default();
Upstream::Doh {
url: upstream_addr,
client,
}
} else {
let addr: SocketAddr =
format!("{}:{}", upstream_addr, config.upstream.port).parse()?;
Upstream::Udp(addr)
};
let label = upstream.to_string();
(
numa::config::UpstreamMode::Forward,
config.upstream.address.is_empty(),
upstream,
label,
)
}
};
let upstream: SocketAddr = format!("{}:{}", upstream_addr, config.upstream.port).parse()?;
let api_port = config.server.api_port;
let mut blocklist = BlocklistStore::new();
@@ -109,31 +193,45 @@ async fn main() -> numa::Result<()> {
// Build service store: config services + persisted user services
let mut service_store = ServiceStore::new();
service_store.insert_from_config("numa", config.server.api_port);
service_store.insert_from_config("numa", config.server.api_port, Vec::new());
for svc in &config.services {
service_store.insert_from_config(&svc.name, svc.target_port);
service_store.insert_from_config(&svc.name, svc.target_port, svc.routes.clone());
}
service_store.load_persisted();
let forwarding_rules = system_dns.forwarding_rules;
// Build initial TLS config before ServerCtx (so ArcSwap is ready at construction)
let initial_tls = if config.proxy.enabled && config.proxy.tls_port > 0 {
let service_names = service_store.names();
match numa::tls::build_tls_config(&config.proxy.tld, &service_names) {
Ok(tls_config) => Some(ArcSwap::from(tls_config)),
Err(e) => {
log::warn!("TLS setup failed, HTTPS proxy disabled: {}", e);
None
}
}
} else {
None
};
let ctx = Arc::new(ServerCtx {
socket: UdpSocket::bind(&config.server.bind_addr).await?,
zone_map: build_zone_map(&config.zones)?,
cache: Mutex::new(DnsCache::new(
cache: RwLock::new(DnsCache::new(
config.cache.max_entries,
config.cache.min_ttl,
config.cache.max_ttl,
)),
stats: Mutex::new(ServerStats::new()),
overrides: Mutex::new(OverrideStore::new()),
blocklist: Mutex::new(blocklist),
overrides: RwLock::new(OverrideStore::new()),
blocklist: RwLock::new(blocklist),
query_log: Mutex::new(QueryLog::new(1000)),
services: Mutex::new(service_store),
lan_peers: Mutex::new(numa::lan::PeerStore::new(config.lan.peer_timeout_secs)),
forwarding_rules,
upstream: Mutex::new(upstream),
upstream_auto: config.upstream.address.is_empty(),
upstream_auto,
upstream_port: config.upstream.port,
lan_ip: Mutex::new(numa::lan::detect_lan_ip().unwrap_or(std::net::Ipv4Addr::LOCALHOST)),
timeout: Duration::from_millis(config.upstream.timeout_ms),
@@ -143,44 +241,151 @@ async fn main() -> numa::Result<()> {
format!(".{}", config.proxy.tld)
},
proxy_tld: config.proxy.tld.clone(),
lan_enabled: config.lan.enabled,
config_path: resolved_config_path,
config_found,
config_dir: numa::config_dir(),
data_dir: numa::data_dir(),
tls_config: initial_tls,
upstream_mode: resolved_mode,
root_hints,
srtt: std::sync::RwLock::new(numa::srtt::SrttCache::new(config.upstream.srtt)),
inflight: std::sync::Mutex::new(std::collections::HashMap::new()),
dnssec_enabled: config.dnssec.enabled,
dnssec_strict: config.dnssec.strict,
});
let zone_count: usize = ctx.zone_map.values().map(|m| m.len()).sum();
eprintln!("\n\x1b[38;2;192;98;58m ╔══════════════════════════════════════════╗\x1b[0m");
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[1;38;2;192;98;58mNUMA\x1b[0m \x1b[3;38;2;163;152;136mDNS that governs itself\x1b[0m \x1b[38;2;163;152;136mv{}\x1b[0m \x1b[38;2;192;98;58m║\x1b[0m", env!("CARGO_PKG_VERSION"));
eprintln!("\x1b[38;2;192;98;58m ╠══════════════════════════════════════════╣\x1b[0m");
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mDNS\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m", config.server.bind_addr);
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mAPI\x1b[0m http://localhost:{:<16}\x1b[38;2;192;98;58m║\x1b[0m", api_port);
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mDashboard\x1b[0m http://localhost:{:<16}\x1b[38;2;192;98;58m║\x1b[0m", api_port);
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mUpstream\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m", upstream);
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mZones\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m", format!("{} records", zone_count));
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mCache\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m", format!("max {} entries", config.cache.max_entries));
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mBlocking\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m",
if config.blocking.enabled { format!("{} lists", config.blocking.lists.len()) } else { "disabled".to_string() });
if config.proxy.enabled {
let schemes = if config.proxy.tls_port > 0 {
format!(
// Build banner rows, then size the box to fit the longest value
let api_url = format!("http://localhost:{}", api_port);
let proxy_label = if config.proxy.enabled {
if config.proxy.tls_port > 0 {
Some(format!(
"http://:{} https://:{}",
config.proxy.port, config.proxy.tls_port
)
))
} else {
format!("http://*.{} on :{}", config.proxy.tld, config.proxy.port)
};
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mProxy\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m", schemes);
Some(format!(
"http://*.{} on :{}",
config.proxy.tld, config.proxy.port
))
}
} else {
None
};
let config_label = if ctx.config_found {
ctx.config_path.clone()
} else {
format!("{} (defaults)", ctx.config_path)
};
let data_label = ctx.data_dir.display().to_string();
let services_label = ctx.config_dir.join("services.json").display().to_string();
// label (10) + value + padding (2) = inner width; minimum 40 for the title row
let val_w = [
config.server.bind_addr.len(),
api_url.len(),
upstream_label.len(),
config_label.len(),
data_label.len(),
services_label.len(),
]
.into_iter()
.chain(proxy_label.as_ref().map(|s| s.len()))
.max()
.unwrap_or(30);
let w = (val_w + 12).max(42); // 10 label + 2 padding, min 42 for title
let o = "\x1b[38;2;192;98;58m"; // orange
let g = "\x1b[38;2;107;124;78m"; // green
let d = "\x1b[38;2;163;152;136m"; // dim
let r = "\x1b[0m"; // reset
let b = "\x1b[1;38;2;192;98;58m"; // bold orange
let it = "\x1b[3;38;2;163;152;136m"; // italic dim
let bar_top = "━".repeat(w);
let bar_mid = "─".repeat(w);
let row = |label: &str, color: &str, value: &str| {
eprintln!(
"{o}│{r} {color}{:<9}{r} {:<vw$}{o}│{r}",
label,
value,
vw = w - 12
);
};
// Title row: center within the box
let title = format!(
"{b}NUMA{r} {it}DNS that governs itself{r} {d}v{}{r}",
env!("CARGO_PKG_VERSION")
);
// The title contains ANSI codes; visible length is ~38 chars. Pad to fill the box.
let title_visible_len = 4 + 2 + 24 + 2 + 1 + env!("CARGO_PKG_VERSION").len() + 1;
let title_pad = w.saturating_sub(title_visible_len);
eprintln!("\n{o}{bar_top}{r}");
eprint!("{o}│{r} {title}");
eprintln!("{}{o}│{r}", " ".repeat(title_pad));
eprintln!("{o}{bar_top}{r}");
row("DNS", g, &config.server.bind_addr);
row("API", g, &api_url);
row("Dashboard", g, &api_url);
row(
"Upstream",
g,
if ctx.upstream_mode == numa::config::UpstreamMode::Recursive {
"recursive (root hints)"
} else {
&upstream_label
},
);
row("Zones", g, &format!("{} records", zone_count));
row(
"Cache",
g,
&format!("max {} entries", config.cache.max_entries),
);
row(
"Blocking",
g,
&if config.blocking.enabled {
format!("{} lists", config.blocking.lists.len())
} else {
"disabled".to_string()
},
);
if let Some(ref label) = proxy_label {
row("Proxy", g, label);
if config.proxy.bind_addr == "127.0.0.1" {
let y = "\x1b[38;2;204;176;59m"; // yellow
row(
"",
y,
&format!(
"⚠ proxy on 127.0.0.1 — .{} not LAN reachable",
config.proxy.tld
),
);
}
}
if config.lan.enabled {
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mLAN\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m",
format!("{}:{}", config.lan.multicast_group, config.lan.port));
row("LAN", g, "mDNS (_numa._tcp.local)");
}
if !ctx.forwarding_rules.is_empty() {
eprintln!("\x1b[38;2;192;98;58m ║\x1b[0m \x1b[38;2;107;124;78mRouting\x1b[0m {:<30}\x1b[38;2;192;98;58m║\x1b[0m",
format!("{} conditional rules", ctx.forwarding_rules.len()));
row(
"Routing",
g,
&format!("{} conditional rules", ctx.forwarding_rules.len()),
);
}
eprintln!("\x1b[38;2;192;98;58m ╚══════════════════════════════════════════╝\x1b[0m\n");
eprintln!("{o}{bar_mid}{r}");
row("Config", d, &config_label);
row("Data", d, &data_label);
row("Services", d, &services_label);
eprintln!("{o}{bar_top}{r}\n");
info!(
"numa listening on {}, upstream {}, {} zone records, cache max {}, API on port {}",
config.server.bind_addr, upstream, zone_count, config.cache.max_entries, api_port,
config.server.bind_addr, upstream_label, zone_count, config.cache.max_entries, api_port,
);
// Download blocklists on startup
@@ -203,9 +408,24 @@ async fn main() -> numa::Result<()> {
});
}
// Prime TLD cache (recursive mode only)
if ctx.upstream_mode == numa::config::UpstreamMode::Recursive {
let prime_ctx = Arc::clone(&ctx);
let prime_tlds = config.upstream.prime_tlds;
tokio::spawn(async move {
numa::recursive::prime_tld_cache(
&prime_ctx.cache,
&prime_ctx.root_hints,
&prime_tlds,
&prime_ctx.srtt,
)
.await;
});
}
// Spawn HTTP API server
let api_ctx = Arc::clone(&ctx);
let api_addr: SocketAddr = format!("0.0.0.0:{}", api_port).parse()?;
let api_addr: SocketAddr = format!("{}:{}", config.server.api_bind_addr, api_port).parse()?;
tokio::spawn(async move {
let app = numa::api::router(api_ctx);
let listener = tokio::net::TcpListener::bind(api_addr).await.unwrap();
@@ -213,37 +433,28 @@ async fn main() -> numa::Result<()> {
axum::serve(listener, app).await.unwrap();
});
let proxy_bind: std::net::Ipv4Addr = config
.proxy
.bind_addr
.parse()
.unwrap_or(std::net::Ipv4Addr::LOCALHOST);
// Spawn HTTP reverse proxy for .numa domains
if config.proxy.enabled {
let proxy_ctx = Arc::clone(&ctx);
let proxy_port = config.proxy.port;
tokio::spawn(async move {
numa::proxy::start_proxy(proxy_ctx, proxy_port).await;
numa::proxy::start_proxy(proxy_ctx, proxy_port, proxy_bind).await;
});
}
// Spawn HTTPS reverse proxy with TLS termination
if config.proxy.enabled && config.proxy.tls_port > 0 {
let service_names: Vec<String> = ctx
.services
.lock()
.unwrap()
.list()
.iter()
.map(|e| e.name.clone())
.collect();
match numa::tls::build_tls_config(&config.proxy.tld, &service_names) {
Ok(tls_config) => {
let proxy_ctx = Arc::clone(&ctx);
let tls_port = config.proxy.tls_port;
tokio::spawn(async move {
numa::proxy::start_proxy_tls(proxy_ctx, tls_port, tls_config).await;
});
}
Err(e) => {
log::warn!("TLS setup failed, HTTPS proxy disabled: {}", e);
}
}
if config.proxy.enabled && config.proxy.tls_port > 0 && ctx.tls_config.is_some() {
let proxy_ctx = Arc::clone(&ctx);
let tls_port = config.proxy.tls_port;
tokio::spawn(async move {
numa::proxy::start_proxy_tls(proxy_ctx, tls_port, proxy_bind).await;
});
}
// Spawn network change watcher (upstream re-detection, LAN IP update, peer flush)
@@ -279,37 +490,45 @@ async fn main() -> numa::Result<()> {
}
async fn network_watch_loop(ctx: Arc<numa::ctx::ServerCtx>) {
let mut interval = tokio::time::interval(Duration::from_secs(30));
let mut tick: u64 = 0;
let mut interval = tokio::time::interval(Duration::from_secs(5));
interval.tick().await; // skip immediate tick
loop {
interval.tick().await;
tick += 1;
let mut changed = false;
// Check LAN IP change
// Check LAN IP change (every 5s — cheap, one UDP socket call)
if let Some(new_ip) = numa::lan::detect_lan_ip() {
let mut current_ip = ctx.lan_ip.lock().unwrap();
if new_ip != *current_ip {
info!("LAN IP changed: {} → {}", current_ip, new_ip);
*current_ip = new_ip;
changed = true;
numa::recursive::reset_udp_state();
}
}
// Check upstream change (only for auto-detected upstream)
if ctx.upstream_auto {
// Re-detect upstream every 30s or on LAN IP change (UDP only;
// DoH upstreams are explicitly configured via URL, not auto-detected)
if ctx.upstream_auto
&& matches!(*ctx.upstream.lock().unwrap(), Upstream::Udp(_))
&& (changed || tick.is_multiple_of(6))
{
let dns_info = numa::system_dns::discover_system_dns();
// Use detected upstream, or try DHCP-provided DNS, or fall back to Quad9
let new_addr = dns_info
.default_upstream
.or_else(numa::system_dns::detect_dhcp_dns)
.unwrap_or_else(|| "9.9.9.9".to_string());
if let Ok(new_upstream) =
if let Ok(new_sock) =
format!("{}:{}", new_addr, ctx.upstream_port).parse::<SocketAddr>()
{
let new_upstream = Upstream::Udp(new_sock);
let mut upstream = ctx.upstream.lock().unwrap();
if new_upstream != *upstream {
info!("upstream changed: {} → {}", *upstream, new_upstream);
if *upstream != new_upstream {
info!("upstream changed: {} → {}", upstream, new_upstream);
*upstream = new_upstream;
changed = true;
}
@@ -321,6 +540,76 @@ async fn network_watch_loop(ctx: Arc<numa::ctx::ServerCtx>) {
ctx.lan_peers.lock().unwrap().clear();
info!("flushed LAN peers after network change");
}
// Re-probe UDP every 5 minutes when disabled
if tick.is_multiple_of(60) {
numa::recursive::probe_udp(&ctx.root_hints).await;
}
}
}
fn set_lan_enabled(enabled: bool, path: &str) -> numa::Result<()> {
let contents = match std::fs::read_to_string(path) {
Ok(c) => c,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
std::fs::write(path, format!("[lan]\nenabled = {}\n", enabled))?;
print_lan_status(enabled);
return Ok(());
}
Err(e) => return Err(e.into()),
};
// Track current TOML section while scanning lines
let mut in_lan = false;
let mut found = false;
let mut lines: Vec<String> = contents
.lines()
.map(|line| {
let trimmed = line.trim();
if trimmed.starts_with('[') {
in_lan = trimmed == "[lan]";
}
if in_lan && !found {
if let Some((key, _)) = trimmed.split_once('=') {
if key.trim() == "enabled" {
found = true;
let indent = &line[..line.len() - trimmed.len()];
return format!("{}enabled = {}", indent, enabled);
}
}
}
line.to_string()
})
.collect();
if !found {
if let Some(i) = lines.iter().position(|l| l.trim() == "[lan]") {
lines.insert(i + 1, format!("enabled = {}", enabled));
} else {
lines.push(String::new());
lines.push("[lan]".to_string());
lines.push(format!("enabled = {}", enabled));
}
}
let mut result = lines.join("\n");
if !result.ends_with('\n') {
result.push('\n');
}
std::fs::write(path, result)?;
print_lan_status(enabled);
Ok(())
}
fn print_lan_status(enabled: bool) {
let label = if enabled { "enabled" } else { "disabled" };
let color = if enabled { "32" } else { "33" };
eprintln!(
"\x1b[1;38;2;192;98;58mNuma\x1b[0m — LAN discovery \x1b[{}m{}\x1b[0m",
color, label
);
if enabled {
eprintln!(" Restart Numa to start mDNS discovery");
}
}
@@ -340,7 +629,7 @@ async fn load_blocklists(ctx: &ServerCtx, lists: &[String]) {
// Swap under lock — sub-microsecond
ctx.blocklist
.lock()
.write()
.unwrap()
.swap_domains(all_domains, sources);
info!(


@@ -64,6 +64,9 @@ impl OverrideStore {
ttl: u32,
duration_secs: Option<u64>,
) -> Result<QueryType> {
// Clean up expired entries on write
self.entries.retain(|_, e| !e.is_expired());
let domain_lower = domain.to_lowercase();
let (qtype, record) = parse_target(&domain_lower, target, ttl)?;
@@ -84,10 +87,10 @@ impl OverrideStore {
}
/// Hot path: assumes `domain` is already lowercased (the parser does this).
pub fn lookup(&mut self, domain: &str) -> Option<DnsRecord> {
/// Read-only — expired entries are left in place (cleaned up on write operations).
pub fn lookup(&self, domain: &str) -> Option<DnsRecord> {
let entry = self.entries.get(domain)?;
if entry.is_expired() {
self.entries.remove(domain);
return None;
}
Some(entry.record.clone())
@@ -114,6 +117,22 @@ impl OverrideStore {
self.entries.clear();
}
pub fn heap_bytes(&self) -> usize {
let per_slot = std::mem::size_of::<u64>()
+ std::mem::size_of::<String>()
+ std::mem::size_of::<OverrideEntry>()
+ 1;
let table = self.entries.capacity() * per_slot;
let heap: usize = self
.entries
.iter()
.map(|(k, v)| {
k.capacity() + v.domain.capacity() + v.target.capacity() + v.record.heap_bytes()
})
.sum();
table + heap
}
pub fn active_count(&self) -> usize {
self.entries.values().filter(|e| !e.is_expired()).count()
}
@@ -151,3 +170,16 @@ fn parse_target(domain: &str, target: &str, ttl: u32) -> Result<(QueryType, DnsR
},
))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn heap_bytes_grows_with_entries() {
let mut store = OverrideStore::new();
let empty = store.heap_bytes();
store.insert("example.com", "1.2.3.4", 300, None).unwrap();
assert!(store.heap_bytes() > empty);
}
}
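The capacity-aware estimate in `heap_bytes` (table slots at capacity plus per-entry string allocations) can be illustrated on a plain `HashMap<String, String>`; the per-slot layout here is an approximation, not hashbrown's exact internals:

```rust
use std::collections::HashMap;

// Illustrative capacity-aware heap estimate for a HashMap<String, String>,
// mirroring the heap_bytes() approach above: table slots counted at capacity,
// plus the per-entry string allocations. Per-slot layout is an assumption.
fn estimate_heap(map: &HashMap<String, String>) -> usize {
    let per_slot = std::mem::size_of::<u64>()   // hash/control overhead (assumed)
        + std::mem::size_of::<String>() * 2     // key + value headers
        + 1;
    let table = map.capacity() * per_slot;
    let strings: usize = map.iter().map(|(k, v)| k.capacity() + v.capacity()).sum();
    table + strings
}

fn main() {
    let mut m = HashMap::new();
    let empty = estimate_heap(&m);
    m.insert("example.com".to_string(), "1.2.3.4".to_string());
    assert!(estimate_heap(&m) > empty);
}
```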


@@ -4,6 +4,31 @@ use crate::question::{DnsQuestion, QueryType};
use crate::record::DnsRecord;
use crate::Result;
/// Recommended EDNS0 UDP payload size (DNS Flag Day 2020) — avoids IP fragmentation.
pub const DEFAULT_EDNS_PAYLOAD: u16 = 1232;
/// EDNS0 OPT pseudo-record (RFC 6891)
#[derive(Clone, Debug)]
pub struct EdnsOpt {
pub udp_payload_size: u16,
pub extended_rcode: u8,
pub version: u8,
pub do_bit: bool,
pub options: Vec<u8>,
}
impl Default for EdnsOpt {
fn default() -> Self {
EdnsOpt {
udp_payload_size: DEFAULT_EDNS_PAYLOAD,
extended_rcode: 0,
version: 0,
do_bit: false,
options: Vec::new(),
}
}
}
#[derive(Clone, Debug)]
pub struct DnsPacket {
pub header: DnsHeader,
@@ -11,6 +36,7 @@ pub struct DnsPacket {
pub answers: Vec<DnsRecord>,
pub authorities: Vec<DnsRecord>,
pub resources: Vec<DnsRecord>,
pub edns: Option<EdnsOpt>,
}
impl Default for DnsPacket {
@@ -27,9 +53,38 @@ impl DnsPacket {
answers: Vec::new(),
authorities: Vec::new(),
resources: Vec::new(),
edns: None,
}
}
pub fn query(id: u16, domain: &str, qtype: crate::question::QueryType) -> DnsPacket {
let mut pkt = DnsPacket::new();
pkt.header.id = id;
pkt.header.recursion_desired = true;
pkt.questions
.push(crate::question::DnsQuestion::new(domain.to_string(), qtype));
pkt
}
pub fn heap_bytes(&self) -> usize {
fn records_heap(records: &[DnsRecord]) -> usize {
records
.iter()
.map(|r| std::mem::size_of::<DnsRecord>() + r.heap_bytes())
.sum::<usize>()
}
let questions: usize = self
.questions
.iter()
.map(|q| std::mem::size_of::<DnsQuestion>() + q.name.capacity())
.sum();
questions
+ records_heap(&self.answers)
+ records_heap(&self.authorities)
+ records_heap(&self.resources)
+ self.edns.as_ref().map_or(0, |e| e.options.capacity())
}
pub fn response_from(query: &DnsPacket, rescode: crate::header::ResultCode) -> DnsPacket {
let mut resp = DnsPacket::new();
resp.header.id = query.header.id;
@@ -46,7 +101,7 @@ impl DnsPacket {
result.header.read(buffer)?;
for _ in 0..result.header.questions {
let mut question = DnsQuestion::new("".to_string(), QueryType::UNKNOWN(0));
let mut question = DnsQuestion::new(String::with_capacity(64), QueryType::UNKNOWN(0));
question.read(buffer)?;
result.questions.push(question);
}
@@ -60,44 +115,83 @@ impl DnsPacket {
result.authorities.push(rec);
}
for _ in 0..result.header.resource_entries {
let rec = DnsRecord::read(buffer)?;
result.resources.push(rec);
// Peek at type field to detect OPT pseudo-records.
// OPT name is always root (0x00), so name byte + type field starts at pos+1.
let peek_pos = buffer.pos();
let name_byte = buffer.get(peek_pos)?;
let is_opt = if name_byte == 0 {
// Root name (single zero byte) — peek at type
let type_hi = buffer.get(peek_pos + 1)?;
let type_lo = buffer.get(peek_pos + 2)?;
u16::from_be_bytes([type_hi, type_lo]) == 41
} else {
false
};
if is_opt {
// Parse OPT manually to capture the class field (= UDP payload size)
buffer.step(1)?; // skip root name (0x00)
let _ = buffer.read_u16()?; // type (41)
let udp_payload_size = buffer.read_u16()?; // class = UDP payload size
let ttl_field = buffer.read_u32()?; // packed flags
let rdlength = buffer.read_u16()?;
let options = buffer.get_range(buffer.pos(), rdlength as usize)?.to_vec();
buffer.step(rdlength as usize)?;
result.edns = Some(EdnsOpt {
udp_payload_size,
extended_rcode: ((ttl_field >> 24) & 0xFF) as u8,
version: ((ttl_field >> 16) & 0xFF) as u8,
do_bit: (ttl_field >> 15) & 1 == 1,
options,
});
} else {
let rec = DnsRecord::read(buffer)?;
result.resources.push(rec);
}
}
Ok(result)
}
pub fn write(&self, buffer: &mut BytePacketBuffer) -> Result<()> {
// Filter out UNKNOWN records (e.g. EDNS OPT) that we can't re-serialize
let answers: Vec<_> = self.answers.iter().filter(|r| !r.is_unknown()).collect();
let authorities: Vec<_> = self
.authorities
.iter()
.filter(|r| !r.is_unknown())
.collect();
let resources: Vec<_> = self.resources.iter().filter(|r| !r.is_unknown()).collect();
let edns_count = if self.edns.is_some() { 1u16 } else { 0 };
let mut header = self.header.clone();
header.questions = self.questions.len() as u16;
header.answers = answers.len() as u16;
header.authoritative_entries = authorities.len() as u16;
header.resource_entries = resources.len() as u16;
header.answers = self.answers.len() as u16;
header.authoritative_entries = self.authorities.len() as u16;
header.resource_entries = self.resources.len() as u16 + edns_count;
header.write(buffer)?;
for question in &self.questions {
question.write(buffer)?;
}
for rec in answers {
for rec in &self.answers {
rec.write(buffer)?;
}
for rec in authorities {
for rec in &self.authorities {
rec.write(buffer)?;
}
for rec in resources {
for rec in &self.resources {
rec.write(buffer)?;
}
// Write EDNS0 OPT pseudo-record
if let Some(ref edns) = self.edns {
buffer.write_u8(0)?; // root name
buffer.write_u16(QueryType::OPT.to_num())?; // type 41
buffer.write_u16(edns.udp_payload_size)?; // class = UDP payload size
// TTL = extended_rcode(8) | version(8) | DO(1) | Z(15)
let ttl_field = ((edns.extended_rcode as u32) << 24)
| ((edns.version as u32) << 16)
| (if edns.do_bit { 1u32 << 15 } else { 0 });
buffer.write_u32(ttl_field)?;
buffer.write_u16(edns.options.len() as u16)?; // RDLENGTH
buffer.write_bytes(&edns.options)?;
}
Ok(())
}
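The OPT pseudo-record reuses the TTL field as packed flags: extended RCODE in the high 8 bits, version in the next 8, the DO bit at bit 15, and a zero Z field (RFC 6891). A round-trip sketch of the packing used in `write()` and `from_buffer()`:

```rust
// Sketch of the EDNS0 OPT TTL-field packing (RFC 6891):
// extended RCODE (bits 31..24) | version (23..16) | DO (15) | Z (zero).
fn pack_edns_ttl(extended_rcode: u8, version: u8, do_bit: bool) -> u32 {
    ((extended_rcode as u32) << 24)
        | ((version as u32) << 16)
        | (if do_bit { 1u32 << 15 } else { 0 })
}

fn unpack_edns_ttl(ttl: u32) -> (u8, u8, bool) {
    (
        ((ttl >> 24) & 0xFF) as u8,
        ((ttl >> 16) & 0xFF) as u8,
        (ttl >> 15) & 1 == 1,
    )
}

fn main() {
    let ttl = pack_edns_ttl(0, 0, true);
    assert_eq!(ttl, 0x0000_8000);
    assert_eq!(unpack_edns_ttl(ttl), (0, 0, true));
    assert_eq!(unpack_edns_ttl(pack_edns_ttl(3, 1, false)), (3, 1, false));
}
```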
@@ -116,5 +210,416 @@ impl DnsPacket {
for rec in &self.resources {
println!("{:#?}", rec);
}
if let Some(ref edns) = self.edns {
println!("EDNS: {:?}", edns);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::header::ResultCode;
#[test]
fn edns_round_trip() {
let mut pkt = DnsPacket::new();
pkt.header.id = 0x1234;
pkt.header.response = true;
pkt.header.rescode = ResultCode::NOERROR;
pkt.edns = Some(EdnsOpt {
do_bit: true,
..Default::default()
});
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
buf.seek(0).unwrap();
let parsed = DnsPacket::from_buffer(&mut buf).unwrap();
let edns = parsed.edns.expect("EDNS should be present");
assert_eq!(edns.udp_payload_size, DEFAULT_EDNS_PAYLOAD);
assert!(edns.do_bit);
assert_eq!(edns.version, 0);
}
#[test]
fn edns_do_bit_false() {
let mut pkt = DnsPacket::new();
pkt.header.id = 0x5678;
pkt.header.response = true;
pkt.edns = Some(EdnsOpt {
udp_payload_size: 1232,
do_bit: false,
..Default::default()
});
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
buf.seek(0).unwrap();
let parsed = DnsPacket::from_buffer(&mut buf).unwrap();
let edns = parsed.edns.expect("EDNS should be present");
assert_eq!(edns.udp_payload_size, DEFAULT_EDNS_PAYLOAD);
assert!(!edns.do_bit);
}
#[test]
fn no_edns_by_default() {
let pkt = DnsPacket::new();
assert!(pkt.edns.is_none());
}
#[test]
fn packet_without_edns_round_trips() {
let mut pkt = DnsPacket::new();
pkt.header.id = 0xABCD;
pkt.header.response = true;
pkt.header.rescode = ResultCode::NOERROR;
pkt.answers.push(crate::record::DnsRecord::A {
domain: "example.com".into(),
addr: "1.2.3.4".parse().unwrap(),
ttl: 300,
});
let parsed = packet_round_trip(&pkt);
assert!(parsed.edns.is_none());
assert_eq!(parsed.answers.len(), 1);
}
fn packet_round_trip(pkt: &DnsPacket) -> DnsPacket {
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
let wire_len = buf.pos();
buf.seek(0).unwrap();
let parsed = DnsPacket::from_buffer(&mut buf).unwrap();
// Verify we consumed exactly what was written
assert_eq!(
buf.pos(),
wire_len,
"parse did not consume all written bytes"
);
parsed
}
#[test]
fn nxdomain_with_nsec_authority_round_trips() {
use crate::question::DnsQuestion;
use crate::record::DnsRecord;
let mut pkt = DnsPacket::new();
pkt.header.id = 0x1111;
pkt.header.response = true;
pkt.header.rescode = ResultCode::NXDOMAIN;
pkt.questions.push(DnsQuestion::new(
"nonexistent.example.com".into(),
QueryType::A,
));
pkt.authorities.push(DnsRecord::NSEC {
domain: "alpha.example.com".into(),
next_domain: "gamma.example.com".into(),
type_bitmap: vec![0, 2, 0x40, 0x01], // A + MX
ttl: 3600,
});
pkt.authorities.push(DnsRecord::RRSIG {
domain: "alpha.example.com".into(),
type_covered: QueryType::NSEC.to_num(),
algorithm: 13,
labels: 3,
original_ttl: 3600,
expiration: 1700000000,
inception: 1690000000,
key_tag: 12345,
signer_name: "example.com".into(),
signature: vec![0xAA; 64],
ttl: 3600,
});
// Wildcard denial NSEC
pkt.authorities.push(DnsRecord::NSEC {
domain: "example.com".into(),
next_domain: "alpha.example.com".into(),
type_bitmap: vec![0, 3, 0x62, 0x01, 0x80], // A, NS, SOA, MX, TXT
ttl: 3600,
});
pkt.edns = Some(EdnsOpt {
do_bit: true,
..Default::default()
});
let parsed = packet_round_trip(&pkt);
assert_eq!(parsed.header.id, 0x1111);
assert_eq!(parsed.header.rescode, ResultCode::NXDOMAIN);
assert_eq!(parsed.questions.len(), 1);
assert_eq!(parsed.questions[0].name, "nonexistent.example.com");
assert_eq!(parsed.authorities.len(), 3);
// Verify NSEC records survived
if let DnsRecord::NSEC {
domain,
next_domain,
type_bitmap,
..
} = &parsed.authorities[0]
{
assert_eq!(domain, "alpha.example.com");
assert_eq!(next_domain, "gamma.example.com");
assert_eq!(type_bitmap, &[0, 2, 0x40, 0x01]);
} else {
panic!("expected NSEC, got {:?}", parsed.authorities[0]);
}
// Verify RRSIG survived
if let DnsRecord::RRSIG {
type_covered,
signer_name,
signature,
..
} = &parsed.authorities[1]
{
assert_eq!(*type_covered, QueryType::NSEC.to_num());
assert_eq!(signer_name, "example.com");
assert_eq!(signature.len(), 64);
} else {
panic!("expected RRSIG, got {:?}", parsed.authorities[1]);
}
// Verify EDNS survived
assert!(parsed.edns.as_ref().unwrap().do_bit);
}
#[test]
fn nxdomain_with_nsec3_authority_round_trips() {
use crate::question::DnsQuestion;
use crate::record::DnsRecord;
let mut pkt = DnsPacket::new();
pkt.header.id = 0x2222;
pkt.header.response = true;
pkt.header.rescode = ResultCode::NXDOMAIN;
pkt.questions
.push(DnsQuestion::new("no.example.com".into(), QueryType::AAAA));
// Three NSEC3 records (closest encloser, next closer, wildcard)
let salt = vec![0xAB, 0xCD];
pkt.authorities.push(DnsRecord::NSEC3 {
domain: "ABC123.example.com".into(),
hash_algorithm: 1,
flags: 0,
iterations: 5,
salt: salt.clone(),
next_hashed_owner: vec![
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E,
0x0F, 0x10, 0x11, 0x12, 0x13, 0x14,
],
type_bitmap: vec![0, 2, 0x60, 0x01], // A, NS, MX
ttl: 300,
});
pkt.authorities.push(DnsRecord::NSEC3 {
domain: "DEF456.example.com".into(),
hash_algorithm: 1,
flags: 0,
iterations: 5,
salt: salt.clone(),
next_hashed_owner: vec![0x20; 20],
type_bitmap: vec![0, 1, 0x40], // A
ttl: 300,
});
pkt.authorities.push(DnsRecord::RRSIG {
domain: "ABC123.example.com".into(),
type_covered: QueryType::NSEC3.to_num(),
algorithm: 8,
labels: 3,
original_ttl: 300,
expiration: 2000000000,
inception: 1600000000,
key_tag: 54321,
signer_name: "example.com".into(),
signature: vec![0xBB; 128],
ttl: 300,
});
pkt.edns = Some(EdnsOpt {
do_bit: true,
..Default::default()
});
let parsed = packet_round_trip(&pkt);
assert_eq!(parsed.header.rescode, ResultCode::NXDOMAIN);
assert_eq!(parsed.authorities.len(), 3);
// Verify first NSEC3 survived with all fields intact
if let DnsRecord::NSEC3 {
domain,
hash_algorithm,
flags,
iterations,
salt: parsed_salt,
next_hashed_owner,
type_bitmap,
..
} = &parsed.authorities[0]
{
assert_eq!(domain, "abc123.example.com");
assert_eq!(*hash_algorithm, 1);
assert_eq!(*flags, 0);
assert_eq!(*iterations, 5);
assert_eq!(parsed_salt, &salt);
assert_eq!(next_hashed_owner.len(), 20);
assert_eq!(type_bitmap, &[0, 2, 0x60, 0x01]);
} else {
panic!("expected NSEC3, got {:?}", parsed.authorities[0]);
}
// Verify RRSIG covering NSEC3
if let DnsRecord::RRSIG {
type_covered,
algorithm,
signature,
..
} = &parsed.authorities[2]
{
assert_eq!(*type_covered, QueryType::NSEC3.to_num());
assert_eq!(*algorithm, 8);
assert_eq!(signature.len(), 128);
} else {
panic!("expected RRSIG, got {:?}", parsed.authorities[2]);
}
}
#[test]
fn dnssec_answer_with_rrsig_round_trips() {
use crate::question::DnsQuestion;
use crate::record::DnsRecord;
let mut pkt = DnsPacket::new();
pkt.header.id = 0x3333;
pkt.header.response = true;
pkt.header.rescode = ResultCode::NOERROR;
pkt.header.authed_data = true;
pkt.questions
.push(DnsQuestion::new("example.com".into(), QueryType::A));
pkt.answers.push(DnsRecord::A {
domain: "example.com".into(),
addr: "93.184.216.34".parse().unwrap(),
ttl: 300,
});
pkt.answers.push(DnsRecord::RRSIG {
domain: "example.com".into(),
type_covered: QueryType::A.to_num(),
algorithm: 13,
labels: 2,
original_ttl: 300,
expiration: 1700000000,
inception: 1690000000,
key_tag: 11111,
signer_name: "example.com".into(),
signature: vec![0xCC; 64],
ttl: 300,
});
// Authority: NS + DS
pkt.authorities.push(DnsRecord::NS {
domain: "example.com".into(),
host: "ns1.example.com".into(),
ttl: 3600,
});
pkt.authorities.push(DnsRecord::DS {
domain: "example.com".into(),
key_tag: 22222,
algorithm: 8,
digest_type: 2,
digest: vec![0xDD; 32],
ttl: 86400,
});
// Additional: glue A + DNSKEY
pkt.resources.push(DnsRecord::A {
domain: "ns1.example.com".into(),
addr: "198.51.100.1".parse().unwrap(),
ttl: 3600,
});
pkt.resources.push(DnsRecord::DNSKEY {
domain: "example.com".into(),
flags: 257,
protocol: 3,
algorithm: 13,
public_key: vec![0xEE; 64],
ttl: 3600,
});
pkt.edns = Some(EdnsOpt {
do_bit: true,
..Default::default()
});
let parsed = packet_round_trip(&pkt);
assert_eq!(parsed.header.id, 0x3333);
assert!(parsed.header.authed_data);
assert_eq!(parsed.answers.len(), 2);
assert_eq!(parsed.authorities.len(), 2);
assert_eq!(parsed.resources.len(), 2);
// Verify A record
if let DnsRecord::A { addr, .. } = &parsed.answers[0] {
assert_eq!(addr.to_string(), "93.184.216.34");
} else {
panic!("expected A");
}
// Verify RRSIG in answers
if let DnsRecord::RRSIG {
type_covered,
key_tag,
signer_name,
..
} = &parsed.answers[1]
{
assert_eq!(*type_covered, 1); // A
assert_eq!(*key_tag, 11111);
assert_eq!(signer_name, "example.com");
} else {
panic!("expected RRSIG");
}
// Verify DS in authority
if let DnsRecord::DS {
key_tag, digest, ..
} = &parsed.authorities[1]
{
assert_eq!(*key_tag, 22222);
assert_eq!(digest.len(), 32);
} else {
panic!("expected DS");
}
// Verify DNSKEY in additional
if let DnsRecord::DNSKEY {
flags, public_key, ..
} = &parsed.resources[1]
{
assert_eq!(*flags, 257);
assert_eq!(public_key.len(), 64);
} else {
panic!("expected DNSKEY");
}
}
#[test]
fn heap_bytes_accounts_for_records() {
let mut pkt = DnsPacket::new();
let empty = pkt.heap_bytes();
pkt.answers.push(DnsRecord::A {
domain: "example.com".into(),
addr: "1.2.3.4".parse().unwrap(),
ttl: 300,
});
assert!(pkt.heap_bytes() > empty);
}
}

View File

@@ -1,4 +1,4 @@
use std::net::SocketAddr;
use std::net::{Ipv4Addr, SocketAddr};
use std::sync::Arc;
use axum::body::Body;
@@ -11,7 +11,6 @@ use hyper::StatusCode;
use hyper_util::client::legacy::Client;
use hyper_util::rt::TokioExecutor;
use log::{debug, error, info, warn};
use rustls::ServerConfig;
use tokio::io::copy_bidirectional;
use tokio_rustls::TlsAcceptor;
@@ -25,8 +24,8 @@ struct ProxyState {
client: HttpClient,
}
pub async fn start_proxy(ctx: Arc<ServerCtx>, port: u16) {
let addr: SocketAddr = ([0, 0, 0, 0], port).into();
pub async fn start_proxy(ctx: Arc<ServerCtx>, port: u16, bind_addr: Ipv4Addr) {
let addr: SocketAddr = (bind_addr, port).into();
let listener = match tokio::net::TcpListener::bind(addr).await {
Ok(l) => l,
Err(e) => {
@@ -50,8 +49,8 @@ pub async fn start_proxy(ctx: Arc<ServerCtx>, port: u16) {
axum::serve(listener, app).await.unwrap();
}
pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, tls_config: Arc<ServerConfig>) {
let addr: SocketAddr = ([0, 0, 0, 0], port).into();
pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, bind_addr: Ipv4Addr) {
let addr: SocketAddr = (bind_addr, port).into();
let listener = match tokio::net::TcpListener::bind(addr).await {
Ok(l) => l,
Err(e) => {
@@ -64,11 +63,17 @@ pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, tls_config: Arc<Ser
};
info!("HTTPS proxy listening on {}", addr);
let acceptor = TlsAcceptor::from(tls_config);
if ctx.tls_config.is_none() {
warn!("proxy: no TLS config — HTTPS proxy disabled");
return;
}
let client: HttpClient = Client::builder(TokioExecutor::new())
.http1_preserve_header_case(true)
.build_http();
// Hold a separate Arc so we can access tls_config after ctx moves into ProxyState
let tls_holder = Arc::clone(&ctx);
let state = ProxyState { ctx, client };
let app = Router::new().fallback(any(proxy_handler)).with_state(state);
@@ -82,7 +87,10 @@ pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, tls_config: Arc<Ser
}
};
let acceptor = acceptor.clone();
// Load the latest TLS config on each connection (picks up new service certs)
// unwrap safe: guarded by is_none() check above
let acceptor =
TlsAcceptor::from(Arc::clone(&*tls_holder.tls_config.as_ref().unwrap().load()));
let app = app.clone();
tokio::spawn(async move {
@@ -109,55 +117,15 @@ pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, tls_config: Arc<Ser
}
}
fn extract_host(req: &Request) -> Option<String> {
req.headers()
.get(hyper::header::HOST)
.and_then(|v| v.to_str().ok())
.map(|h| h.split(':').next().unwrap_or(h).to_lowercase())
}
async fn proxy_handler(State(state): State<ProxyState>, req: Request) -> axum::response::Response {
let hostname = match extract_host(&req) {
Some(h) => h,
None => {
return (StatusCode::BAD_REQUEST, "missing Host header").into_response();
}
};
let service_name = match hostname.strip_suffix(state.ctx.proxy_tld_suffix.as_str()) {
Some(name) => name.to_string(),
None => {
return (
StatusCode::BAD_GATEWAY,
format!("not a {} domain: {}", state.ctx.proxy_tld_suffix, hostname),
)
.into_response()
}
};
let (target_host, target_port) = {
let store = state.ctx.services.lock().unwrap();
if let Some(entry) = store.lookup(&service_name) {
("localhost".to_string(), entry.target_port)
} else {
let mut peers = state.ctx.lan_peers.lock().unwrap();
match peers.lookup(&service_name) {
Some((ip, port)) => (ip.to_string(), port),
None => {
return (
StatusCode::NOT_FOUND,
[(hyper::header::CONTENT_TYPE, "text/html; charset=utf-8")],
format!(
r##"<!DOCTYPE html>
fn error_page(title: &str, body: &str) -> String {
format!(
r##"<!DOCTYPE html>
<html lang="en"><head><meta charset="UTF-8"><meta name="viewport" content="width=device-width,initial-scale=1">
<title>404 — {0}{1}</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Instrument+Serif:ital@0;1&family=DM+Sans:opsz,wght@9..40,400;9..40,500&family=JetBrains+Mono:wght@400&display=swap" rel="stylesheet">
<title>{title} — Numa</title>
<style>
*,*::before,*::after {{ margin:0;padding:0;box-sizing:border-box }}
body {{
font-family: 'DM Sans', system-ui, sans-serif;
font-family: system-ui, -apple-system, sans-serif;
background: #f5f0e8;
color: #2c2418;
min-height: 100vh;
@@ -191,16 +159,24 @@ body::before {{
from {{ opacity:0; transform:translateY(20px) }}
to {{ opacity:1; transform:translateY(0) }}
}}
.code {{
font-family: 'Instrument Serif', Georgia, serif;
.hero-text {{
font-family: Georgia, 'Times New Roman', serif;
font-size: 6rem;
line-height: 1;
color: #c0623a;
letter-spacing: 0.04em;
opacity: 0.85;
}}
.label {{
font-family: ui-monospace, 'SF Mono', monospace;
font-size: 0.7rem;
letter-spacing: 0.12em;
text-transform: uppercase;
color: #b5443a;
margin-bottom: 1rem;
}}
.domain {{
font-family: 'JetBrains Mono', monospace;
font-family: ui-monospace, 'SF Mono', monospace;
font-size: 1.1rem;
color: #2c2418;
margin-top: 1rem;
@@ -228,7 +204,7 @@ pre {{
color: #e8e0d4;
padding: 1rem 1.2rem;
border-radius: 8px;
font-family: 'JetBrains Mono', monospace;
font-family: ui-monospace, 'SF Mono', monospace;
font-size: 0.78rem;
line-height: 1.7;
margin-top: 1.2rem;
@@ -237,9 +213,9 @@ pre {{
pre .prompt {{ color: #8baa6e }}
pre .flag {{ color: #8b9fbb }}
pre .str {{ color: #d48a5a }}
.lyrics {{
.aside {{
margin-top: 2.5rem;
font-family: 'Instrument Serif', Georgia, serif;
font-family: Georgia, 'Times New Roman', serif;
font-style: italic;
font-size: 0.85rem;
color: #a39888;
@@ -250,33 +226,103 @@ pre .str {{ color: #d48a5a }}
@keyframes fade {{ to {{ opacity: 1 }} }}
</style></head><body>
<div class="container">
<div class="code">404</div>
{body}
</div>
</body></html>"##
)
}
fn extract_host(req: &Request) -> Option<String> {
req.headers()
.get(hyper::header::HOST)
.and_then(|v| v.to_str().ok())
.map(|h| h.split(':').next().unwrap_or(h).to_lowercase())
}
async fn proxy_handler(State(state): State<ProxyState>, req: Request) -> axum::response::Response {
let hostname = match extract_host(&req) {
Some(h) => h,
None => {
return (StatusCode::BAD_REQUEST, "missing Host header").into_response();
}
};
let service_name = match hostname.strip_suffix(state.ctx.proxy_tld_suffix.as_str()) {
Some(name) => name.to_string(),
None => {
// Check if this domain was blocked — show a helpful styled page
if state.ctx.blocklist.read().unwrap().is_blocked(&hostname) {
let body = format!(
r#" <div class="hero-text">&#x1f6e1;</div>
<div class="label">Blocked by Numa</div>
<div class="domain">{0}</div>
<p class="message">This domain is on the ad &amp; tracker blocklist.<br>To allow it, use the <a href="http://numa.numa">dashboard</a> or:</p>
<pre><span class="prompt">$</span> <span class="str">curl</span> <span class="flag">-X POST</span> localhost:5380/blocking/allowlist \
<span class="flag">-d</span> '<span class="str">{{"domain":"{0}"}}</span>'</pre>"#,
hostname
);
return (
StatusCode::FORBIDDEN,
[(hyper::header::CONTENT_TYPE, "text/html; charset=utf-8")],
error_page(&format!("Blocked — {}", hostname), &body),
)
.into_response();
}
return (
StatusCode::BAD_GATEWAY,
format!("not a {} domain: {}", state.ctx.proxy_tld_suffix, hostname),
)
.into_response();
}
};
let request_path = req.uri().path().to_string();
let (target_host, target_port, rewritten_path) = {
let store = state.ctx.services.lock().unwrap();
if let Some(entry) = store.lookup(&service_name) {
let (port, path) = entry.resolve_route(&request_path);
("localhost".to_string(), port, path)
} else {
let mut peers = state.ctx.lan_peers.lock().unwrap();
match peers.lookup(&service_name) {
Some((ip, port)) => (ip.to_string(), port, request_path.clone()),
None => {
let body = format!(
r#" <div class="hero-text">404</div>
<div class="domain">{0}{1}</div>
<p class="message">This service isn't registered yet.<br>Add it from the <a href="http://numa.numa">dashboard</a> or:</p>
<pre><span class="prompt">$</span> <span class="str">curl</span> <span class="flag">-X POST</span> numa.numa:5380/services \
<span class="flag">-H</span> 'Content-Type: application/json' \
<span class="flag">-d</span> '<span class="str">{{"name":"{0}","target_port":3000}}</span>'</pre>
<div class="lyrics">ma-ia hii, ma-ia huu, ma-ia haa, ma-ia ha-ha</div>
</div>
</body></html>"##,
<div class="aside">ma-ia hii, ma-ia huu, ma-ia haa, ma-ia ha-ha</div>"#,
service_name, state.ctx.proxy_tld_suffix
),
)
.into_response()
);
return (
StatusCode::NOT_FOUND,
[(hyper::header::CONTENT_TYPE, "text/html; charset=utf-8")],
error_page(
&format!("404 — {}{}", service_name, state.ctx.proxy_tld_suffix),
&body,
),
)
.into_response();
}
}
}
};
let path_and_query = req
let query_string = req
.uri()
.path_and_query()
.map(|pq| pq.as_str())
.unwrap_or("/");
let target_uri: hyper::Uri =
format!("http://{}:{}{}", target_host, target_port, path_and_query)
.parse()
.unwrap();
.query()
.map(|q| format!("?{}", q))
.unwrap_or_default();
let target_uri: hyper::Uri = format!(
"http://{}:{}{}{}",
target_host, target_port, rewritten_path, query_string
)
.parse()
.unwrap();
// Check for upgrade request (WebSocket, etc.)
let is_upgrade = req.headers().get(hyper::header::UPGRADE).is_some();

View File

@@ -2,6 +2,7 @@ use std::collections::VecDeque;
use std::net::SocketAddr;
use std::time::SystemTime;
use crate::cache::DnssecStatus;
use crate::header::ResultCode;
use crate::question::QueryType;
use crate::stats::QueryPath;
@@ -14,6 +15,7 @@ pub struct QueryLogEntry {
pub path: QueryPath,
pub rescode: ResultCode,
pub latency_us: u64,
pub dnssec: DnssecStatus,
}
pub struct QueryLog {
@@ -36,6 +38,21 @@ impl QueryLog {
self.entries.push_back(entry);
}
pub fn len(&self) -> usize {
self.entries.len()
}
pub fn is_empty(&self) -> bool {
self.entries.is_empty()
}
pub fn heap_bytes(&self) -> usize {
self.entries
.iter()
.map(|e| std::mem::size_of::<QueryLogEntry>() + e.domain.capacity())
.sum()
}
pub fn query(&self, filter: &QueryLogFilter) -> Vec<&QueryLogEntry> {
self.entries
.iter()
@@ -75,3 +92,25 @@ pub struct QueryLogFilter {
pub since: Option<SystemTime>,
pub limit: Option<usize>,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn heap_bytes_grows_with_entries() {
let mut log = QueryLog::new(100);
let empty = log.heap_bytes();
log.push(QueryLogEntry {
timestamp: SystemTime::now(),
src_addr: "127.0.0.1:1234".parse().unwrap(),
domain: "example.com".into(),
query_type: QueryType::A,
path: QueryPath::Forwarded,
rescode: ResultCode::NOERROR,
latency_us: 500,
dnssec: DnssecStatus::Indeterminate,
});
assert!(log.heap_bytes() > empty);
}
}

View File

@@ -4,16 +4,22 @@ use crate::Result;
#[derive(PartialEq, Eq, Debug, Clone, Hash, Copy)]
pub enum QueryType {
UNKNOWN(u16),
A, // 1
NS, // 2
CNAME, // 5
SOA, // 6
PTR, // 12
MX, // 15
TXT, // 16
AAAA, // 28
SRV, // 33
HTTPS, // 65
A,      // 1
NS,     // 2
CNAME,  // 5
SOA,    // 6
PTR,    // 12
MX,     // 15
TXT,    // 16
AAAA,   // 28
SRV,    // 33
OPT,    // 41 (EDNS0 pseudo-type)
DS,     // 43
RRSIG,  // 46
NSEC,   // 47
DNSKEY, // 48
NSEC3,  // 50
HTTPS,  // 65
}
impl QueryType {
@@ -29,6 +35,12 @@ impl QueryType {
QueryType::TXT => 16,
QueryType::AAAA => 28,
QueryType::SRV => 33,
QueryType::OPT => 41,
QueryType::DS => 43,
QueryType::RRSIG => 46,
QueryType::NSEC => 47,
QueryType::DNSKEY => 48,
QueryType::NSEC3 => 50,
QueryType::HTTPS => 65,
}
}
@@ -44,6 +56,12 @@ impl QueryType {
16 => QueryType::TXT,
28 => QueryType::AAAA,
33 => QueryType::SRV,
41 => QueryType::OPT,
43 => QueryType::DS,
46 => QueryType::RRSIG,
47 => QueryType::NSEC,
48 => QueryType::DNSKEY,
50 => QueryType::NSEC3,
65 => QueryType::HTTPS,
_ => QueryType::UNKNOWN(num),
}
@@ -60,6 +78,12 @@ impl QueryType {
QueryType::TXT => "TXT",
QueryType::AAAA => "AAAA",
QueryType::SRV => "SRV",
QueryType::OPT => "OPT",
QueryType::DS => "DS",
QueryType::RRSIG => "RRSIG",
QueryType::NSEC => "NSEC",
QueryType::DNSKEY => "DNSKEY",
QueryType::NSEC3 => "NSEC3",
QueryType::HTTPS => "HTTPS",
QueryType::UNKNOWN(_) => "UNKNOWN",
}
@@ -76,6 +100,11 @@ impl QueryType {
"TXT" => Some(QueryType::TXT),
"AAAA" => Some(QueryType::AAAA),
"SRV" => Some(QueryType::SRV),
"DS" => Some(QueryType::DS),
"RRSIG" => Some(QueryType::RRSIG),
"DNSKEY" => Some(QueryType::DNSKEY),
"NSEC" => Some(QueryType::NSEC),
"NSEC3" => Some(QueryType::NSEC3),
"HTTPS" => Some(QueryType::HTTPS),
_ => None,
}

View File

@@ -11,7 +11,7 @@ pub enum DnsRecord {
UNKNOWN {
domain: String,
qtype: u16,
data_len: u16,
data: Vec<u8>,
ttl: u32,
},
A {
@@ -40,11 +40,84 @@ pub enum DnsRecord {
addr: Ipv6Addr,
ttl: u32,
},
DNSKEY {
domain: String,
flags: u16,
protocol: u8,
algorithm: u8,
public_key: Vec<u8>,
ttl: u32,
},
DS {
domain: String,
key_tag: u16,
algorithm: u8,
digest_type: u8,
digest: Vec<u8>,
ttl: u32,
},
RRSIG {
domain: String,
type_covered: u16,
algorithm: u8,
labels: u8,
original_ttl: u32,
expiration: u32,
inception: u32,
key_tag: u16,
signer_name: String,
signature: Vec<u8>,
ttl: u32,
},
NSEC {
domain: String,
next_domain: String,
type_bitmap: Vec<u8>,
ttl: u32,
},
NSEC3 {
domain: String,
hash_algorithm: u8,
flags: u8,
iterations: u16,
salt: Vec<u8>,
next_hashed_owner: Vec<u8>,
type_bitmap: Vec<u8>,
ttl: u32,
},
}
impl DnsRecord {
pub fn is_unknown(&self) -> bool {
matches!(self, DnsRecord::UNKNOWN { .. })
pub fn domain(&self) -> &str {
match self {
DnsRecord::A { domain, .. }
| DnsRecord::NS { domain, .. }
| DnsRecord::CNAME { domain, .. }
| DnsRecord::MX { domain, .. }
| DnsRecord::AAAA { domain, .. }
| DnsRecord::DNSKEY { domain, .. }
| DnsRecord::DS { domain, .. }
| DnsRecord::RRSIG { domain, .. }
| DnsRecord::NSEC { domain, .. }
| DnsRecord::NSEC3 { domain, .. }
| DnsRecord::UNKNOWN { domain, .. } => domain,
}
}
pub fn query_type(&self) -> QueryType {
match self {
DnsRecord::A { .. } => QueryType::A,
DnsRecord::AAAA { .. } => QueryType::AAAA,
DnsRecord::NS { .. } => QueryType::NS,
DnsRecord::CNAME { .. } => QueryType::CNAME,
DnsRecord::MX { .. } => QueryType::MX,
DnsRecord::DNSKEY { .. } => QueryType::DNSKEY,
DnsRecord::DS { .. } => QueryType::DS,
DnsRecord::RRSIG { .. } => QueryType::RRSIG,
DnsRecord::NSEC { .. } => QueryType::NSEC,
DnsRecord::NSEC3 { .. } => QueryType::NSEC3,
DnsRecord::UNKNOWN { qtype, .. } => QueryType::UNKNOWN(*qtype),
}
}
pub fn ttl(&self) -> u32 {
@@ -54,10 +127,55 @@ impl DnsRecord {
| DnsRecord::CNAME { ttl, .. }
| DnsRecord::MX { ttl, .. }
| DnsRecord::AAAA { ttl, .. }
| DnsRecord::DNSKEY { ttl, .. }
| DnsRecord::DS { ttl, .. }
| DnsRecord::RRSIG { ttl, .. }
| DnsRecord::NSEC { ttl, .. }
| DnsRecord::NSEC3 { ttl, .. }
| DnsRecord::UNKNOWN { ttl, .. } => *ttl,
}
}
pub fn heap_bytes(&self) -> usize {
match self {
DnsRecord::A { domain, .. } => domain.capacity(),
DnsRecord::NS { domain, host, .. } | DnsRecord::CNAME { domain, host, .. } => {
domain.capacity() + host.capacity()
}
DnsRecord::MX { domain, host, .. } => domain.capacity() + host.capacity(),
DnsRecord::AAAA { domain, .. } => domain.capacity(),
DnsRecord::DNSKEY {
domain, public_key, ..
} => domain.capacity() + public_key.capacity(),
DnsRecord::DS { domain, digest, .. } => domain.capacity() + digest.capacity(),
DnsRecord::RRSIG {
domain,
signer_name,
signature,
..
} => domain.capacity() + signer_name.capacity() + signature.capacity(),
DnsRecord::NSEC {
domain,
next_domain,
type_bitmap,
..
} => domain.capacity() + next_domain.capacity() + type_bitmap.capacity(),
DnsRecord::NSEC3 {
domain,
salt,
next_hashed_owner,
type_bitmap,
..
} => {
domain.capacity()
+ salt.capacity()
+ next_hashed_owner.capacity()
+ type_bitmap.capacity()
}
DnsRecord::UNKNOWN { domain, data, .. } => domain.capacity() + data.capacity(),
}
}
pub fn set_ttl(&mut self, new_ttl: u32) {
match self {
DnsRecord::A { ttl, .. }
@@ -65,19 +183,25 @@ impl DnsRecord {
| DnsRecord::CNAME { ttl, .. }
| DnsRecord::MX { ttl, .. }
| DnsRecord::AAAA { ttl, .. }
| DnsRecord::DNSKEY { ttl, .. }
| DnsRecord::DS { ttl, .. }
| DnsRecord::RRSIG { ttl, .. }
| DnsRecord::NSEC { ttl, .. }
| DnsRecord::NSEC3 { ttl, .. }
| DnsRecord::UNKNOWN { ttl, .. } => *ttl = new_ttl,
}
}
pub fn read(buffer: &mut BytePacketBuffer) -> Result<DnsRecord> {
let mut domain = String::new();
let mut domain = String::with_capacity(64);
buffer.read_qname(&mut domain)?;
let qtype_num = buffer.read_u16()?;
let qtype = QueryType::from_num(qtype_num);
let _ = buffer.read_u16()?;
let _ = buffer.read_u16()?; // class
let ttl = buffer.read_u32()?;
let data_len = buffer.read_u16()?;
let rdata_start = buffer.pos();
match qtype {
QueryType::A => {
@@ -88,7 +212,6 @@ impl DnsRecord {
((raw_addr >> 8) & 0xFF) as u8,
(raw_addr & 0xFF) as u8,
);
Ok(DnsRecord::A { domain, addr, ttl })
}
QueryType::AAAA => {
@@ -106,13 +229,11 @@ impl DnsRecord {
((raw_addr4 >> 16) & 0xFFFF) as u16,
(raw_addr4 & 0xFFFF) as u16,
);
Ok(DnsRecord::AAAA { domain, addr, ttl })
}
QueryType::NS => {
let mut ns = String::new();
let mut ns = String::with_capacity(64);
buffer.read_qname(&mut ns)?;
Ok(DnsRecord::NS {
domain,
host: ns,
@@ -120,9 +241,8 @@ impl DnsRecord {
})
}
QueryType::CNAME => {
let mut cname = String::new();
let mut cname = String::with_capacity(64);
buffer.read_qname(&mut cname)?;
Ok(DnsRecord::CNAME {
domain,
host: cname,
@@ -131,9 +251,8 @@ impl DnsRecord {
}
QueryType::MX => {
let priority = buffer.read_u16()?;
let mut mx = String::new();
let mut mx = String::with_capacity(64);
buffer.read_qname(&mut mx)?;
Ok(DnsRecord::MX {
domain,
priority,
@@ -141,13 +260,119 @@ impl DnsRecord {
ttl,
})
}
QueryType::DNSKEY => {
let flags = buffer.read_u16()?;
let protocol = buffer.read()?;
let algorithm = buffer.read()?;
let key_len = (data_len as usize)
    .checked_sub(4) // flags(2) + protocol(1) + algorithm(1)
    .ok_or("DNSKEY data_len too short for fixed fields")?;
let public_key = buffer.get_range(buffer.pos(), key_len)?.to_vec();
buffer.step(key_len)?;
Ok(DnsRecord::DNSKEY {
domain,
flags,
protocol,
algorithm,
public_key,
ttl,
})
}
QueryType::DS => {
let key_tag = buffer.read_u16()?;
let algorithm = buffer.read()?;
let digest_type = buffer.read()?;
let digest_len = (data_len as usize)
    .checked_sub(4) // key_tag(2) + algorithm(1) + digest_type(1)
    .ok_or("DS data_len too short for fixed fields")?;
let digest = buffer.get_range(buffer.pos(), digest_len)?.to_vec();
buffer.step(digest_len)?;
Ok(DnsRecord::DS {
domain,
key_tag,
algorithm,
digest_type,
digest,
ttl,
})
}
QueryType::RRSIG => {
let type_covered = buffer.read_u16()?;
let algorithm = buffer.read()?;
let labels = buffer.read()?;
let original_ttl = buffer.read_u32()?;
let expiration = buffer.read_u32()?;
let inception = buffer.read_u32()?;
let key_tag = buffer.read_u16()?;
let mut signer_name = String::with_capacity(64);
buffer.read_qname(&mut signer_name)?;
let rdata_end = rdata_start + data_len as usize;
let sig_len = rdata_end
.checked_sub(buffer.pos())
.ok_or("RRSIG data_len too short for fixed fields + signer_name")?;
let signature = buffer.get_range(buffer.pos(), sig_len)?.to_vec();
buffer.step(sig_len)?;
Ok(DnsRecord::RRSIG {
domain,
type_covered,
algorithm,
labels,
original_ttl,
expiration,
inception,
key_tag,
signer_name,
signature,
ttl,
})
}
QueryType::NSEC => {
let rdata_end = rdata_start + data_len as usize;
let mut next_domain = String::with_capacity(64);
buffer.read_qname(&mut next_domain)?;
let bitmap_len = rdata_end
.checked_sub(buffer.pos())
.ok_or("NSEC data_len too short for type bitmap")?;
let type_bitmap = buffer.get_range(buffer.pos(), bitmap_len)?.to_vec();
buffer.step(bitmap_len)?;
Ok(DnsRecord::NSEC {
domain,
next_domain,
type_bitmap,
ttl,
})
}
QueryType::NSEC3 => {
let rdata_end = rdata_start + data_len as usize;
let hash_algorithm = buffer.read()?;
let flags = buffer.read()?;
let iterations = buffer.read_u16()?;
let salt_length = buffer.read()? as usize;
let salt = buffer.get_range(buffer.pos(), salt_length)?.to_vec();
buffer.step(salt_length)?;
let hash_length = buffer.read()? as usize;
let next_hashed_owner = buffer.get_range(buffer.pos(), hash_length)?.to_vec();
buffer.step(hash_length)?;
let bitmap_len = rdata_end
.checked_sub(buffer.pos())
.ok_or("NSEC3 data_len too short for type bitmap")?;
let type_bitmap = buffer.get_range(buffer.pos(), bitmap_len)?.to_vec();
buffer.step(bitmap_len)?;
Ok(DnsRecord::NSEC3 {
domain,
hash_algorithm,
flags,
iterations,
salt,
next_hashed_owner,
type_bitmap,
ttl,
})
}
_ => {
// SOA, TXT, SRV, etc. — stored as opaque bytes until parsed natively
let data = buffer.get_range(buffer.pos(), data_len as usize)?.to_vec();
buffer.step(data_len as usize)?;
Ok(DnsRecord::UNKNOWN {
domain,
qtype: qtype_num,
data_len,
data,
ttl,
})
}
@@ -163,32 +388,19 @@ impl DnsRecord {
ref addr,
ttl,
} => {
buffer.write_qname(domain)?;
buffer.write_u16(QueryType::A.to_num())?;
buffer.write_u16(1)?;
buffer.write_u32(ttl)?;
write_header(buffer, domain, QueryType::A.to_num(), ttl)?;
buffer.write_u16(4)?;
let octets = addr.octets();
buffer.write_u8(octets[0])?;
buffer.write_u8(octets[1])?;
buffer.write_u8(octets[2])?;
buffer.write_u8(octets[3])?;
buffer.write_bytes(&addr.octets())?;
}
DnsRecord::NS {
ref domain,
ref host,
ttl,
} => {
buffer.write_qname(domain)?;
buffer.write_u16(QueryType::NS.to_num())?;
buffer.write_u16(1)?;
buffer.write_u32(ttl)?;
write_header(buffer, domain, QueryType::NS.to_num(), ttl)?;
let pos = buffer.pos();
buffer.write_u16(0)?;
buffer.write_qname(host)?;
let size = buffer.pos() - (pos + 2);
buffer.set_u16(pos, size as u16)?;
}
@@ -197,15 +409,10 @@ impl DnsRecord {
ref host,
ttl,
} => {
buffer.write_qname(domain)?;
buffer.write_u16(QueryType::CNAME.to_num())?;
buffer.write_u16(1)?;
buffer.write_u32(ttl)?;
write_header(buffer, domain, QueryType::CNAME.to_num(), ttl)?;
let pos = buffer.pos();
buffer.write_u16(0)?;
buffer.write_qname(host)?;
let size = buffer.pos() - (pos + 2);
buffer.set_u16(pos, size as u16)?;
}
@@ -215,16 +422,11 @@ impl DnsRecord {
ref host,
ttl,
} => {
buffer.write_qname(domain)?;
buffer.write_u16(QueryType::MX.to_num())?;
buffer.write_u16(1)?;
buffer.write_u32(ttl)?;
write_header(buffer, domain, QueryType::MX.to_num(), ttl)?;
let pos = buffer.pos();
buffer.write_u16(0)?;
buffer.write_u16(priority)?;
buffer.write_qname(host)?;
let size = buffer.pos() - (pos + 2);
buffer.set_u16(pos, size as u16)?;
}
@@ -233,21 +435,269 @@ impl DnsRecord {
ref addr,
ttl,
} => {
buffer.write_qname(domain)?;
buffer.write_u16(QueryType::AAAA.to_num())?;
buffer.write_u16(1)?;
buffer.write_u32(ttl)?;
write_header(buffer, domain, QueryType::AAAA.to_num(), ttl)?;
buffer.write_u16(16)?;
for octet in &addr.segments() {
buffer.write_u16(*octet)?;
}
}
DnsRecord::UNKNOWN { .. } => {
log::debug!("Skipping record: {:?}", self);
DnsRecord::DNSKEY {
ref domain,
flags,
protocol,
algorithm,
ref public_key,
ttl,
} => {
write_header(buffer, domain, QueryType::DNSKEY.to_num(), ttl)?;
buffer.write_u16((4 + public_key.len()) as u16)?;
buffer.write_u16(flags)?;
buffer.write_u8(protocol)?;
buffer.write_u8(algorithm)?;
buffer.write_bytes(public_key)?;
}
DnsRecord::DS {
ref domain,
key_tag,
algorithm,
digest_type,
ref digest,
ttl,
} => {
write_header(buffer, domain, QueryType::DS.to_num(), ttl)?;
buffer.write_u16((4 + digest.len()) as u16)?;
buffer.write_u16(key_tag)?;
buffer.write_u8(algorithm)?;
buffer.write_u8(digest_type)?;
buffer.write_bytes(digest)?;
}
DnsRecord::RRSIG {
ref domain,
type_covered,
algorithm,
labels,
original_ttl,
expiration,
inception,
key_tag,
ref signer_name,
ref signature,
ttl,
} => {
write_header(buffer, domain, QueryType::RRSIG.to_num(), ttl)?;
let rdlen_pos = buffer.pos();
buffer.write_u16(0)?; // RDLENGTH placeholder
buffer.write_u16(type_covered)?;
buffer.write_u8(algorithm)?;
buffer.write_u8(labels)?;
buffer.write_u32(original_ttl)?;
buffer.write_u32(expiration)?;
buffer.write_u32(inception)?;
buffer.write_u16(key_tag)?;
buffer.write_qname(signer_name)?;
buffer.write_bytes(signature)?;
let rdlen = buffer.pos() - (rdlen_pos + 2);
buffer.set_u16(rdlen_pos, rdlen as u16)?;
}
DnsRecord::NSEC {
ref domain,
ref next_domain,
ref type_bitmap,
ttl,
} => {
write_header(buffer, domain, QueryType::NSEC.to_num(), ttl)?;
let rdlen_pos = buffer.pos();
buffer.write_u16(0)?;
buffer.write_qname(next_domain)?;
buffer.write_bytes(type_bitmap)?;
let rdlen = buffer.pos() - (rdlen_pos + 2);
buffer.set_u16(rdlen_pos, rdlen as u16)?;
}
DnsRecord::NSEC3 {
ref domain,
hash_algorithm,
flags,
iterations,
ref salt,
ref next_hashed_owner,
ref type_bitmap,
ttl,
} => {
write_header(buffer, domain, QueryType::NSEC3.to_num(), ttl)?;
let rdlen =
1 + 1 + 2 + 1 + salt.len() + 1 + next_hashed_owner.len() + type_bitmap.len();
buffer.write_u16(rdlen as u16)?;
buffer.write_u8(hash_algorithm)?;
buffer.write_u8(flags)?;
buffer.write_u16(iterations)?;
buffer.write_u8(salt.len() as u8)?;
buffer.write_bytes(salt)?;
buffer.write_u8(next_hashed_owner.len() as u8)?;
buffer.write_bytes(next_hashed_owner)?;
buffer.write_bytes(type_bitmap)?;
}
DnsRecord::UNKNOWN {
ref domain,
qtype,
ref data,
ttl,
} => {
write_header(buffer, domain, qtype, ttl)?;
buffer.write_u16(data.len() as u16)?;
buffer.write_bytes(data)?;
}
}
Ok(buffer.pos() - start_pos)
}
}
fn write_header(buffer: &mut BytePacketBuffer, domain: &str, qtype: u16, ttl: u32) -> Result<()> {
buffer.write_qname(domain)?;
buffer.write_u16(qtype)?;
buffer.write_u16(1)?; // class IN
buffer.write_u32(ttl)?;
Ok(())
}
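The RRSIG and NSEC arms above use a write-placeholder-then-backpatch pattern for RDLENGTH, since the RDATA length isn't known until the name and signature bytes are serialized. A minimal sketch of that pattern, using a hypothetical `Buf` type rather than the crate's `BytePacketBuffer`:

```rust
// Backpatch sketch: reserve 2 bytes for RDLENGTH, write the
// variable-length RDATA, then patch the real length in afterwards.
// `Buf` is a hypothetical stand-in for BytePacketBuffer.
struct Buf(Vec<u8>);

impl Buf {
    fn pos(&self) -> usize {
        self.0.len()
    }
    fn write_u16(&mut self, v: u16) {
        self.0.extend_from_slice(&v.to_be_bytes());
    }
    fn write_bytes(&mut self, b: &[u8]) {
        self.0.extend_from_slice(b);
    }
    fn set_u16(&mut self, at: usize, v: u16) {
        self.0[at..at + 2].copy_from_slice(&v.to_be_bytes());
    }
}

fn main() {
    let mut buf = Buf(Vec::new());
    let rdlen_pos = buf.pos();
    buf.write_u16(0); // RDLENGTH placeholder
    buf.write_u16(1); // type_covered = A
    buf.write_bytes(&[0xAA; 5]); // stand-in signature bytes
    // Length of everything after the 2-byte placeholder itself:
    let rdlen = buf.pos() - (rdlen_pos + 2);
    buf.set_u16(rdlen_pos, rdlen as u16);
    assert_eq!(rdlen, 7); // 2 (type_covered) + 5 (signature)
    assert_eq!(buf.0[..2], [0, 7]);
}
```

The NSEC3 arm can skip this because every length field it needs (salt, hash) is known up front.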
#[cfg(test)]
mod tests {
use super::*;
fn round_trip(record: &DnsRecord) -> DnsRecord {
let mut buf = BytePacketBuffer::new();
record.write(&mut buf).unwrap();
buf.seek(0).unwrap();
DnsRecord::read(&mut buf).unwrap()
}
#[test]
fn unknown_preserves_raw_bytes() {
let rec = DnsRecord::UNKNOWN {
domain: "example.com".into(),
qtype: 99,
data: vec![0xDE, 0xAD, 0xBE, 0xEF],
ttl: 300,
};
let parsed = round_trip(&rec);
if let DnsRecord::UNKNOWN { data, .. } = &parsed {
assert_eq!(data.len(), 4);
assert_eq!(data, &[0xDE, 0xAD, 0xBE, 0xEF]);
} else {
panic!("expected UNKNOWN");
}
}
#[test]
fn dnskey_round_trip() {
let rec = DnsRecord::DNSKEY {
domain: "example.com".into(),
flags: 257, // KSK
protocol: 3,
algorithm: 13, // ECDSAP256SHA256
public_key: vec![1, 2, 3, 4, 5, 6, 7, 8],
ttl: 3600,
};
let parsed = round_trip(&rec);
assert_eq!(rec, parsed);
}
#[test]
fn ds_round_trip() {
let rec = DnsRecord::DS {
domain: "example.com".into(),
key_tag: 12345,
algorithm: 8,
digest_type: 2,
digest: vec![0xAA, 0xBB, 0xCC, 0xDD],
ttl: 86400,
};
let parsed = round_trip(&rec);
assert_eq!(rec, parsed);
}
#[test]
fn rrsig_round_trip() {
let rec = DnsRecord::RRSIG {
domain: "example.com".into(),
type_covered: 1, // A
algorithm: 13,
labels: 2,
original_ttl: 300,
expiration: 1700000000,
inception: 1690000000,
key_tag: 54321,
signer_name: "example.com".into(),
signature: vec![0x01, 0x02, 0x03, 0x04, 0x05],
ttl: 300,
};
let parsed = round_trip(&rec);
assert_eq!(rec, parsed);
}
#[test]
fn query_type_method() {
assert_eq!(
DnsRecord::DNSKEY {
domain: String::new(),
flags: 0,
protocol: 3,
algorithm: 8,
public_key: vec![],
ttl: 0,
}
.query_type(),
QueryType::DNSKEY
);
assert_eq!(
DnsRecord::DS {
domain: String::new(),
key_tag: 0,
algorithm: 0,
digest_type: 0,
digest: vec![],
ttl: 0,
}
.query_type(),
QueryType::DS
);
}
#[test]
fn nsec_round_trip() {
let rec = DnsRecord::NSEC {
domain: "alpha.example.com".into(),
next_domain: "gamma.example.com".into(),
type_bitmap: vec![0, 2, 0x40, 0x01], // A(1), MX(15)
ttl: 3600,
};
let parsed = round_trip(&rec);
assert_eq!(rec, parsed);
}
#[test]
fn nsec3_round_trip() {
let rec = DnsRecord::NSEC3 {
domain: "abc123.example.com".into(),
hash_algorithm: 1,
flags: 0,
iterations: 10,
salt: vec![0xAB, 0xCD],
next_hashed_owner: vec![0x01, 0x02, 0x03, 0x04, 0x05],
type_bitmap: vec![0, 1, 0x40], // A(1)
ttl: 3600,
};
let parsed = round_trip(&rec);
assert_eq!(rec, parsed);
}
#[test]
fn heap_bytes_reflects_string_capacity() {
let rec = DnsRecord::CNAME {
domain: "a".repeat(100),
host: "b".repeat(200),
ttl: 60,
};
assert!(rec.heap_bytes() >= 300);
}
}

src/recursive.rs (new file, 1124 lines)

File diff suppressed because it is too large

@@ -1,4 +1,4 @@
use std::collections::HashMap;
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;
use log::{info, warn};
@@ -8,12 +8,56 @@ use serde::{Deserialize, Serialize};
pub struct ServiceEntry {
pub name: String,
pub target_port: u16,
#[serde(default)]
pub routes: Vec<RouteEntry>,
}
#[derive(Clone, Serialize, Deserialize)]
pub struct RouteEntry {
pub path: String,
pub port: u16,
#[serde(default)]
pub strip: bool,
}
impl ServiceEntry {
/// Resolve backend port and (possibly rewritten) path for a request
pub fn resolve_route(&self, request_path: &str) -> (u16, String) {
// Longest prefix match
let matched = self
.routes
.iter()
.filter(|r| {
request_path == r.path
|| (request_path.starts_with(&r.path)
&& (r.path.ends_with('/')
|| request_path.as_bytes().get(r.path.len()) == Some(&b'/')))
})
.max_by_key(|r| r.path.len());
match matched {
Some(route) => {
let path = if route.strip {
let stripped = &request_path[route.path.len()..];
if stripped.is_empty() || !stripped.starts_with('/') {
format!("/{}", stripped.trim_start_matches('/'))
} else {
stripped.to_string()
}
} else {
request_path.to_string()
};
(route.port, path)
}
None => (self.target_port, request_path.to_string()),
}
}
}
pub struct ServiceStore {
entries: HashMap<String, ServiceEntry>,
/// Services defined in numa.toml (not persisted to user file)
config_services: std::collections::HashSet<String>,
config_services: HashSet<String>,
persist_path: PathBuf,
}
@@ -28,13 +72,13 @@ impl ServiceStore {
let persist_path = dirs_path();
ServiceStore {
entries: HashMap::new(),
config_services: std::collections::HashSet::new(),
config_services: HashSet::new(),
persist_path,
}
}
/// Insert a service from numa.toml config (not persisted)
pub fn insert_from_config(&mut self, name: &str, target_port: u16) {
pub fn insert_from_config(&mut self, name: &str, target_port: u16, routes: Vec<RouteEntry>) {
let key = name.to_lowercase();
self.config_services.insert(key.clone());
self.entries.insert(
@@ -42,6 +86,7 @@ impl ServiceStore {
ServiceEntry {
name: key,
target_port,
routes,
},
);
}
@@ -54,11 +99,37 @@ impl ServiceStore {
ServiceEntry {
name: key,
target_port,
routes: Vec::new(),
},
);
self.save();
}
pub fn add_route(&mut self, service: &str, path: String, port: u16, strip: bool) -> bool {
let key = service.to_lowercase();
if let Some(entry) = self.entries.get_mut(&key) {
entry.routes.retain(|r| r.path != path);
entry.routes.push(RouteEntry { path, port, strip });
self.save();
true
} else {
false
}
}
pub fn remove_route(&mut self, service: &str, path: &str) -> bool {
let key = service.to_lowercase();
if let Some(entry) = self.entries.get_mut(&key) {
let before = entry.routes.len();
entry.routes.retain(|r| r.path != path);
if entry.routes.len() < before {
self.save();
return true;
}
}
false
}
pub fn lookup(&self, name: &str) -> Option<&ServiceEntry> {
self.entries.get(&name.to_lowercase())
}
@@ -72,12 +143,26 @@ impl ServiceStore {
removed
}
/// Names are always stored lowercased, so callers must pass lowercase keys.
pub fn is_config_service(&self, name: &str) -> bool {
self.config_services.contains(name)
}
pub fn list(&self) -> Vec<&ServiceEntry> {
let mut entries: Vec<_> = self.entries.values().collect();
entries.sort_by(|a, b| a.name.cmp(&b.name));
entries
}
pub fn names(&self) -> Vec<String> {
self.entries.keys().cloned().collect()
}
/// Returns true if the name is already registered (case-insensitive).
pub fn has_name(&self, name: &str) -> bool {
self.entries.contains_key(&name.to_lowercase())
}
/// Load user-defined services from ~/.config/numa/services.json
pub fn load_persisted(&mut self) {
if !self.persist_path.exists() {
@@ -133,3 +218,157 @@ impl ServiceStore {
fn dirs_path() -> PathBuf {
crate::config_dir().join("services.json")
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
fn entry(port: u16, routes: Vec<RouteEntry>) -> ServiceEntry {
ServiceEntry {
name: "app".into(),
target_port: port,
routes,
}
}
fn route(path: &str, port: u16, strip: bool) -> RouteEntry {
RouteEntry {
path: path.into(),
port,
strip,
}
}
fn test_store() -> ServiceStore {
ServiceStore {
entries: HashMap::new(),
config_services: HashSet::new(),
persist_path: PathBuf::from("/dev/null"),
}
}
// --- resolve_route ---
#[test]
fn no_routes_returns_default_port() {
let e = entry(3000, vec![]);
assert_eq!(e.resolve_route("/anything"), (3000, "/anything".into()));
}
#[test]
fn exact_match() {
let e = entry(3000, vec![route("/api", 4000, false)]);
assert_eq!(e.resolve_route("/api"), (4000, "/api".into()));
}
#[test]
fn prefix_match() {
let e = entry(3000, vec![route("/api", 4000, false)]);
assert_eq!(e.resolve_route("/api/users"), (4000, "/api/users".into()));
}
#[test]
fn segment_boundary_rejects_partial() {
let e = entry(3000, vec![route("/api", 4000, false)]);
// /apiary must NOT match /api — different segment
assert_eq!(e.resolve_route("/apiary"), (3000, "/apiary".into()));
}
#[test]
fn segment_boundary_rejects_apikey() {
let e = entry(3000, vec![route("/api", 4000, false)]);
assert_eq!(e.resolve_route("/apikey"), (3000, "/apikey".into()));
}
#[test]
fn longest_prefix_wins() {
let e = entry(
3000,
vec![route("/api", 4000, false), route("/api/v2", 5000, false)],
);
assert_eq!(
e.resolve_route("/api/v2/users"),
(5000, "/api/v2/users".into())
);
// shorter prefix still works for non-v2 paths
assert_eq!(
e.resolve_route("/api/v1/users"),
(4000, "/api/v1/users".into())
);
}
#[test]
fn strip_removes_prefix() {
let e = entry(3000, vec![route("/api", 4000, true)]);
assert_eq!(e.resolve_route("/api/users"), (4000, "/users".into()));
}
#[test]
fn strip_exact_path_gives_root() {
let e = entry(3000, vec![route("/api", 4000, true)]);
assert_eq!(e.resolve_route("/api"), (4000, "/".into()));
}
#[test]
fn trailing_slash_route_matches() {
let e = entry(3000, vec![route("/app/", 4000, false)]);
assert_eq!(
e.resolve_route("/app/dashboard"),
(4000, "/app/dashboard".into())
);
}
// --- ServiceStore: add_route / remove_route ---
#[test]
fn add_route_to_existing_service() {
let mut store = test_store();
store.insert_from_config("app", 3000, vec![]);
assert!(store.add_route("app", "/api".into(), 4000, false));
let entry = store.lookup("app").unwrap();
assert_eq!(entry.routes.len(), 1);
assert_eq!(entry.routes[0].path, "/api");
}
#[test]
fn add_route_to_missing_service_returns_false() {
let mut store = test_store();
assert!(!store.add_route("ghost", "/api".into(), 4000, false));
}
#[test]
fn add_route_deduplicates_by_path() {
let mut store = test_store();
store.insert_from_config("app", 3000, vec![]);
store.add_route("app", "/api".into(), 4000, false);
store.add_route("app", "/api".into(), 5000, true);
let entry = store.lookup("app").unwrap();
assert_eq!(entry.routes.len(), 1);
assert_eq!(entry.routes[0].port, 5000);
assert!(entry.routes[0].strip);
}
#[test]
fn remove_route_returns_true_when_found() {
let mut store = test_store();
store.insert_from_config("app", 3000, vec![route("/api", 4000, false)]);
assert!(store.remove_route("app", "/api"));
assert!(store.lookup("app").unwrap().routes.is_empty());
}
#[test]
fn remove_route_returns_false_when_missing() {
let mut store = test_store();
store.insert_from_config("app", 3000, vec![]);
assert!(!store.remove_route("app", "/nope"));
}
#[test]
fn lookup_is_case_insensitive() {
let mut store = test_store();
store.insert_from_config("MyApp", 3000, vec![]);
assert!(store.lookup("myapp").is_some());
assert!(store.lookup("MYAPP").is_some());
}
}
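The segment-boundary check inside `resolve_route` is the subtle part: a route `/api` must match `/api` and `/api/users` but not `/apiary` or `/apikey`. A standalone sketch of that predicate plus longest-prefix selection, using a hypothetical `matches_route` helper that mirrors the filter above without the crate's types:

```rust
// Segment-aware prefix match: exact path, trailing-slash route,
// or a '/' immediately after the route prefix in the request path.
fn matches_route(request_path: &str, route_path: &str) -> bool {
    request_path == route_path
        || (request_path.starts_with(route_path)
            && (route_path.ends_with('/')
                || request_path.as_bytes().get(route_path.len()) == Some(&b'/')))
}

fn main() {
    assert!(matches_route("/api", "/api"));
    assert!(matches_route("/api/users", "/api"));
    assert!(!matches_route("/apiary", "/api")); // different segment
    // Among matching routes, the longest prefix wins:
    let routes = ["/api", "/api/v2"];
    let best = routes
        .iter()
        .filter(|r| matches_route("/api/v2/users", r))
        .max_by_key(|r| r.len());
    assert_eq!(best, Some(&"/api/v2"));
}
```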

src/srtt.rs (new file, 345 lines)

@@ -0,0 +1,345 @@
use std::collections::HashMap;
use std::net::{IpAddr, SocketAddr};
use std::time::Instant;
const INITIAL_SRTT_MS: u64 = 200;
const FAILURE_PENALTY_MS: u64 = 5000;
const TCP_PENALTY_MS: u64 = 100;
const DECAY_AFTER_SECS: u64 = 300;
const MAX_ENTRIES: usize = 4096;
const EVICT_BATCH: usize = 64;
struct SrttEntry {
srtt_ms: u64,
updated_at: Instant,
}
pub struct SrttCache {
entries: HashMap<IpAddr, SrttEntry>,
enabled: bool,
}
impl Default for SrttCache {
fn default() -> Self {
Self::new(true)
}
}
impl SrttCache {
pub fn new(enabled: bool) -> Self {
Self {
entries: HashMap::new(),
enabled,
}
}
pub fn is_enabled(&self) -> bool {
self.enabled
}
/// Get current SRTT for an IP, applying decay if stale. Returns INITIAL for unknown.
pub fn get(&self, ip: IpAddr) -> u64 {
match self.entries.get(&ip) {
Some(entry) => Self::decayed_srtt(entry),
None => INITIAL_SRTT_MS,
}
}
/// Apply time-based decay: each DECAY_AFTER_SECS period halves distance to INITIAL.
fn decayed_srtt(entry: &SrttEntry) -> u64 {
let age_secs = entry.updated_at.elapsed().as_secs();
if age_secs > DECAY_AFTER_SECS {
let periods = (age_secs / DECAY_AFTER_SECS).min(8);
let mut srtt = entry.srtt_ms;
for _ in 0..periods {
srtt = (srtt + INITIAL_SRTT_MS) / 2;
}
srtt
} else {
entry.srtt_ms
}
}
/// Record a successful query RTT. No-op when disabled.
pub fn record_rtt(&mut self, ip: IpAddr, rtt_ms: u64, tcp: bool) {
if !self.enabled {
return;
}
let effective = if tcp { rtt_ms + TCP_PENALTY_MS } else { rtt_ms };
self.maybe_evict();
let entry = self.entries.entry(ip).or_insert(SrttEntry {
srtt_ms: effective,
updated_at: Instant::now(),
});
// Apply decay before EWMA so recovered servers aren't stuck at stale penalties
let base = Self::decayed_srtt(entry);
// BIND EWMA: new = (old * 7 + sample) / 8
entry.srtt_ms = (base * 7 + effective) / 8;
entry.updated_at = Instant::now();
}
/// Record a failure (timeout or error). No-op when disabled.
pub fn record_failure(&mut self, ip: IpAddr) {
if !self.enabled {
return;
}
self.maybe_evict();
let entry = self.entries.entry(ip).or_insert(SrttEntry {
srtt_ms: FAILURE_PENALTY_MS,
updated_at: Instant::now(),
});
entry.srtt_ms = FAILURE_PENALTY_MS;
entry.updated_at = Instant::now();
}
/// Sort addresses by SRTT ascending (lowest/fastest first). No-op when disabled.
pub fn sort_by_rtt(&self, addrs: &mut [SocketAddr]) {
if !self.enabled {
return;
}
addrs.sort_by_key(|a| self.get(a.ip()));
}
pub fn heap_bytes(&self) -> usize {
let per_slot = std::mem::size_of::<u64>()
+ std::mem::size_of::<IpAddr>()
+ std::mem::size_of::<SrttEntry>()
+ 1;
self.entries.capacity() * per_slot
}
pub fn len(&self) -> usize {
self.entries.len()
}
pub fn is_empty(&self) -> bool {
self.entries.is_empty()
}
#[cfg(test)]
fn set_updated_at(&mut self, ip: IpAddr, at: Instant) {
if let Some(entry) = self.entries.get_mut(&ip) {
entry.updated_at = at;
}
}
fn maybe_evict(&mut self) {
if self.entries.len() < MAX_ENTRIES {
return;
}
// Batch eviction: remove the oldest EVICT_BATCH entries at once
let mut by_age: Vec<IpAddr> = self.entries.keys().copied().collect();
by_age.sort_by_key(|ip| self.entries[ip].updated_at);
for ip in by_age.into_iter().take(EVICT_BATCH) {
self.entries.remove(&ip);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::net::Ipv4Addr;
fn ip(last: u8) -> IpAddr {
IpAddr::V4(Ipv4Addr::new(192, 0, 2, last))
}
fn sock(last: u8) -> SocketAddr {
SocketAddr::new(ip(last), 53)
}
#[test]
fn unknown_returns_initial() {
let cache = SrttCache::new(true);
assert_eq!(cache.get(ip(1)), INITIAL_SRTT_MS);
}
#[test]
fn ewma_converges() {
let mut cache = SrttCache::new(true);
for _ in 0..20 {
cache.record_rtt(ip(1), 100, false);
}
let srtt = cache.get(ip(1));
assert!(srtt >= 98 && srtt <= 102, "srtt={}", srtt);
}
#[test]
fn failure_sets_penalty() {
let mut cache = SrttCache::new(true);
cache.record_rtt(ip(1), 50, false);
cache.record_failure(ip(1));
assert_eq!(cache.get(ip(1)), FAILURE_PENALTY_MS);
}
#[test]
fn tcp_penalty_added() {
let mut cache = SrttCache::new(true);
for _ in 0..20 {
cache.record_rtt(ip(1), 50, true);
}
let srtt = cache.get(ip(1));
assert!(srtt >= 148 && srtt <= 152, "srtt={}", srtt);
}
#[test]
fn sort_by_rtt_orders_correctly() {
let mut cache = SrttCache::new(true);
for _ in 0..20 {
cache.record_rtt(ip(1), 500, false);
cache.record_rtt(ip(2), 100, false);
cache.record_rtt(ip(3), 10, false);
}
let mut addrs = vec![sock(1), sock(2), sock(3)];
cache.sort_by_rtt(&mut addrs);
assert_eq!(addrs, vec![sock(3), sock(2), sock(1)]);
}
#[test]
fn unknown_servers_sort_equal() {
let cache = SrttCache::new(true);
let mut addrs = vec![sock(1), sock(2), sock(3)];
let original = addrs.clone();
cache.sort_by_rtt(&mut addrs);
assert_eq!(addrs, original);
}
#[test]
fn disabled_is_noop() {
let mut cache = SrttCache::new(false);
cache.record_rtt(ip(1), 50, false);
cache.record_failure(ip(2));
assert_eq!(cache.len(), 0);
let mut addrs = vec![sock(2), sock(1)];
let original = addrs.clone();
cache.sort_by_rtt(&mut addrs);
assert_eq!(addrs, original);
}
fn age(secs: u64) -> Instant {
Instant::now() - std::time::Duration::from_secs(secs)
}
/// Cache with ip(1) saturated at FAILURE_PENALTY_MS
fn saturated_penalty_cache() -> SrttCache {
let mut cache = SrttCache::new(true);
for _ in 0..30 {
cache.record_rtt(ip(1), FAILURE_PENALTY_MS, false);
}
cache
}
#[test]
fn no_decay_within_threshold() {
let mut cache = SrttCache::new(true);
cache.record_rtt(ip(1), 5000, false);
cache.set_updated_at(ip(1), age(DECAY_AFTER_SECS));
assert_eq!(cache.get(ip(1)), cache.entries[&ip(1)].srtt_ms);
}
#[test]
fn one_decay_period() {
let mut cache = saturated_penalty_cache();
let raw = cache.entries[&ip(1)].srtt_ms;
cache.set_updated_at(ip(1), age(DECAY_AFTER_SECS + 1));
let expected = (raw + INITIAL_SRTT_MS) / 2;
assert_eq!(cache.get(ip(1)), expected);
}
#[test]
fn multiple_decay_periods() {
let mut cache = saturated_penalty_cache();
let raw = cache.entries[&ip(1)].srtt_ms;
cache.set_updated_at(ip(1), age(DECAY_AFTER_SECS * 4 + 1));
let mut expected = raw;
for _ in 0..4 {
expected = (expected + INITIAL_SRTT_MS) / 2;
}
assert_eq!(cache.get(ip(1)), expected);
}
#[test]
fn decay_caps_at_8_periods() {
// 9 periods and 100 periods should produce the same result (capped at 8)
let mut cache_a = saturated_penalty_cache();
let mut cache_b = saturated_penalty_cache();
cache_a.set_updated_at(ip(1), age(DECAY_AFTER_SECS * 9 + 1));
cache_b.set_updated_at(ip(1), age(DECAY_AFTER_SECS * 100));
assert_eq!(cache_a.get(ip(1)), cache_b.get(ip(1)));
}
#[test]
fn decay_converges_toward_initial() {
let mut cache = saturated_penalty_cache();
cache.set_updated_at(ip(1), age(DECAY_AFTER_SECS * 100));
let decayed = cache.get(ip(1));
let diff = decayed.abs_diff(INITIAL_SRTT_MS);
assert!(
diff < 25,
"expected near INITIAL_SRTT_MS, got {} (diff={})",
decayed,
diff
);
}
#[test]
fn record_rtt_applies_decay_before_ewma() {
let mut cache = saturated_penalty_cache();
cache.set_updated_at(ip(1), age(DECAY_AFTER_SECS * 8));
cache.record_rtt(ip(1), 50, false);
let srtt = cache.get(ip(1));
// Without decay-before-EWMA, result would be ~(5000*7+50)/8 ≈ 4381
assert!(srtt < 500, "expected decay before EWMA, got srtt={}", srtt);
}
#[test]
fn decay_reranks_stale_failures() {
let mut cache = saturated_penalty_cache();
for _ in 0..30 {
cache.record_rtt(ip(2), 300, false);
}
let mut addrs = vec![sock(1), sock(2)];
cache.sort_by_rtt(&mut addrs);
assert_eq!(addrs, vec![sock(2), sock(1)]);
// Age server 1 so it decays toward INITIAL (200ms) — below server 2's 300ms
cache.set_updated_at(ip(1), age(DECAY_AFTER_SECS * 100));
let mut addrs = vec![sock(1), sock(2)];
cache.sort_by_rtt(&mut addrs);
assert_eq!(addrs, vec![sock(1), sock(2)]);
}
#[test]
fn heap_bytes_grows_with_entries() {
let mut cache = SrttCache::new(true);
let empty = cache.heap_bytes();
for i in 1..=10u8 {
cache.record_rtt(ip(i), 100, false);
}
assert!(cache.heap_bytes() > empty);
}
#[test]
fn eviction_removes_oldest() {
let mut cache = SrttCache::new(true);
for i in 0..MAX_ENTRIES {
let octets = [
10,
((i >> 16) & 0xFF) as u8,
((i >> 8) & 0xFF) as u8,
(i & 0xFF) as u8,
];
cache.record_rtt(
IpAddr::V4(Ipv4Addr::new(octets[0], octets[1], octets[2], octets[3])),
100,
false,
);
}
assert_eq!(cache.len(), MAX_ENTRIES);
cache.record_rtt(ip(1), 100, false);
// Batch eviction removes EVICT_BATCH entries
assert!(cache.len() <= MAX_ENTRIES - EVICT_BATCH + 1);
}
}
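The decay behavior tested above can be checked with plain arithmetic. A standalone sketch using the same constants (`INITIAL_SRTT_MS = 200`, each period halving the distance to it, capped at 8 periods), independent of `SrttCache`:

```rust
const INITIAL_SRTT_MS: u64 = 200;

// Each elapsed period halves the distance to INITIAL_SRTT_MS.
fn decay(mut srtt: u64, periods: u64) -> u64 {
    for _ in 0..periods.min(8) {
        srtt = (srtt + INITIAL_SRTT_MS) / 2;
    }
    srtt
}

fn main() {
    // A server stuck at the 5000 ms failure penalty recovers over time:
    assert_eq!(decay(5000, 1), 2600);
    assert_eq!(decay(5000, 2), 1400);
    // The 8-period cap means very stale entries converge near INITIAL:
    assert_eq!(decay(5000, 8), decay(5000, 100));
    // BIND-style EWMA on the decayed base: new = (old * 7 + sample) / 8
    let ewma = (decay(5000, 8) * 7 + 50) / 8;
    assert!(ewma < 500); // recovered, not stuck near the penalty
}
```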


@@ -1,8 +1,97 @@
use std::time::Instant;
/// Returns the process memory footprint in bytes, or 0 if unavailable.
/// macOS: phys_footprint (matches Activity Monitor). Linux: RSS from /proc/self/statm.
pub fn process_memory_bytes() -> usize {
#[cfg(target_os = "macos")]
{
macos_rss()
}
#[cfg(target_os = "linux")]
{
linux_rss()
}
#[cfg(not(any(target_os = "macos", target_os = "linux")))]
{
0
}
}
#[cfg(target_os = "macos")]
fn macos_rss() -> usize {
use std::mem;
extern "C" {
fn mach_task_self() -> u32;
fn task_info(
target_task: u32,
flavor: u32,
task_info_out: *mut TaskVmInfo,
task_info_count: *mut u32,
) -> i32;
}
// Partial task_vm_info_data_t — only fields up to phys_footprint.
#[repr(C)]
struct TaskVmInfo {
virtual_size: u64,
region_count: i32,
page_size: i32,
resident_size: u64,
resident_size_peak: u64,
device: u64,
device_peak: u64,
internal: u64,
internal_peak: u64,
external: u64,
external_peak: u64,
reusable: u64,
reusable_peak: u64,
purgeable_volatile_pmap: u64,
purgeable_volatile_resident: u64,
purgeable_volatile_virtual: u64,
compressed: u64,
compressed_peak: u64,
compressed_lifetime: u64,
phys_footprint: u64,
}
const TASK_VM_INFO: u32 = 22;
let mut info: TaskVmInfo = unsafe { mem::zeroed() };
let mut count = (mem::size_of::<TaskVmInfo>() / mem::size_of::<u32>()) as u32;
let kr = unsafe { task_info(mach_task_self(), TASK_VM_INFO, &mut info, &mut count) };
if kr == 0 {
info.phys_footprint as usize
} else {
0
}
}
#[cfg(target_os = "linux")]
fn linux_rss() -> usize {
extern "C" {
fn sysconf(name: i32) -> i64;
}
const SC_PAGESIZE: i32 = 30; // x86_64 + aarch64; differs on mips (28), sparc (29)
let page_size = unsafe { sysconf(SC_PAGESIZE) };
let page_size = if page_size > 0 {
page_size as usize
} else {
4096
};
if let Ok(statm) = std::fs::read_to_string("/proc/self/statm") {
if let Some(rss_pages) = statm.split_whitespace().nth(1) {
if let Ok(pages) = rss_pages.parse::<usize>() {
return pages * page_size;
}
}
}
0
}
pub struct ServerStats {
queries_total: u64,
queries_forwarded: u64,
queries_recursive: u64,
queries_coalesced: u64,
queries_cached: u64,
queries_blocked: u64,
queries_local: u64,
@@ -11,11 +100,13 @@ pub struct ServerStats {
started_at: Instant,
}
#[derive(Clone, Copy, PartialEq, Eq)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum QueryPath {
Local,
Cached,
Forwarded,
Recursive,
Coalesced,
Blocked,
Overridden,
UpstreamError,
@@ -27,6 +118,8 @@ impl QueryPath {
QueryPath::Local => "LOCAL",
QueryPath::Cached => "CACHED",
QueryPath::Forwarded => "FORWARD",
QueryPath::Recursive => "RECURSIVE",
QueryPath::Coalesced => "COALESCED",
QueryPath::Blocked => "BLOCKED",
QueryPath::Overridden => "OVERRIDE",
QueryPath::UpstreamError => "SERVFAIL",
@@ -40,6 +133,10 @@ impl QueryPath {
Some(QueryPath::Cached)
} else if s.eq_ignore_ascii_case("FORWARD") {
Some(QueryPath::Forwarded)
} else if s.eq_ignore_ascii_case("RECURSIVE") {
Some(QueryPath::Recursive)
} else if s.eq_ignore_ascii_case("COALESCED") {
Some(QueryPath::Coalesced)
} else if s.eq_ignore_ascii_case("BLOCKED") {
Some(QueryPath::Blocked)
} else if s.eq_ignore_ascii_case("OVERRIDE") {
@@ -63,6 +160,8 @@ impl ServerStats {
ServerStats {
queries_total: 0,
queries_forwarded: 0,
queries_recursive: 0,
queries_coalesced: 0,
queries_cached: 0,
queries_blocked: 0,
queries_local: 0,
@@ -78,6 +177,8 @@ impl ServerStats {
QueryPath::Local => self.queries_local += 1,
QueryPath::Cached => self.queries_cached += 1,
QueryPath::Forwarded => self.queries_forwarded += 1,
QueryPath::Recursive => self.queries_recursive += 1,
QueryPath::Coalesced => self.queries_coalesced += 1,
QueryPath::Blocked => self.queries_blocked += 1,
QueryPath::Overridden => self.queries_overridden += 1,
QueryPath::UpstreamError => self.upstream_errors += 1,
@@ -98,6 +199,8 @@ impl ServerStats {
uptime_secs: self.uptime_secs(),
total: self.queries_total,
forwarded: self.queries_forwarded,
recursive: self.queries_recursive,
coalesced: self.queries_coalesced,
cached: self.queries_cached,
local: self.queries_local,
overridden: self.queries_overridden,
@@ -113,10 +216,12 @@ impl ServerStats {
let secs = uptime.as_secs() % 60;
log::info!(
"STATS | uptime {}h{}m{}s | total {} | fwd {} | cached {} | local {} | override {} | blocked {} | errors {}",
"STATS | uptime {}h{}m{}s | total {} | fwd {} | recursive {} | coalesced {} | cached {} | local {} | override {} | blocked {} | errors {}",
hours, mins, secs,
self.queries_total,
self.queries_forwarded,
self.queries_recursive,
self.queries_coalesced,
self.queries_cached,
self.queries_local,
self.queries_overridden,
@@ -130,6 +235,8 @@ pub struct StatsSnapshot {
pub uptime_secs: u64,
pub total: u64,
pub forwarded: u64,
pub recursive: u64,
pub coalesced: u64,
pub cached: u64,
pub local: u64,
pub overridden: u64,


@@ -2,6 +2,10 @@ use std::net::SocketAddr;
use log::info;
fn is_loopback_or_stub(addr: &str) -> bool {
matches!(addr, "127.0.0.1" | "127.0.0.53" | "0.0.0.0" | "::1" | "")
}
/// A conditional forwarding rule: domains matching `suffix` are forwarded to `upstream`.
#[derive(Debug, Clone)]
pub struct ForwardingRule {
@@ -26,10 +30,7 @@ pub fn discover_system_dns() -> SystemDnsInfo {
}
#[cfg(target_os = "linux")]
{
SystemDnsInfo {
default_upstream: detect_upstream_linux_or_backup(),
forwarding_rules: Vec::new(),
}
discover_linux()
}
#[cfg(windows)]
{
@@ -102,11 +103,7 @@ fn discover_macos() -> SystemDnsInfo {
if ns.parse::<std::net::Ipv4Addr>().is_ok() {
current_nameserver = Some(ns.clone());
// Capture first non-supplemental, non-loopback nameserver as default upstream
if !is_supplemental
&& default_upstream.is_none()
&& ns != "127.0.0.1"
&& ns != "0.0.0.0"
{
if !is_supplemental && default_upstream.is_none() && !is_loopback_or_stub(&ns) {
default_upstream = Some(ns);
}
}
@@ -156,7 +153,7 @@ fn discover_macos() -> SystemDnsInfo {
}
}
#[cfg(target_os = "macos")]
#[cfg(any(target_os = "macos", target_os = "linux"))]
fn make_rule(domain: &str, nameserver: &str) -> Option<ForwardingRule> {
let addr: SocketAddr = format!("{}:53", nameserver).parse().ok()?;
Some(ForwardingRule {
@@ -166,38 +163,100 @@ fn make_rule(domain: &str, nameserver: &str) -> Option<ForwardingRule> {
})
}
/// Detect upstream from /etc/resolv.conf, falling back to backup file if resolv.conf
/// only has loopback (meaning numa install already ran).
#[cfg(target_os = "linux")]
fn detect_upstream_linux_or_backup() -> Option<String> {
// Try /etc/resolv.conf first
if let Some(ns) = read_upstream_from_file("/etc/resolv.conf") {
info!("detected system upstream: {}", ns);
return Some(ns);
}
// If resolv.conf only has loopback, check the backup from `numa install`
let backup = {
let home = std::env::var("HOME")
.map(std::path::PathBuf::from)
.unwrap_or_else(|_| std::path::PathBuf::from("/root"));
home.join(".numa").join("original-resolv.conf")
};
if let Some(ns) = read_upstream_from_file(backup.to_str().unwrap_or("")) {
info!("detected original upstream from backup: {}", ns);
return Some(ns);
}
None
}
const CLOUD_VPC_RESOLVER: &str = "169.254.169.253";
#[cfg(target_os = "linux")]
fn read_upstream_from_file(path: &str) -> Option<String> {
let text = std::fs::read_to_string(path).ok()?;
fn discover_linux() -> SystemDnsInfo {
// Parse resolv.conf once for both upstream and search domains
let (upstream, search_domains) = parse_resolv_conf("/etc/resolv.conf");
let default_upstream = if let Some(ns) = upstream {
info!("detected system upstream: {}", ns);
Some(ns)
} else {
// Fallback to backup from a previous `numa install`
let backup = {
let home = std::env::var("HOME")
.map(std::path::PathBuf::from)
.unwrap_or_else(|_| std::path::PathBuf::from("/root"));
home.join(".numa").join("original-resolv.conf")
};
let (ns, _) = parse_resolv_conf(backup.to_str().unwrap_or(""));
if let Some(ref ns) = ns {
info!("detected original upstream from backup: {}", ns);
}
ns
};
// On cloud VMs (AWS/GCP), internal domains need to reach the VPC resolver
let forwarding_rules = if search_domains.is_empty() {
Vec::new()
} else {
let forwarder = resolvectl_dns_server().unwrap_or_else(|| CLOUD_VPC_RESOLVER.to_string());
let rules: Vec<_> = search_domains
.iter()
.filter_map(|domain| {
let rule = make_rule(domain, &forwarder)?;
info!("forwarding .{} to {}", domain, forwarder);
Some(rule)
})
.collect();
if !rules.is_empty() {
info!("detected {} search domain forwarding rules", rules.len());
}
rules
};
SystemDnsInfo {
default_upstream,
forwarding_rules,
}
}
/// Parse resolv.conf in a single pass, extracting both the first non-loopback
/// nameserver and all search domains.
#[cfg(target_os = "linux")]
fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
let text = match std::fs::read_to_string(path) {
Ok(t) => t,
Err(_) => return (None, Vec::new()),
};
let mut upstream = None;
let mut search_domains = Vec::new();
for line in text.lines() {
let line = line.trim();
if line.starts_with("nameserver") {
if let Some(ns) = line.split_whitespace().nth(1) {
if ns != "127.0.0.1" && ns != "0.0.0.0" && ns != "::1" {
return Some(ns.to_string());
if upstream.is_none() {
if let Some(ns) = line.split_whitespace().nth(1) {
if !is_loopback_or_stub(ns) {
upstream = Some(ns.to_string());
}
}
}
} else if line.starts_with("search") || line.starts_with("domain") {
for domain in line.split_whitespace().skip(1) {
search_domains.push(domain.to_string());
}
}
}
(upstream, search_domains)
}
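The single-pass parse above is easy to exercise against an in-memory sample. A sketch that mirrors `parse_resolv_conf` but takes the file contents as a string (hypothetical `parse` helper, with the loopback/stub filter inlined):

```rust
// Single-pass resolv.conf parse: first non-loopback nameserver plus
// all search/domain entries, mirroring parse_resolv_conf above.
fn parse(text: &str) -> (Option<String>, Vec<String>) {
    let mut upstream = None;
    let mut search_domains = Vec::new();
    for line in text.lines() {
        let line = line.trim();
        if line.starts_with("nameserver") {
            if upstream.is_none() {
                if let Some(ns) = line.split_whitespace().nth(1) {
                    // Skip loopback and the systemd-resolved stub.
                    if !matches!(ns, "127.0.0.1" | "127.0.0.53" | "0.0.0.0" | "::1") {
                        upstream = Some(ns.to_string());
                    }
                }
            }
        } else if line.starts_with("search") || line.starts_with("domain") {
            for d in line.split_whitespace().skip(1) {
                search_domains.push(d.to_string());
            }
        }
    }
    (upstream, search_domains)
}

fn main() {
    // Typical post-install resolv.conf on a cloud VM:
    let conf = "search ec2.internal\nnameserver 127.0.0.53\nnameserver 10.0.0.2\n";
    let (up, domains) = parse(conf);
    assert_eq!(up.as_deref(), Some("10.0.0.2")); // stub skipped
    assert_eq!(domains, vec!["ec2.internal"]);
}
```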
/// Query resolvectl for the real upstream DNS server (e.g. VPC resolver on AWS).
#[cfg(target_os = "linux")]
fn resolvectl_dns_server() -> Option<String> {
let output = std::process::Command::new("resolvectl")
.args(["status", "--no-pager"])
.output()
.ok()?;
let text = String::from_utf8_lossy(&output.stdout);
for line in text.lines() {
if line.contains("DNS Servers") || line.contains("Current DNS Server") {
if let Some(ip) = line.split(':').next_back() {
let ip = ip.trim();
if ip.parse::<std::net::IpAddr>().is_ok() && !is_loopback_or_stub(ip) {
return Some(ip.to_string());
}
}
}
@@ -236,10 +295,7 @@ fn detect_dhcp_dns_macos() -> Option<String> {
// Take the first non-loopback DNS server
for addr in inner.split(',') {
let addr = addr.trim();
if !addr.is_empty()
&& addr != "127.0.0.1"
&& addr != "0.0.0.0"
&& addr.parse::<std::net::Ipv4Addr>().is_ok()
if !is_loopback_or_stub(addr) && addr.parse::<std::net::Ipv4Addr>().is_ok()
{
log::info!("detected DHCP DNS: {}", addr);
return Some(addr.to_string());
@@ -278,7 +334,7 @@ fn discover_windows() -> SystemDnsInfo {
if trimmed.contains("DNS Servers") || trimmed.contains("DNS-Server") {
if let Some(ip) = trimmed.split(':').next_back() {
let ip = ip.trim();
if !ip.is_empty() && ip != "127.0.0.1" && ip != "::1" {
if !is_loopback_or_stub(ip) {
upstream = Some(ip.to_string());
break;
}
@@ -316,43 +372,6 @@ pub fn match_forwarding_rule(domain: &str, rules: &[ForwardingRule]) -> Option<S
// --- System DNS configuration (install/uninstall) ---
/// Set the system DNS to 127.0.0.1 so all queries go through Numa.
/// Saves the original DNS settings for later restoration.
pub fn install_system_dns() -> Result<(), String> {
#[cfg(target_os = "macos")]
let result = install_macos();
#[cfg(target_os = "linux")]
let result = install_linux();
#[cfg(not(any(target_os = "macos", target_os = "linux")))]
let result = Err("system DNS configuration not supported on this OS".to_string());
if result.is_ok() {
if let Err(e) = trust_ca() {
eprintln!(" warning: could not trust CA: {}", e);
eprintln!(" HTTPS proxy will work but browsers will show certificate warnings.\n");
}
}
result
}
/// Restore the original system DNS settings saved during install.
pub fn uninstall_system_dns() -> Result<(), String> {
let _ = untrust_ca();
#[cfg(target_os = "macos")]
{
uninstall_macos()
}
#[cfg(target_os = "linux")]
{
uninstall_linux()
}
#[cfg(not(any(target_os = "macos", target_os = "linux")))]
{
Err("system DNS configuration not supported on this OS".to_string())
}
}
// --- macOS implementation ---
#[cfg(target_os = "macos")]
@@ -500,21 +519,25 @@ const SYSTEMD_UNIT: &str = "/etc/systemd/system/numa.service";
/// Install Numa as a system service that starts on boot and auto-restarts.
pub fn install_service() -> Result<(), String> {
#[cfg(target_os = "macos")]
{
install_service_macos()
}
let result = install_service_macos();
#[cfg(target_os = "linux")]
{
install_service_linux()
}
let result = install_service_linux();
#[cfg(not(any(target_os = "macos", target_os = "linux")))]
{
Err("service installation not supported on this OS".to_string())
let result = Err::<(), String>("service installation not supported on this OS".to_string());
if result.is_ok() {
if let Err(e) = trust_ca() {
eprintln!(" warning: could not trust CA: {}", e);
eprintln!(" HTTPS proxy will work but browsers will show certificate warnings.\n");
}
}
result
}
/// Uninstall the Numa system service.
pub fn uninstall_service() -> Result<(), String> {
let _ = untrust_ca();
#[cfg(target_os = "macos")]
{
uninstall_service_macos()
@@ -609,7 +632,7 @@ fn install_service_macos() -> Result<(), String> {
std::fs::write(PLIST_DEST, plist)
.map_err(|e| format!("failed to write {}: {}", PLIST_DEST, e))?;
// Load the service first so numa is listening before DNS redirect
let status = std::process::Command::new("launchctl")
.args(["load", "-w", PLIST_DEST])
.status()
@@ -619,14 +642,34 @@ fn install_service_macos() -> Result<(), String> {
return Err("launchctl load failed".to_string());
}
// Wait for numa to be ready before redirecting DNS
let api_up = (0..10).any(|i| {
if i > 0 {
std::thread::sleep(std::time::Duration::from_millis(500));
}
std::net::TcpStream::connect(("127.0.0.1", crate::config::DEFAULT_API_PORT)).is_ok()
});
if !api_up {
// Service failed to start — don't redirect DNS to a dead endpoint
let _ = std::process::Command::new("launchctl")
.args(["unload", PLIST_DEST])
.status();
return Err(
"numa service did not start (port 53 may be in use). Service unloaded.".to_string(),
);
}
if let Err(e) = install_macos() {
eprintln!(" warning: failed to configure system DNS: {}", e);
}
eprintln!(" Service installed and started.");
eprintln!(" Numa will auto-start on boot and restart if killed.");
eprintln!(" Logs: /usr/local/var/log/numa.log");
eprintln!(" Run 'sudo numa uninstall' to restore original DNS.\n");
eprintln!(" Want full DNS sovereignty? Add to numa.toml:");
eprintln!(" [upstream]");
eprintln!(" mode = \"recursive\"\n");
Ok(())
}
@@ -708,8 +751,11 @@ fn install_linux() -> Result<(), String> {
.map_err(|e| format!("failed to create {}: {}", resolved_dir.display(), e))?;
let drop_in = resolved_dir.join("numa.conf");
std::fs::write(
&drop_in,
"[Resolve]\nDNS=127.0.0.1\nDomains=~.\nDNSStubListener=no\n",
)
.map_err(|e| format!("failed to write {}: {}", drop_in.display(), e))?;
let _ = run_systemctl(&["restart", "systemd-resolved"]);
eprintln!(" systemd-resolved detected.");
@@ -802,17 +848,21 @@ fn install_service_linux() -> Result<(), String> {
run_systemctl(&["daemon-reload"])?;
run_systemctl(&["enable", "numa"])?;
// Configure system DNS before starting numa so resolved releases port 53 first
if let Err(e) = install_linux() {
eprintln!(" warning: failed to configure system DNS: {}", e);
}
run_systemctl(&["start", "numa"])?;
eprintln!(" Service installed and started.");
eprintln!(" Numa will auto-start on boot and restart if killed.");
eprintln!(" Logs: journalctl -u numa -f");
eprintln!(" Run 'sudo numa uninstall' to restore original DNS.\n");
eprintln!(" Want full DNS sovereignty? Add to numa.toml:");
eprintln!(" [upstream]");
eprintln!(" mode = \"recursive\"\n");
Ok(())
}
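The install path above gates DNS redirection on the service actually accepting connections. A minimal sketch of that readiness gate, with illustrative names (not taken from Numa's source): poll the port a bounded number of times before touching system DNS.

```rust
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

// Poll `addr` up to `attempts` times, sleeping `delay` between retries.
// Returns true as soon as a TCP connect succeeds.
fn wait_for_port(addr: &str, attempts: u32, delay: Duration) -> bool {
    (0..attempts).any(|i| {
        if i > 0 {
            std::thread::sleep(delay);
        }
        TcpStream::connect(addr).is_ok()
    })
}

fn main() {
    // Bind an ephemeral local port so the probe has something to hit.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap().to_string();
    assert!(wait_for_port(&addr, 3, Duration::from_millis(100)));
    println!("port {} is up", addr);
}
```

The same shape covers both platforms; only the failure handling differs (macOS unloads the launchd plist, Linux leaves systemd to report the unit state).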


@@ -1,7 +1,10 @@
use std::collections::HashSet;
use std::path::Path;
use std::sync::Arc;
use log::{info, warn};
use crate::ctx::ServerCtx;
use rcgen::{BasicConstraints, CertificateParams, DnType, IsCa, KeyPair, KeyUsagePurpose, SanType};
use rustls::pki_types::{CertificateDer, PrivateKeyDer, PrivatePkcs8KeyDer};
use rustls::ServerConfig;
@@ -10,6 +13,26 @@ use time::{Duration, OffsetDateTime};
const CA_VALIDITY_DAYS: i64 = 3650; // 10 years
const CERT_VALIDITY_DAYS: i64 = 365; // 1 year
/// Collect all service + LAN peer names and regenerate the TLS cert.
pub fn regenerate_tls(ctx: &ServerCtx) {
let tls = match &ctx.tls_config {
Some(t) => t,
None => return,
};
let mut names: HashSet<String> = ctx.services.lock().unwrap().names().into_iter().collect();
names.extend(ctx.lan_peers.lock().unwrap().names());
let names: Vec<String> = names.into_iter().collect();
match build_tls_config(&ctx.proxy_tld, &names) {
Ok(new_config) => {
tls.store(new_config);
info!("TLS cert regenerated for {} services", names.len());
}
Err(e) => warn!("TLS regeneration failed: {}", e),
}
}
/// Build a TLS config with a cert covering all provided service names.
/// Wildcards under single-label TLDs (*.numa) are rejected by browsers,
/// so we list each service explicitly as a SAN.
@@ -89,8 +112,15 @@ fn generate_service_cert(
.distinguished_name
.push(DnType::CommonName, format!("Numa .{} services", tld));
// Add a wildcard SAN so any .numa domain gets a valid cert (including
// unregistered services — lets the proxy show a styled 404 over HTTPS).
// Also add each service explicitly for clients that don't match wildcards.
let mut sans = Vec::new();
let wildcard = format!("*.{}", tld);
match wildcard.clone().try_into() {
Ok(ia5) => sans.push(SanType::DnsName(ia5)),
Err(e) => warn!("invalid wildcard SAN {}: {}", wildcard, e),
}
for name in service_names {
let fqdn = format!("{}.{}", name, tld);
match fqdn.clone().try_into() {

tests/integration.sh Executable file

@@ -0,0 +1,419 @@
#!/usr/bin/env bash
# Integration test suite for Numa
# Runs a test instance on port 5354, validates all features, exits with status.
# Usage: ./tests/integration.sh [release|debug]
set -euo pipefail
MODE="${1:-release}"
BINARY="./target/$MODE/numa"
PORT=5354
API_PORT=5381
CONFIG="/tmp/numa-integration-test.toml"
LOG="/tmp/numa-integration-test.log"
PASSED=0
FAILED=0
# Colors
GREEN="\033[32m"
RED="\033[31m"
DIM="\033[90m"
RESET="\033[0m"
check() {
local name="$1"
local expected="$2"
local actual="$3"
if echo "$actual" | grep -q "$expected"; then
PASSED=$((PASSED + 1))
printf " ${GREEN}✓${RESET} %s\n" "$name"
else
FAILED=$((FAILED + 1))
printf " ${RED}✗${RESET} %s\n" "$name"
printf " ${DIM}expected: %s${RESET}\n" "$expected"
printf " ${DIM} got: %s${RESET}\n" "$actual"
fi
}
# Build if needed
if [ ! -f "$BINARY" ]; then
echo "Building $MODE..."
cargo build --$MODE
fi
run_test_suite() {
local SUITE_NAME="$1"
local SUITE_CONFIG="$2"
cat > "$CONFIG" << CONF
$SUITE_CONFIG
CONF
echo "Starting Numa on :$PORT ($SUITE_NAME)..."
RUST_LOG=info "$BINARY" "$CONFIG" > "$LOG" 2>&1 &
NUMA_PID=$!
sleep 4
if ! kill -0 "$NUMA_PID" 2>/dev/null; then
echo "Failed to start Numa:"
tail -5 "$LOG"
return 1
fi
DIG="dig @127.0.0.1 -p $PORT +time=5 +tries=1"
echo ""
echo "=== Resolution ==="
check "A record (google.com)" \
"." \
"$($DIG google.com A +short)"
check "AAAA record (google.com)" \
":" \
"$($DIG google.com AAAA +short)"
check "CNAME chasing (www.github.com)" \
"github.com" \
"$($DIG www.github.com A +short)"
check "MX records (gmail.com)" \
"gmail-smtp-in" \
"$($DIG gmail.com MX +short)"
check "NS records (cloudflare.com)" \
"cloudflare.com" \
"$($DIG cloudflare.com NS +short)"
check "NXDOMAIN" \
"NXDOMAIN" \
"$($DIG nope12345678.com A 2>&1 | grep status:)"
echo ""
echo "=== Ad Blocking ==="
if echo "$SUITE_CONFIG" | grep -q 'enabled = true'; then
check "Blocked domain → 0.0.0.0" \
"0.0.0.0" \
"$($DIG ads.google.com A +short)"
else
local ADS=$($DIG ads.google.com A +short 2>/dev/null)
if echo "$ADS" | grep -q "0.0.0.0"; then
check "Blocking disabled but domain blocked" "should-resolve" "0.0.0.0"
else
check "Blocking disabled — domain resolves normally" "." "$ADS"
fi
fi
echo ""
echo "=== Cache ==="
$DIG example.com A +short > /dev/null 2>&1
sleep 1
check "Cache hit returns result" \
"." \
"$($DIG example.com A +short)"
echo ""
echo "=== Connectivity ==="
# Apple captive portal can be slow/flaky on some networks
local CAPTIVE
CAPTIVE=$($DIG captive.apple.com A +short 2>/dev/null || echo "timeout")
if echo "$CAPTIVE" | grep -q "apple\|17\.\|timeout"; then
check "Apple captive portal" "." "$CAPTIVE"
else
check "Apple captive portal" "apple" "$CAPTIVE"
fi
check "CDN (jsdelivr)" \
"." \
"$($DIG cdn.jsdelivr.net A +short)"
echo ""
echo "=== API ==="
check "Health endpoint" \
"ok" \
"$(curl -s http://127.0.0.1:$API_PORT/health)"
check "Stats endpoint" \
"uptime_secs" \
"$(curl -s http://127.0.0.1:$API_PORT/stats)"
echo ""
echo "=== Log Health ==="
# grep -c prints 0 AND exits 1 on no match, so `|| echo 0` would double the output
ERRORS=$(grep -c 'RECURSIVE ERROR\|PARSE ERROR\|HANDLER ERROR\|panic' "$LOG" 2>/dev/null || true)
ERRORS=${ERRORS:-0}
check "No critical errors in log" \
"0" \
"$ERRORS"
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
sleep 1
}
# ---- Suite 1: Recursive mode + DNSSEC ----
echo ""
echo "╔══════════════════════════════════════════╗"
echo "║ Suite 1: Recursive + DNSSEC + Blocking ║"
echo "╚══════════════════════════════════════════╝"
run_test_suite "recursive + DNSSEC + blocking" "
[server]
bind_addr = \"127.0.0.1:5354\"
api_port = 5381
[upstream]
mode = \"recursive\"
[cache]
max_entries = 10000
min_ttl = 60
max_ttl = 86400
[blocking]
enabled = true
[proxy]
enabled = false
[dnssec]
enabled = true
"
DIG="dig @127.0.0.1 -p $PORT +time=5 +tries=1"
echo ""
echo "=== DNSSEC (recursive only) ==="
# Re-start for DNSSEC checks (suite 1 instance was killed)
RUST_LOG=info "$BINARY" "$CONFIG" > "$LOG" 2>&1 &
NUMA_PID=$!
sleep 4
check "AD bit set (cloudflare.com)" \
" ad" \
"$($DIG cloudflare.com A +dnssec 2>&1 | grep flags:)"
check "EDNS DO bit echoed" \
"flags: do" \
"$($DIG cloudflare.com A +dnssec 2>&1 | grep 'EDNS:')"
echo ""
echo "=== TCP wire format (real servers) ==="
# Microsoft's Azure DNS servers require length+message in a single TCP segment.
# This test catches the split-write bug that caused early-eof SERVFAILs.
check "Microsoft domain (update.code.visualstudio.com)" \
"NOERROR" \
"$($DIG update.code.visualstudio.com A 2>&1 | grep status:)"
check "Office domain (ecs.office.com)" \
"NOERROR" \
"$($DIG ecs.office.com A 2>&1 | grep status:)"
# Azure Application Insights — another strict TCP server
check "Azure telemetry (eastus2-3.in.applicationinsights.azure.com)" \
"." \
"$($DIG eastus2-3.in.applicationinsights.azure.com A +short 2>/dev/null || echo 'timeout')"
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
sleep 1
# ---- Suite 2: Forward mode (backward compat) ----
echo ""
echo "╔══════════════════════════════════════════╗"
echo "║ Suite 2: Forward (DoH) + Blocking ║"
echo "╚══════════════════════════════════════════╝"
run_test_suite "forward DoH + blocking" "
[server]
bind_addr = \"127.0.0.1:5354\"
api_port = 5381
[upstream]
mode = \"forward\"
address = \"https://9.9.9.9/dns-query\"
[cache]
max_entries = 10000
min_ttl = 60
max_ttl = 86400
[blocking]
enabled = true
[proxy]
enabled = false
"
# ---- Suite 3: Forward UDP (plain, no DoH) ----
echo ""
echo "╔══════════════════════════════════════════╗"
echo "║ Suite 3: Forward (UDP) + No Blocking ║"
echo "╚══════════════════════════════════════════╝"
run_test_suite "forward UDP, no blocking" "
[server]
bind_addr = \"127.0.0.1:5354\"
api_port = 5381
[upstream]
mode = \"forward\"
address = \"9.9.9.9\"
port = 53
[cache]
max_entries = 10000
min_ttl = 60
max_ttl = 86400
[blocking]
enabled = false
[proxy]
enabled = false
"
# Verify blocking is actually off
RUST_LOG=info "$BINARY" "$CONFIG" > "$LOG" 2>&1 &
NUMA_PID=$!
sleep 3
echo ""
echo "=== Blocking disabled ==="
ADS_RESULT=$($DIG ads.google.com A +short 2>/dev/null)
if echo "$ADS_RESULT" | grep -q "0.0.0.0"; then
check "ads.google.com NOT blocked (blocking disabled)" "not-0.0.0.0" "0.0.0.0"
else
check "ads.google.com NOT blocked (blocking disabled)" "." "$ADS_RESULT"
fi
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
sleep 1
# ---- Suite 4: Local zones + Overrides API ----
echo ""
echo "╔══════════════════════════════════════════╗"
echo "║ Suite 4: Local Zones + Overrides API ║"
echo "╚══════════════════════════════════════════╝"
cat > "$CONFIG" << 'CONF'
[server]
bind_addr = "127.0.0.1:5354"
api_port = 5381
[upstream]
mode = "forward"
address = "9.9.9.9"
port = 53
[cache]
max_entries = 10000
[blocking]
enabled = false
[proxy]
enabled = false
[[zones]]
domain = "test.local"
record_type = "A"
value = "10.0.0.1"
ttl = 60
[[zones]]
domain = "mail.local"
record_type = "MX"
value = "10 smtp.local"
ttl = 60
CONF
RUST_LOG=info "$BINARY" "$CONFIG" > "$LOG" 2>&1 &
NUMA_PID=$!
sleep 3
echo ""
echo "=== Local Zones ==="
check "Local A record (test.local)" \
"10.0.0.1" \
"$($DIG test.local A +short)"
check "Local MX record (mail.local)" \
"smtp.local" \
"$($DIG mail.local MX +short)"
check "Non-local domain still resolves" \
"." \
"$($DIG example.com A +short)"
echo ""
echo "=== Overrides API ==="
# Create override
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST http://127.0.0.1:$API_PORT/overrides \
-H 'Content-Type: application/json' \
-d '{"domain":"override.test","target":"192.168.1.100","duration_secs":60}')
check "Create override (HTTP 200/201)" \
"20" \
"$HTTP_CODE"
sleep 1
check "Override resolves" \
"192.168.1.100" \
"$($DIG override.test A +short)"
# List overrides
check "List overrides" \
"override.test" \
"$(curl -s http://127.0.0.1:$API_PORT/overrides)"
# Delete override
curl -s -X DELETE http://127.0.0.1:$API_PORT/overrides/override.test > /dev/null
sleep 1
# After delete, should not resolve to override
AFTER_DELETE=$($DIG override.test A +short 2>/dev/null)
if echo "$AFTER_DELETE" | grep -q "192.168.1.100"; then
check "Override deleted" "not-192.168.1.100" "$AFTER_DELETE"
else
check "Override deleted" "." "deleted"
fi
echo ""
echo "=== Cache API ==="
check "Cache list" \
"domain" \
"$(curl -s http://127.0.0.1:$API_PORT/cache)"
# Flush cache
curl -s -X DELETE http://127.0.0.1:$API_PORT/cache > /dev/null
check "Cache flushed" \
"0" \
"$(curl -s http://127.0.0.1:$API_PORT/stats | grep -o '"entries":[0-9]*' | grep -o '[0-9]*')"
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
# Summary
echo ""
TOTAL=$((PASSED + FAILED))
if [ "$FAILED" -eq 0 ]; then
printf "${GREEN}All %d tests passed.${RESET}\n" "$TOTAL"
exit 0
else
printf "${RED}%d/%d tests failed.${RESET}\n" "$FAILED" "$TOTAL"
echo ""
echo "Log: $LOG"
exit 1
fi

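The TCP wire-format checks in the suite above exist because, per the comment there, some resolvers (Microsoft's Azure DNS among them) drop the connection when the two-byte length prefix and the DNS message arrive in separate TCP writes. A sketch of the fix, using a hypothetical helper rather than Numa's actual code: build the RFC 1035 length prefix and the message into one buffer so a single `write_all` sends both.

```rust
// Hypothetical helper (not from Numa's source): frame a DNS message for TCP
// per RFC 1035 §4.2.2 — two-byte big-endian length, then the message —
// returned as ONE buffer so it goes out in a single write, never two.
fn frame_tcp_query(msg: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(2 + msg.len());
    buf.extend_from_slice(&(msg.len() as u16).to_be_bytes());
    buf.extend_from_slice(msg);
    buf
}

fn main() {
    let msg = [0xAB_u8, 0xCD, 0x01, 0x00]; // fake 4-byte message
    let framed = frame_tcp_query(&msg);
    assert_eq!(framed.len(), 6);
    assert_eq!(&framed[..2], &[0x00, 0x04]); // length prefix
    // stream.write_all(&framed)?  — one write, not write(len) then write(msg)
    println!("{:02x?}", framed);
}
```

Lenient servers accept the split writes, which is why the bug only surfaced against strict real-world endpoints like the ones the suite queries.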
tests/network-probe.sh Executable file

@@ -0,0 +1,128 @@
#!/usr/bin/env bash
# Network probe: tests which DNS transports are available on the current network.
# Run on a problematic network to diagnose what's blocked.
# Usage: ./tests/network-probe.sh
set -euo pipefail
GREEN="\033[32m"
RED="\033[31m"
DIM="\033[90m"
RESET="\033[0m"
PASSED=0
FAILED=0
probe() {
local name="$1"
local cmd="$2"
local expect="$3"
local result
result=$(eval "$cmd" 2>&1) || true
if echo "$result" | grep -q "$expect"; then
PASSED=$((PASSED + 1))
printf " ${GREEN}✓${RESET} %-45s ${DIM}%s${RESET}\n" "$name" "$(echo "$result" | head -1 | cut -c1-60)"
else
FAILED=$((FAILED + 1))
printf " ${RED}✗${RESET} %-45s ${DIM}blocked/timeout${RESET}\n" "$name"
fi
}
echo ""
echo "Network DNS Transport Probe"
echo "==========================="
echo "Network: $(networksetup -getairportnetwork en0 2>/dev/null | sed 's/Current Wi-Fi Network: //' || echo 'unknown')"
echo "Local IP: $(ipconfig getifaddr en0 2>/dev/null || echo 'unknown')"
echo "Gateway: $(route -n get default 2>/dev/null | grep gateway | awk '{print $2}' || echo 'unknown')"
echo ""
echo "=== UDP port 53 (recursive resolution) ==="
probe "Root server a (198.41.0.4)" \
"dig @198.41.0.4 . NS +short +time=5 +tries=1" \
"root-servers"
probe "Root server k (193.0.14.129)" \
"dig @193.0.14.129 . NS +short +time=5 +tries=1" \
"root-servers"
probe "Google DNS (8.8.8.8)" \
"dig @8.8.8.8 google.com A +short +time=5 +tries=1" \
"\."
probe "Cloudflare (1.1.1.1)" \
"dig @1.1.1.1 cloudflare.com A +short +time=5 +tries=1" \
"\."
probe ".com TLD (192.5.6.30)" \
"dig @192.5.6.30 google.com NS +short +time=5 +tries=1" \
"google"
echo ""
echo "=== TCP port 53 ==="
probe "Google DNS TCP (8.8.8.8)" \
"dig @8.8.8.8 google.com A +short +tcp +time=5 +tries=1" \
"\."
probe "Root server TCP (198.41.0.4)" \
"dig @198.41.0.4 . NS +short +tcp +time=5 +tries=1" \
"root-servers"
echo ""
echo "=== DoT port 853 (DNS-over-TLS) ==="
probe "Quad9 DoT (9.9.9.9:853)" \
"echo Q | openssl s_client -connect 9.9.9.9:853 -servername dns.quad9.net 2>&1 | grep 'verify return'" \
"verify return"
probe "Cloudflare DoT (1.1.1.1:853)" \
"echo Q | openssl s_client -connect 1.1.1.1:853 -servername cloudflare-dns.com 2>&1 | grep 'verify return'" \
"verify return"
echo ""
echo "=== DoH port 443 (DNS-over-HTTPS) ==="
probe "Quad9 DoH (dns.quad9.net)" \
"curl -s -m 5 -H 'accept: application/dns-json' 'https://dns.quad9.net:443/dns-query?name=google.com&type=A'" \
"Answer"
probe "Cloudflare DoH (1.1.1.1)" \
"curl -s -m 5 -H 'accept: application/dns-json' 'https://1.1.1.1/dns-query?name=google.com&type=A'" \
"Answer"
probe "Google DoH (dns.google)" \
"curl -s -m 5 'https://dns.google/resolve?name=google.com&type=A'" \
"Answer"
echo ""
echo "=== ISP DNS ==="
# Detect system DNS
SYS_DNS=$(scutil --dns 2>/dev/null | grep "nameserver\[0\]" | head -1 | awk '{print $3}' || echo "unknown")
if [ "$SYS_DNS" != "unknown" ] && [ "$SYS_DNS" != "127.0.0.1" ]; then
probe "ISP DNS ($SYS_DNS)" \
"dig @$SYS_DNS google.com A +short +time=5 +tries=1" \
"\."
else
printf " ${DIM} System DNS is $SYS_DNS (skipped)${RESET}\n"
fi
echo ""
echo "==========================="
TOTAL=$((PASSED + FAILED))
printf "Results: ${GREEN}%d passed${RESET}, ${RED}%d blocked${RESET} / %d total\n" "$PASSED" "$FAILED" "$TOTAL"
echo ""
echo "Recommendation:"
if [ "$FAILED" -eq 0 ]; then
echo " All transports available. Recursive mode will work."
elif dig @198.41.0.4 . NS +short +time=5 +tries=1 2>&1 | grep -q "root-servers"; then
echo " UDP:53 works. Recursive mode will work."
else
echo " UDP:53 blocked — recursive mode will NOT work on this network."
if curl -s -m 5 'https://dns.quad9.net:443/dns-query?name=test.com&type=A' 2>&1 | grep -q "Answer"; then
echo " DoH (port 443) works — use mode = \"forward\" with DoH upstream."
elif echo Q | openssl s_client -connect 9.9.9.9:853 2>&1 | grep -q "verify return"; then
echo " DoT (port 853) works — DoT upstream would work (not yet implemented)."
else
echo " Only ISP DNS available. Use mode = \"forward\" with ISP auto-detect."
fi
fi
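The recommendation cascade above mirrors what the new auto mode does at startup: probe a root server over UDP:53 and fall back to DoH if it is blocked. A sketch of such a probe, with a hand-rolled `. NS` query for illustration (Numa's actual probe may build its packets differently):

```rust
use std::net::UdpSocket;
use std::time::Duration;

// Minimal DNS query for ". NS": 12-byte header + root name (one zero byte)
// + QTYPE=NS(2) + QCLASS=IN(1). 17 bytes total.
fn root_ns_probe(id: u16) -> Vec<u8> {
    let mut q = Vec::with_capacity(17);
    q.extend_from_slice(&id.to_be_bytes());
    q.extend_from_slice(&[0x01, 0x00]); // flags: RD=1, everything else 0
    q.extend_from_slice(&[0, 1, 0, 0, 0, 0, 0, 0]); // QDCOUNT=1, rest 0
    q.push(0); // root label
    q.extend_from_slice(&[0, 2, 0, 1]); // QTYPE=NS, QCLASS=IN
    q
}

// True if a root server answers within the timeout, i.e. UDP:53 is open.
fn udp53_reachable() -> bool {
    let probe = || -> std::io::Result<bool> {
        let sock = UdpSocket::bind("0.0.0.0:0")?;
        sock.set_read_timeout(Some(Duration::from_secs(3)))?;
        sock.send_to(&root_ns_probe(0x1234), "198.41.0.4:53")?;
        let mut buf = [0u8; 512];
        let (n, _) = sock.recv_from(&mut buf)?;
        Ok(n >= 12 && buf[..2] == 0x1234u16.to_be_bytes())
    };
    probe().unwrap_or(false)
}

fn main() {
    let q = root_ns_probe(0x1234);
    assert_eq!(q.len(), 17);
    println!("probe query: {:02x?}", q);
    // udp53_reachable() would then pick recursive vs. forward mode.
}
```

Running the probe once at startup keeps the decision cheap; the script above exists for the cases where that decision needs to be diagnosed by hand.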