46 Commits

Author SHA1 Message Date
Razvan Dimescu
2de1bc2efc chore: bump version to 0.12.0 2026-04-11 12:15:40 +03:00
Razvan Dimescu
156b68de87 fix: replace unscannable QR art with placeholder in blog post (#80)
The Unicode block-character QR code in the DoT blog post can't be
scanned by phone cameras due to HTML font metrics distorting the grid.
Replace with a bordered placeholder box — the dashboard screenshot
already shows a working QR.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 04:17:46 +03:00
Razvan Dimescu
7d6b0ed568 feat: DoH server endpoint + DoT enabled by default (#79)
* chore: document multi-forwarder and cache warming in config and README

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: DNS-over-HTTPS server endpoint (RFC 8484)

Serve DoH at POST /dns-query on the existing HTTPS proxy (port 443).
Automatically enabled when proxy TLS is active — no config needed.
Also fix zone map priority so local zones override RFC 6762 .local
special-use handling.
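[Editor's note] An RFC 8484 POST body is just a binary DNS message. A minimal sketch in std-only Rust of the wire-format query a client would send to /dns-query — illustrative only, function name is mine; Numa's real parser lives in its buffer/record modules:

```rust
/// Build a minimal uncompressed DNS wire-format query (RFC 1035) — the exact
/// bytes an RFC 8484 client POSTs to /dns-query with
/// Content-Type: application/dns-message.
fn build_doh_body(qname: &str, qtype: u16) -> Vec<u8> {
    let mut msg = Vec::new();
    msg.extend_from_slice(&[0x00, 0x00]); // ID 0 (RFC 8484 §4.1 recommends 0 for cacheability)
    msg.extend_from_slice(&[0x01, 0x00]); // flags: RD set
    msg.extend_from_slice(&[0x00, 0x01]); // QDCOUNT = 1
    msg.extend_from_slice(&[0x00, 0x00, 0x00, 0x00, 0x00, 0x00]); // AN/NS/AR counts = 0
    for label in qname.split('.').filter(|l| !l.is_empty()) {
        msg.push(label.len() as u8); // length byte precedes each label
        msg.extend_from_slice(label.as_bytes());
    }
    msg.push(0); // root label terminates the QNAME
    msg.extend_from_slice(&qtype.to_be_bytes()); // QTYPE
    msg.extend_from_slice(&[0x00, 0x01]); // QCLASS = IN
    msg
}
```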

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: cargo fmt

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: remove GoatCounter analytics from site

GoatCounter domains (goatcounter.com, gc.zgo.at) are blocked by
Hagezi Pro, which is Numa's default blocklist. A DNS privacy tool
should not embed analytics that its own resolver blocks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: enable DoT listener by default

DoT now starts automatically with `sudo numa`, matching the proxy and
DoH which are already on by default. The self-signed CA infrastructure
is shared with the proxy, so there is no additional setup. This makes
`numa setup-phone` work out of the box.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 04:06:17 +03:00
Razvan Dimescu
7770129589 feat: cache warming — proactive DNS resolution for configured domains (#78)
Resolves A + AAAA at startup for domains listed in [cache] warm,
then re-resolves before TTL expiry (at 75% elapsed). Keeps critical
domains always hot in cache with zero client-visible latency.
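[Editor's note] The 75%-elapsed refresh rule above can be sketched as a pure scheduling function (names are mine, not Numa's actual code):

```rust
use std::time::Duration;

/// When to proactively re-resolve a warmed entry: at 75% of the TTL's
/// lifetime, so the refreshed answer lands before the old one expires.
fn refresh_delay(ttl: Duration) -> Duration {
    ttl.mul_f64(0.75)
}

/// True once a warmed entry is due for re-resolution.
fn should_refresh(elapsed: Duration, ttl: Duration) -> bool {
    elapsed >= refresh_delay(ttl)
}
```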

Closes #34 (item 4)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 01:14:04 +03:00
Razvan Dimescu
8abcd91f95 feat: multi-forwarder with SRTT-based failover (#77)
* feat: multi-forwarder with SRTT-based failover

address accepts string or array, with optional per-server port override.
New fallback pool tried only when all primaries fail. Sequential failover
with SRTT ranking ensures fastest upstream is tried first.
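[Editor's note] The SRTT ranking can be sketched as a sort over candidate upstreams — struct and field names here are hypothetical, not Numa's:

```rust
/// Sequential failover with SRTT ranking: order candidates by smoothed
/// round-trip time so the historically fastest upstream is tried first.
#[derive(Debug, Clone)]
struct Upstream {
    addr: &'static str,
    srtt_ms: f64, // smoothed RTT, updated after each exchange
}

fn failover_order(mut pool: Vec<Upstream>) -> Vec<Upstream> {
    // total_cmp gives a total order even if an SRTT is NaN (never-probed server)
    pool.sort_by(|a, b| a.srtt_ms.total_cmp(&b.srtt_ms));
    pool
}
```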

Closes #34 (items 1, 2, 3)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: simplify failover candidate list and deduplicate recursive pool

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: extract maybe_update_primary for testable upstream re-detection

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: rustfmt

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 00:26:58 +03:00
Razvan Dimescu
a96b84fdeb ci: use pandoc/actions/setup instead of apt-get (#76)
apt-get install pandoc took ~27 minutes due to apt index refresh.
The prebuilt binary action completes in seconds.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 23:16:59 +03:00
Razvan Dimescu
23ff3ce455 chore: blog full QR output + dashboard screenshot, hero script phone setup scene (#75)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 22:34:41 +03:00
Razvan Dimescu
2c20c56421 feat: mobile setup — QR onboarding, Wi-Fi scoped mobileconfig (#73)
* fix: scope mobileconfig DNS to Wi-Fi only via OnDemandRules

Without OnDemandRules, iOS applies the DoT profile globally —
cellular DNS breaks when the phone leaves the LAN.
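[Editor's note] The Wi-Fi scoping works through the profile's OnDemandRules. A hedged sketch of the payload shape, based on Apple's Configuration Profile Reference — the exact keys Numa's generator emits may differ:

```xml
<key>OnDemandRules</key>
<array>
  <dict>
    <key>Action</key><string>Connect</string>
    <key>InterfaceTypeMatch</key><string>WiFi</string>
  </dict>
  <dict>
    <key>Action</key><string>Disconnect</string>
  </dict>
</array>
```

Rules are evaluated in order: on Wi-Fi the DoT settings apply; the trailing catch-all Disconnect leaves cellular DNS untouched.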

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: phone setup QR code in dashboard header

- Add /qr endpoint serving SVG (uses existing qrcode crate, svg feature)
- Header popover: QR on desktop, direct download link on mobile viewports
- Only visible when [mobile] enabled = true in config
- Expose mobile.enabled and mobile.port in /stats response
- Lazy-load QR on first click, dismiss on outside click

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add Cache-Control to /qr, re-fetch QR on each popover open

Cache-Control: no-store prevents stale QR after LAN IP change.
Remove qrLoaded flag so the QR always reflects the current IP.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: rustfmt serve_qr response tuple

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add iOS install steps to phone setup popover

iOS shows "Profile Downloaded" with no guidance. The popover
now includes the 3-step install flow including the buried
Certificate Trust Settings toggle.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 22:21:51 +03:00
Razvan Dimescu
921ed68d54 fix: allowlist parent domain unblocks subdomains (#74)
* fix: allowlist parent domain unblocks subdomains in blocklist

The allowlist walk-up was interleaved with the blocklist walk-up,
so an exact blocklist match on www.example.com short-circuited
before reaching example.com in the allowlist. Now allowlist is
checked at all parent levels before consulting the blocklist.

Deduplicate is_blocked/check via find_in_set helper; is_blocked
delegates to check. Adds 7 new blocklist tests.
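[Editor's note] The corrected ordering can be sketched as two separate walk-ups — allowlist fully first, then blocklist. Names are hypothetical; Numa's real code routes both through a find_in_set helper:

```rust
use std::collections::HashSet;

/// Check the allowlist at EVERY parent level before consulting the
/// blocklist, so allowing example.com unblocks www.example.com even
/// when the blocklist has an exact entry for www.example.com.
fn is_blocked(domain: &str, allow: &HashSet<&str>, block: &HashSet<&str>) -> bool {
    // Collect domain plus every parent suffix: www.example.com → example.com → com
    let mut levels = vec![domain];
    let mut rest = domain;
    while let Some(i) = rest.find('.') {
        rest = &rest[i + 1..];
        levels.push(rest);
    }
    if levels.iter().any(|d| allow.contains(d)) {
        return false; // allowlist wins at any level
    }
    levels.iter().any(|d| block.contains(d))
}
```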

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: rustfmt blocklist tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: zero-alloc is_blocked hot path, normalize trailing dots

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: rustfmt add_to_allowlist

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: extract normalize() for domain lowering + dot stripping

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 21:43:40 +03:00
Razvan Dimescu
8da03b1b8c chore: GoatCounter analytics, README v0.11.0, DoT blog post (#72)
* chore: GoatCounter analytics, README for v0.11.0, DoT blog post

- Add GoatCounter script to site pages (cookie-free, no consent needed)
- Update README: setup-phone section, DoT blog link, roadmap checkbox
- Add DoT blog post source and SVG assets

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: site navbar, updated roadmap, DoT blog listing, spec updates

- Add top nav bar to landing page (wordmark + links, responsive)
- Add DoT blog post entry to blog index
- Update roadmap: phases 8-13 (hostile-network, Windows, DoT shipped)
- Update specs: listeners section, dependency description, port list
- Blog: hostile-network SVG in DNSSEC post, text-transform fix
- Blog template: wordmark text-transform fix

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 19:23:11 +03:00
Razvan Dimescu
652fca5b80 chore: bump version to 0.11.0 2026-04-10 19:10:58 +03:00
Razvan Dimescu
de15b32325 feat: numa setup-phone — QR-based mobile DoT onboarding (#38)
* feat: numa setup-phone — QR-based mobile DoT onboarding

Adds a CLI subcommand that generates a one-time mobileconfig profile
containing both the Numa local CA (as a com.apple.security.root payload)
and the DoT DNS settings, then serves it via a temporary HTTP server
and prints a scannable QR code in the terminal.

Flow:
  1. User runs `numa setup-phone` (no sudo needed)
  2. Detects current LAN IP, reads CA from /usr/local/var/numa/ca.pem
  3. Builds combined mobileconfig (CA trust + DoT)
  4. Renders QR code with qrcode crate (Unicode block characters)
  5. Serves the profile on port 8765, stays open until Ctrl+C
  6. Counts successful downloads (multi-device households)

Important caveat documented in instructions: even with the CA bundled
in the profile, iOS still requires the user to manually enable trust
in Settings → General → About → Certificate Trust Settings. Verified
on a real iPhone.

Stable PayloadIdentifiers/UUIDs ensure re-running replaces the
existing profile on iOS rather than accumulating duplicates.

- New module: src/setup_phone.rs (~270 lines)
- New CLI subcommand: `numa setup-phone`
- New dependency: qrcode = "0.14" (default-features = false)
- tokio "signal" feature added for Ctrl+C handling
- 3 unit tests: PEM stripping, mobileconfig generation, QR rendering

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: mobile API, enriched /health, mobileconfig module

Adds a persistent read-only HTTP listener (default port 8765, LAN-bound)
serving a dedicated subset of Numa's API for iOS/Android companion apps
and as a replacement for the one-shot server setup_phone used to spin up:

  GET /health           — enriched JSON with version, hostname, LAN IP,
                          SNI, DoT config, mobile API port, CA
                          fingerprint, features (shared handler with
                          the main API on port 5380)
  GET /ca.pem           — public CA certificate (shared handler)
  GET /mobileconfig     — full iOS profile (CA trust + DNS settings
                          pinned to current LAN IP)
  GET /ca.mobileconfig  — CA-only iOS profile (trust anchor without
                          DNS override — for the iOS companion app's
                          programmatic DNS flow via NEDNSSettingsManager)

All routes are idempotent GETs. The mobile API never serves the
state-mutating routes that live on the main API (overrides, blocking
toggle, service CRUD, cache flush), so it is safe to expose on the LAN
regardless of the main API's bind address. The CA private key is never
served by any route.

Opt-in via `[mobile] enabled = true`. Default is false so new installs
do not silently expose a LAN listener after upgrading; our committed
numa.toml template enables it explicitly for spike testing.
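[Editor's note] A sketch of the opt-in section — key names come from this commit (enabled, port, bind_addr); the defaults shown are assumptions:

```toml
[mobile]
enabled = true        # default is false — opt-in after upgrade
port = 8765           # persistent read-only HTTP listener
bind_addr = "0.0.0.0" # LAN-bound; exact default is an assumption
```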

New modules:

- src/mobileconfig.rs — ProfileMode::{Full, CaOnly} enum with plist
  builder lifted from setup_phone.rs. Full and CaOnly share the CA
  payload UUID (same trust anchor) but have distinct top-level UUIDs
  so they coexist as separate installable profiles on iOS.

- src/health.rs — HealthMeta cached metadata built once at startup
  from config + CA fingerprint (SHA-256 of the PEM via ring), and the
  HealthResponse JSON shape shared between the main and mobile APIs.

- src/mobile_api.rs — axum Router for the persistent listener. Reuses
  api::health and api::serve_ca from the main API; owns the two
  mobileconfig handlers.

Modified:

- src/api.rs — health() returns the enriched HealthResponse, now pub.
  serve_ca is now pub so mobile_api can reuse it.
- src/config.rs — MobileConfig section (enabled, port, bind_addr).
- src/ctx.rs — health_meta: HealthMeta field on ServerCtx.
- src/main.rs — builds HealthMeta at startup, spawns mobile API
  listener if enabled.
- src/lan.rs — build_announcement takes &HealthMeta and writes
  enriched TXT records (version, api_port, proto, dot_port, ca_fp).
  SRV port now reports the mobile API port; peer discovery still
  reads TXT `services=` so this is backwards compatible. Always
  announces even when no .numa services are registered, so the iOS
  companion app can discover Numa via mDNS regardless of service
  state.
- src/setup_phone.rs — reduced from 267 to 100 lines. The CLI is now
  a thin QR wrapper over the persistent /mobileconfig endpoint; the
  hand-rolled one-shot HTTP server (accept_loop, RUST_OK_HEADERS,
  RUST_NOT_FOUND, download counter) is gone.
- src/dot.rs — test fixture updated with HealthMeta::test_fixture().
- numa.toml — commented [mobile] section, enabled = true for spike.

Tests: 136 unit tests passing (5 new in mobileconfig, 3 new in health).
cargo clippy clean. Integration sanity check: curl'd /health, /ca.pem,
/mobileconfig, /ca.mobileconfig against a running numa — all return
200 with correct content types and valid response bodies.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: setup-phone probe, unknown command error, query source in dashboard

- setup-phone now probes the mobile API before printing the QR code
  and shows an actionable error if [mobile] is not enabled
- Unknown CLI subcommands print an error instead of silently
  attempting to start a full server
- Dashboard query log shows source IP under timestamp (localhost
  for loopback, full IP for LAN devices) with full addr on hover

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 19:08:56 +03:00
Razvan Dimescu
6f961c5ec2 fix: push only the specific tag when releasing a new version 2026-04-10 09:03:03 +03:00
Razvan Dimescu
20bf14e91c chore: bump version to 0.10.3 2026-04-10 08:59:46 +03:00
Razvan Dimescu
e860731c01 fix: escape DNS label text per RFC 1035 §5.1 (closes #36) (#54)
* fix: escape dots and special characters in DNS label text per RFC 1035 §5.1

Closes #36

read_qname was pushing raw label bytes directly into the output string,
producing ambiguous text for labels containing dots, backslashes, or
non-printable bytes. fanf2 spotted this on HN: wire format
`[8]exa.mple[3]com[0]` (two labels, first containing a literal 0x2E)
was rendered as `exa.mple.com`, indistinguishable from three labels.

Fix both sides of the text representation per RFC 1035 §5.1:

read_qname — when rendering wire bytes to text:
- literal `.` within a label → `\.`
- literal `\` → `\\`
- bytes outside 0x21..=0x7E → `\DDD` (3-digit decimal)
- printable ASCII passes through unchanged

write_qname — when parsing text back to wire:
- `\.` produces a literal 0x2E inside the current label (not a separator)
- `\\` produces a literal 0x5C
- `\DDD` produces the byte with that decimal value (0..=255)
- unescaped `.` still separates labels, empty labels still skipped
- rejects trailing `\`, short `\DD`, and `\DDD` > 255

Impact in practice is low — real-world domains don't contain dots in
labels — but it's a correctness bug in the wire format parser that
could cause round-trip failures with adversarial input. The parser is
the core of the project, so correctness bugs take priority over
practical impact.

Adds 16 unit tests in a new `#[cfg(test)] mod tests` block covering:
plain domain read/write, literal-dot escaping on both sides, backslash
escaping, non-printable + space decimal escapes, full round-trip
preservation, and the three rejection cases for malformed escapes.
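[Editor's note] The read-side escaping rules above can be condensed into one small function — a sketch, not Numa's read_qname:

```rust
/// Render one label's raw wire bytes as RFC 1035 §5.1 presentation text:
/// `.` → `\.`, `\` → `\\`, bytes outside 0x21..=0x7E → `\DDD`,
/// printable ASCII passes through unchanged.
fn escape_label(label: &[u8]) -> String {
    let mut out = String::new();
    for &b in label {
        match b {
            b'.' => out.push_str("\\."),
            b'\\' => out.push_str("\\\\"),
            0x21..=0x7E => out.push(b as char), // printable ASCII fast path
            _ => out.push_str(&format!("\\{:03}", b)), // 3-digit decimal escape
        }
    }
    out
}
```

With this, the fanf2 wire example `[8]exa.mple[3]com[0]` renders as `exa\.mple.com` — unambiguously two labels.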

Credit: fanf2 (https://news.ycombinator.com/item?id=47612321)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: stream label writes directly into buffer (review feedback)

The first cut of this fix delegated write_qname to a helper
(parse_escaped_labels) that built Vec<Vec<u8>> up-front, then iterated
to emit the wire bytes. On a plain-ASCII domain like "www.google.com"
that's ~4 heap allocations per write_qname call, and record.rs calls
write_qname ~6 times per response — so this PR would regress
bench_buffer_serialize by roughly 24 extra allocations per response
vs. main, where the old non-escaping code had zero.

Rewrite write_qname as a streaming byte-level loop that reserves the
length byte up-front, writes the label body directly into the buffer,
then backpatches the length via set(). Zero intermediate allocations
on the common path, and the 63-byte label cap is now checked
incrementally so oversized labels fail fast.

Byte-level scanning is safe for UTF-8 input: continuation bytes are
always in 0x80..=0xBF, so they can never collide with the ASCII `.`
(0x2E) or `\` (0x5C) that drive label splitting and escape parsing.

Also inline the \DDD rendering in read_qname to avoid the per-byte
format!() allocation on non-printable input. Plain-ASCII reads hit
the unchanged push(c as char) fast path, so the common case has zero
regression.

The parse_escaped_labels helper is deleted — no remaining callers.

All 158 tests pass, clippy + fmt clean. Collapses three review
findings (HIGH allocation regression, MEDIUM format! allocation,
MEDIUM .unwrap() after digit guard) in one pass.
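[Editor's note] The reserve/backpatch pattern described above, reduced to a sketch over a plain Vec (no escape parsing, no BytePacketBuffer):

```rust
/// Streaming label write with a backpatched length byte: reserve one byte,
/// emit the body directly, then patch the length. The 63-byte cap is
/// checked incrementally so oversized labels fail fast; empty labels are
/// rolled back rather than emitted.
fn write_label(buf: &mut Vec<u8>, label: &[u8]) -> Result<(), &'static str> {
    let len_pos = buf.len();
    buf.push(0); // reserve the length byte
    let mut n = 0u8;
    for &b in label {
        if n == 63 {
            return Err("label exceeds 63 bytes");
        }
        buf.push(b);
        n += 1;
    }
    if n == 0 {
        buf.truncate(len_pos); // rollback branch: skip empty labels
    } else {
        buf[len_pos] = n; // backpatch the reserved byte
    }
    Ok(())
}
```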

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: route dnssec::name_to_wire through write_qname for escape handling

Closes #55.

dnssec::name_to_wire was a parallel implementation of the old
write_qname's splitting loop — it iterated qname.split('.') and pushed
raw bytes. It predated and duplicated the buffer.rs logic, and it did
not understand RFC 1035 §5.1 text escapes. After the read_qname fix in
this PR, names that come out of read_qname can contain \., \\, or
\DDD sequences; feeding those back into the old name_to_wire would
split on the literal '.' inside a \. sequence and produce corrupt
RRSIG signed-data blobs.

The underlying bug predates this PR — the old read_qname was broken
too, so both sides of the DNSSEC canonical form pipeline were
silently wrong in the same way. Making read_qname correct exposed the
divergence, so it's fixed here in the same PR that introduced the
exposure.

Reimplement name_to_wire on top of BytePacketBuffer::write_qname:
reserve a scratch buffer, let write_qname handle the escape parsing
and length-byte framing, copy the emitted bytes into a Vec, then
walk the wire once more to lowercase label bodies (length bytes stay
untouched). Canonical form per RFC 4034 §6.2 requires the
lowercasing, and it has to happen post-escape-resolution — a
decimal escape like \065 produces 0x41 ('A'), which must be
lowercased to 'a' in the final wire bytes.
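[Editor's note] The final lowercase pass can be sketched as a single walk over the emitted wire — length bytes skipped, bodies lowered:

```rust
/// RFC 4034 §6.2 canonical form: lowercase label BODIES in the emitted
/// wire, leaving length bytes untouched. Must run post-escape-resolution:
/// \065 decodes to 0x41 ('A'), which still needs lowering. Sketch only;
/// assumes an uncompressed name (no pointer bytes).
fn lowercase_labels(wire: &mut [u8]) {
    let mut i = 0;
    while i < wire.len() {
        let len = wire[i] as usize;
        if len == 0 {
            break; // root label terminates the name
        }
        wire[i + 1..i + 1 + len].make_ascii_lowercase();
        i += 1 + len;
    }
}
```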

Call sites in build_signed_data, record_to_canonical_wire,
record_rdata_canonical, and nsec3_hash are unchanged — the public
signature stays the same, infallible Vec<u8> return.

Tests:
- name_to_wire_escaped_dot_in_label_is_not_a_separator — verifies
  the fanf2 example round-trips correctly through canonical form
- name_to_wire_decimal_escape_is_lowercased — verifies post-escape
  lowercasing (the subtle correctness requirement)
- existing name_to_wire_root, name_to_wire_domain, ds_verification
  tests still pass unchanged

Test count: 158 → 160.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: tighten name_to_wire per review feedback

- Replace hand-rolled per-byte lowercase loop with stdlib
  [u8]::make_ascii_lowercase(). Shorter and idiomatic.
- Tighten the .expect() message to state the actual invariant
  (parseable DNS name) instead of vague "well-formed" language.
- Replace the doc comment's "see #55" with the real invariant —
  issue numbers rot, and by merge time #55 is closed anyway. The
  comment now explains WHY the lowercase pass has to happen
  post-escape-resolution (\065 → 'A' → 'a') instead of during
  write_qname.
- Drop the redundant `\065` test comment (the one-liner version
  is enough with the assertion showing the transform).

No behavior change; 160 tests still pass, clippy + fmt clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: cover label cap and empty-label rollback; trim doc comments

Closes coverage gaps left by PR #54:

- write_rejects_label_over_63_bytes: pins the incremental 63-byte cap
  inside write_qname's inner loop (boundary at 63 vs 64).
- write_skips_empty_labels: pins the rollback branch (pos = len_pos)
  triggered by leading or consecutive dots.

Doc comments tightened:

- write_qname: drop the streaming-impl walkthrough and the escape-grammar
  restatement (already documented on read_qname).
- name_to_wire: drop the implementation explanation; keep the
  post-escape lowercasing rationale, which pins behavior a future
  refactor could silently break.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 08:53:46 +03:00
Razvan Dimescu
f556b60ce4 fix: suppress recursive hint in install when already configured (#71)
`sudo numa install` unconditionally printed the "Want full DNS
sovereignty?" hint even when numa.toml already has mode = "recursive".
Now loads the config first and skips the message if recursive is
already set.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 08:32:51 +03:00
Razvan Dimescu
422726f1c8 chore(deps): bump rcgen from 0.13 to 0.14 (#70)
rcgen 0.14 replaced the separate Certificate + KeyPair args with a
unified Issuer type. Migrates ensure_ca and generate_service_cert:

- Load path: Issuer::from_ca_cert_der replaces the old
  CertificateParams::from_ca_cert_pem + self_signed round-trip.
- Generate path: Issuer::new(params, key_pair) constructs directly
  from the params used for self_signed (no DER re-parse).
- signed_by takes (&key_pair, &issuer) instead of (&key_pair, &cert, &key).

Also drops thiserror v1 from the dep tree (rcgen 0.14 uses v2).

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 08:28:07 +03:00
dependabot[bot]
dd021d8642 chore(deps)(deps): bump socket2 from 0.5.10 to 0.6.3 (#67)
Bumps [socket2](https://github.com/rust-lang/socket2) from 0.5.10 to 0.6.3.
- [Release notes](https://github.com/rust-lang/socket2/releases)
- [Changelog](https://github.com/rust-lang/socket2/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/socket2/commits/v0.6.3)

---
updated-dependencies:
- dependency-name: socket2
  dependency-version: 0.6.3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:58:07 +03:00
dependabot[bot]
f20c72a829 chore(deps)(deps): bump toml from 0.8.23 to 1.1.2+spec-1.1.0 (#65)
Bumps [toml](https://github.com/toml-rs/toml) from 0.8.23 to 1.1.2+spec-1.1.0.
- [Commits](https://github.com/toml-rs/toml/compare/toml-v0.8.23...toml-v1.1.2)

---
updated-dependencies:
- dependency-name: toml
  dependency-version: 1.1.2+spec-1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:57:55 +03:00
dependabot[bot]
44cd17cf84 chore(deps)(deps): bump criterion from 0.5.1 to 0.8.2 (#64)
Bumps [criterion](https://github.com/criterion-rs/criterion.rs) from 0.5.1 to 0.8.2.
- [Release notes](https://github.com/criterion-rs/criterion.rs/releases)
- [Changelog](https://github.com/criterion-rs/criterion.rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/criterion-rs/criterion.rs/compare/0.5.1...criterion-v0.8.2)

---
updated-dependencies:
- dependency-name: criterion
  dependency-version: 0.8.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:56:50 +03:00
dependabot[bot]
fb0a21e5e6 chore(deps)(deps): bump the minor-and-patch group with 3 updates (#63)
Bumps the minor-and-patch group with 3 updates: [tokio](https://github.com/tokio-rs/tokio), [hyper](https://github.com/hyperium/hyper) and [arc-swap](https://github.com/vorner/arc-swap).


Updates `tokio` from 1.50.0 to 1.51.1
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.50.0...tokio-1.51.1)

Updates `hyper` from 1.8.1 to 1.9.0
- [Release notes](https://github.com/hyperium/hyper/releases)
- [Changelog](https://github.com/hyperium/hyper/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper/compare/v1.8.1...v1.9.0)

Updates `arc-swap` from 1.9.0 to 1.9.1
- [Changelog](https://github.com/vorner/arc-swap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/vorner/arc-swap/compare/v1.9.0...v1.9.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.51.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: minor-and-patch
- dependency-name: hyper
  dependency-version: 1.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: minor-and-patch
- dependency-name: arc-swap
  dependency-version: 1.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: minor-and-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:55:24 +03:00
dependabot[bot]
66b937f710 chore(deps): bump actions/download-artifact from 4 to 8 (#69)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 8.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v8)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:54:58 +03:00
dependabot[bot]
524aed7fa1 chore(deps)(deps): bump actions/checkout from 4 to 6 (#60)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v4...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:51:33 +03:00
dependabot[bot]
11e3fdeae6 chore(deps)(deps): bump actions/upload-artifact from 4 to 7 (#58)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 7.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v7)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:50:51 +03:00
dependabot[bot]
636c45b3d7 chore(deps)(deps): bump actions/upload-pages-artifact from 3 to 4 (#59)
Bumps [actions/upload-pages-artifact](https://github.com/actions/upload-pages-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-pages-artifact/releases)
- [Commits](https://github.com/actions/upload-pages-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-pages-artifact
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:50:38 +03:00
dependabot[bot]
f602687d93 chore(deps)(deps): bump actions/configure-pages from 5 to 6 (#61)
Bumps [actions/configure-pages](https://github.com/actions/configure-pages) from 5 to 6.
- [Release notes](https://github.com/actions/configure-pages/releases)
- [Commits](https://github.com/actions/configure-pages/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/configure-pages
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:50:22 +03:00
dependabot[bot]
b8b0fda1e0 chore(deps)(deps): bump actions/deploy-pages from 4 to 5 (#62)
Bumps [actions/deploy-pages](https://github.com/actions/deploy-pages) from 4 to 5.
- [Release notes](https://github.com/actions/deploy-pages/releases)
- [Commits](https://github.com/actions/deploy-pages/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/deploy-pages
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:50:08 +03:00
Razvan Dimescu
9a3fae9a0c fix: drop include:scope from dependabot commit-message config (#68)
The combination of `prefix: "chore(deps)"` and `include: "scope"`
produced `chore(deps)(deps):` — double scope. Removing `include`
keeps the prefix as-is.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 07:49:46 +03:00
dependabot[bot]
a31ac36957 chore(deps)(deps): bump the minor-and-patch group with 2 updates (#57)
Bumps the minor-and-patch group with 2 updates: rust and alpine.


Updates `rust` from 1.88-alpine to 1.94-alpine

Updates `alpine` from 3.20 to 3.23

---
updated-dependencies:
- dependency-name: rust
  dependency-version: 1.94-alpine
  dependency-type: direct:production
  dependency-group: minor-and-patch
- dependency-name: alpine
  dependency-version: '3.23'
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: minor-and-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 07:48:46 +03:00
Casey Labs
9001b14fed [Feature] Add GitHub Dependabot scanning (runs once a month) (#46)
* Add GitHub Dependabot scanning (runs once a month)

* chore: group dependabot updates and use conventional commit prefix

Bundle all minor/patch bumps per ecosystem into a single PR to keep
noise manageable (~3 PRs/month instead of 10+). Major bumps still
get individual PRs since they may break APIs.

Commit messages now use the `chore(deps)` conventional-commit prefix
to match the repo's existing style.
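[Editor's note] A hedged sketch of the grouping config in dependabot.yml — the group name matches the commit messages above; other values are assumptions (and note `include: scope` was later dropped in #68):

```yaml
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "monthly"
    commit-message:
      prefix: "chore(deps)"
    groups:
      minor-and-patch:
        update-types:
          - "minor"
          - "patch"
    # majors fall outside the group and still get individual PRs
```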

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Razvan Dimescu <ssaricu@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 07:40:49 +03:00
Razvan Dimescu
63ac69a222 ci: call homebrew-bump as reusable workflow instead of PAT event propagation (#53)
Reverts PR #44's approach of swapping GITHUB_TOKEN for a PAT on
action-gh-release. That approach worked in principle but failed in
practice during the v0.10.2 cut: HOMEBREW_TAP_GITHUB_TOKEN is a
fine-grained PAT scoped only to razvandimescu/homebrew-tap, so when
action-gh-release tried to create a release on razvandimescu/numa it
got 403 Resource not accessible. v0.10.2 had to be recovered manually
via `gh release create` from a user PAT.

Root cause of the original bug (from #44): GitHub Actions deliberately
does not propagate workflow events triggered by GITHUB_TOKEN, so a
release created by GITHUB_TOKEN silently failed to fire homebrew-bump's
`release: published` trigger.

Fix: sidestep the event-propagation rule entirely by invoking
homebrew-bump.yml directly as a reusable workflow via `workflow_call`.

- release.yml: drop the `token:` override on action-gh-release (reverts
  to GITHUB_TOKEN default, which v0.10.0 and v0.10.1 used successfully)
  and add a new `bump-homebrew` job that `needs: release` and `uses:`
  homebrew-bump.yml with `secrets: inherit`.
- homebrew-bump.yml: add `workflow_call` trigger with a `version` input,
  remove the `release: published` trigger (no longer needed), keep
  `workflow_dispatch` for manual recovery, and collapse the version
  determination step to a single `inputs.version` read.

Each token now does exactly what its scope permits:
- GITHUB_TOKEN creates the release on numa (contents: write, default)
- HOMEBREW_TAP_GITHUB_TOKEN pushes to homebrew-tap (unchanged)

The tap update becomes a child job in the release run, so failures are
visible in one place instead of "why didn't the release event fire?"
mysteries.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 23:33:48 +03:00
Razvan Dimescu
1f6bdff8f8 chore: bump version to 0.10.2 2026-04-09 22:59:10 +03:00
Razvan Dimescu
643d6b01e1 fix(linux): consult resolvectl when resolv.conf only shows the stub (#52)
On modern Arch / Ubuntu 22.04+ / Fedora desktops, NetworkManager +
systemd-resolved symlink /etc/resolv.conf to stub-resolv.conf, which
contains only:

  nameserver 127.0.0.53

The real upstream servers (router, ISP, configured DoT providers) live
inside systemd-resolved's per-link state, exposed via 'resolvectl status'.

discover_linux() was parsing /etc/resolv.conf, correctly filtering the
stub address, and then falling through to the Quad9 DoH fallback because
detect_dhcp_dns() is macOS-only and thus a no-op on Linux. Net effect: on a large chunk of
Linux installs, numa silently defaulted to Quad9 instead of the user's
actual DNS — visible in Casey's AUR test banner (#33) as
'Upstream https://9.9.9.9/dns-query' despite his machine having working
router DNS the entire time.

resolvectl_dns_server() already exists — it was introduced for cloud VPC
forwarding-rule discovery and knows how to ask systemd-resolved for the
real active DNS server. This commit wires it into the default-upstream
fallback chain, between the primary resolv.conf parse and the
~/.numa/original-resolv.conf backup.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 22:32:57 +03:00
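The fallback chain this commit wires together can be sketched as a chain of `Option`s. This is an illustrative sketch, not the real code: the function name, parameters, and the exact Quad9 default string are assumptions based on the commit message.

```rust
/// Illustrative sketch of the default-upstream fallback chain described
/// above (names assumed): each step only fires if earlier steps found
/// nothing usable.
/// 1. non-stub nameservers parsed from /etc/resolv.conf
/// 2. the active server systemd-resolved reports via `resolvectl`
/// 3. the saved ~/.numa/original-resolv.conf backup
/// 4. the hardcoded Quad9 DoH default, as a last resort
fn pick_upstream(
    resolv_conf: Option<String>,
    resolvectl: Option<String>,
    backup: Option<String>,
) -> String {
    resolv_conf
        .or(resolvectl)
        .or(backup)
        .unwrap_or_else(|| "https://9.9.9.9/dns-query".to_string())
}

fn main() {
    // The bug: step 2 was missing, so a stub-only resolv.conf (step 1 = None)
    // skipped straight past the user's real DNS to the Quad9 default.
    assert_eq!(
        pick_upstream(None, Some("192.168.1.1".to_string()), None),
        "192.168.1.1"
    );
    assert_eq!(pick_upstream(None, None, None), "https://9.9.9.9/dns-query");
    println!("ok");
}
```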
Razvan Dimescu
17c8e70aa3 fix(ci): skip prepare() in publish-aur metadata container (#51)
Follow-up to #49 and #50. With ownership and quoting fixed, the next run
([24199871832](https://github.com/razvandimescu/numa/actions/runs/24199871832))
reached makepkg and failed with:

  /pkg/PKGBUILD: line 34: cargo: command not found
  ==> ERROR: A failure occurred in prepare().

The publish job only installs 'binutils git sudo' since its sole purpose
is to regenerate .SRCINFO. 'makepkg -od' still runs prepare(), which
calls cargo. The sibling validate job avoids this by passing --noprepare
(and installs rust anyway).

Mirror that pattern: add --noprepare to the metadata-generation invocation.
pkgver() runs before prepare() in makepkg's pipeline, so .SRCINFO still
captures the computed version. Keeps the container minimal (no rust toolchain).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 19:39:28 +03:00
Razvan Dimescu
389ac09907 fix(ci): repair broken quoting in publish-aur docker heredoc (#50)
The docker block runs as '/bin/bash -c "<multi-line script>"'. A comment
inside the script contained embedded double quotes:

  # "makepkg -od" fetches the source first so pkgver() can calculate the version.

The first embedded '"' prematurely closes the outer string. Bash then
parses the remainder into a second argument to 'bash -c' which becomes
$0 inside the container and is silently discarded. Net effect: the
in-container script stops at 'git config --add safe.directory', neither
'makepkg -od' nor 'makepkg --printsrcinfo > .SRCINFO' ever run, and the
host-side 'git add PKGBUILD .SRCINFO' fails with:

  fatal: pathspec '.SRCINFO' did not match any files

This bug was masked by the earlier ownership bug fixed in #49 — once
that permission error was removed, this one surfaced.

Fix: drop the embedded double quotes from the comment.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:55:03 +03:00
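The `bash -c` behavior behind this bug is easy to reproduce from Rust (assuming a `bash` binary on PATH): any argument after the script string becomes `$0` inside the script rather than executed code, which is why the tail of the heredoc was silently discarded.

```rust
use std::process::Command;

// When `bash -c` receives an extra argument after the script string, that
// argument becomes $0 inside the script; it is never executed as commands.
// A stray `"` that closes the outer string early therefore hands the rest
// of the in-container script to bash as $0, where it silently disappears,
// which is exactly the failure mode described in the commit above.
fn main() {
    let out = Command::new("bash")
        .args(["-c", "echo ran:$0", "makepkg -od && more-script-here"])
        .output()
        .expect("bash not found on PATH");
    let stdout = String::from_utf8_lossy(&out.stdout);
    // The "extra" text was consumed as $0, not run as a command.
    assert_eq!(stdout.trim(), "ran:makepkg -od && more-script-here");
    println!("{}", stdout.trim());
}
```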
Razvan Dimescu
5308e9648c fix(ci): reclaim aur-repo ownership after docker chown (#49)
The 'Push to AUR' step failed on run 24195384571 with:
  error: could not lock config file .git/config: Permission denied

Inside the docker block we 'chown -R builduser:builduser /pkg', which
propagates through the bind mount and transfers ownership of aur-repo/
(including .git/) to the container's builduser UID. When control returns
to the runner user, 'git config user.name' can no longer write .git/config
and the step exits 255.

Chown the directory back to the runner's UID/GID before resuming host-side
git operations.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:24:30 +03:00
Casey Labs
819614fa7d [Feature] Add GitHub Action Workflow for Arch Linux AUR Package publishing (#33)
* Feature: add GitHub Actions workflow for publishing Arch Linux AUR package

* Fix issues in Arch Linux AUR publishing process

* Add patch to fix default Arch Linux binary path location issues

* fix: PKGBUILD compatibility with numa v0.10.1, fix QEMU action SHA pin

Three small bug fixes that make this PR mergeable end-to-end against
current main, without changing the package design (still numa-git,
still pushed on every main commit, still tracking HEAD via pkgver()):

1. Simplified prepare() — drop the obsolete sed patching for
   /usr/local/bin/numa. That literal only appears in a comment
   in current main; the actual binary path is determined at
   runtime via std::env::current_exe(). Additionally, numa
   v0.10.1 ships PR #43 which makes numa FHS-compliant on Linux
   out of the box (/var/lib/numa for data dir), so no source
   patching is needed at all on Arch.

2. Fixed package() sed for the systemd unit. The previous sed
   targeted "ExecStart=/usr/local/bin/numa" but numa.service
   actually uses "{{exe_path}}" as a templating placeholder
   that's substituted at runtime by replace_exe_path() when
   `numa install` runs. The sed silently did nothing, and the
   AUR-installed unit file would have a literal "{{exe_path}}"
   that systemd cannot start. Fixed sed:

     sed 's|{{exe_path}}|/usr/bin/numa /etc/numa.toml|g' \
       numa.service > numa.service.patched

3. Fixed broken docker/setup-qemu-action SHA pin in
   publish-aur.yml. The pinned SHA
   6882732593b27c7f95a044d559b586a46371a68e doesn't exist as
   a commit in upstream docker/setup-qemu-action. Verified
   v3.0.0 SHA is 68827325e0b33c7199eb31dd4e31fbe9023e06e3.
   Without this fix the aarch64 validate job would fail to
   load the action at workflow start.

Also refreshed the stale pkgver placeholder in PKGBUILD and
.SRCINFO from 0.9.1.r0.g1234abc to 0.10.1.r0.g0000000 — purely
cosmetic since pkgver() auto-overrides on every makepkg run,
but at least the in-VC value reflects the current era.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: make AUR packaging x86_64-only and stabilize local validation

Turns out Arch Linux doesn't officially support the aarch64 architecture, so we will drop it from this AUR build process.

Changes:

- drop aarch64 from PKGBUILD, .SRCINFO, and AUR validation workflow
- keep AUR process aligned with official Arch Linux x86_64 support
- install rust directly in CI to avoid Arch cargo provider prompts
- fetch sources before running cargo audit, and run the audit inside the
fetched repo
- disable makepkg LTO for this package to avoid Arch packaging link
failures
- mark /etc/numa.toml as a backup file
- add local AUR build scratch directory exclusion to .gitignore

* Add temporary AUR test workflow

* Update github actions checkout workflow version

* remove temporary AUR test workflow

* fix: correct AUR SSH host key fingerprint

The previously pinned ed25519 key was truncated (52 chars) and did not
match the actual aur.archlinux.org host key. Verified via ssh-keyscan.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Razvan Dimescu <ssaricu@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 17:22:38 +03:00
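The `{{exe_path}}` templating that the fixed sed mirrors at package time is, at runtime, a plain string substitution. A minimal sketch, assuming the real `replace_exe_path` has roughly this shape (its actual signature is not shown in the commit):

```rust
// numa.service ships with a `{{exe_path}}` placeholder that `numa install`
// fills in at runtime via replace_exe_path(); the PKGBUILD sed above does
// the same substitution at build time for the AUR-installed unit. This
// sketch assumes a (template, path) -> String signature.
fn replace_exe_path(unit: &str, exe_path: &str) -> String {
    unit.replace("{{exe_path}}", exe_path)
}

fn main() {
    let unit = "[Service]\nExecStart={{exe_path}}\n";
    let patched = replace_exe_path(unit, "/usr/bin/numa /etc/numa.toml");
    // systemd can now start the unit; the literal placeholder is gone.
    assert_eq!(patched, "[Service]\nExecStart=/usr/bin/numa /etc/numa.toml\n");
    println!("{patched}");
}
```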
Razvan Dimescu
fab8b698d8 fix: human-readable advisories for TLS data_dir + port-53 EACCES (#48)
* fix: human-readable advisory when TLS data_dir is not writable

When numa runs as non-root on a system with a privileged default
data_dir (e.g. /usr/local/var/numa on macOS), TLS CA setup fails with
a raw "Permission denied (os error 13)" and HTTPS proxy is silently
disabled. The user sees a cryptic warning with no path forward.

Detect std::io::ErrorKind::PermissionDenied on the tls error, print a
diagnostic naming the data_dir and offering two fixes (install as
system resolver, or point data_dir at a writable path), and keep the
graceful-degradation behavior — DNS resolution and plain-HTTP proxy
continue to work without HTTPS.

All other TLS setup errors fall through to the existing log::warn!.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: port-53 advisory also handles EACCES (non-root privileged bind)

The original port-53 match arm only caught EADDRINUSE, so a fresh
non-root user on macOS/Linux hitting EACCES when trying to bind a
privileged port saw the raw OS error instead of the advisory.

Collapse the scoping helper and the advisory into a single
`try_port53_advisory(bind_addr, &io::Error) -> Option<String>` that
returns the formatted diagnostic when both the port is 53 and the
error kind is one we can speak to (AddrInUse or PermissionDenied),
and `None` otherwise. The two failure modes share one body with a
cause-sentence variant — no duplicated fix text.

Caller becomes a plain if-let: no match guard, no separate is_port_53
helper exposed on the public API. is_port_53 goes back to private.

Unit tests cover all branches: AddrInUse, PermissionDenied, non-53
bind_addr, unrelated ErrorKind, and malformed bind_addr.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: move TLS error classification into tls module

main.rs no longer downcasts a boxed error to figure out whether it's
a permission-denied case. tls::try_data_dir_advisory(&err, &dir)
encapsulates the downcast + kind match and returns Some(advisory) or
None, mirroring system_dns::try_port53_advisory. main.rs becomes a
plain if-let, symmetric with the port-53 path.

Trim the docstrings on both advisory functions: they were narrating
the implementation (errno mapping) instead of stating the contract.

Add unit tests for try_data_dir_advisory covering PermissionDenied,
other io::ErrorKind, and non-io errors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 16:27:08 +03:00
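The advisory helper's contract (Some only when the port is 53 AND the error kind is one it can speak to) might look roughly like this. The diagnostic text here is abbreviated and the parsing details are assumptions; only the Some/None branches follow the commit message.

```rust
use std::io;

/// Sketch of try_port53_advisory as described above: return a formatted
/// diagnostic only when the bind address targets port 53 and the failure
/// is AddrInUse or PermissionDenied; None otherwise (including a malformed
/// bind_addr). Message wording is abbreviated, not the real text.
fn try_port53_advisory(bind_addr: &str, err: &io::Error) -> Option<String> {
    let port: u16 = bind_addr.rsplit(':').next()?.parse().ok()?;
    if port != 53 {
        return None;
    }
    let cause = match err.kind() {
        io::ErrorKind::AddrInUse => "another resolver (e.g. systemd-resolved) already owns port 53",
        io::ErrorKind::PermissionDenied => "binding port 53 requires elevated privileges",
        _ => return None, // unrelated errors surface as-is
    };
    Some(format!(
        "cannot bind {bind_addr}: {cause}\nfix: `sudo numa install`, or set bind_addr to a non-privileged port"
    ))
}

fn main() {
    let in_use = io::Error::new(io::ErrorKind::AddrInUse, "in use");
    let eacces = io::Error::new(io::ErrorKind::PermissionDenied, "denied");
    let other = io::Error::new(io::ErrorKind::NotFound, "nope");
    assert!(try_port53_advisory("0.0.0.0:53", &in_use).is_some());
    assert!(try_port53_advisory("0.0.0.0:53", &eacces).is_some());
    assert!(try_port53_advisory("0.0.0.0:5353", &in_use).is_none()); // non-53 bind
    assert!(try_port53_advisory("0.0.0.0:53", &other).is_none());    // unrelated kind
    assert!(try_port53_advisory("garbage", &in_use).is_none());      // malformed addr
    println!("ok");
}
```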
Razvan Dimescu
a6f23a5ddb fix: advisory + exit(1) when port 53 is already in use (#45) (#47)
* fix: advisory + exit(1) when port 53 is already in use (#45)

Detect AddrInUse on bind, print a human-readable diagnostic explaining
systemd-resolved / Dnscache as the likely cause and offer two concrete
fixes (sudo numa install, or bind_addr on a non-privileged port), then
exit(1) instead of surfacing a raw OS error.

Adds tests/docker/smoke-port53.sh: end-to-end Docker test that
pre-binds port 53 with a Python UDP socket and asserts the advisory +
exit code.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: collapse port53 advisory to single flat path

The per-platform cause sentences were cosmetic — they didn't change
the user's actions (install, or bind_addr on a non-privileged port),
but they introduced duplicated "another process..." strings, a
dead-from-CI branch (is_systemd_resolved_active() == true is never
reached by any test), and a pub visibility bump on
is_systemd_resolved_active for a single caller.

Replace with one flat format! whose cause line mentions both
systemd-resolved and the Windows DNS Client inline. The existing
smoke test now exercises 100% of the function.

is_systemd_resolved_active reverts to private.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 15:03:58 +03:00
Razvan Dimescu
27dfaab360 ci: pass PAT to action-gh-release so release events propagate (#44)
GitHub Actions deliberately does not propagate workflow events triggered
by the default GITHUB_TOKEN — a safety feature against infinite loops.
softprops/action-gh-release falls back to GITHUB_TOKEN when no `token`
is supplied, so the resulting `release: published` event was silently
swallowed and never reached homebrew-bump.yml.

Discovered shipping v0.10.1: tag pushed cleanly, crates.io published
cleanly, GitHub release page created cleanly, but the brew tap never
auto-bumped. Had to trigger homebrew-bump.yml manually via
workflow_dispatch.

Fix: pass HOMEBREW_TAP_GITHUB_TOKEN explicitly. This is already a PAT
(used by homebrew-bump.yml to push cross-repo to razvandimescu/
homebrew-tap), so reusing it keeps the secret surface flat. PAT-authored
release events are the documented escape hatch from the GITHUB_TOKEN
no-propagation rule.

Applies to v0.10.2+. v0.10.1 was bumped manually.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:26:21 +03:00
Razvan Dimescu
b2ed2e6aec chore: bump version to 0.10.1 2026-04-08 18:05:00 +03:00
Razvan Dimescu
79ecb73d87 fix: use FHS-compliant /var/lib/numa as Linux data dir default (#43)
* fix: use FHS-compliant /var/lib/numa as Linux data dir default

numa's default system-wide data directory was hardcoded to
/usr/local/var/numa for all Unix platforms. This is the right path on
macOS (Homebrew prefix convention) but non-FHS on Linux, where Arch /
Fedora / Debian / etc. expect persistent state under /var/lib/<pkg>.
The mismatch was invisible to existing users (numa creates the dir
silently on first run) but immediately surfaces when packaging for a
distro — see PR #33 (community contribution to add an Arch AUR package)
which had to add fragile sed-based path patching at PKGBUILD build time.

The fix moves the path decision into a small helper:

  - daemon_data_dir()        — cfg-gated platform dispatch (linux/macos)
  - resolve_linux_data_dir() — pure function, takes "does X exist?"
                               as parameters, returns the right path

Linux behavior:
  - Fresh install                       → /var/lib/numa (FHS)
  - Upgrading from pre-v0.10.1 install  → /usr/local/var/numa (legacy)
  - Both paths exist                    → /var/lib/numa (FHS wins)

The legacy fallback is critical: existing v0.10.0 Linux users have
their CA cert + services.json under /usr/local/var/numa. Returning
the new path unconditionally would cause CA regeneration on upgrade,
breaking every browser that had trusted the previous CA. The fallback
is checked at startup via std::path::Path::exists, so the upgrade is
seamless and zero-config.

macOS behavior is unchanged — /usr/local/var/numa is still correct
because Homebrew's prefix is /usr/local.

Test coverage:

  - resolve_linux_data_dir is a pure function gated cfg(any(linux,test))
    so the same code path is unit-tested on every platform's CI run.
  - Four tests cover all combinations of (legacy_exists, fhs_exists),
    asserting the migration logic stays correct under future edits.

The default config in numa.toml is also updated to document the new
per-platform default paths.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: end-to-end FHS path verification + simplify cleanup

Two related changes from a /simplify pass and a follow-up testing
finalization:

1. lib.rs cleanup (no behavior change):
   - Drop FHS_LINUX_DATA_DIR and LEGACY_LINUX_DATA_DIR consts. Both
     were used in only 4 places total and the unit tests already
     bypassed them with string literals, so they were over-engineering.
     Inline the strings in daemon_data_dir() and resolve_linux_data_dir().
   - Trim narrating doc/comments on the helper and the test bodies.
     Keep only the non-obvious WHY (the macOS Homebrew note and the
     migration-keeps-legacy rationale).

2. tests/docker/smoke-arch.sh:
   - Cherry-picked the previously-uncommitted Arch compatibility smoke
     test from feat/smoke-arch.
   - Removed the [server] data_dir = "/tmp/numa-smoke" override from
     the test config so the script now exercises the DEFAULT data dir
     code path — which is exactly what the FHS fix touches.
   - Added a path assertion after the dig succeeds: verify that
     /var/lib/numa/ca.pem exists (FHS) and /usr/local/var/numa is
     absent (no accidental dual-creation on a fresh install).

Verified end-to-end on archlinux:latest (Apple Silicon, Rosetta):

  ── building + running numa on archlinux:latest ──
  ── cargo build --release --locked ──
      Finished `release` profile [optimized] target(s) in 24.02s
  ── dig @127.0.0.1 -p 5354 google.com A ──
    142.251.38.206
  ── FHS path check ──
    ✓ CA cert at /var/lib/numa/ca.pem (FHS path)
    ✓ legacy path /usr/local/var/numa absent (fresh install used FHS)
  ── smoke-arch passed ──

This closes the testing gap where the unit tests covered the
path-decision LOGIC in isolation but nothing exercised the live
wiring on a real Linux filesystem.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:00:27 +03:00
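The Linux path decision is small enough to sketch in full. The behavior table (fresh, legacy-upgrade, both) is taken directly from the commit message; the exact signature of the real `resolve_linux_data_dir` is an assumption.

```rust
/// Sketch of the pure path-decision helper described above: existence
/// checks are passed in as booleans so the logic is unit-testable on any
/// platform's CI run, without touching a real filesystem.
fn resolve_linux_data_dir(fhs_exists: bool, legacy_exists: bool) -> &'static str {
    // Only a legacy-only install keeps /usr/local/var/numa: returning the
    // FHS path unconditionally would regenerate the CA on upgrade and
    // break every browser that trusted the previous one.
    if legacy_exists && !fhs_exists {
        "/usr/local/var/numa"
    } else {
        "/var/lib/numa"
    }
}

fn main() {
    assert_eq!(resolve_linux_data_dir(false, false), "/var/lib/numa");      // fresh install -> FHS
    assert_eq!(resolve_linux_data_dir(false, true), "/usr/local/var/numa"); // pre-v0.10.1 upgrade -> legacy
    assert_eq!(resolve_linux_data_dir(true, true), "/var/lib/numa");        // both exist -> FHS wins
    assert_eq!(resolve_linux_data_dir(true, false), "/var/lib/numa");
    println!("ok");
}
```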
Razvan Dimescu
bf5565ac26 fix: macOS use launchctl bootout/bootstrap instead of deprecated load (#42)
The deprecated `launchctl load -w` returns exit code 0 even when it
cannot actually reload a service whose label is already in launchd's
in-memory state. It prints `Load failed: 5: Input/output error` to
stderr but exits 0, so the install path interprets it as success and
continues — silently leaving the running daemon on whatever binary
was first loaded, even though the on-disk plist now points elsewhere.

The consequence: every macOS user running `brew upgrade numa` rewrites
the plist to point at the new binary, but launchctl never actually
loads it. They think they upgraded; they're still running the old
version. Neither #41 (cross-platform CA trust) nor #40 (self-referential
backup) would actually take effect for them until they manually run:

  sudo launchctl bootout system /Library/LaunchDaemons/com.numa.dns.plist
  sudo launchctl bootstrap system /Library/LaunchDaemons/com.numa.dns.plist

The fix uses the modern API symmetrically across all three call sites:

- install_service_macos: bootout (best-effort cleanup, no-op on first
  install) → bootstrap → wait for readiness → configure DNS
- install_service_macos rollback path: bootout instead of `unload`
- uninstall_service_macos: bootout BEFORE remove_file (the modern API
  needs the plist file path as the specifier; doing it after remove
  would leave the service in memory until reboot)

No new tests — this is a shell-call substitution with no logic to
unit-test. Verified manually on macOS: `sudo numa install` no longer
prints `Load failed`, and the daemon is correctly running the binary
the plist points at.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:54:21 +03:00
Razvan Dimescu
679b346246 fix: prevent self-referential DNS backup on re-install (#40)
* fix: prevent self-referential DNS backup on re-install

The install flow previously captured current system DNS servers
verbatim into the backup file. If numa was already installed, current
DNS was 127.0.0.1, so the "backup" recorded 127.0.0.1 as the "original"
— making a subsequent uninstall a no-op self-reference.

Reproduced 2026-04-08 during v0.10.0 brew dogfood: after
`sudo numa uninstall; sudo /opt/homebrew/bin/numa install`,
`sudo numa uninstall` printed `restored DNS for "Wi-Fi" -> 127.0.0.1`
because the brew binary's install step had overwritten the backup with
the already-stub state.

Fix (all three platforms):
- macOS/Windows: if the existing backup already contains at least one
  non-loopback/non-stub upstream, preserve it as-is. If writing a fresh
  backup, filter loopback/stub addresses first so a capture from
  already-numa-managed state isn't self-referential.
- Linux (resolv.conf fallback path): detect numa-managed or all-loopback
  resolv.conf content and skip the file copy in that case; preserve an
  existing useful backup rather than overwriting it. systemd-resolved
  path is unaffected (uses a drop-in, no backup file).

Adds three unit tests for the predicates: macOS HashMap detection,
Windows interface filter, and resolv.conf parsing (real upstream,
self-referential, numa-marker, systemd stub, mixed).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: share iter_nameservers helper and reuse resolv.conf content

Post-review simplifications on the stale-backup fix:

- Extract iter_nameservers(&str) helper used by both parse_resolv_conf
  and resolv_conf_has_real_upstream. Eliminates the duplicated
  line-by-line nameserver parsing (findings from reuse review).
- install_linux: reuse the already-read resolv.conf content via
  std::fs::write instead of a second read via std::fs::copy.
- install_macos / install_windows: flatten the conditional eprintln
  pattern — always print a blank line, conditionally print the save
  message. Equivalent output, less branching.

Net −12 lines. All 130 tests still pass, clippy clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: drop redundant trim before split_whitespace

CI caught `clippy::trim_split_whitespace` on Rust 1.94: `split_whitespace()`
already skips leading/trailing whitespace, so `.trim()` first is redundant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: extract load_backup helper

Remove duplicated read+deserialize boilerplate shared by install_macos
and install_windows. The two call sites each had an identical 4-line
chain of read_to_string().ok().and_then(serde_json::from_str).ok() —
collapse into a single generic helper load_backup<T>().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Revert "refactor: extract load_backup helper"

This reverts commit a54fb99428.

* test: drop windows_backup_filters_loopback

The test inlined the 3-line filter block from install_windows rather
than calling a production helper, so it was testing stdlib Vec::retain
+ is_loopback_or_stub — both already covered elsewhere. Deleting it
removes a test that would silently pass even if install_windows stopped
filtering altogether.

The predicate logic for macOS-shaped backups stays covered by
macos_backup_real_upstream_detection (same inner Vec<String> type).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: add windows_backup_filters_loopback unit test

The PR description mentioned this test but it was missing from the
diff, leaving backup_has_real_upstream_windows untested. Mirrors the
shape of macos_backup_real_upstream_detection: empty map → false,
all-loopback (127.0.0.1, ::1, 0.0.0.0) → false, one real entry
alongside loopback → true.

Also relax the cfg gate on backup_has_real_upstream_windows from
cfg(windows) to cfg(any(windows, test)) so the test compiles
cross-platform, matching how backup_has_real_upstream_macos and
the resolv_conf helpers are gated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:38:37 +03:00
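The loopback/stub filtering that prevents a self-referential backup can be sketched as follows. Helper names mirror those mentioned in the commit, but their exact signatures are assumptions.

```rust
use std::net::IpAddr;

// A backup (or a resolv.conf capture) only counts as useful if it names at
// least one upstream that is neither loopback nor the all-zeros placeholder.
// Capturing 127.0.0.1 from an already-numa-managed system would make a
// later uninstall a no-op self-reference, the bug described above.
fn is_loopback_or_stub(addr: &str) -> bool {
    match addr.parse::<IpAddr>() {
        Ok(ip) => ip.is_loopback() || ip.is_unspecified(),
        Err(_) => true, // unparseable entries cannot serve as an upstream
    }
}

fn resolv_conf_has_real_upstream(content: &str) -> bool {
    content
        .lines()
        .filter_map(|l| {
            let mut parts = l.split_whitespace();
            (parts.next() == Some("nameserver")).then(|| parts.next()).flatten()
        })
        .any(|addr| !is_loopback_or_stub(addr))
}

fn main() {
    assert!(resolv_conf_has_real_upstream("nameserver 192.168.1.1\n"));  // real upstream
    assert!(!resolv_conf_has_real_upstream("nameserver 127.0.0.1\n"));   // self-referential
    assert!(!resolv_conf_has_real_upstream("nameserver 127.0.0.53\n"));  // systemd stub
    assert!(resolv_conf_has_real_upstream("nameserver ::1\nnameserver 9.9.9.9\n")); // mixed
    println!("ok");
}
```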
Razvan Dimescu
039254280b fix: cross-platform CA trust (Arch/Fedora + Windows) (#41)
* fix: cross-platform CA trust (Arch/Fedora + Windows)

Closes #35.

trust_ca_linux now detects which trust store the distro ships and
runs the matching refresh command, instead of hardcoding Debian's
update-ca-certificates. Detection walks a const table in priority
order, picking the first whose anchor dir exists:

  - debian: /usr/local/share/ca-certificates  (update-ca-certificates)
  - pki:    /etc/pki/ca-trust/source/anchors  (update-ca-trust extract)
  - p11kit: /etc/ca-certificates/trust-source/anchors (trust extract-compat)

Falls back with a clear error listing every backend tried.

Adds Windows support via certutil -addstore Root / -delstore Root,
removing the silent CA-trust gap on numa install (previously the
service installed but the trust step quietly errored, leaving every
HTTPS .numa request throwing browser warnings).

Refactor: trust_ca and untrust_ca are now thin dispatchers calling
per-platform helpers. CA_COMMON_NAME and CA_FILE_NAME are centralized
in tls.rs and reused from system_dns.rs and api.rs. untrust_ca_linux
no longer pre-checks file existence (TOCTOU) and skips the refresh
when no file was actually removed.

Test: tests/docker/install-trust.sh runs the install/uninstall
contract against debian:stable, fedora:latest, and archlinux:latest
in containers, asserting the cert lands in (and is removed from)
the system bundle. All three pass locally.

README notes the Firefox/NSS limitation (separate trust store).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: rustfmt fixes for trust_ca_linux helpers

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: macOS CA trust contract test (manual)

Adds tests/manual/install-trust-macos.sh — a sudo bash script that
mirrors trust_ca_macos / untrust_ca_macos against a fixture cert with
a unique CN. Designed to coexist with a running production numa:

- Refuses to run if a real "Numa Local CA" is already in System.keychain
  (fail-closed protection for dogfood installs)
- Uses a unique CN ("Numa Local CA Test <pid-timestamp>") so the test
  cert can never collide with production
- Mirrors the by-hash deletion loop from untrust_ca_macos
- Trap-cleanup on success or interrupt

Lives under tests/manual/ to signal "host-mutating, dev-only" — distinct
from tests/docker/install-trust.sh which is hermetic.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: relax bail-out in macOS trust test (safe alongside production)

The bail-out was overly defensive. The test cert uses a unique CN
("Numa Local CA Test <pid-ts>") that is strictly longer than the
production CN, so `security find-certificate -c $TEST_CN` cannot
substring-match the production cert. All deletes are by-hash, which
can only target the test cert's specific hash. Coexistence is
provably safe; document the reasoning in the header comment block
and replace the refusal with an informational notice.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 15:18:01 +03:00
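The priority-ordered detection table can be sketched like this. The anchor directories and refresh commands come from the commit message; the function shape, with an injected `exists` predicate for testability, is an assumption.

```rust
/// Sketch of the trust-store table described above: (name, anchor dir,
/// refresh command), walked in priority order, first existing anchor dir
/// wins. The `exists` check is injected so the walk is testable without a
/// real filesystem; the real code then shells out to the refresh command.
const TRUST_BACKENDS: &[(&str, &str, &str)] = &[
    ("debian", "/usr/local/share/ca-certificates", "update-ca-certificates"),
    ("pki", "/etc/pki/ca-trust/source/anchors", "update-ca-trust extract"),
    ("p11kit", "/etc/ca-certificates/trust-source/anchors", "trust extract-compat"),
];

fn detect_trust_backend(exists: impl Fn(&str) -> bool) -> Option<(&'static str, &'static str)> {
    for &(name, anchor_dir, refresh) in TRUST_BACKENDS {
        if exists(anchor_dir) {
            return Some((name, refresh));
        }
    }
    None // caller errors out, listing every backend it tried
}

fn main() {
    // Pretend only Fedora's pki anchors directory exists.
    let hit = detect_trust_backend(|d| d == "/etc/pki/ca-trust/source/anchors");
    assert_eq!(hit, Some(("pki", "update-ca-trust extract")));
    // Unknown distro: nothing matches, caller reports all backends tried.
    assert_eq!(detect_trust_backend(|_| false), None);
    println!("ok");
}
```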
Razvan Dimescu
1b2f682026 ci: auto-bump homebrew formula on release (#39)
Add a workflow that runs on release:published (and via manual
workflow_dispatch), fetches sha256 checksums from the published release
assets, and rewrites razvandimescu/homebrew-tap/numa.rb in place:
version, URL paths, and sha256 lines after each url. The formula's
existing on_macos/on_linux structure is preserved.

Uses HOMEBREW_TAP_GITHUB_TOKEN (already set as a repo secret) to push
directly to the tap's main branch.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 03:47:43 +03:00
51 changed files with 4663 additions and 517 deletions

.SRCINFO Normal file

@@ -0,0 +1,19 @@
pkgbase = numa-git
	pkgdesc = Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS
	pkgver = 0.10.1.r0.g0000000
	pkgrel = 1
	url = https://github.com/razvandimescu/numa
	arch = x86_64
	license = MIT
	options = !lto
	makedepends = cargo
	makedepends = git
	depends = gcc-libs
	depends = glibc
	provides = numa
	conflicts = numa
	backup = etc/numa.toml
	source = numa::git+https://github.com/razvandimescu/numa.git
	sha256sums = SKIP

pkgname = numa-git

.github/dependabot.yml vendored Normal file

@@ -0,0 +1,34 @@
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "monthly"
    commit-message:
      prefix: "chore(deps)"
    groups:
      minor-and-patch:
        patterns: ["*"]
        update-types: ["minor", "patch"]
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"
    commit-message:
      prefix: "chore(deps)"
    groups:
      minor-and-patch:
        patterns: ["*"]
        update-types: ["minor", "patch"]
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "monthly"
    commit-message:
      prefix: "chore(deps)"
    groups:
      minor-and-patch:
        patterns: ["*"]
        update-types: ["minor", "patch"]


@@ -13,7 +13,7 @@ jobs:
   check:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
       - uses: dtolnay/rust-toolchain@stable
         with:
           components: rustfmt, clippy
@@ -30,7 +30,7 @@ jobs:
   check-macos:
     runs-on: macos-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
       - uses: dtolnay/rust-toolchain@stable
       - uses: Swatinem/rust-cache@v2
       - name: clippy
@@ -41,7 +41,7 @@ jobs:
   check-windows:
     runs-on: windows-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
       - uses: dtolnay/rust-toolchain@stable
       - uses: Swatinem/rust-cache@v2
       - name: build
@@ -51,7 +51,7 @@ jobs:
       - name: test
         run: cargo test
       - name: Upload binary
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v7
         with:
           name: numa-windows-x86_64
           path: target/debug/numa.exe

.github/workflows/homebrew-bump.yml vendored Normal file

@@ -0,0 +1,77 @@
name: Bump Homebrew Tap
on:
  workflow_call:
    inputs:
      version:
        description: 'Version to bump (e.g. 0.10.0 or v0.10.0)'
        type: string
        required: true
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to bump (e.g. 0.10.0 or v0.10.0)'
        required: true
permissions:
  contents: read
jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Determine version
        id: ver
        env:
          INPUT_VERSION: ${{ inputs.version }}
        run: |
          V="${INPUT_VERSION#v}"
          echo "version=$V" >> "$GITHUB_OUTPUT"
      - name: Fetch sha256 checksums from release assets
        id: shas
        env:
          V: ${{ steps.ver.outputs.version }}
        run: |
          set -euo pipefail
          base="https://github.com/razvandimescu/numa/releases/download/v${V}"
          for t in macos-aarch64 macos-x86_64 linux-aarch64 linux-x86_64; do
            sha=$(curl -fsSL "${base}/numa-${t}.tar.gz.sha256" | awk '{print $1}')
            if [ -z "$sha" ]; then
              echo "ERROR: failed to fetch sha256 for $t" >&2
              exit 1
            fi
            key=$(echo "$t" | tr '[:lower:]-' '[:upper:]_')
            echo "SHA_${key}=${sha}" >> "$GITHUB_ENV"
          done
      - name: Clone homebrew-tap
        env:
          HOMEBREW_TAP_GITHUB_TOKEN: ${{ secrets.HOMEBREW_TAP_GITHUB_TOKEN }}
        run: |
          git clone "https://x-access-token:${HOMEBREW_TAP_GITHUB_TOKEN}@github.com/razvandimescu/homebrew-tap.git" tap
      - name: Update formula
        env:
          VERSION: ${{ steps.ver.outputs.version }}
        run: |
          python3 scripts/update-homebrew-formula.py tap/numa.rb
          echo "--- updated numa.rb ---"
          cat tap/numa.rb
      - name: Commit and push
        working-directory: tap
        env:
          V: ${{ steps.ver.outputs.version }}
        run: |
          if git diff --quiet; then
            echo "numa.rb already at v${V}, nothing to commit"
            exit 0
          fi
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add numa.rb
          git commit -m "chore: bump numa to v${V}"
          git push origin main

.github/workflows/publish-aur.yml vendored Normal file

@@ -0,0 +1,159 @@
# `publish-aur.yml` - Arch Linux AUR Package Workflow
# --------------------
# This workflow automates the validation and publishing of the 'numa-git' package to the
# Arch User Repository (AUR). The AUR is a community-driven repository for Arch Linux users.
#
# Workflow Overview:
# 1. Validate: Builds and tests the package for Arch Linux x86_64 using a clean
# Arch Linux container.
# 2. Audit: Checks Rust dependencies for known security vulnerabilities using
# 'cargo-audit'.
# 3. Publish: If on the 'main' branch, it pushes the updated PKGBUILD and
# .SRCINFO to the AUR.
#
# Security Best Practices:
# - SHA Pinning: All GitHub Actions are pinned to a full-length commit SHA (e.g., v6.0.2 @ SHA)
# to ensure the code is immutable and protects against supply-chain attacks where a tag
# might be maliciously moved to a compromised commit.
# - SSH Hygiene: Uses ssh-agent to keep the private key in memory rather than on disk.
# - Audit: Runs 'cargo audit' to prevent publishing known vulnerable dependencies.
name: Publish - Arch Linux AUR Package

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read

jobs:
  # The 'validate' job ensures that the PKGBUILD is correct and the software builds/tests
  # successfully on Arch Linux before we attempt to publish it.
  validate:
    name: Validate PKGBUILD (${{ matrix.arch }})
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        arch: [x86_64]
    steps:
      - name: Checkout code
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - name: Build and Test Package
        timeout-minutes: 60
        env:
          AUR_PKGNAME: ${{ secrets.AUR_PACKAGE_NAME }}
        run: |
          # We use a temporary directory to avoid Docker permission issues with the workspace.
          mkdir -p build-dir
          cp PKGBUILD build-dir/
          docker run --rm -v $PWD/build-dir:/pkg -w /pkg archlinux:latest /bin/bash -c "
            # ARCH LINUX SECURITY REQUIREMENT:
            # 'makepkg' (the tool that builds Arch packages) refuses to run as root for safety.
            # We must create a standard user and give them sudo access.
            # Install build-time dependencies.
            # 'base-devel' includes essential tools like gcc, make, and binutils.
            # Install 'rust' directly to avoid the interactive virtual-package
            # prompt for 'cargo' on current Arch images.
            pacman -Syu --noconfirm --needed base-devel rust git sudo cargo-audit
            useradd -m builduser
            chown -R builduser:builduser /pkg
            # Allow the build user to install dependencies during the build process.
            echo 'builduser ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/builduser
            # Fetch the source tree first so pkgver() and cargo-audit have a
            # real Cargo.lock to inspect.
            sudo -u builduser makepkg -o --nobuild --nocheck --nodeps --noprepare
            # SECURITY AUDIT:
            # Fail early if any dependencies have known security vulnerabilities.
            sudo -u builduser sh -lc 'cd /pkg/src/numa && cargo audit'
            # BUILD & TEST:
            # 'makepkg -s' will:
            #   1. Download source files (cloning this repo)
            #   2. Run prepare(), build(), and check() (running cargo test)
            #   3. Create the final .pkg.tar.zst package
            sudo -u builduser makepkg -s --noconfirm
          "

  # The 'publish' job updates the AUR repository with our latest PKGBUILD and .SRCINFO.
  publish:
    name: Publish to AUR
    needs: validate
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      # Securely configure SSH for AUR access.
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh
          # Official AUR Ed25519 fingerprint (prevents man-in-the-middle attacks).
          echo "aur.archlinux.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEuBKrPzbawxA/k2g6NcyV5jmqwJ2s+zpgZGZ7tpLIcN" >> ~/.ssh/known_hosts
          # Use ssh-agent to keep the private key in memory rather than writing it to disk.
          eval $(ssh-agent -s)
          echo "${{ secrets.AUR_SSH_PRIVATE_KEY }}" | tr -d '\r' | ssh-add -
          # Export the agent socket so subsequent 'git' commands can use it.
          echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" >> $GITHUB_ENV
          echo "SSH_AGENT_PID=$SSH_AGENT_PID" >> $GITHUB_ENV
      - name: Push to AUR
        env:
          AUR_PKGNAME: ${{ secrets.AUR_PACKAGE_NAME }}
          AUR_EMAIL: ${{ secrets.AUR_EMAIL }}
          AUR_USER: ${{ secrets.AUR_USERNAME }}
        run: |
          # AUR repos are managed via Git. Each package has its own repo at:
          #   ssh://aur@aur.archlinux.org/<package-name>.git
          git clone ssh://aur@aur.archlinux.org/$AUR_PKGNAME.git aur-repo
          cp PKGBUILD aur-repo/
          cd aur-repo
          # METADATA GENERATION:
          # '.SRCINFO' is a machine-readable version of the PKGBUILD.
          # We must run this as a non-root user ('builduser') inside the container.
          docker run --rm -v $(pwd):/pkg archlinux:latest /bin/bash -c "
            pacman -Syu --noconfirm --needed binutils git sudo
            useradd -m builduser
            chown -R builduser:builduser /pkg
            cd /pkg
            sudo -u builduser git config --global --add safe.directory '*'
            # makepkg -od fetches the source first so pkgver() can calculate the version.
            # --noprepare skips the prepare() function, which invokes cargo and would
            # otherwise require a full Rust toolchain in this metadata-only container.
            # pkgver() runs before prepare(), so .SRCINFO still gets the correct version.
            sudo -u builduser makepkg -od --noprepare && sudo -u builduser makepkg --printsrcinfo > .SRCINFO
          "
          # Reclaim ownership: the in-container 'chown -R builduser:builduser /pkg'
          # propagates through the bind mount, leaving .git/ owned by the container's
          # builduser UID. Without this, subsequent 'git config' on the host fails with
          # "could not lock config file .git/config: Permission denied".
          sudo chown -R "$(id -u):$(id -g)" .
          # Set the commit identity using secrets for security and auditability.
          git config user.name "$AUR_USER"
          git config user.email "$AUR_EMAIL"
          # Stage and commit both the human-readable PKGBUILD and machine-readable .SRCINFO.
          git add PKGBUILD .SRCINFO
          if ! git diff --cached --quiet; then
            git commit -m "chore: update PKGBUILD to ${{ github.sha }}"
            git push origin master
          else
            echo "No changes to commit (metadata and PKGBUILD are already up-to-date)."
          fi
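The SHA-pinning practice described in the workflow's header comments can be checked mechanically. A hypothetical helper (mine, not part of this repository) that flags any `uses:` reference not pinned to a full-length 40-hex commit SHA:

```python
# Illustrative checker, assuming the pinning convention described above:
# every external action reference should end in "@<40-hex-sha>".
import re

USES_RE = re.compile(r'^\s*(?:-\s+)?uses:\s*([^\s#]+)')
SHA_RE = re.compile(r'@[0-9a-f]{40}$')

def unpinned_actions(workflow_text: str) -> list:
    """Return action references that are not pinned to a full commit SHA."""
    bad = []
    for line in workflow_text.splitlines():
        m = USES_RE.match(line)
        if not m:
            continue
        ref = m.group(1)
        # Local reusable-workflow calls (./.github/...) have no @ref to pin.
        if ref.startswith("./"):
            continue
        if not SHA_RE.search(ref):
            bad.append(ref)
    return bad
```

Run against the file above it returns an empty list; run against a tag-pinned workflow (e.g. `uses: actions/checkout@v6`) it reports the movable reference.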


@@ -31,7 +31,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
       - name: Install Rust
         uses: dtolnay/rust-toolchain@stable
@@ -70,7 +70,7 @@ jobs:
           (Get-FileHash "${{ matrix.name }}.zip" -Algorithm SHA256).Hash.ToLower() + " ${{ matrix.name }}.zip" | Out-File "${{ matrix.name }}.zip.sha256" -Encoding ascii
       - name: Upload artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v7
        with:
          name: ${{ matrix.name }}
          path: |
@@ -82,7 +82,7 @@ jobs:
   publish:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
       - name: Install Rust
         uses: dtolnay/rust-toolchain@stable
@@ -96,7 +96,7 @@ jobs:
     needs: [build, publish]
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/download-artifact@v4
+      - uses: actions/download-artifact@v8
        with:
          merge-multiple: true
@@ -108,3 +108,10 @@ jobs:
           *.tar.gz
           *.zip
           *.sha256
+  bump-homebrew:
+    needs: release
+    uses: ./.github/workflows/homebrew-bump.yml
+    with:
+      version: ${{ github.ref_name }}
+    secrets: inherit


@@ -30,18 +30,18 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v6
       - name: Install pandoc
-        run: sudo apt-get install -y pandoc
+        uses: pandoc/actions/setup@v1
       - name: Generate blog HTML
         run: make blog
       - name: Setup Pages
-        uses: actions/configure-pages@v5
+        uses: actions/configure-pages@v6
       - name: Upload artifact
-        uses: actions/upload-pages-artifact@v3
+        uses: actions/upload-pages-artifact@v4
        with:
          # Upload entire repository
          path: './site'
       - name: Deploy to GitHub Pages
         id: deployment
-        uses: actions/deploy-pages@v4
+        uses: actions/deploy-pages@v5

.gitignore

@@ -1,4 +1,6 @@
 /target
+/build-dir
 CLAUDE.md
 docs/
 site/blog/posts/
+ios/

Cargo.lock

@@ -17,6 +17,15 @@ dependencies = [
  "memchr",
 ]
 
+[[package]]
+name = "alloca"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e5a7d05ea6aea7e9e64d25b9156ba2fee3fdd659e34e41063cd2fc7cd020d7f4"
+dependencies = [
+ "cc",
+]
+
 [[package]]
 name = "anes"
 version = "0.1.6"
@@ -84,9 +93,9 @@ dependencies = [
 [[package]]
 name = "asn1-rs"
-version = "0.6.2"
+version = "0.7.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5493c3bedbacf7fd7382c6346bbd66687d12bbaad3a89a2d2c303ee6cf20b048"
+checksum = "56624a96882bb8c26d61312ae18cb45868e5a9992ea73c58e45c3101e56a1e60"
 dependencies = [
  "asn1-rs-derive",
  "asn1-rs-impl",
@@ -94,15 +103,15 @@ dependencies = [
  "nom",
  "num-traits",
  "rusticata-macros",
- "thiserror 1.0.69",
+ "thiserror",
  "time",
 ]
 
 [[package]]
 name = "asn1-rs-derive"
-version = "0.5.1"
+version = "0.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "965c2d33e53cb6b267e148a4cb0760bc01f4904c1cd4bb4002a085bb016d1490"
+checksum = "3109e49b1e4909e9db6515a30c633684d68cdeaa252f215214cb4fa1a5bfee2c"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -368,25 +377,24 @@ dependencies = [
 [[package]]
 name = "criterion"
-version = "0.5.1"
+version = "0.8.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f2b12d017a929603d80db1831cd3a24082f8137ce19c69e6447f54f5fc8d692f"
+checksum = "950046b2aa2492f9a536f5f4f9a3de7b9e2476e575e05bd6c333371add4d98f3"
 dependencies = [
+ "alloca",
  "anes",
  "cast",
  "ciborium",
  "clap",
  "criterion-plot",
- "is-terminal",
  "itertools",
  "num-traits",
- "once_cell",
  "oorandom",
+ "page_size",
  "plotters",
  "rayon",
  "regex",
  "serde",
- "serde_derive",
  "serde_json",
  "tinytemplate",
  "walkdir",
@@ -394,9 +402,9 @@ dependencies = [
 [[package]]
 name = "criterion-plot"
-version = "0.5.0"
+version = "0.8.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6b50826342786a51a89e2da3a28f1c32b06e387201bc2d19791f622c673706b1"
+checksum = "d8d80a2f4f5b554395e47b5d8305bc3d27813bacb73493eb1001e8f76dae29ea"
 dependencies = [
  "cast",
  "itertools",
@@ -441,9 +449,9 @@ checksum = "d7a1e2f27636f116493b8b860f5546edb47c8d8f8ea73e1d2a20be88e28d1fea"
 [[package]]
 name = "der-parser"
-version = "9.0.0"
+version = "10.0.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5cd0a5c643689626bec213c4d8bd4d96acc8ffdb4ad4bb6bc16abf27d5f4b553"
+checksum = "07da5016415d5a3c4dd39b11ed26f915f52fc4e0dc197d87908bc916e51bc1a6"
 dependencies = [
  "asn1-rs",
  "displaydoc",
@@ -514,6 +522,16 @@ version = "1.0.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
 
+[[package]]
+name = "errno"
+version = "0.3.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
+dependencies = [
+ "libc",
+ "windows-sys 0.61.2",
+]
+
 [[package]]
 name = "find-msvc-tools"
 version = "0.1.9"
@@ -702,12 +720,6 @@ version = "0.16.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
 
-[[package]]
-name = "hermit-abi"
-version = "0.5.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fc0fef456e4baa96da950455cd02c081ca953b141298e41db3fc7e36b1da849c"
-
 [[package]]
 name = "http"
 version = "1.4.0"
@@ -810,7 +822,7 @@ dependencies = [
  "libc",
  "percent-encoding",
  "pin-project-lite",
- "socket2 0.6.3",
+ "socket2",
  "tokio",
  "tower-service",
  "tracing",
@@ -944,17 +956,6 @@ dependencies = [
  "serde",
 ]
 
-[[package]]
-name = "is-terminal"
-version = "0.4.17"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3640c1c38b8e4e43584d8df18be5fc6b0aa314ce6ebf51b53313d4306cca8e46"
-dependencies = [
- "hermit-abi",
- "libc",
- "windows-sys 0.61.2",
-]
-
 [[package]]
 name = "is_terminal_polyfill"
 version = "1.70.2"
@@ -963,9 +964,9 @@ checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
 [[package]]
 name = "itertools"
-version = "0.10.5"
+version = "0.13.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b0fd2260e829bddf4cb6ea802289de2f86d6a7a690192fbe91b3f46e0f2c8473"
+checksum = "413ee7dfc52ee1a4949ceeb7dbc8a33f2d6c088194d9f922fb8318faf1f01186"
 dependencies = [
  "either",
 ]
@@ -1143,7 +1144,7 @@ dependencies = [
 [[package]]
 name = "numa"
-version = "0.10.0"
+version = "0.12.0"
 dependencies = [
  "arc-swap",
  "axum",
@@ -1155,6 +1156,7 @@ dependencies = [
  "hyper",
  "hyper-util",
  "log",
+ "qrcode",
  "rcgen",
  "reqwest",
  "ring",
@@ -1162,7 +1164,7 @@ dependencies = [
  "rustls-pemfile",
  "serde",
  "serde_json",
- "socket2 0.5.10",
+ "socket2",
  "time",
  "tokio",
  "tokio-rustls",
@@ -1172,9 +1174,9 @@ dependencies = [
 [[package]]
 name = "oid-registry"
-version = "0.7.1"
+version = "0.8.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a8d8034d9489cdaf79228eb9f6a3b8d7bb32ba00d6645ebd48eef4077ceb5bd9"
+checksum = "12f40cff3dde1b6087cc5d5f5d4d65712f34016a03ed60e9c08dcc392736b5b7"
 dependencies = [
  "asn1-rs",
 ]
@@ -1197,6 +1199,16 @@ version = "11.1.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "d6790f58c7ff633d8771f42965289203411a5e5c68388703c06e14f24770b41e"
 
+[[package]]
+name = "page_size"
+version = "0.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "30d5b2194ed13191c1999ae0704b7839fb18384fa22e49b57eeaa97d79ce40da"
+dependencies = [
+ "libc",
+ "winapi",
+]
+
 [[package]]
 name = "pem"
 version = "3.0.6"
@@ -1301,6 +1313,12 @@ dependencies = [
  "unicode-ident",
 ]
 
+[[package]]
+name = "qrcode"
+version = "0.14.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d68782463e408eb1e668cf6152704bd856c78c5b6417adaee3203d8f4c1fc9ec"
+
 [[package]]
 name = "quinn"
 version = "0.11.9"
@@ -1314,8 +1332,8 @@ dependencies = [
  "quinn-udp",
  "rustc-hash",
  "rustls",
- "socket2 0.6.3",
- "thiserror 2.0.18",
+ "socket2",
+ "thiserror",
  "tokio",
  "tracing",
  "web-time",
@@ -1336,7 +1354,7 @@ dependencies = [
  "rustls",
  "rustls-pki-types",
  "slab",
- "thiserror 2.0.18",
+ "thiserror",
  "tinyvec",
  "tracing",
  "web-time",
@@ -1351,7 +1369,7 @@ dependencies = [
  "cfg_aliases",
  "libc",
  "once_cell",
- "socket2 0.6.3",
+ "socket2",
  "tracing",
  "windows-sys 0.60.2",
 ]
@@ -1422,9 +1440,9 @@ dependencies = [
 [[package]]
 name = "rcgen"
-version = "0.13.2"
+version = "0.14.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "75e669e5202259b5314d1ea5397316ad400819437857b90861765f24c4cf80a2"
+checksum = "10b99e0098aa4082912d4c649628623db6aba77335e4f4569ff5083a6448b32e"
 dependencies = [
  "pem",
  "ring",
@@ -1655,11 +1673,11 @@ dependencies = [
 [[package]]
 name = "serde_spanned"
-version = "0.6.9"
+version = "1.1.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bf41e0cfaf7226dca15e8197172c295a782857fcb97fad1808a166870dee75a3"
+checksum = "6662b5879511e06e8999a8a235d848113e942c9124f211511b16466ee2995f26"
 dependencies = [
- "serde",
+ "serde_core",
 ]
 
 [[package]]
@@ -1680,6 +1698,16 @@ version = "1.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
 
+[[package]]
+name = "signal-hook-registry"
+version = "1.4.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c4db69cba1110affc0e9f7bcd48bbf87b3f4fc7c61fc9155afd4c469eb3d6c1b"
+dependencies = [
+ "errno",
+ "libc",
+]
+
 [[package]]
 name = "simd-adler32"
 version = "0.3.9"
@@ -1698,16 +1726,6 @@ version = "1.15.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03"
 
-[[package]]
-name = "socket2"
-version = "0.5.10"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e22376abed350d73dd1cd119b57ffccad95b4e585a7cda43e286245ce23c0678"
-dependencies = [
- "libc",
- "windows-sys 0.52.0",
-]
-
 [[package]]
 name = "socket2"
 version = "0.6.3"
@@ -1761,33 +1779,13 @@ dependencies = [
  "syn",
 ]
 
-[[package]]
-name = "thiserror"
-version = "1.0.69"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52"
-dependencies = [
- "thiserror-impl 1.0.69",
-]
-
 [[package]]
 name = "thiserror"
 version = "2.0.18"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "4288b5bcbc7920c07a1149a35cf9590a2aa808e0bc1eafaade0b80947865fbc4"
 dependencies = [
- "thiserror-impl 2.0.18",
+ "thiserror-impl",
 ]
 
-[[package]]
-name = "thiserror-impl"
-version = "1.0.69"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1"
-dependencies = [
- "proc-macro2",
- "quote",
- "syn",
-]
-
 [[package]]
@@ -1877,7 +1875,8 @@ dependencies = [
  "libc",
  "mio",
  "pin-project-lite",
- "socket2 0.6.3",
+ "signal-hook-registry",
+ "socket2",
  "tokio-macros",
  "windows-sys 0.61.2",
 ]
@@ -1918,44 +1917,42 @@ dependencies = [
 [[package]]
 name = "toml"
-version = "0.8.23"
+version = "1.1.2+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dc1beb996b9d83529a9e75c17a1686767d148d70663143c7854d8b4a09ced362"
-dependencies = [
- "serde",
- "serde_spanned",
- "toml_datetime",
- "toml_edit",
-]
-
-[[package]]
-name = "toml_datetime"
-version = "0.6.11"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "22cddaf88f4fbc13c51aebbf5f8eceb5c7c5a9da2ac40a13519eb5b0a0e8f11c"
-dependencies = [
- "serde",
-]
-
-[[package]]
-name = "toml_edit"
-version = "0.22.27"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "41fe8c660ae4257887cf66394862d21dbca4a6ddd26f04a3560410406a2f819a"
+checksum = "81f3d15e84cbcd896376e6730314d59fb5a87f31e4b038454184435cd57defee"
 dependencies = [
  "indexmap",
- "serde",
+ "serde_core",
  "serde_spanned",
  "toml_datetime",
- "toml_write",
+ "toml_parser",
+ "toml_writer",
  "winnow",
 ]
 
 [[package]]
-name = "toml_write"
-version = "0.1.2"
+name = "toml_datetime"
+version = "1.1.1+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5d99f8c9a7727884afe522e9bd5edbfc91a3312b36a77b5fb8926e4c31a41801"
+checksum = "3165f65f62e28e0115a00b2ebdd37eb6f3b641855f9d636d3cd4103767159ad7"
+dependencies = [
+ "serde_core",
+]
+
+[[package]]
+name = "toml_parser"
+version = "1.1.2+spec-1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a2abe9b86193656635d2411dc43050282ca48aa31c2451210f4202550afb7526"
+dependencies = [
+ "winnow",
+]
+
+[[package]]
+name = "toml_writer"
+version = "1.1.1+spec-1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "756daf9b1013ebe47a8776667b466417e2d4c5679d441c26230efd9ef78692db"
 
 [[package]]
 name = "tower"
@@ -2188,6 +2185,22 @@ dependencies = [
  "rustls-pki-types",
 ]
 
+[[package]]
+name = "winapi"
+version = "0.3.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
+dependencies = [
+ "winapi-i686-pc-windows-gnu",
+ "winapi-x86_64-pc-windows-gnu",
+]
+
+[[package]]
+name = "winapi-i686-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
+
 [[package]]
 name = "winapi-util"
 version = "0.1.11"
@@ -2197,6 +2210,12 @@ dependencies = [
  "windows-sys 0.61.2",
 ]
 
+[[package]]
+name = "winapi-x86_64-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
+
 [[package]]
 name = "windows-link"
 version = "0.2.1"
@@ -2361,12 +2380,9 @@ checksum = "d6bbff5f0aada427a1e5a6da5f1f98158182f26556f345ac9e04d36d0ebed650"
 [[package]]
 name = "winnow"
-version = "0.7.15"
+version = "1.0.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "df79d97927682d2fd8adb29682d1140b343be4ac0f08fd68b7765d9c059d3945"
-dependencies = [
- "memchr",
-]
+checksum = "09dac053f1cd375980747450bfc7250c264eaae0583872e845c0c7cd578872b5"
 
 [[package]]
 name = "wit-bindgen"
@@ -2382,9 +2398,9 @@ checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
 [[package]]
 name = "x509-parser"
-version = "0.16.0"
+version = "0.18.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fcbc162f30700d6f3f82a24bf7cc62ffe7caea42c0b2cba8bf7f3ae50cf51f69"
+checksum = "d43b0f71ce057da06bc0851b23ee24f3f86190b07203dd8f567d0b706a185202"
 dependencies = [
  "asn1-rs",
  "data-encoding",
@@ -2394,7 +2410,7 @@ dependencies = [
  "oid-registry",
  "ring",
  "rusticata-macros",
- "thiserror 1.0.69",
+ "thiserror",
  "time",
 ]


@@ -1,6 +1,6 @@
 [package]
 name = "numa"
-version = "0.10.0"
+version = "0.12.0"
 authors = ["razvandimescu <razvan@dimescu.com>"]
 edition = "2021"
 description = "Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS"
@@ -10,11 +10,11 @@ keywords = ["dns", "dns-server", "ad-blocking", "reverse-proxy", "developer-tool
 categories = ["network-programming", "development-tools"]
 
 [dependencies]
-tokio = { version = "1", features = ["rt-multi-thread", "macros", "net", "time", "sync"] }
+tokio = { version = "1", features = ["rt-multi-thread", "macros", "net", "time", "sync", "signal"] }
 axum = "0.8"
 serde = { version = "1", features = ["derive"] }
 serde_json = "1"
-toml = "0.8"
+toml = "1.1"
 log = "0.4"
 env_logger = "0.11"
 reqwest = { version = "0.12", features = ["rustls-tls", "gzip", "http2"], default-features = false }
@@ -22,17 +22,18 @@ hyper = { version = "1", features = ["client", "http1", "server"] }
 hyper-util = { version = "0.1", features = ["client-legacy", "http1", "tokio"] }
 http-body-util = "0.1"
 futures = "0.3"
-socket2 = { version = "0.5", features = ["all"] }
+socket2 = { version = "0.6", features = ["all"] }
-rcgen = { version = "0.13", features = ["pem", "x509-parser"] }
+rcgen = { version = "0.14", features = ["pem", "x509-parser"] }
 time = "0.3"
 rustls = "0.23"
 tokio-rustls = "0.26"
 arc-swap = "1"
 ring = "0.17"
 rustls-pemfile = "2.2.0"
+qrcode = { version = "0.14", default-features = false, features = ["svg"] }
 
 [dev-dependencies]
-criterion = { version = "0.5", features = ["html_reports"] }
+criterion = { version = "0.8", features = ["html_reports"] }
 tower = { version = "0.5", features = ["util"] }
 http = "1"


@@ -1,4 +1,4 @@
-FROM rust:1.88-alpine AS builder
+FROM rust:1.94-alpine AS builder
 RUN apk add --no-cache musl-dev cmake make perl
 WORKDIR /app
 COPY Cargo.toml Cargo.lock ./
@@ -11,7 +11,7 @@ COPY numa.toml com.numa.dns.plist numa.service ./
 RUN touch src/main.rs src/lib.rs
 RUN cargo build --release
 
-FROM alpine:3.20
+FROM alpine:3.23
 COPY --from=builder /app/target/release/numa /usr/local/bin/numa
 EXPOSE 53/udp 80/tcp 443/tcp 853/tcp 5380/tcp
 ENTRYPOINT ["numa"]

PKGBUILD (new file)

@@ -0,0 +1,62 @@
# Maintainer: razvandimescu <razvan@dimescu.com>
pkgname=numa-git
_pkgname=numa
pkgver=0.10.1.r0.g0000000 # Placeholder — pkgver() rewrites this on each makepkg run
pkgrel=1
pkgdesc="Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS"
arch=('x86_64')
url="https://github.com/razvandimescu/numa"
license=('MIT')
options=('!lto')
depends=('gcc-libs' 'glibc')
makedepends=('cargo' 'git')
provides=("$_pkgname")
conflicts=("$_pkgname")
backup=('etc/numa.toml')
source=("$_pkgname::git+$url.git")
sha256sums=('SKIP')
pkgver() {
  cd "$srcdir/$_pkgname"
  ( set -o pipefail
    git describe --long --tags 2>/dev/null | sed 's/\([^-]*-g\)/r\1/;s/-/./g' ||
    printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
  ) | sed 's/^v//'
}

prepare() {
  cd "$srcdir/$_pkgname"
  # numa v0.10.1+ uses FHS-compliant paths on Linux by default
  # (/var/lib/numa for data, journalctl for logs), so no source
  # patching is needed. The earlier sed targeted /usr/local/bin/numa,
  # which only appears in a comment in current main.
  export RUSTUP_TOOLCHAIN=stable
  cargo fetch --locked
}

build() {
  cd "$srcdir/$_pkgname"
  export RUSTUP_TOOLCHAIN=stable
  cargo build --frozen --release
}

check() {
  cd "$srcdir/$_pkgname"
  export RUSTUP_TOOLCHAIN=stable
  cargo test --frozen
}

package() {
  cd "$srcdir/$_pkgname"
  install -Dm755 "target/release/$_pkgname" "$pkgdir/usr/bin/$_pkgname"
  # numa.service uses {{exe_path}} as a placeholder substituted by
  # `numa install` at runtime via replace_exe_path(). For an AUR
  # package install (no `numa install` step), we substitute it
  # statically here so systemd gets a real ExecStart path.
  sed 's|{{exe_path}}|/usr/bin/numa /etc/numa.toml|g' numa.service > numa.service.patched
  install -Dm644 "numa.service.patched" "$pkgdir/usr/lib/systemd/system/numa.service"
  install -Dm644 "numa.toml" "$pkgdir/etc/numa.toml"
  install -Dm644 "LICENSE" "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
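The sed pipeline in `pkgver()` is dense; a Python transcription (illustrative, not from the repo) makes the three transformations explicit:

```python
# Python equivalent of the PKGBUILD's sed pipeline: turn the output of
# `git describe --long --tags` into an Arch-compatible version string.
import re

def arch_pkgver(describe: str) -> str:
    # sed 's/\([^-]*-g\)/r\1/' — prefix the "<commits>-g<hash>" suffix with "r".
    v = re.sub(r'([^-]*-g)', r'r\1', describe, count=1)
    # sed 's/-/./g' — hyphens are not allowed in pacman version strings.
    v = v.replace('-', '.')
    # sed 's/^v//' — drop the tag's leading "v".
    return re.sub(r'^v', '', v)
```

A checkout three commits past tag `v0.10.1` describes as `v0.10.1-3-gabcdef0` and becomes `0.10.1.r3.gabcdef0`, matching the placeholder `pkgver=0.10.1.r0.g0000000` format at the top of the PKGBUILD.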


@@ -21,6 +21,9 @@ brew install razvandimescu/tap/numa
# Linux # Linux
curl -fsSL https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh | sh curl -fsSL https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh | sh
# Arch Linux (AUR)
yay -S numa-git
# Windows — download from GitHub Releases # Windows — download from GitHub Releases
# All platforms # All platforms
cargo install numa cargo install numa
@@ -69,11 +72,19 @@ DNSSEC validates the full chain of trust: RRSIG signatures, DNSKEY verification,
**DNS-over-TLS listener** (RFC 7858) — accept encrypted queries on port 853 from strict clients like iOS Private DNS, systemd-resolved, or stubby. Two modes:
- **Self-signed** (default) — numa generates a local CA automatically. `numa install` adds it to the system trust store on macOS, Linux (Debian/Ubuntu, Fedora/RHEL/SUSE, Arch), and Windows. On iOS, install the `.mobileconfig` from `numa setup-phone`. Firefox keeps its own NSS store and ignores the system one — trust the CA there manually if you need HTTPS for `.numa` services in Firefox.
- **Bring-your-own cert** — point `[dot] cert_path` / `key_path` at a publicly-trusted cert (e.g., Let's Encrypt via DNS-01 challenge on a domain pointing at your numa instance). Clients connect without any trust-store setup — same UX as AdGuard Home or Cloudflare `1.1.1.1`.
ALPN `"dot"` is advertised and enforced in both modes; a handshake with mismatched ALPN is rejected as a cross-protocol confusion defense.
**Phone setup** — point your iPhone or Android at Numa in one step:
```bash
numa setup-phone
```
Prints a QR code. Scan it, install the profile, toggle certificate trust — your phone's DNS now routes through Numa over TLS. Requires `[mobile] enabled = true` in `numa.toml`.
## LAN Discovery
Run Numa on multiple machines. They find each other automatically via mDNS:
@@ -113,6 +124,7 @@ From Machine B: `curl http://api.numa` → proxied to Machine A's port 8000. Ena
## Learn More
- [Blog: DNS-over-TLS from Scratch in Rust](https://numa.rs/blog/posts/dot-from-scratch.html)
- [Blog: Implementing DNSSEC from Scratch in Rust](https://numa.rs/blog/posts/dnssec-from-scratch.html)
- [Blog: I Built a DNS Resolver from Scratch](https://numa.rs/blog/posts/dns-from-scratch.html)
- [Configuration reference](numa.toml) — all options documented inline
@@ -127,6 +139,9 @@ From Machine B: `curl http://api.numa` → proxied to Machine A's port 8000. Ena
- [x] DNS-over-TLS listener — encrypted client connections (RFC 7858, ALPN strict)
- [x] Recursive resolution + DNSSEC — chain-of-trust, NSEC/NSEC3
- [x] SRTT-based nameserver selection
- [x] Multi-forwarder failover — multiple upstreams with SRTT ranking, fallback pool
- [x] Cache warming — proactive resolution for configured domains
- [x] Mobile onboarding — `setup-phone` QR flow, mobile API, mobileconfig profiles
- [ ] pkarr integration — self-sovereign DNS via Mainline DHT
- [ ] Global `.numa` names — DHT-backed, no registrar


@@ -163,12 +163,12 @@ The fix has three parts:
**TCP fallback.** Every outbound query tries UDP first (800ms timeout). If UDP fails or the response is truncated, retry immediately over TCP. TCP uses a 2-byte length prefix before the DNS message — trivial to implement, and it handles DNSSEC responses that exceed the UDP payload limit.
**UDP auto-disable.** After 3 consecutive UDP failures, flip a global `AtomicBool` and skip UDP entirely — go TCP-first for all queries. The flag resets when the network changes (detected via LAN IP monitoring).
<img src="../hostile-network.svg" alt="Latency profile on a hostile network: queries 1-3 each spend 800ms waiting for a UDP timeout before retrying over TCP, taking 1,100ms total per query. After 3 consecutive failures the UDP auto-disable flag flips, and queries 4+ go TCP-first and complete in 300ms each — 3.7× faster.">
**Query minimization (RFC 7816).** When querying root servers, send only the TLD — `com` instead of `secret-project.example.com`. Root servers handle trillions of queries and are operated by 12 organizations. Minimization reduces what they learn from yours.
The result: on a network that blocks UDP:53, Numa detects the block within the first 3 queries, switches to TCP, and resolves normally at 300-500ms per cold query. Cached queries remain 0ms. No manual config change needed — switch networks and it adapts.
I wouldn't have found this without dogfooding. The code worked perfectly on my home network. It took a real hostile network to expose the assumption that UDP always works.
## What I learned

blog/dot-from-scratch.md Normal file

@@ -0,0 +1,176 @@
---
title: DNS-over-TLS from Scratch in Rust
description: Building RFC 7858 on top of rustls — length-prefix framing, ALPN cross-protocol defense, and two bugs that only the strict clients caught.
date: April 2026
---
The [previous post](/blog/posts/dnssec-from-scratch.html) ended with "DoT — the last encrypted transport we don't support." This post is about building it.
Numa now runs a DoT listener on port 853. My iPhone uses it as its system resolver, so ad blocking, DNSSEC validation, and recursive resolution follow my phone through the day. No cloud, no account, no companion app — a self-signed cert, a `.mobileconfig` profile, and a QR code in the terminal.
RFC 7858 is ten pages. The hard parts weren't in the RFC. They were in cross-protocol confusion defenses, a crypto-provider init gotcha that only triggered in one specific config combination, and a certificate SAN bug iOS was happy to accept and `kdig` immediately rejected. This post is about those parts.
## Why DoT when you already have DoH?
Numa has spoken DoH upstream (as a client) since v0.1. Both protocols tunnel DNS over TLS; DoH wraps queries in HTTP/2, DoT is DNS-over-TCP with TLS in front. Same privacy guarantees, different wrapper.
The answer to "why both" is that **phones ask for DoT by name.** iOS system DNS configures it with two fields (IP + server name) instead of a URL template. Android 9+ "Private DNS" speaks DoT natively. Linux stubs default to DoT. I wanted my phone on Numa without installing anything on the phone itself, and DoT is the protocol iOS and Android already speak for that.
## The wire format is refreshingly small
RFC 7858 is one sentence of wire protocol: *DNS-over-TCP (RFC 1035 §4.2.2) with TLS in front, on port 853.* DNS-over-TCP has existed since 1987 — a 2-byte length prefix followed by the DNS message. DoT is that, wrapped in a TLS session. The entire framing code is seven lines:
```rust
async fn write_framed<S>(stream: &mut S, msg: &[u8]) -> io::Result<()>
where S: AsyncWriteExt + Unpin {
let mut out = Vec::with_capacity(2 + msg.len());
out.extend_from_slice(&(msg.len() as u16).to_be_bytes());
out.extend_from_slice(msg);
stream.write_all(&out).await?;
stream.flush().await
}
```
Reads are symmetric: `read_exact` two bytes, convert to `u16`, `read_exact` that many bytes. No HTTP headers, no chunked encoding, no framing layer.
## Persistent connections
A DNS lookup over a fresh TCP+TLS connection costs at least 3 RTTs — about 300ms on a 100ms-RTT path, triple the cost of a single UDP query before any caching helps. RFC 7858 §3.4 says clients SHOULD reuse the TCP connection for multiple queries, and every real DoT client does: iOS, Android, systemd, stubby. A single connection often carries hundreds of queries.
<img src="../dot-handshake.svg" alt="Timing diagram comparing a DNS lookup over plain UDP (1 RTT), over DoT on a fresh connection (3 RTTs — TCP handshake, TLS 1.3 handshake, then the query), and over a reused DoT session (1 RTT, same as UDP).">
The amortization point is the whole game. If you only ever do one query per connection, DoT is roughly 3× slower than UDP and you should not use it. If you reuse the same TLS session for a browsing session's worth of queries, the handshake is paid once and every subsequent query is effectively free.
The server is a loop that reads a length-prefixed message, resolves it, writes the response framed the same way, waits for the next one. Three timeouts keep it honest:
- **Handshake timeout (10s)** — a slowloris that opens TCP but never sends a ClientHello can't pin a worker.
- **Idle timeout (30s)** — a connected client with nothing to say gets dropped.
- **Write timeout (10s)** — a stalled reader can't hold a response buffer indefinitely.
A semaphore caps concurrent connections at 512 so a burst of handshakes can't exhaust the tokio runtime.
## ALPN, the cross-protocol defense that matters
If DoT lives on port 853 and HTTPS on 443, what stops an HTTP/2 client from hitting 853 and getting confused replies? [Cross-protocol attacks](https://alpaca-attack.com/) exist and have had real CVEs. The defense is ALPN: during the TLS handshake the client advertises protocols, the server picks one it supports or fails. A DoT server advertises `"dot"`; a client offering only `"h2"` gets a `no_application_protocol` fatal alert before any frames are exchanged.
rustls enforces this by default when you set `alpn_protocols`:
```rust
let mut config = ServerConfig::builder()
.with_no_client_auth()
.with_single_cert(certs, key)?;
config.alpn_protocols = vec![b"dot".to_vec()];
```
"The library enforces it by default" has a latent risk: a future rustls upgrade could change the default, and the defense would quietly evaporate. I wrote a test that pins the behavior so any regression in a dependency update fails loudly:
```rust
#[tokio::test]
async fn dot_rejects_non_dot_alpn() {
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, vec![b"h2".to_vec()]);
let connector = tokio_rustls::TlsConnector::from(client_config);
let tcp = tokio::net::TcpStream::connect(addr).await.unwrap();
let result = connector
.connect(ServerName::try_from("numa.numa").unwrap(), tcp)
.await;
assert!(result.is_err(),
"DoT server must reject ALPN that doesn't include \"dot\"");
}
```
When you're leaning on a library's default for a security-critical invariant, the test is the contract.
## Two bugs that hid for days
Both were fixed before v0.10 shipped. Both stayed hidden because my initial tests used *permissive* clients.
### The rustls crypto provider panic
rustls 0.23 requires a `CryptoProvider` installed before you can build a `ServerConfig`. Numa's HTTPS proxy calls `install_default` as a side effect when it builds its own config, so DoT "just worked" for users who enabled both — the proxy had already initialized the provider before DoT's first handshake.
Then I added support for user-provided DoT certificates. Someone running DoT with their own Let's Encrypt cert, with the HTTPS proxy disabled, would hit:
```
thread 'dot' panicked at rustls-0.23.25/src/crypto/mod.rs:185:14:
no process-level CryptoProvider available -- call
CryptoProvider::install_default() before this point
```
The panic happened on the first client connection, not at startup. While writing the integration suite for "DoT with BYO cert, proxy disabled" — the one combination nobody had ever actually exercised — the first run panicked. Fix is two lines: call `install_default` inside `load_tls_config` so DoT can stand alone. If a side effect initializes something and you have a path that skips that side effect, you have a bug waiting for a specific deployment.
### The SAN bug iOS was happy to accept
Numa's self-signed DoT cert is generated on first run from a local CA alongside the data directory. It needs to match whatever `ServerName` the client sends as SNI. For the HTTPS proxy, that's the wildcard domain pattern `*.numa` (matching `frontend.numa`, `api.numa`, etc.). I initially reused the same SAN list for DoT: a wildcard `*.numa` and nothing else.
On an iPhone this worked perfectly. Full browsing session, persistent connections in the log, ad blocking active. I was about to merge when I ran one last smoke test with `kdig` (GnuTLS-backed, from [Knot DNS](https://www.knot-dns.cz/)):
```
$ kdig @192.168.1.16 -p 853 +tls \
+tls-ca=/usr/local/var/numa/ca.pem \
+tls-hostname=numa.numa example.com A
;; TLS, handshake failed (Error in the certificate.)
```
Huh.
[RFC 6125 §6.4.3](https://datatracker.ietf.org/doc/html/rfc6125#section-6.4.3): a wildcard in a certificate's DNS-ID matches exactly one label. `*.numa` matches `frontend.numa`, but strict validators refuse to match it against `numa.numa`: a wildcard sitting directly under a single-label TLD is treated as ambiguous, and they reject what lenient stacks wave through.
iOS's TLS stack is lenient and accepts it. GnuTLS, NSS (Firefox), and most non-Apple validators don't. The fix is five lines — add an explicit `numa.numa` SAN alongside the wildcard. But the lesson is the one that stuck: I wrote a commit message saying "fix an iOS bug" and had to rewrite it, because iOS was fine. The real bug was that every GnuTLS/NSS-based client on the planet would have rejected the cert, and I only found it by running one more test with a stricter tool.
> Test with the strict client. The permissive client hides your bugs.
## Getting your phone onto it
A DoT server is useless without a way to point a phone at it. iOS won't let you type an IP and a server name into Settings directly — you install a `.mobileconfig` profile that bundles the CA as a trust anchor and the DNS settings in a single payload.
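The profile is just a plist with two payloads. An illustrative skeleton — the payload types and `DNSSettings` keys are from Apple's configuration-profile format, but the values are placeholders, and required bookkeeping fields (`PayloadIdentifier`, `PayloadUUID`, `PayloadVersion` on each payload) are omitted for brevity:

```xml
<plist version="1.0"><dict>
  <key>PayloadType</key><string>Configuration</string>
  <key>PayloadDisplayName</key><string>Numa DNS</string>
  <key>PayloadContent</key><array>
    <!-- Payload 1: the DoT settings iOS applies system-wide -->
    <dict>
      <key>PayloadType</key><string>com.apple.dnsSettings.managed</string>
      <key>DNSSettings</key><dict>
        <key>DNSProtocol</key><string>TLS</string>
        <key>ServerName</key><string>numa.numa</string>
        <key>ServerAddresses</key><array><string>192.168.1.10</string></array>
      </dict>
    </dict>
    <!-- Payload 2: the local CA as a trust anchor (DER, base64) -->
    <dict>
      <key>PayloadType</key><string>com.apple.security.root</string>
      <key>PayloadContent</key><data>MIIB…</data>
    </dict>
  </array>
</dict></plist>
```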
Numa ships a subcommand that builds one on the fly and serves it over a QR code in the terminal:
```
$ numa setup-phone
Numa Phone Setup
Profile URL: http://192.168.1.10:8765/mobileconfig
██████████████████████████████
██ ██
██ [QR code rendered in ██
██ your terminal] ██
██ ██
██████████████████████████████
On your iPhone:
1. Open Camera, point at the QR code, tap the yellow banner
2. Allow the download when Safari asks
3. Open Settings — tap "Profile Downloaded" near the top
(or: Settings → General → VPN & Device Management → Numa DNS)
4. Tap Install (top right), enter passcode, Install again
5. Settings → General → About → Certificate Trust Settings
Toggle ON "Numa Local CA" — required for DoT to work
```
The same QR is available in the dashboard — click "Phone Setup" in the header and the popover renders an SVG QR code pointing at the mobileconfig URL. On mobile viewports it shows a direct download link instead.
<img src="../phone-setup-dashboard.png" alt="Numa dashboard with Phone Setup popover showing QR code and install instructions">
Step 5 is non-negotiable. Even though the CA is bundled in the same profile that installs the DNS settings, iOS still requires the user to explicitly toggle trust in Certificate Trust Settings. It's a deliberate iOS policy to prevent profile-based trust injection — annoying, and correct.
I've been dogfooding this since v0.10 shipped in early April. The phone resolves through Numa over DoT whenever I'm home; persistent connections are visible in the log as a single source port living through dozens of queries. The one real caveat: if the laptop's LAN IP changes, the profile breaks. [RFC 9462 DDR](https://datatracker.ietf.org/doc/html/rfc9462) fixes that — Numa can respond to `_dns.resolver.arpa IN SVCB` with its current IP and iOS picks it up on each network join. Next piece of work.
## What I learned
**RFC-level small, API-level hard.** RFC 7858 is ten pages. The framing is trivial. But the subtle stuff — ALPN, timeouts, connection caps, handshake vs idle vs write deadlines, backoff on accept errors — isn't in the RFC. Miss any of it and you leak a DoS vector or a protocol confusion hole.
**Your test matrix is your security matrix.** Both bugs in this post were hidden by lenient clients. In both cases the strict client — kdig, or a specific config combination — surfaced the bug instantly. Pick test tools for strictness, not convenience. The moment you find yourself thinking "but iOS accepts it," stop and run kdig.
**Don't initialize global state via side effects.** "Module A installs a global, module B silently depends on it, disabling A breaks B" is a bug pattern that keeps coming back. Fix: have module B initialize its dependency explicitly, even if it means calling an idempotent `install_default` twice. The dependency graph should be local and obvious.
## What's next
- **DoH server** — Numa already has a DoH client; the other half unlocks Firefox's built-in DoH setting pointing at Numa.
- **DoQ server (RFC 9250)** — DNS over QUIC. Android 14+ supports it natively.
- **DDR (RFC 9462)** — auto-discovery via `_dns.resolver.arpa IN SVCB`, so phones pick up a moved Numa instance without the installed profile going stale.
The code is at [github.com/razvandimescu/numa](https://github.com/razvandimescu/numa) — the DoT listener is in [`src/dot.rs`](https://github.com/razvandimescu/numa/blob/main/src/dot.rs) and the phone onboarding flow is in [`src/setup_phone.rs`](https://github.com/razvandimescu/numa/blob/main/src/setup_phone.rs) and [`src/mobileconfig.rs`](https://github.com/razvandimescu/numa/blob/main/src/mobileconfig.rs). MIT license.


@@ -2,19 +2,21 @@
bind_addr = "0.0.0.0:53"
api_port = 5380
# api_bind_addr = "127.0.0.1" # default; set to "0.0.0.0" for LAN dashboard access
# data_dir = "/var/lib/numa" # where numa stores TLS CA and cert material
# Defaults: /var/lib/numa on linux (FHS),
# /usr/local/var/numa on macos (homebrew prefix),
# %PROGRAMDATA%\numa on windows. Override for
# containerized deploys or tests that can't
# write to the system path.
# [upstream]
# mode = "forward" # "forward" (default) — relay to upstream
# # "recursive" — resolve from root hints (no address needed)
# address = "9.9.9.9" # single upstream (plain UDP)
# address = ["192.168.1.1", "9.9.9.9:5353"] # multiple upstreams — SRTT picks fastest
# address = "https://dns.quad9.net/dns-query" # DNS-over-HTTPS (encrypted)
# fallback = ["8.8.8.8", "1.1.1.1"] # tried only when all primaries fail
# port = 53 # default port for addresses without :port
# timeout_ms = 3000
# root_hints = [ # only used in recursive mode
# "198.41.0.4", # a.root-servers.net (Verisign)
@@ -53,6 +55,7 @@ api_port = 5380
max_entries = 10000
min_ttl = 60
max_ttl = 86400
# warm = ["google.com", "github.com"] # resolve at startup, refresh before TTL expiry
[proxy]
enabled = true
@@ -90,7 +93,7 @@ tld = "numa"
# DNS-over-TLS listener (RFC 7858) — encrypted DNS on port 853
# [dot]
# enabled = true # on by default; set false to disable
# port = 853 # standard DoT port
# bind_addr = "0.0.0.0" # IPv4 or IPv6; unspecified binds all interfaces
# cert_path = "/etc/numa/dot.crt" # PEM cert; omit to use self-signed (proxy CA if available)
@@ -101,3 +104,22 @@ tld = "numa"
# enabled = true # discover other Numa instances via mDNS (_numa._tcp.local)
# broadcast_interval_secs = 30
# peer_timeout_secs = 90
# Mobile API — persistent HTTP listener serving read-only routes
# (/health, /ca.pem, /mobileconfig, /ca.mobileconfig) on a LAN-reachable
# port. Consumed by the iOS/Android companion apps for discovery and
# profile fetching, and by `numa setup-phone` for QR-based onboarding.
#
# Opt-in because the listener binds to the LAN by default. None of the
# exposed routes are cryptographically sensitive (no private keys, no
# state mutations, all idempotent GETs), but enabling it does add a new
# listener to any device on the LAN that scans port 8765.
#
# Safe for home LANs. Think twice before enabling on untrusted LANs
# (office Wi-Fi, coffee shops, etc.) — an attacker on the same network
# could run a competing Numa instance that shadows yours via mDNS and
# trick companion apps into installing their profile instead of yours.
[mobile]
enabled = true # opt-in to the mobile API listener
# port = 8765 # default; matches Discovery.swift defaultAPIPort
# bind_addr = "0.0.0.0" # default; set to "127.0.0.1" for localhost-only


@@ -7,18 +7,19 @@
# The script:
# 1. Opens the dashboard in Chrome --app mode (clean, no address bar)
# 2. Generates DNS traffic (forward, cache hit, blocked)
# 3. Opens Phone Setup QR popover
# 4. Types "peekm" / "6419" into the Local Services form on camera
# 5. Shows LAN accessibility badge ("local only" / "LAN")
# 6. Checks a blocked domain
# 7. Opens peekm.numa to show the proxy working
# 8. Records via ffmpeg and converts to optimized GIF
set -euo pipefail
# --------------- Configuration ---------------
OUTPUT="${1:-assets/hero-demo.gif}"
PORT=5380
RECORD_SECONDS=24
VIEWPORT_W=1800
VIEWPORT_H=1100
FPS=12
@@ -230,8 +231,16 @@ dig @127.0.0.1 github.com +short > /dev/null 2>&1
dig @127.0.0.1 ad.doubleclick.net +short > /dev/null 2>&1
sleep 3
# --------------- Scene 2: Phone Setup popover (3-7s) ---------------
log "Scene 2: Phone Setup QR popover..."
run_js "document.querySelector('#phoneSetup button').click();"
sleep 3
# Dismiss popover
run_js "document.getElementById('phoneSetupPopover').style.display = 'none';"
sleep 1
# --------------- Scene 3: Add peekm service via UI (7-11s) ---------------
log "Scene 3: Adding peekm.numa service..."
# Services panel is now first — scroll to it
run_js "
@@ -249,18 +258,18 @@ sleep 0.3
run_js "document.querySelector('#serviceForm .btn-add').click();"
sleep 2
# --------------- Scene 4: Open peekm.numa (11-15s) ---------------
log "Scene 4: Opening peekm.numa in browser..."
open "http://peekm.numa/view/peekm/README.md" 2>/dev/null || true
sleep 4
# --------------- Scene 5: Back to dashboard (15-18s) ---------------
log "Scene 5: Back to dashboard — LAN badges + LOCAL queries visible..."
osascript -e "tell application \"System Events\" to set frontmost of (first process whose unix id is $CHROME_PID) to true" 2>/dev/null || true
sleep 3
# --------------- Scene 6: Check Domain blocker (18-21s) ---------------
log "Scene 6: Check Domain — blocked tracker..."
# Scroll down to blocking panel
run_js "
var blockPanel = document.getElementById('blockingPanel');
@@ -273,8 +282,8 @@ sleep 0.3
run_js "document.querySelector('#checkDomainInput').closest('form').querySelector('.btn').click();"
sleep 2
# --------------- Scene 7: Terminal-style dig overlay (21-24s) ---------------
log "Scene 7: dig proof overlay..."
DIG_RESULT=$(dig @127.0.0.1 peekm.numa +short 2>/dev/null | head -1)
run_js "
var overlay = document.createElement('div');


@@ -37,7 +37,7 @@ cargo update --workspace
git add Cargo.toml Cargo.lock
git commit -m "chore: bump version to $VERSION"
git tag "$TAG"
git push origin main "$TAG"
echo
echo "Released $TAG — GitHub Actions will build, publish to crates.io, and create the release."


@@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""Rewrite a Homebrew formula in place: bump version, URL paths, and sha256 lines.
Reads the formula path from argv[1], and the following env vars:
VERSION e.g. "0.10.0" (no leading v)
SHA_MACOS_AARCH64
SHA_MACOS_X86_64
SHA_LINUX_AARCH64
SHA_LINUX_X86_64
Assumptions about the formula:
- Has `version "X.Y.Z"` somewhere
- Has `url "...releases/download/vX.Y.Z/numa-<target>.tar.gz"` lines
- May or may not already have `sha256 "..."` lines immediately after each url
"""
import os
import re
import sys
formula_path = sys.argv[1]
version = os.environ["VERSION"].lstrip("v")
shas = {
"macos-aarch64": os.environ["SHA_MACOS_AARCH64"],
"macos-x86_64": os.environ["SHA_MACOS_X86_64"],
"linux-aarch64": os.environ["SHA_LINUX_AARCH64"],
"linux-x86_64": os.environ["SHA_LINUX_X86_64"],
}
with open(formula_path) as f:
content = f.read()
content = re.sub(r'version "[^"]*"', f'version "{version}"', content)
content = re.sub(
r"releases/download/v[\d.]+/numa-",
f"releases/download/v{version}/numa-",
content,
)
content = re.sub(r'\n[ \t]*sha256 "[^"]*"', "", content)
def add_sha(match: re.Match) -> str:
indent = match.group(1)
target = match.group(2)
if target not in shas:
return match.group(0)
return f'{match.group(0)}\n{indent}sha256 "{shas[target]}"'
content = re.sub(
r'^([ \t]+)url "[^"]*numa-([\w-]+)\.tar\.gz"',
add_sha,
content,
flags=re.MULTILINE,
)
with open(formula_path, "w") as f:
f.write(content)


@@ -74,6 +74,7 @@ body::before {
font-weight: 400;
color: var(--text-primary);
text-decoration: none;
text-transform: none;
letter-spacing: -0.02em;
}
.blog-nav .wordmark:hover { color: var(--amber); }

site/blog/dot-handshake.svg Normal file

@@ -0,0 +1,129 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 720 360" font-family="'DM Sans', system-ui, sans-serif" font-size="12">
<defs>
<marker id="arr-amber" viewBox="0 0 10 10" refX="9" refY="5" markerWidth="6" markerHeight="6" orient="auto">
<path d="M 0 0 L 10 5 L 0 10 z" fill="#c0623a"/>
</marker>
<marker id="arr-dim" viewBox="0 0 10 10" refX="9" refY="5" markerWidth="6" markerHeight="6" orient="auto">
<path d="M 0 0 L 10 5 L 0 10 z" fill="#a39888"/>
</marker>
<filter id="shadow" x="-3%" y="-3%" width="106%" height="106%">
<feDropShadow dx="0" dy="1" stdDeviation="2" flood-opacity="0.06"/>
</filter>
</defs>
<!-- Background -->
<rect width="720" height="360" rx="8" fill="#faf7f2"/>
<!-- Title -->
<text x="360" y="32" text-anchor="middle" font-size="15" font-weight="600" fill="#2c2418" font-family="'Instrument Serif', Georgia, serif" letter-spacing="-0.02em">UDP vs DoT — one lookup, three scenarios</text>
<text x="360" y="50" text-anchor="middle" font-size="11" fill="#a39888">Time flows downward. Amber = DNS work. Gray = TCP/TLS handshake overhead.</text>
<!-- ==================== Column 1: Plain UDP ==================== -->
<g transform="translate(20, 0)">
<!-- Column header -->
<text x="90" y="84" text-anchor="middle" font-size="13" font-weight="600" fill="#2c2418">Plain UDP DNS</text>
<text x="90" y="101" text-anchor="middle" font-size="10" fill="#a39888" letter-spacing="0.06em">PORT 53 · CLEARTEXT</text>
<!-- Lane labels -->
<text x="25" y="128" font-size="10" fill="#6b5e4f">client</text>
<text x="133" y="128" font-size="10" fill="#6b5e4f">server</text>
<!-- Lanes -->
<line x1="35" y1="138" x2="35" y2="198" stroke="#d4cbba" stroke-width="1" stroke-dasharray="2 3"/>
<line x1="145" y1="138" x2="145" y2="198" stroke="#d4cbba" stroke-width="1" stroke-dasharray="2 3"/>
<!-- query -->
<line x1="37" y1="148" x2="143" y2="160" stroke="#c0623a" stroke-width="2" marker-end="url(#arr-amber)"/>
<text x="90" y="143" text-anchor="middle" font-size="10" fill="#9e4e2d" font-weight="500">query</text>
<!-- response -->
<line x1="143" y1="178" x2="37" y2="190" stroke="#c0623a" stroke-width="2" marker-end="url(#arr-amber)"/>
<text x="90" y="205" text-anchor="middle" font-size="10" fill="#9e4e2d" font-weight="500">response</text>
<!-- Total cost badge -->
<rect x="20" y="225" width="140" height="32" rx="4" fill="#faf7f2" stroke="#d4cbba" stroke-width="1" filter="url(#shadow)"/>
<text x="90" y="241" text-anchor="middle" font-size="9" fill="#a39888" letter-spacing="0.04em">TOTAL LATENCY</text>
<text x="90" y="253" text-anchor="middle" font-size="11" font-weight="600" fill="#c0623a" font-family="'JetBrains Mono', monospace">1 × RTT</text>
</g>
<!-- ==================== Column 2: DoT cold ==================== -->
<g transform="translate(270, 0)">
<!-- Column header -->
<text x="90" y="84" text-anchor="middle" font-size="13" font-weight="600" fill="#2c2418">DoT — first query</text>
<text x="90" y="101" text-anchor="middle" font-size="10" fill="#a39888" letter-spacing="0.06em">PORT 853 · NEW CONNECTION</text>
<!-- Lane labels -->
<text x="25" y="128" font-size="10" fill="#6b5e4f">client</text>
<text x="133" y="128" font-size="10" fill="#6b5e4f">server</text>
<!-- Lanes -->
<line x1="35" y1="138" x2="35" y2="308" stroke="#d4cbba" stroke-width="1" stroke-dasharray="2 3"/>
<line x1="145" y1="138" x2="145" y2="308" stroke="#d4cbba" stroke-width="1" stroke-dasharray="2 3"/>
<!-- === RTT 1: TCP handshake === -->
<!-- SYN -->
<line x1="37" y1="145" x2="143" y2="153" stroke="#a39888" stroke-width="1.5" marker-end="url(#arr-dim)"/>
<!-- SYN-ACK -->
<line x1="143" y1="163" x2="37" y2="171" stroke="#a39888" stroke-width="1.5" marker-end="url(#arr-dim)"/>
<!-- ACK -->
<line x1="37" y1="181" x2="143" y2="189" stroke="#a39888" stroke-width="1.5" marker-end="url(#arr-dim)"/>
<!-- Label + RTT marker -->
<text x="168" y="170" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">1 rtt</text>
<text x="90" y="143" text-anchor="middle" font-size="9" fill="#6b5e4f" font-style="italic">TCP handshake</text>
<!-- === RTT 2: TLS 1.3 handshake === -->
<!-- ClientHello -->
<line x1="37" y1="208" x2="143" y2="216" stroke="#a39888" stroke-width="1.5" marker-end="url(#arr-dim)"/>
<!-- ServerHello + Cert + Finished -->
<line x1="143" y1="226" x2="37" y2="234" stroke="#a39888" stroke-width="1.5" marker-end="url(#arr-dim)"/>
<!-- Label + RTT marker -->
<text x="168" y="222" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">2 rtt</text>
<text x="90" y="205" text-anchor="middle" font-size="9" fill="#6b5e4f" font-style="italic">TLS 1.3 handshake</text>
<!-- === RTT 3: DNS exchange === -->
<!-- query (piggybacked on ClientFinished) -->
<line x1="37" y1="253" x2="143" y2="261" stroke="#c0623a" stroke-width="2" marker-end="url(#arr-amber)"/>
<!-- response -->
<line x1="143" y1="271" x2="37" y2="279" stroke="#c0623a" stroke-width="2" marker-end="url(#arr-amber)"/>
<!-- Label + RTT marker -->
<text x="168" y="267" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">3 rtt</text>
<text x="90" y="250" text-anchor="middle" font-size="10" fill="#9e4e2d" font-weight="500">query + response</text>
<!-- Total cost badge -->
<rect x="20" y="295" width="140" height="32" rx="4" fill="#faf7f2" stroke="#d4cbba" stroke-width="1" filter="url(#shadow)"/>
<text x="90" y="311" text-anchor="middle" font-size="9" fill="#a39888" letter-spacing="0.04em">TOTAL LATENCY</text>
<text x="90" y="323" text-anchor="middle" font-size="11" font-weight="600" fill="#c0623a" font-family="'JetBrains Mono', monospace">3 × RTT</text>
</g>
<!-- ==================== Column 3: DoT reused ==================== -->
<g transform="translate(520, 0)">
<!-- Column header -->
<text x="90" y="84" text-anchor="middle" font-size="13" font-weight="600" fill="#2c2418">DoT — reused session</text>
<text x="90" y="101" text-anchor="middle" font-size="10" fill="#a39888" letter-spacing="0.06em">PORT 853 · PERSISTENT TCP/TLS</text>
<!-- Lane labels -->
<text x="25" y="128" font-size="10" fill="#6b5e4f">client</text>
<text x="133" y="128" font-size="10" fill="#6b5e4f">server</text>
<!-- Lanes -->
<line x1="35" y1="138" x2="35" y2="198" stroke="#d4cbba" stroke-width="1" stroke-dasharray="2 3"/>
<line x1="145" y1="138" x2="145" y2="198" stroke="#d4cbba" stroke-width="1" stroke-dasharray="2 3"/>
<!-- query -->
<line x1="37" y1="148" x2="143" y2="160" stroke="#c0623a" stroke-width="2" marker-end="url(#arr-amber)"/>
<text x="90" y="143" text-anchor="middle" font-size="10" fill="#9e4e2d" font-weight="500">query</text>
<!-- response -->
<line x1="143" y1="178" x2="37" y2="190" stroke="#c0623a" stroke-width="2" marker-end="url(#arr-amber)"/>
<text x="90" y="205" text-anchor="middle" font-size="10" fill="#9e4e2d" font-weight="500">response</text>
<!-- Total cost badge -->
<rect x="20" y="225" width="140" height="32" rx="4" fill="#faf7f2" stroke="#d4cbba" stroke-width="1" filter="url(#shadow)"/>
<text x="90" y="241" text-anchor="middle" font-size="9" fill="#a39888" letter-spacing="0.04em">TOTAL LATENCY</text>
<text x="90" y="253" text-anchor="middle" font-size="11" font-weight="600" fill="#c0623a" font-family="'JetBrains Mono', monospace">1 × RTT</text>
<!-- Tiny caption -->
<text x="90" y="280" text-anchor="middle" font-size="9" fill="#a39888" font-style="italic">(handshake amortized</text>
<text x="90" y="292" text-anchor="middle" font-size="9" fill="#a39888" font-style="italic">across the session)</text>
</g>
</svg>


@@ -0,0 +1,92 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 720 330" font-family="'DM Sans', system-ui, sans-serif" font-size="12">
<defs>
<filter id="shadow" x="-3%" y="-3%" width="106%" height="106%">
<feDropShadow dx="0" dy="1" stdDeviation="2" flood-opacity="0.06"/>
</filter>
<!-- Diagonal hatch for "wasted" UDP timeout regions. Darker warm gray
base + slightly darker diagonal stripes at 45°. The stripe pattern
is the Gantt convention for "dead/blocked time" — it reads as
"this time was thrown away" without needing the legend. -->
<pattern id="wasted-hatch" patternUnits="userSpaceOnUse" width="7" height="7" patternTransform="rotate(-45)">
<rect width="7" height="7" fill="#8b7f6f"/>
<line x1="0" y1="0" x2="0" y2="7" stroke="#3d3427" stroke-width="1.6" opacity="0.38"/>
</pattern>
</defs>
<!-- Background -->
<rect width="720" height="330" rx="8" fill="#faf7f2"/>
<!-- Title -->
<text x="360" y="32" text-anchor="middle" font-size="15" font-weight="600" fill="#2c2418" font-family="'Instrument Serif', Georgia, serif" letter-spacing="-0.02em">TCP fallback with UDP auto-disable</text>
<text x="360" y="50" text-anchor="middle" font-size="11" fill="#a39888">Latency profile on an ISP that blocks outbound UDP:53</text>
<!-- Legend -->
<g transform="translate(160, 70)">
<rect width="14" height="12" rx="2" fill="url(#wasted-hatch)"/>
<text x="22" y="10" font-size="11" fill="#6b5e4f">UDP timeout — 800 ms wasted</text>
<rect x="220" width="14" height="12" rx="2" fill="#c0623a"/>
<text x="242" y="10" font-size="11" fill="#6b5e4f">TCP — successful exchange</text>
</g>
<!-- Time axis -->
<!-- bar area: x=90 to x=570 (480px), representing 0-1200ms, scale 0.4 px/ms -->
<line x1="90" y1="108" x2="570" y2="108" stroke="#d4cbba" stroke-width="1"/>
<!-- tick marks -->
<line x1="90" y1="106" x2="90" y2="112" stroke="#a39888" stroke-width="1"/>
<line x1="210" y1="106" x2="210" y2="112" stroke="#a39888" stroke-width="1"/>
<line x1="330" y1="106" x2="330" y2="112" stroke="#a39888" stroke-width="1"/>
<line x1="410" y1="106" x2="410" y2="112" stroke="#a39888" stroke-width="1"/>
<line x1="530" y1="106" x2="530" y2="112" stroke="#a39888" stroke-width="1"/>
<!-- tick labels -->
<text x="90" y="102" text-anchor="middle" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">0</text>
<text x="210" y="102" text-anchor="middle" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">300</text>
<text x="330" y="102" text-anchor="middle" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">600</text>
<text x="410" y="102" text-anchor="middle" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">800</text>
<text x="530" y="102" text-anchor="middle" font-size="9" fill="#a39888" font-family="'JetBrains Mono', monospace">1100 ms</text>
<!-- ============ Phase 1: UDP-first (wasted 800ms per query) ============ -->
<!-- Query 1 -->
<text x="82" y="135" text-anchor="end" font-size="11" fill="#6b5e4f">query 1</text>
<rect x="90" y="125" width="320" height="16" rx="2" fill="url(#wasted-hatch)"/>
<rect x="410" y="125" width="120" height="16" rx="2" fill="#c0623a"/>
<text x="540" y="137" font-size="10" fill="#6b5e4f" font-family="'JetBrains Mono', monospace">1,100 ms</text>
<!-- Query 2 -->
<text x="82" y="159" text-anchor="end" font-size="11" fill="#6b5e4f">query 2</text>
<rect x="90" y="149" width="320" height="16" rx="2" fill="url(#wasted-hatch)"/>
<rect x="410" y="149" width="120" height="16" rx="2" fill="#c0623a"/>
<text x="540" y="161" font-size="10" fill="#6b5e4f" font-family="'JetBrains Mono', monospace">1,100 ms</text>
<!-- Query 3 -->
<text x="82" y="183" text-anchor="end" font-size="11" fill="#6b5e4f">query 3</text>
<rect x="90" y="173" width="320" height="16" rx="2" fill="url(#wasted-hatch)"/>
<rect x="410" y="173" width="120" height="16" rx="2" fill="#c0623a"/>
<text x="540" y="185" font-size="10" fill="#6b5e4f" font-family="'JetBrains Mono', monospace">1,100 ms</text>
<!-- State-change divider -->
<line x1="90" y1="206" x2="570" y2="206" stroke="#6b7c4e" stroke-width="1" stroke-dasharray="4 3"/>
<rect x="200" y="198" width="260" height="18" rx="9" fill="#faf7f2" stroke="#6b7c4e" stroke-width="1" filter="url(#shadow)"/>
<text x="330" y="210" text-anchor="middle" font-size="10" fill="#566540" font-weight="500">3 consecutive failures → UDP auto-disabled</text>
<!-- ============ Phase 2: TCP-first (UDP skipped) ============ -->
<!-- Query 4 -->
<text x="82" y="235" text-anchor="end" font-size="11" fill="#6b5e4f">query 4</text>
<rect x="90" y="225" width="120" height="16" rx="2" fill="#c0623a"/>
<text x="220" y="237" font-size="10" fill="#6b5e4f" font-family="'JetBrains Mono', monospace">300 ms</text>
<!-- Query 5 -->
<text x="82" y="259" text-anchor="end" font-size="11" fill="#6b5e4f">query 5</text>
<rect x="90" y="249" width="120" height="16" rx="2" fill="#c0623a"/>
<text x="220" y="261" font-size="10" fill="#6b5e4f" font-family="'JetBrains Mono', monospace">300 ms</text>
<!-- Speedup callout -->
<g transform="translate(300, 246)">
<line x1="0" y1="-10" x2="0" y2="22" stroke="#6b7c4e" stroke-width="1" stroke-dasharray="2 2"/>
<text x="10" y="6" font-size="10" fill="#566540" font-style="italic">3.7× faster — no more UDP wait</text>
</g>
<!-- Footer caption -->
<text x="360" y="298" text-anchor="middle" font-size="10" fill="#a39888" font-style="italic">The flag resets on network change (LAN IP delta). Switch back to a clean network and UDP is tried again.</text>
</svg>
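The auto-disable behavior the diagram describes (three consecutive UDP timeouts flip the resolver to TCP-first; a LAN IP delta resets the flag) can be sketched as follows. This is a minimal illustration; the struct, field, and method names are assumptions for the example, not Numa's actual API:

```rust
use std::net::IpAddr;

/// Sketch of the UDP auto-disable state machine from the diagram above.
struct UdpHealth {
    consecutive_failures: u8,
    udp_disabled: bool,
    last_lan_ip: Option<IpAddr>,
}

impl UdpHealth {
    const THRESHOLD: u8 = 3;

    fn new() -> Self {
        Self { consecutive_failures: 0, udp_disabled: false, last_lan_ip: None }
    }

    /// Called after every UDP attempt.
    fn record(&mut self, timed_out: bool) {
        if timed_out {
            self.consecutive_failures = self.consecutive_failures.saturating_add(1);
            if self.consecutive_failures >= Self::THRESHOLD {
                // Skip the 800 ms UDP wait from now on; go straight to TCP.
                self.udp_disabled = true;
            }
        } else {
            self.consecutive_failures = 0;
        }
    }

    /// Called with the current LAN IP on each query; a delta means we
    /// joined a different network, so UDP deserves another chance.
    fn on_network(&mut self, lan_ip: IpAddr) {
        if self.last_lan_ip != Some(lan_ip) {
            self.last_lan_ip = Some(lan_ip);
            self.udp_disabled = false;
            self.consecutive_failures = 0;
        }
    }

    fn try_udp_first(&self) -> bool {
        !self.udp_disabled
    }
}
```

With the numbers in the diagram this is exactly the 3.7x win: queries 1-3 pay 800 ms of dead time plus 300 ms over TCP, query 4 onward pays only the 300 ms TCP exchange.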


@@ -67,6 +67,7 @@ body::before {
   font-weight: 400;
   color: var(--text-primary);
   text-decoration: none;
+  text-transform: none;
   letter-spacing: -0.02em;
 }
 .blog-nav .wordmark:hover { color: var(--amber); }
@@ -167,6 +168,13 @@ body::before {
 <main class="blog-index">
 <h1>Blog</h1>
 <ul class="post-list">
+<li>
+<a href="/blog/posts/dot-from-scratch.html">
+<div class="post-title">DNS-over-TLS from Scratch in Rust</div>
+<div class="post-desc">Building RFC 7858 on top of rustls — length-prefix framing, ALPN cross-protocol defense, iPhone dogfooding, and two bugs that only the strict clients caught.</div>
+<div class="post-date">April 2026</div>
+</a>
+</li>
 <li>
 <a href="/blog/posts/dnssec-from-scratch.html">
 <div class="post-title">Implementing DNSSEC from Scratch in Rust</div>

Binary file not shown (310 KiB).

@@ -288,6 +288,7 @@ body {
 .path-tag.SERVFAIL { background: rgba(181, 68, 58, 0.12); color: var(--rose); }
 .path-tag.BLOCKED { background: rgba(163, 152, 136, 0.15); color: var(--text-dim); }
 .path-tag.COALESCED { background: rgba(138, 104, 158, 0.12); color: var(--violet-dim); }
+.src-tag { font-size: 0.6rem; color: var(--text-dim); letter-spacing: 0.02em; }

 /* Sidebar panels */
 .sidebar {
@@ -553,6 +554,20 @@ body {
 <div class="tagline">DNS that governs itself</div>
 </div>
 <div style="display:flex;align-items:center;gap:1.2rem;">
+<div id="phoneSetup" style="position:relative;display:none;">
+<button class="btn" onclick="togglePhoneSetup()" style="background:var(--bg-surface);color:var(--text-secondary);font-family:var(--font-mono);font-size:0.7rem;padding:0.35rem 0.6rem;border:1px solid var(--border);" title="Set up phone">Phone Setup</button>
+<div id="phoneSetupPopover" style="display:none;position:absolute;top:calc(100% + 8px);right:0;z-index:100;background:var(--bg-card);border:1px solid var(--border);border-radius:10px;padding:1.2rem;width:260px;box-shadow:0 4px 20px rgba(0,0,0,0.08);">
+<div style="font-size:0.7rem;font-weight:600;text-transform:uppercase;letter-spacing:0.1em;color:var(--text-secondary);margin-bottom:0.8rem;">Phone Setup</div>
+<div id="qrContainer" style="display:flex;justify-content:center;margin-bottom:0.8rem;"></div>
+<div id="phoneSetupLink" style="display:none;text-align:center;margin-bottom:0.8rem;"></div>
+<div style="font-family:var(--font-mono);font-size:0.62rem;color:var(--text-dim);line-height:1.6;">
+1. Scan QR &rarr; allow download<br>
+2. Settings &rarr; Profile Downloaded &rarr; Install<br>
+3. Settings &rarr; General &rarr; About &rarr;<br>
+&nbsp;&nbsp;&nbsp;Certificate Trust Settings &rarr; toggle ON
+</div>
+</div>
+</div>
 <button class="btn" id="pauseBtn" style="background:var(--amber);color:white;font-family:var(--font-mono);font-size:0.7rem;display:none;">Pause 5m</button>
 <button class="btn" id="toggleBtn" onclick="toggleBlocking()" style="background:var(--rose);color:white;font-family:var(--font-mono);font-size:0.7rem;display:none;"></button>
 <div class="status-badge">
@@ -787,6 +802,41 @@ function formatTime(epoch) {
   return d.toLocaleTimeString([], { hour12: false });
 }

+let mobilePort = 8765;
+function togglePhoneSetup() {
+  const pop = document.getElementById('phoneSetupPopover');
+  const isOpen = pop.style.display !== 'none';
+  pop.style.display = isOpen ? 'none' : 'block';
+  if (!isOpen) {
+    if (window.innerWidth <= 700) {
+      document.getElementById('qrContainer').style.display = 'none';
+      const linkEl = document.getElementById('phoneSetupLink');
+      const host = window.location.hostname;
+      linkEl.style.display = 'block';
+      linkEl.innerHTML = `<a href="http://${host}:${mobilePort}/mobileconfig" style="display:inline-block;padding:0.5rem 1rem;background:var(--amber);color:white;border-radius:6px;text-decoration:none;font-family:var(--font-mono);font-size:0.75rem;">Install Profile</a>`;
+    } else {
+      fetch(API + '/qr').then(r => r.text()).then(svg => {
+        document.getElementById('qrContainer').innerHTML = svg;
+      }).catch(() => {
+        document.getElementById('qrContainer').innerHTML = '<div class="empty-state">Could not load QR</div>';
+      });
+    }
+  }
+}
+document.addEventListener('click', (e) => {
+  const setup = document.getElementById('phoneSetup');
+  if (setup && !setup.contains(e.target)) {
+    document.getElementById('phoneSetupPopover').style.display = 'none';
+  }
+});
+function shortSrc(addr) {
+  if (!addr) return '';
+  const ip = addr.replace(/:\d+$/, '');
+  if (ip === '127.0.0.1' || ip === '::1') return 'localhost';
+  return ip;
+}
 function formatRemaining(secs) {
   if (secs == null) return 'permanent';
   if (secs < 60) return `${secs}s left`;
@@ -912,8 +962,8 @@ function applyLogFilter() {
     ? ` <button class="btn-delete" onclick="allowDomain('${e.domain}')" title="Allow this domain" style="color:var(--emerald);font-size:0.65rem;">allow</button>`
     : '';
   return `
-    <tr>
-      <td>${formatTime(e.timestamp_epoch)}</td>
+    <tr title="Source: ${e.src || 'unknown'}">
+      <td>${formatTime(e.timestamp_epoch)}<br><span class="src-tag">${shortSrc(e.src)}</span></td>
       <td>${e.query_type}</td>
       <td class="domain-cell" title="${e.domain}">${e.domain}${allowBtn}</td>
       <td><span class="path-tag ${e.path}">${e.path}</span></td>
@@ -1050,6 +1100,14 @@ async function refresh() {
     }
   }

+  const phoneSetupEl = document.getElementById('phoneSetup');
+  if (stats.mobile && stats.mobile.enabled) {
+    phoneSetupEl.style.display = '';
+    mobilePort = stats.mobile.port;
+  } else {
+    phoneSetupEl.style.display = 'none';
+  }
   document.getElementById('overrideCount').textContent = stats.overrides.active;
   document.getElementById('blockedCount').textContent = formatNumber(q.blocked);
   const bl = stats.blocking;


@@ -188,11 +188,50 @@ p.lead {
   line-height: 1.8;
 }

+/* ===========================
+   TOP NAV
+   =========================== */
+.site-nav {
+  padding: 1.5rem 2rem;
+  display: flex;
+  align-items: center;
+  gap: 1.5rem;
+  position: relative;
+  z-index: 10;
+}
+.site-nav a {
+  font-family: var(--font-mono);
+  font-size: 0.75rem;
+  letter-spacing: 0.08em;
+  text-transform: uppercase;
+  color: var(--text-dim);
+  text-decoration: none;
+  transition: color 0.2s ease;
+}
+.site-nav a:hover { color: var(--amber); }
+.site-nav .wordmark {
+  font-family: var(--font-display);
+  font-size: 1.4rem;
+  font-weight: 400;
+  color: var(--text-primary);
+  text-transform: none;
+  letter-spacing: -0.02em;
+}
+.site-nav .wordmark:hover { color: var(--amber); }
+.site-nav .sep {
+  color: var(--text-dim);
+  font-family: var(--font-mono);
+  font-size: 0.75rem;
+}
+
 /* ===========================
    HERO
    =========================== */
 .hero {
-  min-height: 100vh;
+  min-height: calc(100vh - 5rem);
   display: flex;
   align-items: center;
   position: relative;
@@ -1158,6 +1197,9 @@ footer .closing {
 @media (max-width: 600px) {
   section { padding: 4rem 0; }
   .container { padding: 0 1.25rem; }
+  .site-nav { padding: 1rem 1.25rem; gap: 1rem; }
+  .site-nav .wordmark { font-size: 1.2rem; }
+  .hero { min-height: calc(100vh - 4rem); }
   .network-grid { grid-template-columns: 1fr; }
   .pipeline { flex-direction: column; align-items: stretch; gap: 0; }
   .pipeline-arrow { transform: rotate(90deg); padding: 0.15rem 0; align-self: center; }
@@ -1171,6 +1213,14 @@ footer .closing {
 </head>
 <body>
+<nav class="site-nav">
+<a href="/" class="wordmark">Numa</a>
+<span class="sep">/</span>
+<a href="/blog/">Blog</a>
+<span class="sep">/</span>
+<a href="https://github.com/razvandimescu/numa" target="_blank" rel="noopener">GitHub</a>
+</nav>
 <!-- ==================== HERO ==================== -->
 <section class="hero">
 <div class="roman-bricks" aria-hidden="true"></div>
@@ -1243,6 +1293,8 @@ footer .closing {
 <li>Ad &amp; tracker blocking &mdash; 385K+ domains, zero config</li>
 <li>Recursive resolution &mdash; opt-in, resolve from root nameservers, no upstream needed</li>
 <li>DNSSEC validation &mdash; chain-of-trust + NSEC/NSEC3 denial proofs (RSA, ECDSA, Ed25519)</li>
+<li>DNS-over-TLS listener &mdash; encrypted DNS for phones and strict clients (RFC 7858 with ALPN defense)</li>
+<li>Hostile-network resilience &mdash; TCP fallback with UDP auto-disable when ISPs block port 53</li>
 <li>TTL-aware caching (sub-ms lookups)</li>
 <li>Single binary, portable &mdash; macOS, Linux, and Windows</li>
 </ul>
@@ -1261,7 +1313,7 @@ footer .closing {
 </ul>
 </div>
 <div class="layer-card reveal reveal-delay-3">
-<div class="layer-badge">Coming Next</div>
+<div class="layer-badge">The Vision</div>
 <h3>Self-Sovereign DNS</h3>
 <ul>
 <li>pkarr integration &mdash; DNS via Mainline DHT, no registrar needed</li>
@@ -1342,6 +1394,14 @@ footer .closing {
 <td class="cross">No</td>
 <td class="check">Root hints + full DNSSEC</td>
 </tr>
+<tr>
+<td>DNSSEC validation</td>
+<td class="muted">Passthrough</td>
+<td class="muted">Cloud only</td>
+<td class="muted">Cloud only</td>
+<td class="muted">Passthrough</td>
+<td class="check">Full chain-of-trust</td>
+</tr>
 <tr>
 <td>Ad &amp; tracker blocking</td>
 <td class="check">Yes</td>
@@ -1398,6 +1458,14 @@ footer .closing {
 <td class="cross">No</td>
 <td class="check">Built in (HTTP/2 + rustls)</td>
 </tr>
+<tr>
+<td>DNS-over-TLS listener</td>
+<td class="cross">No</td>
+<td class="muted">Cloud only</td>
+<td class="muted">Cloud only</td>
+<td class="check">Yes (cert required)</td>
+<td class="check">Self-signed or BYO</td>
+</tr>
 <tr>
 <td>Conditional forwarding</td>
 <td class="cross">No</td>
@@ -1567,11 +1635,14 @@ footer .closing {
 <dt>Resolution Modes</dt>
 <dd>Recursive (iterative from root hints, CNAME chasing, glue extraction) or Forward (DoH / plain UDP)</dd>
+<dt>Listeners</dt>
+<dd>UDP:53 + TCP:53 (plain DNS), DoT:853 (RFC 7858 + ALPN), HTTP proxy :80 / HTTPS proxy :443, dashboard :5380</dd>
 <dt>DNSSEC</dt>
 <dd>Chain-of-trust via ring &mdash; RSA/SHA-256, ECDSA P-256, Ed25519. NSEC/NSEC3 denial proofs. EDNS0 DO bit, 1232-byte payload (DNS Flag Day 2020).</dd>
 <dt>Dependencies</dt>
-<dd>19 runtime crates &mdash; tokio, axum, hyper, ring (DNSSEC), reqwest (DoH), rcgen + rustls (TLS), socket2 (multicast), serde, and more</dd>
+<dd>A focused set &mdash; tokio, axum, hyper, ring (DNSSEC), reqwest (DoH), rcgen + rustls + tokio-rustls (TLS/DoT), socket2 (multicast), serde. No transitive DNS library.</dd>
 <dt>Packet Format</dt>
 <dd>RFC 1035 compliant. EDNS0 OPT pseudo-record. Parses A, AAAA, NS, CNAME, MX, SOA, SRV, HTTPS, DNSKEY, DS, RRSIG, NSEC, NSEC3.</dd>
@@ -1586,7 +1657,7 @@ footer .closing {
 <span class="prompt">$</span> <span class="cmd">curl</span> <span class="flag">-fsSL</span> https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh <span class="flag">|</span> <span class="cmd">sh</span>
 <span class="comment"># Run</span>
-<span class="prompt">$</span> <span class="cmd">sudo numa</span> <span class="comment"># bind to :53, :80, :5380</span>
+<span class="prompt">$</span> <span class="cmd">sudo numa</span> <span class="comment"># bind :53, :80, :443, :853, :5380</span>
 <span class="prompt">$</span> <span class="cmd">dig</span> <span class="flag">@127.0.0.1</span> google.com <span class="comment"># test resolution</span>
 <span class="prompt">$</span> <span class="cmd">open</span> http://localhost:5380 <span class="comment"># dashboard</span>
 <span class="prompt">$</span> <span class="cmd">curl</span> <span class="flag">-X POST</span> localhost:5380/services \
@@ -1639,16 +1710,28 @@ footer .closing {
 <span class="phase">Phase 7</span>
 <span class="phase-desc">DNSSEC validation &mdash; chain-of-trust, NSEC/NSEC3 denial proofs, RSA + ECDSA + Ed25519</span>
 </div>
-<div class="roadmap-item phase-teal">
+<div class="roadmap-item done">
 <span class="phase">Phase 8</span>
+<span class="phase-desc">Hostile-network resilience &mdash; TCP fallback with UDP auto-disable when ISPs block :53, RFC 7816 query minimization</span>
+</div>
+<div class="roadmap-item done">
+<span class="phase">Phase 9</span>
+<span class="phase-desc">Windows support &mdash; cross-platform install/uninstall, <code>netsh</code> DNS config, service integration</span>
+</div>
+<div class="roadmap-item done">
+<span class="phase">Phase 10</span>
+<span class="phase-desc">DNS-over-TLS listener (RFC 7858) &mdash; ALPN enforcement, persistent connections, self-signed or BYO cert</span>
+</div>
+<div class="roadmap-item phase-teal">
+<span class="phase">Phase 11</span>
 <span class="phase-desc">pkarr integration &mdash; self-sovereign DNS via Mainline DHT, no registrar needed</span>
 </div>
 <div class="roadmap-item phase-teal">
-<span class="phase">Phase 9</span>
+<span class="phase">Phase 12</span>
 <span class="phase-desc">Global .numa names &mdash; self-publish, DHT-backed, first-come-first-served</span>
 </div>
 <div class="roadmap-item phase-teal">
-<span class="phase">Phase 10</span>
+<span class="phase">Phase 13</span>
 <span class="phase-desc">.onion bridge &mdash; human-readable Tor naming via Ed25519 same-key binding</span>
 </div>
 </div>


@@ -57,6 +57,7 @@ pub fn router(ctx: Arc<ServerCtx>) -> Router {
         .route("/services/{name}/routes", post(add_route))
         .route("/services/{name}/routes", delete(remove_route))
         .route("/ca.pem", get(serve_ca))
+        .route("/qr", get(serve_qr))
         .route("/fonts/fonts.css", get(serve_fonts_css))
         .route(
             "/fonts/dm-sans-latin.woff2",
@@ -170,9 +171,16 @@ struct StatsResponse {
     overrides: OverrideStats,
     blocking: BlockingStatsResponse,
     lan: LanStatsResponse,
+    mobile: MobileStatsResponse,
     memory: MemoryStats,
 }

+#[derive(Serialize)]
+struct MobileStatsResponse {
+    enabled: bool,
+    port: u16,
+}
+
 #[derive(Serialize)]
 struct LanStatsResponse {
     enabled: bool,
@@ -403,9 +411,12 @@
     }

     // Check upstream (async, no locks held)
-    let upstream = ctx.upstream.lock().unwrap().clone();
-    let (upstream_matched, upstream_detail) =
-        forward_query_for_diagnose(&domain_lower, &upstream, ctx.timeout).await;
+    let upstream = ctx.upstream_pool.lock().unwrap().preferred().cloned();
+    let (upstream_matched, upstream_detail) = if let Some(ref u) = upstream {
+        forward_query_for_diagnose(&domain_lower, u, ctx.timeout).await
+    } else {
+        (false, "no upstream configured".to_string())
+    };
     steps.push(DiagnoseStep {
         source: "upstream".to_string(),
         matched: upstream_matched,
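The `UpstreamPool` type these hunks call into is defined outside this diff; only `preferred()`, `label()`, and a two-vec constructor are visible from the call sites. A minimal self-contained sketch consistent with those call sites follows. All names and the primaries/fallbacks split are assumptions drawn from the visible calls, not Numa's actual implementation:

```rust
/// Illustrative upstream variants (the real enum also has a DoH form per
/// the config docs; the exact shape here is assumed).
#[derive(Clone, Debug)]
enum Upstream {
    Udp(std::net::SocketAddr),
    Doh(String),
}

struct UpstreamPool {
    primaries: Vec<Upstream>,
    fallbacks: Vec<Upstream>,
}

impl UpstreamPool {
    fn new(primaries: Vec<Upstream>, fallbacks: Vec<Upstream>) -> Self {
        Self { primaries, fallbacks }
    }

    /// The upstream the forwarder would try first, if any.
    fn preferred(&self) -> Option<&Upstream> {
        self.primaries.first().or_else(|| self.fallbacks.first())
    }

    /// Human-readable summary for the dashboard stats endpoint.
    fn label(&self) -> String {
        let total = self.primaries.len() + self.fallbacks.len();
        match self.preferred() {
            Some(Upstream::Udp(addr)) if total == 1 => addr.to_string(),
            Some(Upstream::Doh(url)) if total == 1 => url.clone(),
            Some(_) => format!("{total} upstreams"),
            None => "none".to_string(),
        }
    }
}
```

This shape explains why the diagnose path now has to handle `None`: a pool, unlike the old single `Upstream`, can legitimately be empty.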
@@ -512,7 +523,7 @@ async fn stats(State(ctx): State<Arc<ServerCtx>>) -> Json<StatsResponse> {
     let upstream = if ctx.upstream_mode == crate::config::UpstreamMode::Recursive {
         "recursive (root hints)".to_string()
     } else {
-        ctx.upstream.lock().unwrap().to_string()
+        ctx.upstream_pool.lock().unwrap().label()
     };

     Json(StatsResponse {
@@ -551,6 +562,10 @@
             enabled: ctx.lan_enabled,
             peers: ctx.lan_peers.lock().unwrap().list().len(),
         },
+        mobile: MobileStatsResponse {
+            enabled: ctx.mobile_enabled,
+            port: ctx.mobile_port,
+        },
         memory: MemoryStats {
             cache_bytes,
             blocklist_bytes,
@@ -592,8 +607,19 @@ async fn flush_cache_domain(
     StatusCode::NO_CONTENT
 }

-async fn health() -> Json<serde_json::Value> {
-    Json(serde_json::json!({ "status": "ok" }))
+/// Enriched `/health` handler shared between the main API and the mobile API.
+///
+/// Returns the cached `HealthMeta` assembled with live fields (LAN IP,
+/// uptime). Backward compatible with the previous minimal response in
+/// that `status` is still the first field and `"ok"` is still the value.
+/// The iOS companion app's `HealthInfo` Swift struct decodes the full
+/// response; any HTTP client asserting only on `"status"` keeps working.
+pub async fn health(State(ctx): State<Arc<ServerCtx>>) -> Json<crate::health::HealthResponse> {
+    let lan_ip = Some(*ctx.lan_ip.lock().unwrap());
+    Json(crate::health::HealthResponse::build(
+        &ctx.health_meta,
+        lan_ip,
+    ))
 }

 // --- Blocking handlers ---
@@ -905,12 +931,8 @@ async fn remove_route(
} }
} }
async fn serve_ca(State(ctx): State<Arc<ServerCtx>>) -> Result<impl IntoResponse, StatusCode> { pub async fn serve_ca(State(ctx): State<Arc<ServerCtx>>) -> Result<impl IntoResponse, StatusCode> {
let ca_path = ctx.data_dir.join("ca.pem"); let pem = ctx.ca_pem.as_deref().ok_or(StatusCode::NOT_FOUND)?;
let bytes = tokio::task::spawn_blocking(move || std::fs::read(ca_path))
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
.map_err(|_| StatusCode::NOT_FOUND)?;
Ok(( Ok((
[ [
(header::CONTENT_TYPE, "application/x-pem-file"), (header::CONTENT_TYPE, "application/x-pem-file"),
@@ -920,7 +942,29 @@ async fn serve_ca(State(ctx): State<Arc<ServerCtx>>) -> Result<impl IntoResponse
), ),
(header::CACHE_CONTROL, "public, max-age=86400"), (header::CACHE_CONTROL, "public, max-age=86400"),
], ],
bytes, pem.to_string(),
))
}
async fn serve_qr(State(ctx): State<Arc<ServerCtx>>) -> Result<impl IntoResponse, StatusCode> {
if !ctx.mobile_enabled {
return Err(StatusCode::NOT_FOUND);
}
let lan_ip = *ctx.lan_ip.lock().unwrap();
let url = format!("http://{}:{}/mobileconfig", lan_ip, ctx.mobile_port);
let code = qrcode::QrCode::new(&url).map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
let svg = code
.render::<qrcode::render::svg::Color>()
.min_dimensions(180, 180)
.dark_color(qrcode::render::svg::Color("#2c2418"))
.light_color(qrcode::render::svg::Color("#faf7f2"))
.build();
Ok((
[
(header::CONTENT_TYPE, "image/svg+xml"),
(header::CACHE_CONTROL, "no-store"),
],
svg,
    ))
}
@@ -975,8 +1019,11 @@ mod tests {
            services: Mutex::new(crate::service_store::ServiceStore::new()),
            lan_peers: Mutex::new(crate::lan::PeerStore::new(90)),
            forwarding_rules: Vec::new(),
-            upstream: Mutex::new(crate::forward::Upstream::Udp(
+            upstream_pool: Mutex::new(crate::forward::UpstreamPool::new(
+                vec![crate::forward::Upstream::Udp(
                    "127.0.0.1:53".parse().unwrap(),
+                )],
+                vec![],
            )),
            upstream_auto: false,
            upstream_port: 53,
@@ -996,6 +1043,10 @@ mod tests {
            inflight: Mutex::new(std::collections::HashMap::new()),
            dnssec_enabled: false,
            dnssec_strict: false,
health_meta: crate::health::HealthMeta::test_fixture(),
ca_pem: None,
mobile_enabled: false,
mobile_port: 8765,
        })
    }


@@ -81,66 +81,70 @@ impl BlocklistStore {
        if !self.enabled {
            return false;
        }
        if let Some(until) = self.paused_until {
            if Instant::now() < until {
                return false;
            }
        }
+        let domain = Self::normalize(domain);
-        if self.allowlist.contains(domain) {
+        if Self::find_in_set(&domain, &self.allowlist).is_some() {
            return false;
        }
-        if self.domains.contains(domain) {
-            return true;
-        }
-        // Walk up: ads.tracker.example.com → tracker.example.com → example.com
-        let mut d = domain;
-        while let Some(dot) = d.find('.') {
-            d = &d[dot + 1..];
-            if self.allowlist.contains(d) {
-                return false;
-            }
-            if self.domains.contains(d) {
-                return true;
-            }
-        }
-        false
+        Self::find_in_set(&domain, &self.domains).is_some()
    }
    /// Check if a domain is blocked and return the reason.
    pub fn check(&self, domain: &str) -> BlockCheckResult {
-        let domain = domain.to_lowercase();
        if !self.enabled {
            return BlockCheckResult::disabled();
        }
-        if self.allowlist.contains(&domain) {
-            return BlockCheckResult::allowed(&domain, "exact match in allowlist");
-        }
-        if self.domains.contains(&domain) {
-            return BlockCheckResult::blocked(&domain, "exact match in blocklist");
-        }
-        let mut d = domain.as_str();
-        while let Some(dot) = d.find('.') {
-            d = &d[dot + 1..];
-            if self.allowlist.contains(d) {
-                return BlockCheckResult::allowed(d, "parent domain in allowlist");
-            }
-            if self.domains.contains(d) {
-                return BlockCheckResult::blocked(d, "parent domain in blocklist");
-            }
-        }
+        if let Some(until) = self.paused_until {
+            if Instant::now() < until {
+                return BlockCheckResult::disabled();
+            }
+        }
+        let domain = Self::normalize(domain);
+        if let Some(matched) = Self::find_in_set(&domain, &self.allowlist) {
+            let reason = if matched == domain {
+                "exact match in allowlist"
+            } else {
+                "parent domain in allowlist"
+            };
+            return BlockCheckResult::allowed(matched, reason);
+        }
+        if let Some(matched) = Self::find_in_set(&domain, &self.domains) {
+            let reason = if matched == domain {
+                "exact match in blocklist"
+            } else {
+                "parent domain in blocklist"
+            };
+            return BlockCheckResult::blocked(matched, reason);
+        }
        BlockCheckResult::not_blocked()
    }
fn normalize(domain: &str) -> String {
domain.to_lowercase().trim_end_matches('.').to_string()
}
fn find_in_set<'a>(domain: &'a str, set: &HashSet<String>) -> Option<&'a str> {
if set.contains(domain) {
return Some(domain);
}
let mut d = domain;
while let Some(dot) = d.find('.') {
d = &d[dot + 1..];
if set.contains(d) {
return Some(d);
}
}
None
}
    /// Atomically swap in a new domain set. Build the set outside the lock,
    /// then call this to swap — keeps lock hold time sub-microsecond.
    pub fn swap_domains(&mut self, domains: HashSet<String>, sources: Vec<String>) {
@@ -172,11 +176,11 @@ impl BlocklistStore {
    }
    pub fn add_to_allowlist(&mut self, domain: &str) {
-        self.allowlist.insert(domain.to_lowercase());
+        self.allowlist.insert(Self::normalize(domain));
    }
    pub fn remove_from_allowlist(&mut self, domain: &str) -> bool {
-        self.allowlist.remove(&domain.to_lowercase())
+        self.allowlist.remove(&Self::normalize(domain))
    }
    pub fn allowlist(&self) -> Vec<String> {
@@ -247,6 +251,97 @@ pub fn parse_blocklist(text: &str) -> HashSet<String> {
mod tests {
    use super::*;
fn store_with(domains: &[&str], allowlist: &[&str]) -> BlocklistStore {
let mut store = BlocklistStore::new();
store.swap_domains(domains.iter().map(|s| s.to_string()).collect(), vec![]);
for d in allowlist {
store.add_to_allowlist(d);
}
store
}
#[test]
fn exact_block() {
let store = store_with(&["ads.example.com"], &[]);
assert!(store.is_blocked("ads.example.com"));
assert!(!store.is_blocked("example.com"));
}
#[test]
fn parent_block_covers_subdomain() {
let store = store_with(&["tracker.com"], &[]);
assert!(store.is_blocked("tracker.com"));
assert!(store.is_blocked("www.tracker.com"));
assert!(store.is_blocked("deep.sub.tracker.com"));
}
#[test]
fn exact_allowlist_unblocks() {
let store = store_with(&["ads.example.com"], &["ads.example.com"]);
assert!(!store.is_blocked("ads.example.com"));
}
#[test]
fn parent_allowlist_unblocks_subdomain() {
let store = store_with(&["example.com", "www.example.com"], &["example.com"]);
assert!(!store.is_blocked("example.com"));
assert!(!store.is_blocked("www.example.com"));
assert!(!store.is_blocked("sub.deep.example.com"));
}
#[test]
fn allowlist_does_not_unblock_sibling() {
let store = store_with(
&["www.example.com", "ads.example.com"],
&["www.example.com"],
);
assert!(!store.is_blocked("www.example.com"));
assert!(store.is_blocked("ads.example.com"));
}
#[test]
fn check_reports_parent_allowlist() {
let store = store_with(
&["goatcounter.com", "www.goatcounter.com"],
&["goatcounter.com"],
);
let result = store.check("www.goatcounter.com");
assert!(!result.blocked);
assert_eq!(result.matched_rule.as_deref(), Some("goatcounter.com"));
}
#[test]
fn disabled_never_blocks() {
let mut store = store_with(&["ads.example.com"], &[]);
store.set_enabled(false);
assert!(!store.is_blocked("ads.example.com"));
}
#[test]
fn trailing_dot_normalized() {
let store = store_with(&["ads.example.com"], &["safe.example.com"]);
assert!(store.is_blocked("ads.example.com."));
assert!(!store.is_blocked("safe.example.com."));
let result = store.check("ads.example.com.");
assert!(result.blocked);
}
#[test]
fn case_insensitive() {
let store = store_with(&["ads.example.com"], &["safe.example.com"]);
assert!(store.is_blocked("ADS.Example.COM"));
assert!(!store.is_blocked("Safe.Example.COM"));
}
#[test]
fn domain_in_neither_list() {
let store = store_with(&["ads.example.com"], &[]);
let result = store.check("clean.example.org");
assert!(!result.blocked);
assert_eq!(result.reason, "not in blocklist");
assert!(result.matched_rule.is_none());
}
    #[test]
    fn heap_bytes_grows_with_domains() {
        let mut store = BlocklistStore::new();

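The suffix walk that powers both the blocklist and allowlist lookups above can be illustrated standalone. This is a minimal sketch mirroring the `find_in_set` helper added in this commit, not the crate's API; the domain strings are examples only:

```rust
use std::collections::HashSet;

// Walk from the full domain up through each parent suffix:
// ads.sub.tracker.com → sub.tracker.com → tracker.com → com
fn find_in_set<'a>(domain: &'a str, set: &HashSet<String>) -> Option<&'a str> {
    if set.contains(domain) {
        return Some(domain);
    }
    let mut d = domain;
    while let Some(dot) = d.find('.') {
        d = &d[dot + 1..];
        if set.contains(d) {
            return Some(d);
        }
    }
    None
}

fn main() {
    let blocked: HashSet<String> = ["tracker.com".to_string()].into_iter().collect();
    // A blocked parent covers every subdomain.
    assert_eq!(find_in_set("ads.sub.tracker.com", &blocked), Some("tracker.com"));
    // Unrelated domains fall through to None.
    assert_eq!(find_in_set("example.org", &blocked), None);
    println!("ok");
}
```

Because the allowlist is consulted with the same helper before the blocklist, a parent-domain allow entry wins over a parent-domain block entry, which is what the `parent_allowlist_unblocks_subdomain` test exercises.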

@@ -84,6 +84,11 @@ impl BytePacketBuffer {
    /// Read a qname, handling label compression (pointer jumps).
    /// Converts wire format like [3]www[6]google[3]com[0] into "www.google.com".
///
/// Label bytes are escaped per RFC 1035 §5.1:
/// - literal `.` within a label → `\.`
/// - literal `\` → `\\`
/// - bytes outside `0x21..=0x7E` (excluding `.` and `\`) → `\DDD` (3-digit decimal)
    pub fn read_qname(&mut self, outstr: &mut String) -> Result<()> {
        let mut pos = self.pos();
        let mut jumped = false;
@@ -121,7 +126,18 @@ impl BytePacketBuffer {
                    let str_buffer = self.get_range(pos, len as usize)?;
                    for &b in str_buffer {
-                        outstr.push(b.to_ascii_lowercase() as char);
+                        let c = b.to_ascii_lowercase();
match c {
b'.' => outstr.push_str("\\."),
b'\\' => outstr.push_str("\\\\"),
0x21..=0x7E => outstr.push(c as char),
_ => {
outstr.push('\\');
outstr.push((b'0' + c / 100) as char);
outstr.push((b'0' + (c / 10) % 10) as char);
outstr.push((b'0' + c % 10) as char);
}
}
                    }
                    delim = ".";
@@ -163,24 +179,68 @@ impl BytePacketBuffer {
        Ok(())
    }
+    /// Write a qname in wire format, parsing RFC 1035 §5.1 text escapes.
+    /// See `read_qname` for the escape grammar.
    pub fn write_qname(&mut self, qname: &str) -> Result<()> {
        if qname.is_empty() || qname == "." {
            self.write_u8(0)?;
            return Ok(());
        }
-        for label in qname.split('.') {
-            let len = label.len();
-            if len == 0 {
-                continue; // skip empty labels from trailing dot
-            }
-            if len > 0x3f {
-                return Err("Single label exceeds 63 characters of length".into());
-            }
-            self.write_u8(len as u8)?;
-            for b in label.as_bytes() {
-                self.write_u8(*b)?;
-            }
-        }
+        let bytes = qname.as_bytes();
+        let mut i = 0;
+        while i < bytes.len() {
+            let len_pos = self.pos;
+            self.write_u8(0)?; // placeholder length byte, backpatched below
+            let body_start = self.pos;
+            while i < bytes.len() && bytes[i] != b'.' {
+                let b = bytes[i];
+                if b == b'\\' {
+                    i += 1;
+                    let c1 = *bytes.get(i).ok_or("trailing backslash in qname")?;
+                    if c1.is_ascii_digit() {
+                        let c2 = *bytes
+                            .get(i + 1)
+                            .ok_or("invalid \\DDD escape: expected 3 digits")?;
+                        let c3 = *bytes
+                            .get(i + 2)
+                            .ok_or("invalid \\DDD escape: expected 3 digits")?;
+                        if !c2.is_ascii_digit() || !c3.is_ascii_digit() {
+                            return Err("invalid \\DDD escape: expected 3 digits".into());
+                        }
+                        let val =
+                            (c1 - b'0') as u16 * 100 + (c2 - b'0') as u16 * 10 + (c3 - b'0') as u16;
+                        if val > 255 {
+                            return Err(format!("\\DDD escape out of range: {}", val).into());
+                        }
+                        self.write_u8(val as u8)?;
+                        i += 3;
+                    } else {
+                        // \. \\ and any other \X → literal next byte
+                        self.write_u8(c1)?;
+                        i += 1;
+                    }
+                } else {
+                    self.write_u8(b)?;
+                    i += 1;
+                }
+            }
+            if self.pos - body_start > 0x3f {
+                return Err("Single label exceeds 63 characters of length".into());
+            }
+            let label_len = self.pos - body_start;
+            if label_len == 0 && i < bytes.len() {
+                // Empty label from leading/consecutive dots — roll back the placeholder.
+                self.pos = len_pos;
+            } else {
+                self.set(len_pos, label_len as u8)?;
+            }
+            if i < bytes.len() && bytes[i] == b'.' {
+                i += 1;
+            }
+        }
@@ -212,3 +272,160 @@ impl BytePacketBuffer {
        Ok(())
    }
}
#[cfg(test)]
mod tests {
use super::*;
fn roundtrip(wire: &[u8]) -> String {
let mut buf = BytePacketBuffer::from_bytes(wire);
let mut out = String::new();
buf.read_qname(&mut out).unwrap();
out
}
fn write_then_read(text: &str) -> String {
let mut buf = BytePacketBuffer::new();
buf.write_qname(text).unwrap();
let wire_end = buf.pos();
buf.seek(0).unwrap();
let mut out = String::new();
buf.read_qname(&mut out).unwrap();
assert_eq!(
buf.pos(),
wire_end,
"reader should consume exactly what writer wrote"
);
out
}
#[test]
fn read_plain_domain() {
// [3]www[6]google[3]com[0]
let wire = b"\x03www\x06google\x03com\x00";
assert_eq!(roundtrip(wire), "www.google.com");
}
#[test]
fn read_label_with_literal_dot_is_escaped() {
// fanf2's example: [8]exa.mple[3]com[0] — two labels, first contains 0x2E
let wire = b"\x08exa.mple\x03com\x00";
assert_eq!(roundtrip(wire), "exa\\.mple.com");
}
#[test]
fn read_label_with_backslash_is_escaped() {
// [4]a\bc[3]com[0]
let wire = b"\x04a\\bc\x03com\x00";
assert_eq!(roundtrip(wire), "a\\\\bc.com");
}
#[test]
fn read_label_with_nonprintable_byte_uses_decimal_escape() {
// [4]\x00foo[3]com[0] — null byte at label start
let wire = b"\x04\x00foo\x03com\x00";
assert_eq!(roundtrip(wire), "\\000foo.com");
}
#[test]
fn read_label_with_space_uses_decimal_escape() {
// Space (0x20) is outside 0x21..=0x7E, so it must be decimal-escaped.
let wire = b"\x05a b c\x00";
assert_eq!(roundtrip(wire), "a\\032b\\032c");
}
#[test]
fn write_plain_domain() {
let mut buf = BytePacketBuffer::new();
buf.write_qname("www.google.com").unwrap();
assert_eq!(&buf.buf[..buf.pos], b"\x03www\x06google\x03com\x00");
}
#[test]
fn write_escaped_dot_does_not_split_label() {
let mut buf = BytePacketBuffer::new();
buf.write_qname("exa\\.mple.com").unwrap();
assert_eq!(&buf.buf[..buf.pos], b"\x08exa.mple\x03com\x00");
}
#[test]
fn write_escaped_backslash() {
let mut buf = BytePacketBuffer::new();
buf.write_qname("a\\\\bc.com").unwrap();
assert_eq!(&buf.buf[..buf.pos], b"\x04a\\bc\x03com\x00");
}
#[test]
fn write_decimal_escape_yields_raw_byte() {
let mut buf = BytePacketBuffer::new();
buf.write_qname("\\000foo.com").unwrap();
assert_eq!(&buf.buf[..buf.pos], b"\x04\x00foo\x03com\x00");
}
#[test]
fn write_skips_empty_labels() {
// Leading dot — first (empty) label is rolled back.
let mut buf = BytePacketBuffer::new();
buf.write_qname(".foo.com").unwrap();
assert_eq!(&buf.buf[..buf.pos], b"\x03foo\x03com\x00");
// Consecutive dots — middle empty label is rolled back.
let mut buf = BytePacketBuffer::new();
buf.write_qname("foo..com").unwrap();
assert_eq!(&buf.buf[..buf.pos], b"\x03foo\x03com\x00");
}
#[test]
fn write_rejects_out_of_range_decimal_escape() {
let mut buf = BytePacketBuffer::new();
assert!(buf.write_qname("\\999foo.com").is_err());
}
#[test]
fn write_rejects_trailing_backslash() {
let mut buf = BytePacketBuffer::new();
assert!(buf.write_qname("foo\\").is_err());
}
#[test]
fn write_rejects_short_decimal_escape() {
let mut buf = BytePacketBuffer::new();
assert!(buf.write_qname("\\1").is_err());
}
#[test]
fn write_rejects_label_over_63_bytes() {
// 64 bytes exceeds the wire-format label cap.
let mut buf = BytePacketBuffer::new();
assert!(buf.write_qname(&"a".repeat(64)).is_err());
// 63 bytes is the maximum permitted label length.
let mut buf = BytePacketBuffer::new();
assert!(buf.write_qname(&"a".repeat(63)).is_ok());
}
#[test]
fn roundtrip_preserves_dot_in_label() {
assert_eq!(write_then_read("exa\\.mple.com"), "exa\\.mple.com");
}
#[test]
fn roundtrip_preserves_backslash_in_label() {
assert_eq!(write_then_read("a\\\\b.com"), "a\\\\b.com");
}
#[test]
fn roundtrip_preserves_nonprintable_byte() {
assert_eq!(write_then_read("\\000foo.com"), "\\000foo.com");
}
#[test]
fn root_name_empty_and_dot_both_produce_single_zero() {
let mut a = BytePacketBuffer::new();
a.write_qname("").unwrap();
let mut b = BytePacketBuffer::new();
b.write_qname(".").unwrap();
assert_eq!(&a.buf[..a.pos], b"\x00");
assert_eq!(&b.buf[..b.pos], b"\x00");
}
}
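The escape grammar documented on `read_qname` can be shown in isolation. This is a hedged sketch of just the label-to-text direction (a hypothetical `escape_label` helper, not part of the crate), mirroring the rules in the doc comment above:

```rust
// RFC 1035 §5.1 presentation-format escaping for one raw label:
//   '.' → "\."   '\' → "\\"   printable 0x21..=0x7E → itself
//   anything else → "\DDD" (3-digit decimal)
fn escape_label(label: &[u8]) -> String {
    let mut out = String::new();
    for &b in label {
        match b {
            b'.' => out.push_str("\\."),
            b'\\' => out.push_str("\\\\"),
            0x21..=0x7E => out.push(b as char),
            _ => out.push_str(&format!("\\{:03}", b)),
        }
    }
    out
}

fn main() {
    // A literal dot inside a label must not look like a label separator.
    assert_eq!(escape_label(b"exa.mple"), "exa\\.mple");
    // Space (0x20) falls outside the printable range → decimal escape.
    assert_eq!(escape_label(b"a b"), "a\\032b");
    // A null byte at label start.
    assert_eq!(escape_label(&[0x00, b'f']), "\\000f");
    println!("ok");
}
```

This is the inverse of the `\DDD` parsing that `write_qname` performs, which is why the `write_then_read` round-trip tests above can assert byte-exact equality.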


@@ -82,6 +82,29 @@ impl DnsCache {
        Some((packet, entry.dnssec_status))
    }
pub fn ttl_remaining(&self, domain: &str, qtype: QueryType) -> Option<(u32, u32)> {
let type_map = self.entries.get(domain)?;
let entry = type_map.get(&qtype)?;
let elapsed = entry.inserted_at.elapsed();
if elapsed >= entry.ttl {
return None;
}
let total = entry.ttl.as_secs() as u32;
let remaining = (entry.ttl - elapsed).as_secs() as u32;
Some((remaining, total))
}
pub fn needs_warm(&self, domain: &str) -> bool {
for qtype in [QueryType::A, QueryType::AAAA] {
match self.ttl_remaining(domain, qtype) {
None => return true,
Some((remaining, total)) if remaining < total / 4 => return true,
_ => {}
}
}
false
}
    pub fn insert(&mut self, domain: &str, qtype: QueryType, packet: &DnsPacket) {
        self.insert_with_status(domain, qtype, packet, DnssecStatus::Indeterminate);
    }
@@ -233,4 +256,66 @@ mod tests {
        cache.insert("example.com", QueryType::A, &pkt);
        assert!(cache.heap_bytes() > empty);
    }
#[test]
fn ttl_remaining_returns_values_for_fresh_entry() {
let mut cache = DnsCache::new(100, 60, 3600);
let mut pkt = DnsPacket::new();
pkt.answers.push(DnsRecord::A {
domain: "example.com".into(),
addr: "1.2.3.4".parse().unwrap(),
ttl: 300,
});
cache.insert("example.com", QueryType::A, &pkt);
let (remaining, total) = cache.ttl_remaining("example.com", QueryType::A).unwrap();
assert_eq!(total, 300);
assert!(remaining <= 300);
assert!(remaining > 0);
}
#[test]
fn ttl_remaining_none_for_missing() {
let cache = DnsCache::new(100, 1, 3600);
assert!(cache.ttl_remaining("missing.com", QueryType::A).is_none());
}
#[test]
fn needs_warm_true_when_missing() {
let cache = DnsCache::new(100, 1, 3600);
assert!(cache.needs_warm("missing.com"));
}
#[test]
fn needs_warm_false_when_fresh() {
let mut cache = DnsCache::new(100, 1, 3600);
let mut pkt_a = DnsPacket::new();
pkt_a.answers.push(DnsRecord::A {
domain: "example.com".into(),
addr: "1.2.3.4".parse().unwrap(),
ttl: 300,
});
let mut pkt_aaaa = DnsPacket::new();
pkt_aaaa.answers.push(DnsRecord::AAAA {
domain: "example.com".into(),
addr: "::1".parse().unwrap(),
ttl: 300,
});
cache.insert("example.com", QueryType::A, &pkt_a);
cache.insert("example.com", QueryType::AAAA, &pkt_aaaa);
assert!(!cache.needs_warm("example.com"));
}
#[test]
fn needs_warm_true_when_only_a_cached() {
let mut cache = DnsCache::new(100, 1, 3600);
let mut pkt = DnsPacket::new();
pkt.answers.push(DnsRecord::A {
domain: "example.com".into(),
addr: "1.2.3.4".parse().unwrap(),
ttl: 300,
});
cache.insert("example.com", QueryType::A, &pkt);
// AAAA missing → needs warm
assert!(cache.needs_warm("example.com"));
}
}
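The refresh policy behind `needs_warm` above — re-resolve when an entry is missing, expired, or under a quarter of its TTL remains — can be sketched standalone. The `Entry` struct here is a hypothetical stand-in for the cache's internal entry type:

```rust
use std::time::{Duration, Instant};

// Hypothetical minimal cache entry: insertion time plus effective TTL.
struct Entry {
    inserted_at: Instant,
    ttl: Duration,
}

// Warm when the entry is absent, already expired, or in the last
// quarter of its lifetime (remaining < total / 4).
fn needs_warm(entry: Option<&Entry>) -> bool {
    match entry {
        None => true,
        Some(e) => {
            let elapsed = e.inserted_at.elapsed();
            if elapsed >= e.ttl {
                return true;
            }
            let total = e.ttl.as_secs();
            let remaining = (e.ttl - elapsed).as_secs();
            remaining < total / 4
        }
    }
}

fn main() {
    // Missing entry always triggers a warm-up fetch.
    assert!(needs_warm(None));
    // A just-inserted 300s entry has ~300s remaining, well above the 75s threshold.
    let fresh = Entry {
        inserted_at: Instant::now(),
        ttl: Duration::from_secs(300),
    };
    assert!(!needs_warm(Some(&fresh)));
    println!("ok");
}
```

The real `needs_warm` applies this per query type and warms if either the A or the AAAA entry fails the check, which is what the `needs_warm_true_when_only_a_cached` test pins down.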


@@ -31,6 +31,8 @@ pub struct Config {
    pub dnssec: DnssecConfig,
    #[serde(default)]
    pub dot: DotConfig,
+    #[serde(default)]
+    pub mobile: MobileConfig,
}
#[derive(Deserialize)]
@@ -95,10 +97,12 @@ impl UpstreamMode {
pub struct UpstreamConfig {
    #[serde(default)]
    pub mode: UpstreamMode,
-    #[serde(default = "default_upstream_addr")]
-    pub address: String,
+    #[serde(default, deserialize_with = "string_or_vec")]
+    pub address: Vec<String>,
    #[serde(default = "default_upstream_port")]
    pub port: u16,
+    #[serde(default)]
+    pub fallback: Vec<String>,
    #[serde(default = "default_timeout_ms")]
    pub timeout_ms: u64,
    #[serde(default = "default_root_hints")]
@@ -113,8 +117,9 @@ impl Default for UpstreamConfig {
    fn default() -> Self {
        UpstreamConfig {
            mode: UpstreamMode::default(),
-            address: default_upstream_addr(),
+            address: Vec::new(),
            port: default_upstream_port(),
+            fallback: Vec::new(),
            timeout_ms: default_timeout_ms(),
            root_hints: default_root_hints(),
            prime_tlds: default_prime_tlds(),
@@ -123,6 +128,33 @@ impl Default for UpstreamConfig {
    }
}
fn string_or_vec<'de, D>(deserializer: D) -> std::result::Result<Vec<String>, D::Error>
where
D: serde::Deserializer<'de>,
{
struct Visitor;
impl<'de> serde::de::Visitor<'de> for Visitor {
type Value = Vec<String>;
fn expecting(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
f.write_str("string or array of strings")
}
fn visit_str<E: serde::de::Error>(self, v: &str) -> std::result::Result<Self::Value, E> {
Ok(vec![v.to_string()])
}
fn visit_seq<A: serde::de::SeqAccess<'de>>(
self,
mut seq: A,
) -> std::result::Result<Self::Value, A::Error> {
let mut v = Vec::new();
while let Some(s) = seq.next_element::<String>()? {
v.push(s);
}
Ok(v)
}
}
deserializer.deserialize_any(Visitor)
}
fn default_true() -> bool {
    true
}
@@ -200,9 +232,6 @@ fn default_root_hints() -> Vec<String> {
    ]
}
-fn default_upstream_addr() -> String {
-    String::new() // empty = auto-detect from system resolver
-}
fn default_upstream_port() -> u16 {
    53
}
@@ -218,6 +247,8 @@ pub struct CacheConfig {
    pub min_ttl: u32,
    #[serde(default = "default_max_ttl")]
    pub max_ttl: u32,
+    #[serde(default)]
+    pub warm: Vec<String>,
}
impl Default for CacheConfig {
@@ -226,6 +257,7 @@ impl Default for CacheConfig {
            max_entries: default_max_entries(),
            min_ttl: default_min_ttl(),
            max_ttl: default_max_ttl(),
+            warm: Vec::new(),
        }
    }
}
@@ -379,7 +411,7 @@ pub struct DnssecConfig {
#[derive(Deserialize, Clone)]
pub struct DotConfig {
-    #[serde(default)]
+    #[serde(default = "default_dot_enabled")]
    pub enabled: bool,
    #[serde(default = "default_dot_port")]
    pub port: u16,
@@ -396,7 +428,7 @@ pub struct DotConfig {
impl Default for DotConfig {
    fn default() -> Self {
        DotConfig {
-            enabled: false,
+            enabled: default_dot_enabled(),
            port: default_dot_port(),
            bind_addr: default_dot_bind_addr(),
            cert_path: None,
@@ -405,6 +437,9 @@ impl Default for DotConfig {
        }
    }
}
fn default_dot_enabled() -> bool {
true
}
fn default_dot_port() -> u16 {
    853
}
@@ -412,6 +447,53 @@ fn default_dot_bind_addr() -> String {
    "0.0.0.0".to_string()
}
/// Configuration for the mobile API — a persistent HTTP listener that
/// serves a read-only subset of routes (`/health`, `/ca.pem`,
/// `/mobileconfig`, `/ca.mobileconfig`) on a LAN-reachable port, for
/// consumption by the iOS/Android companion apps.
///
/// Unlike the main API (port 5380, localhost-only by default, supports
/// state-mutating routes), the mobile API is safe to expose on the LAN
/// because every route is idempotent and read-only.
#[derive(Deserialize, Clone)]
pub struct MobileConfig {
/// If true, spawn the mobile API listener at startup. **Default false.**
/// Opt-in because the listener binds to the LAN by default and exposes
/// a few read-only endpoints to any device on the same network (`/health`,
/// `/ca.pem`, `/mobileconfig`, `/ca.mobileconfig`). None of those are
/// cryptographically sensitive (the CA private key is never served),
/// but users should enable this explicitly rather than have a new
/// LAN-reachable port appear after an upgrade.
#[serde(default)]
pub enabled: bool,
/// Port for the mobile API. Default 8765.
#[serde(default = "default_mobile_port")]
pub port: u16,
/// Bind address for the mobile API. Default "0.0.0.0" (all interfaces)
/// so phones on the LAN can reach it. Set to "127.0.0.1" to restrict
/// to localhost — useful if you're running behind another front-end.
#[serde(default = "default_mobile_bind_addr")]
pub bind_addr: String,
}
impl Default for MobileConfig {
fn default() -> Self {
MobileConfig {
enabled: false,
port: default_mobile_port(),
bind_addr: default_mobile_bind_addr(),
}
}
}
fn default_mobile_port() -> u16 {
8765
}
fn default_mobile_bind_addr() -> String {
"0.0.0.0".to_string()
}
#[cfg(test)]
mod tests {
    use super::*;
@@ -476,6 +558,33 @@ mod tests {
        assert!(config.services[0].routes[0].strip);
        assert!(!config.services[0].routes[1].strip); // default false
    }
#[test]
fn address_string_parses_to_vec() {
let config: Config = toml::from_str("[upstream]\naddress = \"1.2.3.4\"").unwrap();
assert_eq!(config.upstream.address, vec!["1.2.3.4"]);
}
#[test]
fn address_array_parses() {
let config: Config =
toml::from_str("[upstream]\naddress = [\"1.2.3.4\", \"5.6.7.8:5353\"]").unwrap();
assert_eq!(config.upstream.address, vec!["1.2.3.4", "5.6.7.8:5353"]);
}
#[test]
fn fallback_parses() {
let config: Config =
toml::from_str("[upstream]\nfallback = [\"8.8.8.8\", \"1.1.1.1\"]").unwrap();
assert_eq!(config.upstream.fallback, vec!["8.8.8.8", "1.1.1.1"]);
}
#[test]
fn empty_address_gives_empty_vec() {
let config: Config = toml::from_str("").unwrap();
assert!(config.upstream.address.is_empty());
assert!(config.upstream.fallback.is_empty());
}
}
pub struct ConfigLoad {

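Taken together, the `string_or_vec` deserializer and the new `fallback` and `warm` fields accept config like the following. This fragment is illustrative only — the addresses are placeholders, and it mirrors the shapes exercised by the `address_string_parses_to_vec`, `address_array_parses`, and `fallback_parses` tests above:

```toml
[upstream]
# A single string still works (back-compat)...
address = "1.1.1.1"
# ...or use the array form, optionally with explicit ports:
# address = ["1.1.1.1", "9.9.9.9:5353"]
fallback = ["8.8.8.8"]

[cache]
# Domains to proactively re-resolve before their TTL runs out.
warm = ["example.com"]
```

Omitting `address` entirely yields an empty `Vec`, which the `empty_address_gives_empty_vec` test confirms is the auto-detect path.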

@@ -16,8 +16,9 @@ use crate::blocklist::BlocklistStore;
use crate::buffer::BytePacketBuffer;
use crate::cache::{DnsCache, DnssecStatus};
use crate::config::{UpstreamMode, ZoneMap};
-use crate::forward::{forward_query, Upstream};
+use crate::forward::{forward_query, forward_with_failover, Upstream, UpstreamPool};
use crate::header::ResultCode;
+use crate::health::HealthMeta;
use crate::lan::PeerStore;
use crate::override_store::OverrideStore;
use crate::packet::DnsPacket;
@@ -41,7 +42,7 @@ pub struct ServerCtx {
    pub services: Mutex<ServiceStore>,
    pub lan_peers: Mutex<PeerStore>,
    pub forwarding_rules: Vec<ForwardingRule>,
-    pub upstream: Mutex<Upstream>,
+    pub upstream_pool: Mutex<UpstreamPool>,
    pub upstream_auto: bool,
    pub upstream_port: u16,
    pub lan_ip: Mutex<std::net::Ipv4Addr>,
@@ -60,6 +61,17 @@ pub struct ServerCtx {
    pub inflight: Mutex<InflightMap>,
    pub dnssec_enabled: bool,
    pub dnssec_strict: bool,
/// Cached health metadata (version, hostname, DoT config, CA
/// fingerprint, features). Shared between the main and mobile
/// API `/health` handlers. Built once at startup in `main.rs`.
pub health_meta: HealthMeta,
/// CA certificate in PEM form, cached at startup. `None` if no
/// TLS-using feature is enabled and the CA hasn't been generated.
/// Used by `/ca.pem`, `/mobileconfig`, and `/ca.mobileconfig`
/// handlers to avoid per-request disk I/O on the hot path.
pub ca_pem: Option<String>,
pub mobile_enabled: bool,
pub mobile_port: u16,
}
/// Transport-agnostic DNS resolution. Runs the full pipeline (overrides, blocklist,
@@ -98,6 +110,10 @@ pub async fn resolve_query(
            300,
        ));
        (resp, QueryPath::Local, DnssecStatus::Indeterminate)
} else if let Some(records) = ctx.zone_map.get(qname.as_str()).and_then(|m| m.get(&qtype)) {
let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
resp.answers = records.clone();
(resp, QueryPath::Local, DnssecStatus::Indeterminate)
    } else if is_special_use_domain(&qname) {
        // RFC 6761/8880: private PTR, DDR, NAT64 — answer locally
        let resp = special_use_response(&query, &qname, qtype);
@@ -146,10 +162,6 @@ pub async fn resolve_query(
            60,
        ));
        (resp, QueryPath::Blocked, DnssecStatus::Indeterminate)
-    } else if let Some(records) = ctx.zone_map.get(qname.as_str()).and_then(|m| m.get(&qtype)) {
-        let mut resp = DnsPacket::response_from(&query, ResultCode::NOERROR);
-        resp.answers = records.clone();
-        (resp, QueryPath::Local, DnssecStatus::Indeterminate)
    } else {
        let cached = ctx.cache.read().unwrap().lookup_with_status(&qname, qtype);
        if let Some((cached, cached_dnssec)) = cached {
@@ -208,12 +220,8 @@ pub async fn resolve_query(
        }
        (resp, path, DnssecStatus::Indeterminate)
    } else {
-        let upstream =
-            match crate::system_dns::match_forwarding_rule(&qname, &ctx.forwarding_rules) {
-                Some(addr) => Upstream::Udp(addr),
-                None => ctx.upstream.lock().unwrap().clone(),
-            };
-        match forward_query(&query, &upstream, ctx.timeout).await {
+        let pool = ctx.upstream_pool.lock().unwrap().clone();
+        match forward_with_failover(&query, &pool, &ctx.srtt, ctx.timeout).await {
            Ok(resp) => {
                ctx.cache.write().unwrap().insert(&qname, qtype, &resp);
                (resp, QueryPath::Forwarded, DnssecStatus::Indeterminate)


@@ -5,6 +5,7 @@ use log::{debug, trace};
use ring::digest;
use ring::signature;
+use crate::buffer::BytePacketBuffer;
use crate::cache::{DnsCache, DnssecStatus};
use crate::packet::DnsPacket;
use crate::question::QueryType;
@@ -720,22 +721,29 @@ pub fn verify_ds(ds: &DnsRecord, dnskey: &DnsRecord, owner: &str) -> bool {
// -- Canonical wire format --
/// Encode a DNS name in canonical wire form per RFC 4034 §6.2:
/// uncompressed, with ASCII letters lowercased.
///
/// Lowercasing happens *after* escape resolution because `\065` yields
/// `'A'`, which canonical form must convert to `'a'`.
pub fn name_to_wire(name: &str) -> Vec<u8> {
-    let mut wire = Vec::with_capacity(name.len() + 2);
-    if name == "." || name.is_empty() {
-        wire.push(0);
-        return wire;
-    }
-    for label in name.split('.') {
-        if label.is_empty() {
-            continue;
-        }
-        wire.push(label.len() as u8);
-        for &b in label.as_bytes() {
-            wire.push(b.to_ascii_lowercase());
-        }
-    }
-    wire.push(0);
+    let mut buf = BytePacketBuffer::new();
+    buf.write_qname(name)
+        .expect("name_to_wire: input must parse as a valid DNS name");
+    let mut wire = buf.filled().to_vec();
+    let mut i = 0;
+    while i < wire.len() {
+        let label_len = wire[i] as usize;
+        if label_len == 0 {
+            break;
+        }
+        i += 1;
+        let end = i + label_len;
+        wire[i..end].make_ascii_lowercase();
+        i = end;
+    }
    wire
}
@@ -1475,6 +1483,23 @@ mod tests {
        );
    }
#[test]
fn name_to_wire_escaped_dot_in_label_is_not_a_separator() {
// `exa\.mple.com` is two labels: `exa.mple` (8 bytes including the 0x2E) and `com`.
let wire = name_to_wire("exa\\.mple.com");
assert_eq!(
wire,
vec![8, b'e', b'x', b'a', b'.', b'm', b'p', b'l', b'e', 3, b'c', b'o', b'm', 0]
);
}
#[test]
fn name_to_wire_decimal_escape_is_lowercased() {
// \065 = 'A', must become 'a' in canonical form.
let wire = name_to_wire("\\065bc.com");
assert_eq!(wire, vec![3, b'a', b'b', b'c', 3, b'c', b'o', b'm', 0]);
}
    #[test]
    fn parent_zone_cases() {
        assert_eq!(parent_zone("example.com"), "com");

src/doh.rs (new file, +188)

@@ -0,0 +1,188 @@
use std::net::SocketAddr;
use axum::body::Bytes;
use axum::extract::{Request, State};
use axum::response::{IntoResponse, Response};
use hyper::StatusCode;
use log::warn;
use crate::buffer::BytePacketBuffer;
use crate::ctx::{resolve_query, ServerCtx};
use crate::header::ResultCode;
use crate::packet::DnsPacket;
const MAX_DNS_MSG: usize = 4096;
const DOH_CONTENT_TYPE: &str = "application/dns-message";
pub async fn doh_post(State(state): State<super::proxy::DohState>, req: Request) -> Response {
let host = super::proxy::extract_host(&req);
if !is_doh_host(host.as_deref(), &state.ctx.proxy_tld) {
return StatusCode::NOT_FOUND.into_response();
}
let content_type = req
.headers()
.get(hyper::header::CONTENT_TYPE)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if !content_type.starts_with(DOH_CONTENT_TYPE) {
return StatusCode::UNSUPPORTED_MEDIA_TYPE.into_response();
}
let body = match axum::body::to_bytes(req.into_body(), MAX_DNS_MSG).await {
Ok(b) => b,
Err(_) => {
return (StatusCode::PAYLOAD_TOO_LARGE, "body exceeds 4096 bytes").into_response()
}
};
if body.is_empty() {
return (StatusCode::BAD_REQUEST, "empty body").into_response();
}
let src = state
.remote_addr
.unwrap_or_else(|| SocketAddr::from(([127, 0, 0, 1], 0)));
resolve_doh(&body, src, &state.ctx).await
}
fn is_doh_host(host: Option<&str>, tld: &str) -> bool {
match host {
Some(h) if h == tld => true,
Some(h) => {
h.len() == 2 * tld.len() + 1
&& h.starts_with(tld)
&& h.as_bytes().get(tld.len()) == Some(&b'.')
&& h.ends_with(tld)
}
None => false,
}
}
async fn resolve_doh(dns_bytes: &[u8], src: SocketAddr, ctx: &ServerCtx) -> Response {
let mut buffer = BytePacketBuffer::from_bytes(dns_bytes);
let query = match DnsPacket::from_buffer(&mut buffer) {
Ok(q) => q,
Err(e) => {
warn!("DoH: parse error from {}: {}", src, e);
let query_id = u16::from_be_bytes([
dns_bytes.first().copied().unwrap_or(0),
dns_bytes.get(1).copied().unwrap_or(0),
]);
let mut resp = DnsPacket::new();
resp.header.id = query_id;
resp.header.response = true;
resp.header.rescode = ResultCode::FORMERR;
return serialize_response(&resp);
}
};
let query_id = query.header.id;
let query_rd = query.header.recursion_desired;
let questions = query.questions.clone();
match resolve_query(query, src, ctx).await {
Ok(resp_buffer) => {
let min_ttl = extract_min_ttl(resp_buffer.filled());
dns_response(resp_buffer.filled(), min_ttl)
}
Err(e) => {
warn!("DoH: resolve error for {}: {}", src, e);
let mut resp = DnsPacket::new();
resp.header.id = query_id;
resp.header.response = true;
resp.header.recursion_desired = query_rd;
resp.header.recursion_available = true;
resp.header.rescode = ResultCode::SERVFAIL;
resp.questions = questions;
serialize_response(&resp)
}
}
}
fn extract_min_ttl(wire: &[u8]) -> u32 {
let mut buf = BytePacketBuffer::from_bytes(wire);
match DnsPacket::from_buffer(&mut buf) {
Ok(pkt) => pkt.answers.iter().map(|r| r.ttl()).min().unwrap_or(0),
Err(_) => 0,
}
}
fn dns_response(wire: &[u8], min_ttl: u32) -> Response {
(
StatusCode::OK,
[
(hyper::header::CONTENT_TYPE, DOH_CONTENT_TYPE),
(
hyper::header::CACHE_CONTROL,
&format!("max-age={}", min_ttl),
),
],
Bytes::copy_from_slice(wire),
)
.into_response()
}
fn serialize_response(pkt: &DnsPacket) -> Response {
let mut buf = BytePacketBuffer::new();
match pkt.write(&mut buf) {
Ok(_) => dns_response(buf.filled(), 0),
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::buffer::BytePacketBuffer;
use crate::header::ResultCode;
use crate::packet::DnsPacket;
use crate::record::DnsRecord;
#[test]
fn is_doh_host_matches_tld() {
assert!(is_doh_host(Some("numa"), "numa"));
assert!(is_doh_host(Some("numa.numa"), "numa"));
assert!(!is_doh_host(Some("foo.numa"), "numa"));
assert!(!is_doh_host(None, "numa"));
}
#[test]
fn extract_min_ttl_from_response() {
let mut pkt = DnsPacket::new();
pkt.header.response = true;
pkt.answers.push(DnsRecord::A {
domain: "example.com".to_string(),
addr: std::net::Ipv4Addr::new(1, 2, 3, 4),
ttl: 300,
});
pkt.answers.push(DnsRecord::A {
domain: "example.com".to_string(),
addr: std::net::Ipv4Addr::new(5, 6, 7, 8),
ttl: 60,
});
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
assert_eq!(extract_min_ttl(buf.filled()), 60);
}
#[test]
fn extract_min_ttl_no_answers() {
let mut pkt = DnsPacket::new();
pkt.header.response = true;
let mut buf = BytePacketBuffer::new();
pkt.write(&mut buf).unwrap();
assert_eq!(extract_min_ttl(buf.filled()), 0);
}
#[test]
fn serialize_formerr_response() {
let mut pkt = DnsPacket::new();
pkt.header.id = 0xABCD;
pkt.header.response = true;
pkt.header.rescode = ResultCode::FORMERR;
let resp = serialize_response(&pkt);
assert_eq!(resp.status(), StatusCode::OK);
}
}
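The handler caps the HTTP cache lifetime at the smallest answer TTL, per RFC 8484's guidance that a DoH response must not be cached longer than its DNS contents allow. The header derivation in isolation (plain std, no DNS types — a sketch, not the handler itself):

```rust
// Derive the Cache-Control value from answer TTLs: the smallest TTL
// bounds how long the whole HTTP response may be cached. No answers
// (e.g. NXDOMAIN or FORMERR) yields max-age=0.
fn cache_control_for(ttls: &[u32]) -> String {
    let min_ttl = ttls.iter().copied().min().unwrap_or(0);
    format!("max-age={}", min_ttl)
}

fn main() {
    assert_eq!(cache_control_for(&[300, 60]), "max-age=60");
    assert_eq!(cache_control_for(&[]), "max-age=0");
    println!("ok");
}
```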

View File

@@ -362,7 +362,10 @@ mod tests {
services: Mutex::new(crate::service_store::ServiceStore::new()),
lan_peers: Mutex::new(crate::lan::PeerStore::new(90)),
forwarding_rules: Vec::new(),
upstream_pool: Mutex::new(crate::forward::UpstreamPool::new(
vec![crate::forward::Upstream::Udp(upstream_addr)],
vec![],
)),
upstream_auto: false,
upstream_port: 53,
lan_ip: Mutex::new(std::net::Ipv4Addr::LOCALHOST),
@@ -381,6 +384,10 @@ mod tests {
inflight: Mutex::new(HashMap::new()),
dnssec_enabled: false,
dnssec_strict: false,
health_meta: crate::health::HealthMeta::test_fixture(),
ca_pem: None,
mobile_enabled: false,
mobile_port: 8765,
});
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();


@@ -1,12 +1,14 @@
use std::fmt;
use std::net::{IpAddr, SocketAddr};
use std::sync::RwLock;
use std::time::{Duration, Instant};
use tokio::net::UdpSocket;
use tokio::time::timeout;
use crate::buffer::BytePacketBuffer;
use crate::packet::DnsPacket;
use crate::srtt::SrttCache;
use crate::Result;
#[derive(Clone)]
@@ -37,6 +39,133 @@ impl fmt::Display for Upstream {
}
}
pub fn parse_upstream_addr(s: &str, default_port: u16) -> std::result::Result<SocketAddr, String> {
// Try full socket addr first: "1.2.3.4:5353" or "[::1]:5353"
if let Ok(addr) = s.parse::<SocketAddr>() {
return Ok(addr);
}
// Bare IP: "1.2.3.4" or "::1"
if let Ok(ip) = s.parse::<IpAddr>() {
return Ok(SocketAddr::new(ip, default_port));
}
Err(format!("invalid upstream address: {}", s))
}
pub fn parse_upstream(s: &str, default_port: u16) -> Result<Upstream> {
if s.starts_with("https://") {
let client = reqwest::Client::builder()
.use_rustls_tls()
.build()
.unwrap_or_default();
return Ok(Upstream::Doh {
url: s.to_string(),
client,
});
}
let addr = parse_upstream_addr(s, default_port)?;
Ok(Upstream::Udp(addr))
}
#[derive(Clone)]
pub struct UpstreamPool {
primary: Vec<Upstream>,
fallback: Vec<Upstream>,
}
impl UpstreamPool {
pub fn new(primary: Vec<Upstream>, fallback: Vec<Upstream>) -> Self {
Self { primary, fallback }
}
pub fn preferred(&self) -> Option<&Upstream> {
self.primary.first().or(self.fallback.first())
}
pub fn set_primary(&mut self, primary: Vec<Upstream>) {
self.primary = primary;
}
/// Update the primary upstream if `new_addr` (parsed with `port`) differs
/// from the current preferred upstream. Returns `true` if the pool changed.
pub fn maybe_update_primary(&mut self, new_addr: &str, port: u16) -> bool {
let Ok(new_sock) = format!("{}:{}", new_addr, port).parse::<SocketAddr>() else {
return false;
};
let new_upstream = Upstream::Udp(new_sock);
if self.preferred() == Some(&new_upstream) {
return false;
}
self.primary = vec![new_upstream];
true
}
pub fn label(&self) -> String {
match self.preferred() {
Some(u) => {
let total = self.primary.len() + self.fallback.len();
if total > 1 {
format!("{} (+{} more)", u, total - 1)
} else {
u.to_string()
}
}
None => "none".to_string(),
}
}
}
pub async fn forward_with_failover(
query: &DnsPacket,
pool: &UpstreamPool,
srtt: &RwLock<SrttCache>,
timeout_duration: Duration,
) -> Result<DnsPacket> {
// Build candidate list: primary (sorted by SRTT for UDP) then fallback
let mut candidates: Vec<(usize, u64)> = pool
.primary
.iter()
.enumerate()
.map(|(i, u)| {
let rtt = match u {
Upstream::Udp(addr) => srtt.read().unwrap().get(addr.ip()),
_ => 0, // DoH: keep config order (stable sort preserves it)
};
(i, rtt)
})
.collect();
candidates.sort_by_key(|&(_, rtt)| rtt);
let all_upstreams: Vec<&Upstream> = candidates
.iter()
.map(|&(i, _)| &pool.primary[i])
.chain(pool.fallback.iter())
.collect();
let mut last_err: Option<Box<dyn std::error::Error + Send + Sync>> = None;
for upstream in &all_upstreams {
let start = Instant::now();
match forward_query(query, upstream, timeout_duration).await {
Ok(resp) => {
if let Upstream::Udp(addr) = upstream {
let rtt_ms = start.elapsed().as_millis() as u64;
srtt.write().unwrap().record_rtt(addr.ip(), rtt_ms, false);
}
return Ok(resp);
}
Err(e) => {
if let Upstream::Udp(addr) = upstream {
srtt.write().unwrap().record_failure(addr.ip());
}
log::debug!("upstream {} failed: {}", upstream, e);
last_err = Some(e);
}
}
}
Err(last_err.unwrap_or_else(|| "no upstream configured".into()))
}
pub async fn forward_query(
query: &DnsPacket,
upstream: &Upstream,
@@ -271,4 +400,112 @@ mod tests {
let result = forward_query(&make_query(), &upstream, Duration::from_millis(100)).await;
assert!(result.is_err());
}
#[test]
fn parse_addr_ip_only() {
let addr = parse_upstream_addr("1.2.3.4", 53).unwrap();
assert_eq!(addr, "1.2.3.4:53".parse::<SocketAddr>().unwrap());
}
#[test]
fn parse_addr_ip_port() {
let addr = parse_upstream_addr("1.2.3.4:5353", 53).unwrap();
assert_eq!(addr, "1.2.3.4:5353".parse::<SocketAddr>().unwrap());
}
#[test]
fn parse_addr_ipv6_bracketed() {
let addr = parse_upstream_addr("[::1]:5553", 53).unwrap();
assert_eq!(addr, "[::1]:5553".parse::<SocketAddr>().unwrap());
}
#[test]
fn parse_addr_ipv6_bare() {
let addr = parse_upstream_addr("::1", 53).unwrap();
assert_eq!(addr, "[::1]:53".parse::<SocketAddr>().unwrap());
}
#[test]
fn pool_label_single() {
let pool = UpstreamPool::new(vec![Upstream::Udp("1.2.3.4:53".parse().unwrap())], vec![]);
assert_eq!(pool.label(), "1.2.3.4:53");
}
#[test]
fn pool_label_multi() {
let pool = UpstreamPool::new(
vec![Upstream::Udp("1.2.3.4:53".parse().unwrap())],
vec![Upstream::Udp("8.8.8.8:53".parse().unwrap())],
);
assert_eq!(pool.label(), "1.2.3.4:53 (+1 more)");
}
#[tokio::test]
async fn failover_tries_next_on_failure() {
// First upstream is unreachable, second responds
let query = make_query();
let response_bytes = to_wire(&make_response(&query));
let app = axum::Router::new().route(
"/dns-query",
axum::routing::post(move || {
let body = response_bytes.clone();
async move {
(
[(axum::http::header::CONTENT_TYPE, "application/dns-message")],
body,
)
}
}),
);
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let good_addr = listener.local_addr().unwrap();
tokio::spawn(axum::serve(listener, app).into_future());
// Unreachable UDP upstream + working DoH upstream
let pool = UpstreamPool::new(
vec![
Upstream::Udp("127.0.0.1:1".parse().unwrap()), // will fail
Upstream::Doh {
url: format!("http://{}/dns-query", good_addr),
client: reqwest::Client::new(),
},
],
vec![],
);
let srtt = RwLock::new(SrttCache::new(true));
let result = forward_with_failover(&query, &pool, &srtt, Duration::from_millis(500))
.await
.expect("should fail over to second upstream");
assert_eq!(result.header.id, 0xABCD);
assert_eq!(result.answers.len(), 1);
}
#[test]
fn maybe_update_primary_swaps_when_different() {
let mut pool = UpstreamPool::new(
vec![Upstream::Udp("1.2.3.4:53".parse().unwrap())],
vec![Upstream::Udp("8.8.8.8:53".parse().unwrap())],
);
assert!(pool.maybe_update_primary("5.6.7.8", 53));
assert_eq!(pool.preferred().unwrap().to_string(), "5.6.7.8:53");
}
#[test]
fn maybe_update_primary_noop_when_same() {
let mut pool =
UpstreamPool::new(vec![Upstream::Udp("1.2.3.4:53".parse().unwrap())], vec![]);
assert!(!pool.maybe_update_primary("1.2.3.4", 53));
}
#[test]
fn maybe_update_primary_rejects_invalid_addr() {
let mut pool =
UpstreamPool::new(vec![Upstream::Udp("1.2.3.4:53".parse().unwrap())], vec![]);
assert!(!pool.maybe_update_primary("not-an-ip", 53));
assert_eq!(pool.preferred().unwrap().to_string(), "1.2.3.4:53");
}
}
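The candidate ordering in `forward_with_failover` — primary upstreams sorted by smoothed RTT, with DoH entries pinned at 0 so the stable sort keeps their config order — can be exercised on plain data (hypothetical RTT values, no sockets):

```rust
// Order upstream indices by observed SRTT. DoH upstreams report 0, so a
// stable sort leaves them in config order, ahead of any measured UDP peer.
fn order_by_srtt(rtts: &[u64]) -> Vec<usize> {
    let mut candidates: Vec<(usize, u64)> =
        rtts.iter().copied().enumerate().collect();
    candidates.sort_by_key(|&(_, rtt)| rtt); // stable: ties keep config order
    candidates.into_iter().map(|(i, _)| i).collect()
}

fn main() {
    // Indices 0 and 3 (pinned at 0) sort first in config order; among the
    // measured peers, index 2 (12 ms) beats index 1 (45 ms).
    assert_eq!(order_by_srtt(&[0, 45, 12, 0]), vec![0, 3, 2, 1]);
    println!("ok");
}
```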

src/health.rs Normal file

@@ -0,0 +1,258 @@
//! Health metadata and `/health` response shape, shared between the main
//! HTTP API and the mobile API.
//!
//! The static fields (version, hostname, DoT config, CA fingerprint,
//! feature list) are computed once at startup and stored in [`HealthMeta`]
//! on `ServerCtx`. Per-request fields (uptime, LAN IP) are computed live.
//! Both handlers call [`HealthResponse::build`] to assemble the JSON
//! response from `HealthMeta` + live inputs.
//!
//! JSON schema is documented in `docs/implementation/ios-companion-app.md`
//! §4.2. The iOS companion app's `HealthInfo` struct is the canonical
//! consumer; any change to this response must keep that struct decoding
//! cleanly (all consumed fields are optional on the Swift side, but
//! `lan_ip` is load-bearing for the pipeline).
use std::net::Ipv4Addr;
use std::path::Path;
use std::time::Instant;
use ring::digest::{digest, SHA256};
use serde::Serialize;
/// Immutable health metadata cached on `ServerCtx`. Built once at startup
/// from config + file-system state (CA cert).
#[derive(Clone)]
pub struct HealthMeta {
pub version: &'static str,
pub hostname: String,
pub sni: String,
pub dot_enabled: bool,
pub dot_port: u16,
pub api_port: u16,
pub ca_fingerprint_sha256: Option<String>,
pub features: Vec<String>,
pub started_at: Instant,
}
impl HealthMeta {
/// Minimal `HealthMeta` for unit tests that construct a `ServerCtx`
/// without needing the real startup flow (CA file reads, hostname
/// detection, etc.). Deterministic values so test JSON assertions
/// stay stable.
#[cfg(test)]
pub fn test_fixture() -> Self {
HealthMeta {
version: env!("CARGO_PKG_VERSION"),
hostname: "test-host".to_string(),
sni: "numa.numa".to_string(),
dot_enabled: false,
dot_port: 853,
api_port: 8765,
ca_fingerprint_sha256: None,
features: vec![],
started_at: Instant::now(),
}
}
/// Build a new HealthMeta from config + startup-time environment.
/// Call once at server boot; the returned value is cheap to clone
/// (small number of short strings) and lives on `ServerCtx`.
///
/// The argument count is deliberate — each flag corresponds to a
/// specific config value and is clearly named at the call site.
/// Collapsing into a struct hides nothing meaningful for a one-call
/// initializer.
#[allow(clippy::too_many_arguments)]
pub fn build(
data_dir: &Path,
dot_enabled: bool,
dot_port: u16,
api_port: u16,
dnssec_enabled: bool,
recursive_enabled: bool,
mdns_enabled: bool,
blocking_enabled: bool,
doh_enabled: bool,
) -> Self {
let ca_path = data_dir.join("ca.pem");
let ca_fingerprint_sha256 = compute_ca_fingerprint(&ca_path);
let mut features = Vec::new();
if doh_enabled {
features.push("doh".to_string());
}
if dot_enabled {
features.push("dot".to_string());
}
if recursive_enabled {
features.push("recursive".to_string());
}
if blocking_enabled {
features.push("blocking".to_string());
}
if mdns_enabled {
features.push("mdns".to_string());
}
if dnssec_enabled {
features.push("dnssec".to_string());
}
HealthMeta {
version: env!("CARGO_PKG_VERSION"),
hostname: crate::hostname(),
sni: "numa.numa".to_string(),
dot_enabled,
dot_port,
api_port,
ca_fingerprint_sha256,
features,
started_at: Instant::now(),
}
}
}
/// JSON response shape returned by `GET /health` on both main and mobile APIs.
///
/// Fields are organized to match the iOS companion app's
/// `HealthInfo` Swift struct — see `ios-companion-app.md` §4.2.
#[derive(Serialize)]
pub struct HealthResponse {
pub status: &'static str,
pub version: &'static str,
pub uptime_secs: u64,
pub hostname: String,
pub lan_ip: Option<String>,
pub sni: String,
pub dot: DotBlock,
pub api: ApiBlock,
pub ca: CaBlock,
pub features: Vec<String>,
}
#[derive(Serialize)]
pub struct DotBlock {
pub enabled: bool,
pub port: Option<u16>,
}
#[derive(Serialize)]
pub struct ApiBlock {
pub port: u16,
}
#[derive(Serialize)]
pub struct CaBlock {
pub present: bool,
pub fingerprint_sha256: Option<String>,
}
impl HealthResponse {
/// Assemble a fresh `HealthResponse` from the cached metadata and
/// the current LAN IP (which may change across network transitions).
/// Pass `None` for `lan_ip` if detection fails — the response still
/// returns 200 OK, just without the LAN address.
pub fn build(meta: &HealthMeta, lan_ip: Option<Ipv4Addr>) -> Self {
HealthResponse {
status: "ok",
version: meta.version,
uptime_secs: meta.started_at.elapsed().as_secs(),
hostname: meta.hostname.clone(),
lan_ip: lan_ip.map(|ip| ip.to_string()),
sni: meta.sni.clone(),
dot: DotBlock {
enabled: meta.dot_enabled,
port: if meta.dot_enabled {
Some(meta.dot_port)
} else {
None
},
},
api: ApiBlock {
port: meta.api_port,
},
ca: CaBlock {
present: meta.ca_fingerprint_sha256.is_some(),
fingerprint_sha256: meta.ca_fingerprint_sha256.clone(),
},
features: meta.features.clone(),
}
}
}
/// Read the CA cert at `ca_path` and return its SHA-256 fingerprint as a
/// lowercase hex string, or None if the file doesn't exist or can't be read.
///
/// Hashes the raw PEM bytes for simplicity. A more canonical SPKI-based
/// fingerprint would require parsing the PEM → DER → extracting
/// SubjectPublicKeyInfo, which adds complexity without meaningful benefit
/// for our use case (the iOS app uses the fingerprint only for display
/// and to detect rotation).
fn compute_ca_fingerprint(ca_path: &Path) -> Option<String> {
let pem = std::fs::read(ca_path).ok()?;
let hash = digest(&SHA256, &pem);
let hex: String = hash.as_ref().iter().map(|b| format!("{:02x}", b)).collect();
Some(hex)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn health_response_contains_required_fields() {
let meta = HealthMeta {
version: "0.10.0",
hostname: "test-host".to_string(),
sni: "numa.numa".to_string(),
dot_enabled: true,
dot_port: 853,
api_port: 8765,
ca_fingerprint_sha256: Some("abcd1234".to_string()),
features: vec!["dot".to_string(), "dnssec".to_string()],
started_at: Instant::now(),
};
let response = HealthResponse::build(&meta, Some(Ipv4Addr::new(192, 168, 1, 50)));
let json = serde_json::to_string(&response).unwrap();
assert!(json.contains("\"status\":\"ok\""));
assert!(json.contains("\"version\":\"0.10.0\""));
assert!(json.contains("\"hostname\":\"test-host\""));
assert!(json.contains("\"lan_ip\":\"192.168.1.50\""));
assert!(json.contains("\"sni\":\"numa.numa\""));
assert!(json.contains("\"port\":853"));
assert!(json.contains("\"port\":8765"));
assert!(json.contains("\"fingerprint_sha256\":\"abcd1234\""));
assert!(json.contains("\"features\":[\"dot\",\"dnssec\"]"));
}
#[test]
fn health_response_omits_dot_port_when_disabled() {
let meta = HealthMeta {
version: "0.10.0",
hostname: "t".to_string(),
sni: "numa.numa".to_string(),
dot_enabled: false,
dot_port: 853,
api_port: 8765,
ca_fingerprint_sha256: None,
features: vec![],
started_at: Instant::now(),
};
let response = HealthResponse::build(&meta, None);
let json = serde_json::to_string(&response).unwrap();
assert!(json.contains("\"enabled\":false"));
assert!(json.contains("\"dot\":{\"enabled\":false,\"port\":null}"));
assert!(json.contains("\"present\":false"));
assert!(json.contains("\"lan_ip\":null"));
}
#[test]
fn ca_fingerprint_returns_none_for_missing_file() {
let fp = compute_ca_fingerprint(Path::new("/nonexistent/ca.pem"));
assert!(fp.is_none());
}
}
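`compute_ca_fingerprint` formats the SHA-256 digest as lowercase hex. The formatting step in isolation, with a fixed byte slice standing in for the real 32-byte digest (std only — `ring` stays out of the sketch):

```rust
// Lowercase-hex encoding as used for the CA fingerprint. The input here
// is a stand-in; in health.rs it is the SHA-256 of the raw PEM bytes.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    assert_eq!(to_hex(&[0xab, 0xcd, 0x01]), "abcd01");
    assert_eq!(to_hex(&[]), "");
    println!("ok");
}
```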


@@ -9,6 +9,7 @@ use crate::buffer::BytePacketBuffer;
use crate::config::LanConfig;
use crate::ctx::ServerCtx;
use crate::header::DnsHeader;
use crate::health::HealthMeta;
use crate::question::{DnsQuestion, QueryType};
// --- Constants ---
@@ -18,6 +19,18 @@ const MDNS_PORT: u16 = 5353;
const SERVICE_TYPE: &str = "_numa._tcp.local";
const MDNS_TTL: u32 = 120;
// TXT record key prefixes (including the trailing `=`). Shared between
// the sender (`build_announcement`) and the receiver (`parse_mdns_response`)
// to prevent drift — both sides match on the same literal, not on two
// independent string constants that could diverge.
const TXT_SERVICES: &str = "services=";
const TXT_ID: &str = "id=";
const TXT_VERSION: &str = "version=";
const TXT_API_PORT: &str = "api_port=";
const TXT_PROTO: &str = "proto=";
const TXT_DOT_PORT: &str = "dot_port=";
const TXT_CA_FP: &str = "ca_fp=";
// --- Peer Store ---
pub struct PeerStore {
@@ -97,14 +110,16 @@ pub fn detect_lan_ip() -> Option<Ipv4Addr> {
}
}
/// Short hostname for mDNS instance names (`<short>._numa._tcp.local`).
/// Truncates at the first `.` so `macbook-pro.local` becomes `macbook-pro`.
/// Uses the shared `crate::hostname()` helper as the source.
fn get_hostname() -> String {
crate::hostname()
.split('.')
.next()
.filter(|s| !s.is_empty())
.unwrap_or("numa")
.to_string()
}
/// Generate a per-process instance ID for self-filtering on multi-instance hosts
@@ -168,13 +183,22 @@ pub async fn start_lan_discovery(ctx: Arc<ServerCtx>, config: &LanConfig) {
.map(|e| (e.name.clone(), e.target_port))
.collect()
};
// Note: we always announce ourselves, even when the
// services list is empty. The announcement still carries
// the mobile API port + version + CA fingerprint in TXT,
// which is what the iOS companion app browses for via
// NWBrowser on `_numa._tcp.local`. Other Numa peers
// receive these empty-services announcements too and
// correctly ignore them in parse_mdns_response (the
// receiver only processes when services is non-empty).
let current_ip = *sender_ctx.lan_ip.lock().unwrap();
if let Ok(pkt) = build_announcement(
&sender_hostname,
current_ip,
&services,
&sender_instance_id,
&sender_ctx.health_meta,
) {
let _ = sender_socket.send_to(pkt.filled(), dest).await;
}
}
@@ -240,6 +264,7 @@ fn build_announcement(
ip: Ipv4Addr,
services: &[(String, u16)],
inst_id: &str,
meta: &HealthMeta,
) -> crate::Result<BytePacketBuffer> {
let mut buf = BytePacketBuffer::new();
let instance_name = format!("{}._numa._tcp.local", hostname);
@@ -260,7 +285,11 @@ fn build_announcement(
patch_rdlen(&mut buf, rdlen_pos, rdata_start)?;
// SRV: <instance>._numa._tcp.local → <hostname>.local
// Port = mobile API port, which is what the iOS companion app resolves
// the SRV record for. Legacy Numa peers don't read the SRV port (see
// parse_mdns_response — it only uses TXT services= for peer discovery),
// so changing the SRV port from "first service's port" to the mobile
// API port is backwards compatible.
write_record_header(
&mut buf,
&instance_name,
@@ -273,11 +302,13 @@ fn build_announcement(
let rdata_start = buf.pos();
buf.write_u16(0)?; // priority
buf.write_u16(0)?; // weight
buf.write_u16(meta.api_port)?; // mobile API port, for iOS companion app
buf.write_qname(&host_local)?;
patch_rdlen(&mut buf, rdlen_pos, rdata_start)?;
// TXT: legacy peer-discovery entries (services, id) + enriched entries
// for the iOS companion app (version, api_port, proto, dot_port, ca_fp).
// All in one TXT RRset per mDNS convention.
write_record_header(
&mut buf,
&instance_name,
@@ -293,8 +324,21 @@ fn build_announcement(
.map(|(name, port)| format!("{}:{}", name, port))
.collect::<Vec<_>>()
.join(",");
// Legacy peer-discovery entries (consumed by parse_mdns_response)
write_txt_string(&mut buf, &format!("{}{}", TXT_SERVICES, svc_str))?;
write_txt_string(&mut buf, &format!("{}{}", TXT_ID, inst_id))?;
// Enriched entries (consumed by the iOS/Android companion apps)
write_txt_string(&mut buf, &format!("{}{}", TXT_VERSION, meta.version))?;
write_txt_string(&mut buf, &format!("{}{}", TXT_API_PORT, meta.api_port))?;
if meta.dot_enabled {
write_txt_string(&mut buf, &format!("{}dot", TXT_PROTO))?;
write_txt_string(&mut buf, &format!("{}{}", TXT_DOT_PORT, meta.dot_port))?;
} else {
write_txt_string(&mut buf, &format!("{}plain", TXT_PROTO))?;
}
if let Some(fp) = &meta.ca_fingerprint_sha256 {
write_txt_string(&mut buf, &format!("{}{}", TXT_CA_FP, fp))?;
}
patch_rdlen(&mut buf, rdlen_pos, rdata_start)?;
// A: <hostname>.local → IP
@@ -408,7 +452,7 @@ fn parse_mdns_response(data: &[u8]) -> Option<MdnsAnnouncement> {
break;
}
if let Ok(txt) = std::str::from_utf8(&data[pos..pos + txt_len]) {
if let Some(val) = txt.strip_prefix(TXT_SERVICES) {
let svcs: Vec<(String, u16)> = val
.split(',')
.filter_map(|s| {
@@ -421,7 +465,7 @@ fn parse_mdns_response(data: &[u8]) -> Option<MdnsAnnouncement> {
if !svcs.is_empty() {
txt_services = Some(svcs);
}
} else if let Some(id) = txt.strip_prefix(TXT_ID) {
peer_instance_id = Some(id.to_string());
}
}
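The shared `TXT_*` prefixes keep sender and receiver matching the same literal. A round-trip sketch of the `services=` entry — compose on the announcing side, strip-prefix and split on the receiving side (the parse details here are a sketch; the source elides the `filter_map` body):

```rust
const TXT_SERVICES: &str = "services=";

// Sender side: join (name, port) pairs into one TXT string.
fn compose_services(services: &[(&str, u16)]) -> String {
    let list = services
        .iter()
        .map(|(name, port)| format!("{}:{}", name, port))
        .collect::<Vec<_>>()
        .join(",");
    format!("{}{}", TXT_SERVICES, list)
}

// Receiver side: mirrors parse_mdns_response's prefix match + split.
fn parse_services(txt: &str) -> Vec<(String, u16)> {
    let Some(val) = txt.strip_prefix(TXT_SERVICES) else {
        return Vec::new();
    };
    val.split(',')
        .filter_map(|s| {
            let (name, port) = s.rsplit_once(':')?;
            Some((name.to_string(), port.parse().ok()?))
        })
        .collect()
}

fn main() {
    let txt = compose_services(&[("web", 8080), ("db", 5432)]);
    assert_eq!(txt, "services=web:8080,db:5432");
    assert_eq!(
        parse_services(&txt),
        vec![("web".to_string(), 8080), ("db".to_string(), 5432)]
    );
    println!("ok");
}
```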


@@ -5,10 +5,14 @@ pub mod cache;
pub mod config;
pub mod ctx;
pub mod dnssec;
pub mod doh;
pub mod dot;
pub mod forward;
pub mod header;
pub mod health;
pub mod lan;
pub mod mobile_api;
pub mod mobileconfig;
pub mod override_store;
pub mod packet;
pub mod proxy;
@@ -17,6 +21,7 @@ pub mod question;
pub mod record;
pub mod recursive;
pub mod service_store;
pub mod setup_phone;
pub mod srtt;
pub mod stats;
pub mod system_dns;
@@ -25,8 +30,25 @@ pub mod tls;
pub type Error = Box<dyn std::error::Error + Send + Sync>;
pub type Result<T> = std::result::Result<T, Error>;
/// Detect the machine hostname via the `hostname` command. Returns the
/// full hostname (e.g., `macbook-pro.local`), or `"numa"` if the command
/// fails. Call sites that need the short form (e.g., mDNS instance
/// names) should truncate at the first `.`.
pub fn hostname() -> String {
std::process::Command::new("hostname")
.output()
.ok()
.and_then(|o| String::from_utf8(o.stdout).ok())
.map(|h| h.trim().to_string())
.filter(|h| !h.is_empty())
.unwrap_or_else(|| "numa".to_string())
}
/// Shared config directory for persistent data (services.json, etc).
/// Unix users: ~/.config/numa/
/// Linux root daemon: /var/lib/numa (FHS) — falls back to /usr/local/var/numa
/// if a pre-v0.10.1 install already lives there.
/// macOS root daemon: /usr/local/var/numa (Homebrew prefix)
/// Windows: %APPDATA%\numa
pub fn config_dir() -> std::path::PathBuf {
#[cfg(windows)]
@@ -63,13 +85,15 @@ fn config_dir_unix() -> std::path::PathBuf {
}
// Running as root daemon (launchd/systemd) — use system-wide path
daemon_data_dir()
}
/// Default system-wide data directory for TLS certs. Overridable via
/// `[server] data_dir = "..."` in numa.toml — this function only provides
/// the fallback when the config doesn't set it.
/// Linux: /var/lib/numa (FHS) — falls back to /usr/local/var/numa if a
/// pre-v0.10.1 install already has data there.
/// macOS: /usr/local/var/numa (Homebrew prefix)
/// Windows: %PROGRAMDATA%\numa
pub fn data_dir() -> std::path::PathBuf {
#[cfg(windows)]
@@ -81,6 +105,62 @@ pub fn data_dir() -> std::path::PathBuf {
}
#[cfg(not(windows))]
{
daemon_data_dir()
}
}
/// Resolve the system-wide data directory for the running platform.
/// Honors backwards compatibility with pre-v0.10.1 installs that still
/// have their CA cert + services.json under `/usr/local/var/numa`.
#[cfg(not(windows))]
fn daemon_data_dir() -> std::path::PathBuf {
#[cfg(target_os = "linux")]
{
std::path::PathBuf::from(resolve_linux_data_dir(
std::path::Path::new("/usr/local/var/numa").exists(),
std::path::Path::new("/var/lib/numa").exists(),
))
}
#[cfg(target_os = "macos")]
{
// macOS uses the Homebrew prefix convention; no FHS migration needed.
std::path::PathBuf::from("/usr/local/var/numa")
}
}
/// Extracted as a pure function so the migration logic is unit-testable
/// without touching the real filesystem.
#[cfg(any(target_os = "linux", test))]
fn resolve_linux_data_dir(legacy_exists: bool, fhs_exists: bool) -> &'static str {
if legacy_exists && !fhs_exists {
"/usr/local/var/numa"
} else {
"/var/lib/numa"
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn linux_data_dir_fresh_install_uses_fhs() {
assert_eq!(resolve_linux_data_dir(false, false), "/var/lib/numa");
}
#[test]
fn linux_data_dir_upgrading_install_keeps_legacy() {
// Migration must keep legacy so the user doesn't lose their CA on upgrade.
assert_eq!(resolve_linux_data_dir(true, false), "/usr/local/var/numa");
}
#[test]
fn linux_data_dir_after_migration_uses_fhs() {
assert_eq!(resolve_linux_data_dir(true, true), "/var/lib/numa");
}
#[test]
fn linux_data_dir_only_fhs_uses_fhs() {
assert_eq!(resolve_linux_data_dir(false, true), "/var/lib/numa");
}
}


@@ -11,7 +11,7 @@ use numa::buffer::BytePacketBuffer;
use numa::cache::DnsCache;
use numa::config::{build_zone_map, load_config, ConfigLoad};
use numa::ctx::{handle_query, ServerCtx};
use numa::forward::{parse_upstream, Upstream, UpstreamPool};
use numa::override_store::OverrideStore;
use numa::query_log::QueryLog;
use numa::service_store::ServiceStore;
@@ -54,6 +54,9 @@ async fn main() -> numa::Result<()> {
}
};
}
"setup-phone" => {
return numa::setup_phone::run().await.map_err(|e| e.into());
}
"lan" => { "lan" => {
let sub = std::env::args().nth(2).unwrap_or_default(); let sub = std::env::args().nth(2).unwrap_or_default();
let config_path = std::env::args() let config_path = std::env::args()
@@ -85,12 +88,27 @@ async fn main() -> numa::Result<()> {
eprintln!(" service status Check if the service is running"); eprintln!(" service status Check if the service is running");
eprintln!(" lan on Enable LAN service discovery (mDNS)"); eprintln!(" lan on Enable LAN service discovery (mDNS)");
eprintln!(" lan off Disable LAN service discovery"); eprintln!(" lan off Disable LAN service discovery");
eprintln!(" setup-phone Generate a QR code to install Numa DoT on a phone");
eprintln!(" help Show this help"); eprintln!(" help Show this help");
eprintln!(); eprintln!();
eprintln!("Config path defaults to numa.toml"); eprintln!("Config path defaults to numa.toml");
return Ok(()); return Ok(());
} }
_ => {} _ => {
if !arg1.is_empty()
&& arg1 != "run"
&& !arg1.contains('/')
&& !arg1.contains('\\')
&& !arg1.ends_with(".toml")
{
eprintln!(
"\x1b[1;38;2;192;98;58mNuma\x1b[0m — unknown command: \x1b[1m{}\x1b[0m\n",
arg1
);
eprintln!("Run \x1b[1mnuma help\x1b[0m for a list of commands.");
std::process::exit(1);
}
}
}
let config_path = if arg1.is_empty() || arg1 == "run" {
@@ -111,18 +129,18 @@ async fn main() -> numa::Result<()> {
let root_hints = numa::recursive::parse_root_hints(&config.upstream.root_hints);
let recursive_pool = || {
let dummy = UpstreamPool::new(vec![Upstream::Udp("0.0.0.0:0".parse().unwrap())], vec![]);
(dummy, "recursive (root hints)".to_string())
};
let (resolved_mode, upstream_auto, pool, upstream_label) = match config.upstream.mode {
numa::config::UpstreamMode::Auto => {
info!("auto mode: probing recursive resolution...");
if numa::recursive::probe_recursive(&root_hints).await {
info!("recursive probe succeeded — self-sovereign mode");
let (pool, label) = recursive_pool();
(numa::config::UpstreamMode::Recursive, false, pool, label)
} else {
log::warn!("recursive probe failed — falling back to Quad9 DoH");
let client = reqwest::Client::builder()
@@ -131,55 +149,45 @@ async fn main() -> numa::Result<()> {
.unwrap_or_default();
let url = DOH_FALLBACK.to_string();
let label = url.clone();
let pool = UpstreamPool::new(vec![Upstream::Doh { url, client }], vec![]);
(numa::config::UpstreamMode::Forward, false, pool, label)
}
}
numa::config::UpstreamMode::Recursive => {
let (pool, label) = recursive_pool();
(numa::config::UpstreamMode::Recursive, false, pool, label)
}
numa::config::UpstreamMode::Forward => {
let addrs = if config.upstream.address.is_empty() {
let detected = system_dns
.default_upstream
.or_else(numa::system_dns::detect_dhcp_dns)
.unwrap_or_else(|| {
info!("could not detect system DNS, falling back to Quad9 DoH");
DOH_FALLBACK.to_string()
});
vec![detected]
} else {
config.upstream.address.clone()
};
let primary: Vec<Upstream> = addrs
.iter()
.map(|s| parse_upstream(s, config.upstream.port))
.collect::<numa::Result<Vec<_>>>()?;
let fallback: Vec<Upstream> = config
.upstream
.fallback
.iter()
.map(|s| parse_upstream(s, config.upstream.port))
.collect::<numa::Result<Vec<_>>>()?;
let pool = UpstreamPool::new(primary, fallback);
let label = pool.label();
(
numa::config::UpstreamMode::Forward,
config.upstream.address.is_empty(),
pool,
label,
)
}
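The `parse_upstream` dispatch used above isn't shown in this hunk; the pre-change code (left side of the diff) treated `https://` strings as DoH upstreams and everything else as an IP paired with the configured port for plain UDP. A minimal standalone sketch of that rule, with hypothetical names (`classify_upstream`, `UpstreamKind`) in place of Numa's real types:

```rust
// Sketch of the upstream-string dispatch: "https://…" → DoH URL,
// anything else → "ip:port" UDP socket address. Names are illustrative,
// not Numa's actual API.
#[derive(Debug)]
enum UpstreamKind {
    Doh(String),
    Udp(std::net::SocketAddr),
}

fn classify_upstream(s: &str, port: u16) -> Result<UpstreamKind, std::net::AddrParseError> {
    if s.starts_with("https://") {
        Ok(UpstreamKind::Doh(s.to_string()))
    } else {
        // Bare IP: append the configured upstream port (default 53).
        Ok(UpstreamKind::Udp(format!("{}:{}", s, port).parse()?))
    }
}
```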
@@ -223,7 +231,11 @@ async fn main() -> numa::Result<()> {
) {
Ok(tls_config) => Some(ArcSwap::from(tls_config)),
Err(e) => {
if let Some(advisory) = numa::tls::try_data_dir_advisory(&e, &resolved_data_dir) {
eprint!("{}", advisory);
} else {
log::warn!("TLS setup failed, HTTPS proxy disabled: {}", e);
}
None
}
}
@@ -231,8 +243,36 @@ async fn main() -> numa::Result<()> {
None
};
let doh_enabled = initial_tls.is_some();
let health_meta = numa::health::HealthMeta::build(
&resolved_data_dir,
config.dot.enabled,
config.dot.port,
config.mobile.port,
config.dnssec.enabled,
resolved_mode == numa::config::UpstreamMode::Recursive,
config.lan.enabled,
config.blocking.enabled,
doh_enabled,
);
let ca_pem = std::fs::read_to_string(resolved_data_dir.join("ca.pem")).ok();
let socket = match UdpSocket::bind(&config.server.bind_addr).await {
Ok(s) => s,
Err(e) => {
if let Some(advisory) =
numa::system_dns::try_port53_advisory(&config.server.bind_addr, &e)
{
eprint!("{}", advisory);
std::process::exit(1);
}
return Err(e.into());
}
};
let ctx = Arc::new(ServerCtx {
socket,
zone_map: build_zone_map(&config.zones)?,
cache: RwLock::new(DnsCache::new(
config.cache.max_entries,
@@ -246,7 +286,7 @@ async fn main() -> numa::Result<()> {
services: Mutex::new(service_store),
lan_peers: Mutex::new(numa::lan::PeerStore::new(config.lan.peer_timeout_secs)),
forwarding_rules,
upstream_pool: Mutex::new(pool),
upstream_auto,
upstream_port: config.upstream.port,
lan_ip: Mutex::new(numa::lan::detect_lan_ip().unwrap_or(std::net::Ipv4Addr::LOCALHOST)),
@@ -269,6 +309,10 @@ async fn main() -> numa::Result<()> {
inflight: std::sync::Mutex::new(std::collections::HashMap::new()),
dnssec_enabled: config.dnssec.enabled,
dnssec_strict: config.dnssec.strict,
health_meta,
ca_pem,
mobile_enabled: config.mobile.enabled,
mobile_port: config.mobile.port,
});
let zone_count: usize = ctx.zone_map.values().map(|m| m.len()).sum();
@@ -360,6 +404,9 @@ async fn main() -> numa::Result<()> {
g,
&format!("max {} entries", config.cache.max_entries),
);
if !config.cache.warm.is_empty() {
row("Warm", g, &format!("{} domains", config.cache.warm.len()));
}
row(
"Blocking",
g,
@@ -386,6 +433,13 @@ async fn main() -> numa::Result<()> {
if config.dot.enabled {
row("DoT", g, &format!("tls://:{}", config.dot.port));
}
if doh_enabled {
row(
"DoH",
g,
&format!("https://:{}/dns-query", config.proxy.tls_port),
);
}
if config.lan.enabled {
row("LAN", g, "mDNS (_numa._tcp.local)");
}
@@ -442,6 +496,15 @@ async fn main() -> numa::Result<()> {
});
}
// Spawn cache warming for user-configured domains
if !config.cache.warm.is_empty() {
let warm_ctx = Arc::clone(&ctx);
let warm_domains = config.cache.warm.clone();
tokio::spawn(async move {
cache_warm_loop(warm_ctx, warm_domains).await;
});
}
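The warm list consumed above comes from the user's `numa.toml`; a plausible fragment matching the `config.cache.warm` field read in this diff (key placement inferred, values illustrative):

```toml
[cache]
# max_entries appears elsewhere in this diff; the value here is illustrative.
max_entries = 4096
# Domains resolved proactively at startup and refreshed by the warm loop.
warm = ["example.com", "wikipedia.org"]
```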
// Spawn HTTP API server
let api_ctx = Arc::clone(&ctx);
let api_addr: SocketAddr = format!("{}:{}", config.server.api_bind_addr, api_port).parse()?;
@@ -452,6 +515,21 @@ async fn main() -> numa::Result<()> {
axum::serve(listener, app).await.unwrap();
});
// Spawn Mobile API listener (read-only subset for iOS/Android companion
// apps, LAN-bound by default so phones can reach it). Only idempotent
// GETs; no state-mutating routes are exposed here regardless of
// the main API's bind address.
if config.mobile.enabled {
let mobile_ctx = Arc::clone(&ctx);
let mobile_bind = config.mobile.bind_addr.clone();
let mobile_port = config.mobile.port;
tokio::spawn(async move {
if let Err(e) = numa::mobile_api::start(mobile_ctx, mobile_bind, mobile_port).await {
log::warn!("Mobile API listener failed: {}", e);
}
});
}
let proxy_bind: std::net::Ipv4Addr = config
.proxy
.bind_addr
@@ -546,29 +624,19 @@ async fn network_watch_loop(ctx: Arc<numa::ctx::ServerCtx>) {
}
}
// Re-detect upstream every 30s or on LAN IP change (auto-detect only)
if ctx.upstream_auto && (changed || tick.is_multiple_of(6)) {
let dns_info = numa::system_dns::discover_system_dns();
let new_addr = dns_info
.default_upstream
.or_else(numa::system_dns::detect_dhcp_dns)
.unwrap_or_else(|| QUAD9_IP.to_string());
let mut pool = ctx.upstream_pool.lock().unwrap();
if pool.maybe_update_primary(&new_addr, ctx.upstream_port) {
info!("upstream changed → {}", pool.label());
changed = true;
}
}
// Flush stale LAN peers on any network change
if changed {
@@ -673,3 +741,53 @@ async fn load_blocklists(ctx: &ServerCtx, lists: &[String]) {
downloaded.len()
);
}
async fn warm_domain(ctx: &ServerCtx, domain: &str) {
use numa::question::QueryType;
for qtype in [QueryType::A, QueryType::AAAA] {
let query = numa::packet::DnsPacket::query(0, domain, qtype);
let result = if ctx.upstream_mode == numa::config::UpstreamMode::Recursive {
numa::recursive::resolve_recursive(
domain,
qtype,
&ctx.cache,
&query,
&ctx.root_hints,
&ctx.srtt,
)
.await
} else {
let pool = ctx.upstream_pool.lock().unwrap().clone();
numa::forward::forward_with_failover(&query, &pool, &ctx.srtt, ctx.timeout).await
};
match result {
Ok(resp) => {
ctx.cache.write().unwrap().insert(domain, qtype, &resp);
log::debug!("cache warm: {} {:?}", domain, qtype);
}
Err(e) => log::warn!("cache warm: {} {:?} failed: {}", domain, qtype, e),
}
}
}
async fn cache_warm_loop(ctx: Arc<ServerCtx>, domains: Vec<String>) {
tokio::time::sleep(Duration::from_secs(2)).await;
for domain in &domains {
warm_domain(&ctx, domain).await;
}
info!("cache warm: {} domains resolved at startup", domains.len());
let mut interval = tokio::time::interval(Duration::from_secs(30));
interval.tick().await;
loop {
interval.tick().await;
for domain in &domains {
let refresh = ctx.cache.read().unwrap().needs_warm(domain);
if refresh {
warm_domain(&ctx, domain).await;
}
}
}
}
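The refresh decision above delegates to `DnsCache::needs_warm`, whose body isn't part of this diff. A minimal sketch of the likely policy, under the assumption that the cache can report the remaining TTL for a warmed domain (standalone hypothetical helper, not Numa's actual signature):

```rust
// Hypothetical sketch of the warm-refresh policy: re-resolve a domain when
// its cached entry is missing, or when it would expire before the next
// 30-second warm tick. `remaining_ttl_secs` stands in for a cache lookup.
fn needs_warm(remaining_ttl_secs: Option<u64>, warm_interval_secs: u64) -> bool {
    match remaining_ttl_secs {
        None => true,                           // never resolved, or evicted
        Some(ttl) => ttl <= warm_interval_secs, // would expire before next tick
    }
}
```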

src/mobile_api.rs Normal file

@@ -0,0 +1,107 @@
//! Mobile API — persistent HTTP listener for iOS/Android companion apps.
//!
//! Read-only subset of Numa's HTTP surface served on a separate port
//! (default 8765) bound to the LAN. Unlike the main API on port 5380
//! (which defaults to `127.0.0.1` and serves mutating routes like
//! `DELETE /services/{name}` or `PUT /blocking/toggle`), this listener
//! is safe to expose on the LAN because every route is idempotent and
//! read-only.
//!
//! Routes (all GET):
//!
//! - `/health` — enriched status + metadata, shares the handler with the
//! main API via `crate::api::health`
//! - `/ca.pem` — Numa local CA in PEM form, shares the handler with the
//! main API via `crate::api::serve_ca`
//! - `/mobileconfig` — combined CA + DNS settings profile (Full mode)
//! - `/ca.mobileconfig` — CA-only trust profile (no DNS override)
//!
//! The mobile API does NOT include the mutating routes (overrides, cache
//! flush, blocking toggle, service CRUD, etc.). Even if a user sets
//! `api_bind_addr` to `0.0.0.0` for the main API, those routes stay on
//! port 5380; the mobile API on port 8765 never serves them. This is the
//! primary security boundary: anything exposed to the LAN is read-only.
use std::net::Ipv4Addr;
use std::sync::Arc;
use axum::extract::State;
use axum::http::{header, StatusCode};
use axum::response::IntoResponse;
use axum::routing::get;
use axum::Router;
use log::info;
use crate::ctx::ServerCtx;
use crate::mobileconfig::{build_mobileconfig, ProfileMode};
/// Content-Disposition for the full CA + DNS profile download.
const FULL_PROFILE_DISPOSITION: &str = "attachment; filename=\"numa.mobileconfig\"";
/// Content-Disposition for the CA-only profile download.
const CA_ONLY_PROFILE_DISPOSITION: &str = "attachment; filename=\"numa-ca.mobileconfig\"";
/// Build the axum router for the mobile API.
///
/// Shares handler functions with the main API where possible (`health`,
/// `serve_ca`) so the response shapes are identical across both ports.
pub fn router(ctx: Arc<ServerCtx>) -> Router {
Router::new()
.route("/health", get(crate::api::health))
.route("/ca.pem", get(crate::api::serve_ca))
.route("/mobileconfig", get(serve_full_mobileconfig))
.route("/ca.mobileconfig", get(serve_ca_only_mobileconfig))
.with_state(ctx)
}
/// Start the mobile API listener on `bind_addr:port`. Runs until the
/// caller cancels the spawned task. Logs the URL on successful bind.
pub async fn start(ctx: Arc<ServerCtx>, bind_addr: String, port: u16) -> crate::Result<()> {
let addr: std::net::SocketAddr = format!("{}:{}", bind_addr, port).parse()?;
let listener = tokio::net::TcpListener::bind(addr).await?;
info!("Mobile API listening on http://{}", addr);
let app = router(ctx);
axum::serve(listener, app).await?;
Ok(())
}
/// Serve the full mobileconfig profile (CA + DNS settings), with the
/// DNS payload pointing at the current LAN IP. Each request reads the
/// fresh LAN IP from `ctx.lan_ip` so the profile always reflects the
/// laptop's current network state.
async fn serve_full_mobileconfig(
State(ctx): State<Arc<ServerCtx>>,
) -> Result<impl IntoResponse, StatusCode> {
let ca_pem = ctx.ca_pem.as_deref().ok_or(StatusCode::NOT_FOUND)?;
let lan_ip: Ipv4Addr = *ctx.lan_ip.lock().unwrap();
let profile = build_mobileconfig(ProfileMode::Full { lan_ip }, ca_pem);
Ok(profile_response(profile, FULL_PROFILE_DISPOSITION))
}
/// Serve the CA-only mobileconfig profile. Trusts the Numa local CA but
/// does NOT change the device's DNS settings. Used by the iOS companion
/// app's DoT mode, where the app configures DNS via `NEDNSSettingsManager`
/// and only needs the system trust store to accept Numa's self-signed cert.
async fn serve_ca_only_mobileconfig(
State(ctx): State<Arc<ServerCtx>>,
) -> Result<impl IntoResponse, StatusCode> {
let ca_pem = ctx.ca_pem.as_deref().ok_or(StatusCode::NOT_FOUND)?;
let profile = build_mobileconfig(ProfileMode::CaOnly, ca_pem);
Ok(profile_response(profile, CA_ONLY_PROFILE_DISPOSITION))
}
/// Shared response constructor for both mobileconfig variants.
/// Identical headers; only the Content-Disposition filename differs.
fn profile_response(profile: String, disposition: &'static str) -> impl IntoResponse {
(
[
(header::CONTENT_TYPE, "application/x-apple-aspen-config"),
(header::CONTENT_DISPOSITION, disposition),
(header::CACHE_CONTROL, "no-store"),
],
profile,
)
}

src/mobileconfig.rs Normal file

@@ -0,0 +1,305 @@
//! Apple `.mobileconfig` profile generator.
//!
//! Builds iOS Configuration Profiles that Numa serves to phones for one-tap
//! CA trust and DNS-over-TLS setup. The plist structure is hand-rendered
//! via `format!` — no plist crate dependency, deterministic output, small
//! binary footprint.
//!
//! Two modes:
//!
//! - [`ProfileMode::Full`]: CA trust payload + DNS settings payload pointing
//! at a specific LAN IP over DoT. This is what `numa setup-phone` has
//! always produced — the user scans a QR, installs this profile, and the
//! phone is configured for DoT through Numa in a single step (after the
//! iOS Certificate Trust Settings toggle, which is a separate system
//! gate we can't bypass).
//!
//! - [`ProfileMode::CaOnly`]: CA trust payload only, no DNS settings. Used
//! by the future iOS companion app flow where `NEDNSSettingsManager`
//! configures DNS programmatically and we only need the system trust
//! store to accept Numa's DoT cert. Installing this profile does NOT
//! change the user's DNS at all.
//!
//! Payload identifiers and UUIDs are fixed (not randomized) so iOS replaces
//! the existing profile on re-install rather than accumulating duplicates.
//! The `Full` and `CaOnly` profiles have distinct top-level UUIDs so they
//! can coexist as separate installed profiles, but they share the same CA
//! payload UUID since the CA itself is the same trust anchor in both.
use std::net::Ipv4Addr;
/// Top-level UUID and PayloadIdentifier for the full profile (CA + DNS).
/// Changing this breaks in-place replacement on existing iOS installs.
const FULL_PROFILE_UUID: &str = "F1E2D3C4-B5A6-7890-1234-567890ABCDEF";
const FULL_PROFILE_ID: &str = "com.numa.dns.profile";
/// Top-level UUID and PayloadIdentifier for the CA-only profile.
/// Distinct from `FULL_PROFILE_UUID` so a user can install one, the other,
/// or both without the latest install silently replacing a different mode.
const CA_ONLY_PROFILE_UUID: &str = "F2E3D4C5-B6A7-8901-2345-67890ABCDEF0";
const CA_ONLY_PROFILE_ID: &str = "com.numa.dns.ca.profile";
/// CA trust payload UUID. Same in both modes — iOS will see "the same CA
/// trust anchor" regardless of which wrapping profile contains it.
const CA_PAYLOAD_UUID: &str = "B2C3D4E5-F6A7-8901-BCDE-F12345678901";
const CA_PAYLOAD_ID: &str = "com.numa.dns.ca";
/// DNS settings payload UUID (Full mode only).
const DNS_PAYLOAD_UUID: &str = "A1B2C3D4-E5F6-7890-ABCD-EF1234567890";
const DNS_PAYLOAD_ID: &str = "com.numa.dns.dot";
/// Profile mode determines which payloads are included in the generated
/// `.mobileconfig`.
#[derive(Debug, Clone)]
pub enum ProfileMode {
/// Full profile: CA trust anchor + managed DNS settings payload
/// pointing at the given LAN IP over DoT. This is what the classic
/// `numa setup-phone` QR flow serves.
Full { lan_ip: Ipv4Addr },
/// CA-only profile: just the trust anchor, no DNS settings. For use
/// with the iOS companion app which manages DNS programmatically via
/// `NEDNSSettingsManager` and only needs the system trust store to
/// accept Numa's self-signed DoT cert.
CaOnly,
}
/// Build a full `.mobileconfig` profile as an XML plist string.
pub fn build_mobileconfig(mode: ProfileMode, ca_pem: &str) -> String {
let ca_payload = build_ca_payload(ca_pem);
match mode {
ProfileMode::Full { lan_ip } => {
let dns_payload = build_dns_payload(lan_ip);
let payloads = format!("{}\n{}", ca_payload, dns_payload);
let description = format!(
"Trusts the Numa local CA and routes DNS queries to Numa over DoT on your local network ({lan_ip})"
);
wrap_plist(
&payloads,
FULL_PROFILE_UUID,
FULL_PROFILE_ID,
&description,
"Numa DNS",
)
}
ProfileMode::CaOnly => wrap_plist(
&ca_payload,
CA_ONLY_PROFILE_UUID,
CA_ONLY_PROFILE_ID,
"Trusts the Numa local Certificate Authority. Does not change your DNS settings.",
"Numa CA",
),
}
}
/// Strip the PEM header/footer and newlines from a CA cert, leaving raw
/// base64 for embedding in a plist `<data>` block.
fn pem_to_base64(pem: &str) -> String {
pem.lines()
.filter(|line| !line.starts_with("-----"))
.collect::<String>()
}
/// Wrap the base64 CA cert at 52 chars per line for plist readability
/// (matches Apple convention in hand-written profiles).
fn chunk_base64(base64: &str) -> String {
base64
.chars()
.collect::<Vec<_>>()
.chunks(52)
.map(|chunk| format!("\t\t\t{}", chunk.iter().collect::<String>()))
.collect::<Vec<_>>()
.join("\n")
}
/// Render the `com.apple.security.root` payload dict containing the CA cert.
fn build_ca_payload(ca_pem: &str) -> String {
let ca_wrapped = chunk_base64(&pem_to_base64(ca_pem));
format!(
r#" <dict>
<key>PayloadCertificateFileName</key>
<string>numa-ca.pem</string>
<key>PayloadContent</key>
<data>
{ca}
</data>
<key>PayloadDescription</key>
<string>Numa local Certificate Authority — required for DoT trust</string>
<key>PayloadDisplayName</key>
<string>Numa Local CA</string>
<key>PayloadIdentifier</key>
<string>{ca_id}</string>
<key>PayloadType</key>
<string>com.apple.security.root</string>
<key>PayloadUUID</key>
<string>{ca_uuid}</string>
<key>PayloadVersion</key>
<integer>1</integer>
</dict>"#,
ca = ca_wrapped,
ca_id = CA_PAYLOAD_ID,
ca_uuid = CA_PAYLOAD_UUID,
)
}
/// Render the `com.apple.dnsSettings.managed` payload dict for Full mode.
fn build_dns_payload(lan_ip: Ipv4Addr) -> String {
format!(
r#" <dict>
<key>DNSSettings</key>
<dict>
<key>DNSProtocol</key>
<string>TLS</string>
<key>ServerAddresses</key>
<array>
<string>{ip}</string>
</array>
<key>ServerName</key>
<string>numa.numa</string>
</dict>
<key>OnDemandRules</key>
<array>
<dict>
<key>Action</key>
<string>Connect</string>
<key>InterfaceTypeMatch</key>
<string>WiFi</string>
</dict>
<dict>
<key>Action</key>
<string>Disconnect</string>
</dict>
</array>
<key>PayloadDescription</key>
<string>Routes DNS queries through Numa over DoT when on Wi-Fi</string>
<key>PayloadDisplayName</key>
<string>Numa DNS-over-TLS</string>
<key>PayloadIdentifier</key>
<string>{dns_id}</string>
<key>PayloadType</key>
<string>com.apple.dnsSettings.managed</string>
<key>PayloadUUID</key>
<string>{dns_uuid}</string>
<key>PayloadVersion</key>
<integer>1</integer>
</dict>"#,
ip = lan_ip,
dns_id = DNS_PAYLOAD_ID,
dns_uuid = DNS_PAYLOAD_UUID,
)
}
/// Wrap one or more payload dicts in the top-level plist structure
/// with Configuration type, PayloadContent array, and profile metadata.
fn wrap_plist(
payloads: &str,
top_uuid: &str,
top_id: &str,
description: &str,
display_name: &str,
) -> String {
format!(
r#"<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>PayloadContent</key>
<array>
{payloads}
</array>
<key>PayloadDescription</key>
<string>{description}</string>
<key>PayloadDisplayName</key>
<string>{display_name}</string>
<key>PayloadIdentifier</key>
<string>{top_id}</string>
<key>PayloadRemovalDisallowed</key>
<false/>
<key>PayloadType</key>
<string>Configuration</string>
<key>PayloadUUID</key>
<string>{top_uuid}</string>
<key>PayloadVersion</key>
<integer>1</integer>
</dict>
</plist>
"#,
payloads = payloads,
description = description,
display_name = display_name,
top_id = top_id,
top_uuid = top_uuid,
)
}
#[cfg(test)]
mod tests {
use super::*;
const SAMPLE_PEM: &str =
"-----BEGIN CERTIFICATE-----\nMIIBkDCCATagAwIBAgIUTEST\n-----END CERTIFICATE-----\n";
#[test]
fn pem_to_base64_strips_headers() {
let pem = "-----BEGIN CERTIFICATE-----\nABCDEF\nGHIJKL\n-----END CERTIFICATE-----\n";
assert_eq!(pem_to_base64(pem), "ABCDEFGHIJKL");
}
#[test]
fn full_profile_contains_ip_and_ca() {
let config = build_mobileconfig(
ProfileMode::Full {
lan_ip: Ipv4Addr::new(192, 168, 1, 100),
},
SAMPLE_PEM,
);
assert!(config.contains("192.168.1.100"));
assert!(config.contains("MIIBkDCCATagAwIBAgIUTEST"));
assert!(config.contains("com.apple.security.root"));
assert!(config.contains("com.apple.dnsSettings.managed"));
assert!(config.contains("DNSProtocol"));
assert!(config.contains(FULL_PROFILE_UUID));
assert!(config.contains(FULL_PROFILE_ID));
}
#[test]
fn ca_only_profile_contains_ca_but_not_dns() {
let config = build_mobileconfig(ProfileMode::CaOnly, SAMPLE_PEM);
assert!(config.contains("MIIBkDCCATagAwIBAgIUTEST"));
assert!(config.contains("com.apple.security.root"));
assert!(!config.contains("com.apple.dnsSettings.managed"));
assert!(!config.contains("DNSProtocol"));
assert!(!config.contains("ServerAddresses"));
assert!(config.contains(CA_ONLY_PROFILE_UUID));
assert!(config.contains(CA_ONLY_PROFILE_ID));
}
#[test]
fn full_and_ca_only_have_distinct_top_uuids() {
let full = build_mobileconfig(
ProfileMode::Full {
lan_ip: Ipv4Addr::new(10, 0, 0, 1),
},
SAMPLE_PEM,
);
let ca_only = build_mobileconfig(ProfileMode::CaOnly, SAMPLE_PEM);
assert!(full.contains(FULL_PROFILE_UUID));
assert!(!full.contains(CA_ONLY_PROFILE_UUID));
assert!(ca_only.contains(CA_ONLY_PROFILE_UUID));
assert!(!ca_only.contains(FULL_PROFILE_UUID));
}
#[test]
fn both_modes_share_ca_payload_uuid() {
let full = build_mobileconfig(
ProfileMode::Full {
lan_ip: Ipv4Addr::new(10, 0, 0, 1),
},
SAMPLE_PEM,
);
let ca_only = build_mobileconfig(ProfileMode::CaOnly, SAMPLE_PEM);
assert!(full.contains(CA_PAYLOAD_UUID));
assert!(ca_only.contains(CA_PAYLOAD_UUID));
}
}


@@ -4,7 +4,7 @@ use std::sync::Arc;
use axum::body::Body;
use axum::extract::{Request, State};
use axum::response::IntoResponse;
use axum::routing::{any, post};
use axum::Router;
use http_body_util::BodyExt;
use hyper::StatusCode;
@@ -18,6 +18,14 @@ use crate::ctx::ServerCtx;
type HttpClient = Client<hyper_util::client::legacy::connect::HttpConnector, Body>;
/// State passed to the DoH handler. Includes the remote address so
/// `resolve_query` can log the client IP.
#[derive(Clone)]
pub struct DohState {
pub ctx: Arc<ServerCtx>,
pub remote_addr: Option<std::net::SocketAddr>,
}
#[derive(Clone)]
struct ProxyState {
ctx: Arc<ServerCtx>,
@@ -74,9 +82,17 @@ pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, bind_addr: Ipv4Addr
// Hold a separate Arc so we can access tls_config after ctx moves into ProxyState
let tls_holder = Arc::clone(&ctx);
let proxy_state = ProxyState {
ctx: Arc::clone(&ctx),
client,
};
// DoH route (RFC 8484) served only on the TLS listener.
// DohState.remote_addr is set per-connection below.
let doh_state = DohState {
ctx,
remote_addr: None,
};
loop {
let (tcp_stream, remote_addr) = match listener.accept().await {
@@ -91,7 +107,17 @@ pub async fn start_proxy_tls(ctx: Arc<ServerCtx>, port: u16, bind_addr: Ipv4Addr
// unwrap safe: guarded by is_none() check above
let acceptor =
TlsAcceptor::from(Arc::clone(&*tls_holder.tls_config.as_ref().unwrap().load()));
let mut conn_doh_state = doh_state.clone();
conn_doh_state.remote_addr = Some(remote_addr);
let app = Router::new()
.route(
"/dns-query",
post(crate::doh::doh_post).with_state(conn_doh_state),
)
.fallback(any(proxy_handler))
.with_state(proxy_state.clone());
tokio::spawn(async move {
let tls_stream = match acceptor.accept(tcp_stream).await {
@@ -232,7 +258,7 @@ pre .str {{ color: #d48a5a }}
)
}
pub fn extract_host(req: &Request) -> Option<String> {
req.headers()
.get(hyper::header::HOST)
.and_then(|v| v.to_str().ok())
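A client exercising the `/dns-query` route added above POSTs a raw DNS message (plain RFC 1035 wire format, `Content-Type: application/dns-message`) as the HTTP body, per RFC 8484. A self-contained sketch of building that body — illustrative only, not Numa's `BytePacketBuffer`:

```rust
// Build a minimal RFC 1035 query message suitable as an RFC 8484 DoH
// POST body. ID is 0 (RFC 8484 recommends this for HTTP cacheability),
// RD is set, and a single question is encoded.
fn build_dns_query(domain: &str, qtype: u16) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&[0x00, 0x00]); // ID = 0
    buf.extend_from_slice(&[0x01, 0x00]); // flags: RD=1
    buf.extend_from_slice(&[0x00, 0x01]); // QDCOUNT = 1
    buf.extend_from_slice(&[0x00, 0x00, 0x00, 0x00, 0x00, 0x00]); // AN/NS/AR = 0
    for label in domain.split('.') {
        buf.push(label.len() as u8); // length-prefixed label
        buf.extend_from_slice(label.as_bytes());
    }
    buf.push(0); // root label terminator
    buf.extend_from_slice(&qtype.to_be_bytes()); // QTYPE (1 = A)
    buf.extend_from_slice(&[0x00, 0x01]); // QCLASS = IN
    buf
}
```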

src/setup_phone.rs Normal file

@@ -0,0 +1,126 @@
//! `numa setup-phone` CLI — thin QR wrapper over the persistent mobile API.
//!
//! Before the mobile API existed, this command spawned its own one-shot
//! HTTP server on port 8765 to serve a freshly-generated mobileconfig
//! for a single download. That role now belongs to
//! [`crate::mobile_api`], which runs persistently alongside the main
//! API and serves `/mobileconfig` at the same port whenever Numa is
//! running.
//!
//! This command is now a thin terminal-side wrapper:
//!
//! 1. Detect the current LAN IP
//! 2. Render a terminal QR code pointing at
//! `http://<lan_ip>:8765/mobileconfig`
//! 3. Print install instructions and exit
//!
//! The user scans the QR, iOS fetches the profile from the mobile API
//! (which is always up as long as `numa` is running), installs, and the
//! user walks through Settings → Certificate Trust Settings to enable
//! trust.
//!
//! Numa must be running for the profile download to succeed; if the
//! mobile API is not listening on port 8765, the download will fail
//! and the user will see Safari's "Cannot Connect to Server" error.
//! The CLI prints a reminder about this at the bottom of the output.
use qrcode::render::unicode;
use qrcode::QrCode;
/// Default port where the persistent mobile API serves `/mobileconfig`.
/// Matches `MobileConfig::default().port` in `config.rs`. If the user
/// has overridden `[mobile] port = N` in `numa.toml`, they'll need to
/// adjust the URL manually — this CLI uses the default without parsing
/// `numa.toml`.
const SETUP_PORT: u16 = 8765;
fn render_qr(url: &str) -> Result<String, String> {
let code = QrCode::new(url).map_err(|e| format!("failed to encode QR: {}", e))?;
Ok(code
.render::<unicode::Dense1x2>()
.dark_color(unicode::Dense1x2::Light)
.light_color(unicode::Dense1x2::Dark)
.build())
}
/// Run the `numa setup-phone` flow.
pub async fn run() -> Result<(), String> {
let lan_ip = crate::lan::detect_lan_ip()
.ok_or("could not detect LAN IP — are you connected to a network?")?;
let addr = std::net::SocketAddr::from(([127, 0, 0, 1], SETUP_PORT));
let api_reachable = tokio::time::timeout(
std::time::Duration::from_millis(500),
tokio::net::TcpStream::connect(addr),
)
.await
.map(|r| r.is_ok())
.unwrap_or(false);
if !api_reachable {
eprintln!();
eprintln!(
" \x1b[1;38;2;192;98;58mNuma\x1b[0m — mobile API is not reachable on port {}.",
SETUP_PORT
);
eprintln!();
eprintln!(" The phone won't be able to download the profile until the mobile");
eprintln!(" API is running. Add this to your numa.toml and restart Numa:");
eprintln!();
eprintln!(" [mobile]");
eprintln!(" enabled = true");
eprintln!();
return Err("mobile API not running".into());
}
let url = format!("http://{}:{}/mobileconfig", lan_ip, SETUP_PORT);
let qr = render_qr(&url)?;
eprintln!();
eprintln!(" \x1b[1;38;2;192;98;58mNuma Phone Setup\x1b[0m");
eprintln!();
eprintln!(" Profile URL: \x1b[36m{}\x1b[0m", url);
eprintln!();
for line in qr.lines() {
eprintln!(" {}", line);
}
eprintln!();
eprintln!(" \x1b[1mOn your iPhone:\x1b[0m");
eprintln!(" 1. Open Camera, point at the QR code, tap the yellow banner");
eprintln!(" 2. Allow the download when Safari asks");
eprintln!(" 3. Open Settings — tap \"Profile Downloaded\" near the top");
eprintln!(" (or: Settings → General → VPN & Device Management → Numa DNS)");
eprintln!(" 4. Tap Install (top right), enter passcode, Install again");
eprintln!(" 5. \x1b[1mSettings → General → About → Certificate Trust Settings\x1b[0m");
eprintln!(" Toggle ON \"Numa Local CA\" — required for DoT to work");
eprintln!();
eprintln!(
" \x1b[33mNote:\x1b[0m profile uses your laptop's current IP ({}). If your",
lan_ip
);
eprintln!(" laptop changes networks, re-scan this QR — iOS will replace the");
eprintln!(" existing profile automatically (fixed UUID).");
eprintln!();
eprintln!(
" \x1b[90mThe profile is served by Numa's persistent mobile API on port {}.\x1b[0m",
SETUP_PORT
);
eprintln!(" \x1b[90mMake sure `numa` is running before scanning. If it's not,\x1b[0m");
eprintln!(" \x1b[90mstart it with `sudo numa install` or run it interactively.\x1b[0m");
eprintln!();
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn render_qr_produces_unicode() {
let qr = render_qr("http://192.168.1.9:8765/mobileconfig").unwrap();
assert!(!qr.is_empty());
// Dense1x2 uses these block characters
assert!(qr.chars().any(|c| matches!(c, '█' | '▀' | '▄' | ' ')));
}
}
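The reachability probe above wraps `TcpStream::connect` in a 500 ms timeout so a dead mobile API fails fast instead of hanging the CLI. A minimal synchronous sketch of the same pattern, using only the standard library (the tokio-based version in `run()` behaves equivalently; `is_listening` here is an illustrative name, not the crate's API):

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::time::Duration;

/// True if something is accepting TCP connections at `addr` within `timeout`.
/// Sync stand-in for the async probe in `run()` above.
fn is_listening(addr: &SocketAddr, timeout: Duration) -> bool {
    TcpStream::connect_timeout(addr, timeout).is_ok()
}

fn main() {
    // A bound listener on an ephemeral port is reachable...
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    assert!(is_listening(&addr, Duration::from_millis(500)));
    // ...and once it's dropped, the same port refuses connections.
    drop(listener);
    assert!(!is_listening(&addr, Duration::from_millis(500)));
    println!("probe ok");
}
```

The short timeout matters on a LAN: a refused connection returns immediately, but a firewalled port can otherwise block until the OS connect timeout.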


@@ -2,6 +2,17 @@ use std::net::SocketAddr;
use log::info;
fn print_recursive_hint() {
let is_recursive = crate::config::load_config("numa.toml")
.map(|c| c.config.upstream.mode == crate::config::UpstreamMode::Recursive)
.unwrap_or(false);
if !is_recursive {
eprintln!(" Want full DNS sovereignty? Add to numa.toml:");
eprintln!(" [upstream]");
eprintln!(" mode = \"recursive\"\n");
}
}
fn is_loopback_or_stub(addr: &str) -> bool {
matches!(addr, "127.0.0.1" | "127.0.0.53" | "0.0.0.0" | "::1" | "")
}
@@ -46,6 +57,60 @@ pub fn discover_system_dns() -> SystemDnsInfo {
}
}
/// Advisory for port-53 bind failures (EADDRINUSE or EACCES); `None`
/// if not applicable so the caller can fall back to the raw error.
pub fn try_port53_advisory(bind_addr: &str, err: &std::io::Error) -> Option<String> {
if !is_port_53(bind_addr) {
return None;
}
let (title, cause) = match err.kind() {
std::io::ErrorKind::AddrInUse => (
"port 53 is already in use",
"Another process is already bound to port 53. On Linux this is\n \
typically systemd-resolved; on Windows, the DNS Client service.",
),
std::io::ErrorKind::PermissionDenied => (
"permission denied",
"Port 53 is privileged — binding it requires root on Linux/macOS\n \
or Administrator on Windows.",
),
_ => return None,
};
let o = "\x1b[1;38;2;192;98;58m"; // bold orange
let r = "\x1b[0m";
Some(format!(
"
{o}Numa{r} — cannot bind to {bind_addr}: {title}.
{cause}
Fix — pick one:
1. Install Numa as the system resolver (frees port 53):
sudo numa install (on Windows, run as Administrator)
2. Run on a non-privileged port for testing.
Create ~/.config/numa/numa.toml with:
[server]
bind_addr = \"127.0.0.1:5354\"
api_port = 5380
Then run: numa
Test with: dig @127.0.0.1 -p 5354 example.com
"
))
}
fn is_port_53(bind_addr: &str) -> bool {
bind_addr
.parse::<SocketAddr>()
.map(|s| s.port() == 53)
.unwrap_or(false)
}
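The advisory only fires for well-formed socket addresses whose port is 53 and for the two actionable error kinds; everything else returns `None` so the caller can fall back to the raw error. A standalone sketch of that caller pattern (reimplementing `is_port_53` for illustration, and using a hypothetical `bind_error_text` helper rather than the crate's actual call site):

```rust
use std::net::SocketAddr;

// Illustration only: mirrors the is_port_53 gate above.
fn is_port_53(bind_addr: &str) -> bool {
    bind_addr
        .parse::<SocketAddr>()
        .map(|s| s.port() == 53)
        .unwrap_or(false)
}

// Hypothetical caller: prefer the tailored advisory, else the raw io::Error.
fn bind_error_text(bind_addr: &str, err: &std::io::Error) -> String {
    if is_port_53(bind_addr) && err.kind() == std::io::ErrorKind::AddrInUse {
        format!("cannot bind to {}: port 53 is already in use", bind_addr)
    } else {
        format!("cannot bind to {}: {}", bind_addr, err)
    }
}

fn main() {
    let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
    // Port 53 gets the advisory; other ports fall through to the raw error.
    assert!(bind_error_text("0.0.0.0:53", &err).contains("already in use"));
    assert!(!bind_error_text("127.0.0.1:5354", &err).contains("already in use"));
    // A malformed bind address never claims to be port 53.
    assert!(!is_port_53("not-an-address"));
    println!("ok");
}
```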
#[cfg(target_os = "macos")]
fn discover_macos() -> SystemDnsInfo {
use log::{debug, warn};
@@ -174,6 +239,9 @@ fn discover_linux() -> SystemDnsInfo {
let default_upstream = if let Some(ns) = upstream {
info!("detected system upstream: {}", ns);
Some(ns)
} else if let Some(ns) = resolvectl_dns_server() {
info!("detected system upstream via resolvectl: {}", ns);
Some(ns)
} else {
// Fallback to backup from a previous `numa install`
let backup = {
@@ -214,7 +282,18 @@ fn discover_linux() -> SystemDnsInfo {
}
}
-/// Parse resolv.conf in a single pass, extracting both the first non-loopback
+/// Yield each `nameserver` address from resolv.conf content. No filtering —
/// callers decide what counts as a real upstream.
#[cfg(any(target_os = "linux", test))]
fn iter_nameservers(content: &str) -> impl Iterator<Item = &str> {
content.lines().filter_map(|line| {
let mut parts = line.split_whitespace();
(parts.next() == Some("nameserver")).then_some(())?;
parts.next()
})
}
/// Parse resolv.conf in a single pass, extracting the first non-loopback
/// nameserver and all search domains.
#[cfg(target_os = "linux")]
fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
@@ -222,19 +301,13 @@ fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
Ok(t) => t,
Err(_) => return (None, Vec::new()),
};
-let mut upstream = None;
+let upstream = iter_nameservers(&text)
.find(|ns| !is_loopback_or_stub(ns))
.map(str::to_string);
let mut search_domains = Vec::new();
for line in text.lines() {
let line = line.trim();
-if line.starts_with("nameserver") {
-if upstream.is_none() {
-if let Some(ns) = line.split_whitespace().nth(1) {
-if !is_loopback_or_stub(ns) {
-upstream = Some(ns.to_string());
-}
-}
-}
-} else if line.starts_with("search") || line.starts_with("domain") {
+if line.starts_with("search") || line.starts_with("domain") {
for domain in line.split_whitespace().skip(1) {
search_domains.push(domain.to_string());
}
@@ -243,6 +316,21 @@ fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
(upstream, search_domains)
}
/// True if the resolv.conf *content* appears to be written by numa itself,
/// or has no real upstream — either way, it's not a safe source of truth
/// for a backup.
#[cfg(any(target_os = "linux", test))]
fn resolv_conf_is_numa_managed(content: &str) -> bool {
content.contains("Generated by Numa") || !resolv_conf_has_real_upstream(content)
}
/// True if the resolv.conf content has at least one non-loopback, non-stub
/// nameserver. An all-loopback resolv.conf is self-referential.
#[cfg(any(target_os = "linux", test))]
fn resolv_conf_has_real_upstream(content: &str) -> bool {
iter_nameservers(content).any(|ns| !is_loopback_or_stub(ns))
}
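Both predicates above compose from the single `iter_nameservers` iterator. A self-contained sketch of that iterator and the loopback filter (copied shape, standalone for illustration; not the crate's actual module), run against a typical systemd-resolved stub file:

```rust
// Standalone copy of the iterator above, for illustration.
fn iter_nameservers(content: &str) -> impl Iterator<Item = &str> {
    content.lines().filter_map(|line| {
        let mut parts = line.split_whitespace();
        // Keep only lines whose first token is exactly "nameserver".
        (parts.next() == Some("nameserver")).then_some(())?;
        parts.next()
    })
}

fn is_loopback_or_stub(addr: &str) -> bool {
    matches!(addr, "127.0.0.1" | "127.0.0.53" | "0.0.0.0" | "::1" | "")
}

fn main() {
    let conf = "# resolv.conf\nnameserver 127.0.0.53\nnameserver 1.1.1.1\nsearch lan\n";
    // The iterator yields every nameserver, stub or not...
    let all: Vec<&str> = iter_nameservers(conf).collect();
    assert_eq!(all, ["127.0.0.53", "1.1.1.1"]);
    // ...while the "real upstream" check skips the systemd stub.
    let real = iter_nameservers(conf).find(|ns| !is_loopback_or_stub(ns));
    assert_eq!(real, Some("1.1.1.1"));
    println!("{:?}", real);
}
```

Separating iteration from filtering is what lets `parse_resolv_conf`, `resolv_conf_has_real_upstream`, and the backup checks share one parser without duplicating the `nameserver` tokenization.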
/// Query resolvectl for the real upstream DNS server (e.g. VPC resolver on AWS).
#[cfg(target_os = "linux")]
fn resolvectl_dns_server() -> Option<String> {
@@ -526,9 +614,19 @@ fn enable_dnscache() {
.status();
}
/// True if the backup map has at least one real upstream (non-loopback, non-stub).
#[cfg(any(windows, test))]
fn backup_has_real_upstream_windows(
interfaces: &std::collections::HashMap<String, WindowsInterfaceDns>,
) -> bool {
interfaces
.values()
.any(|iface| iface.servers.iter().any(|s| !is_loopback_or_stub(s)))
}
#[cfg(windows)]
fn install_windows() -> Result<(), String> {
-let interfaces = get_windows_interfaces()?;
+let mut interfaces = get_windows_interfaces()?;
if interfaces.is_empty() {
return Err("no active network interfaces found".to_string());
}
@@ -538,9 +636,30 @@ fn install_windows() -> Result<(), String> {
std::fs::create_dir_all(parent)
.map_err(|e| format!("failed to create {}: {}", parent.display(), e))?;
}
// Preserve an existing useful backup rather than overwriting it with
// numa-managed state (which would be self-referential after uninstall).
let existing: Option<std::collections::HashMap<String, WindowsInterfaceDns>> =
std::fs::read_to_string(&path)
.ok()
.and_then(|json| serde_json::from_str(&json).ok());
let has_useful_existing = existing
.as_ref()
.map(backup_has_real_upstream_windows)
.unwrap_or(false);
if has_useful_existing {
eprintln!(" Existing DNS backup preserved at {}", path.display());
} else {
// Filter loopback/stub addresses before saving so a fresh backup
// captured from already-numa-managed state isn't self-referential.
for iface in interfaces.values_mut() {
iface.servers.retain(|s| !is_loopback_or_stub(s));
}
let json = serde_json::to_string_pretty(&interfaces)
.map_err(|e| format!("failed to serialize backup: {}", e))?;
std::fs::write(&path, json).map_err(|e| format!("failed to write backup: {}", e))?;
}
for name in interfaces.keys() {
let status = std::process::Command::new("netsh")
@@ -570,16 +689,17 @@ fn install_windows() -> Result<(), String> {
let needs_reboot = disable_dnscache()?;
register_autostart();
-eprintln!("\n Original DNS saved to {}", path.display());
+eprintln!();
if !has_useful_existing {
eprintln!(" Original DNS saved to {}", path.display());
}
eprintln!(" Run 'numa uninstall' to restore.\n");
if needs_reboot {
eprintln!(" *** Reboot required. Numa will start automatically. ***\n");
} else {
eprintln!(" Numa will start automatically on next boot.\n");
}
-eprintln!(" Want full DNS sovereignty? Add to numa.toml:");
-eprintln!(" [upstream]");
-eprintln!(" mode = \"recursive\"\n");
+print_recursive_hint();
Ok(())
}
@@ -754,27 +874,60 @@ fn get_dns_servers(service: &str) -> Result<Vec<String>, String> {
}
}
/// True if the backup map has at least one real upstream (non-loopback, non-stub).
/// An all-loopback backup is self-referential — restoring it is a no-op.
#[cfg(any(target_os = "macos", test))]
fn backup_has_real_upstream_macos(
servers: &std::collections::HashMap<String, Vec<String>>,
) -> bool {
servers
.values()
.any(|list| list.iter().any(|s| !is_loopback_or_stub(s)))
}
#[cfg(target_os = "macos")]
fn install_macos() -> Result<(), String> {
use std::collections::HashMap;
let services = get_network_services()?;
-let mut original: HashMap<String, Vec<String>> = HashMap::new();
-// Save current DNS for each service
-for service in &services {
-let servers = get_dns_servers(service)?;
-original.insert(service.clone(), servers);
-}
-// Save backup
let dir = numa_data_dir();
std::fs::create_dir_all(&dir)
.map_err(|e| format!("failed to create {}: {}", dir.display(), e))?;
// If a useful backup already exists (at least one non-loopback upstream),
// preserve it — overwriting would destroy the original DNS state when
// re-installing on top of a numa-managed configuration.
let existing_backup: Option<HashMap<String, Vec<String>>> =
std::fs::read_to_string(backup_path())
.ok()
.and_then(|json| serde_json::from_str(&json).ok());
let has_useful_existing = existing_backup
.as_ref()
.map(backup_has_real_upstream_macos)
.unwrap_or(false);
if has_useful_existing {
eprintln!(
" Existing DNS backup preserved at {}",
backup_path().display()
);
} else {
// Capture fresh, filtering out loopback and stub addresses so we
// never record a self-referential backup.
let mut original: HashMap<String, Vec<String>> = HashMap::new();
for service in &services {
let servers: Vec<String> = get_dns_servers(service)?
.into_iter()
.filter(|s| !is_loopback_or_stub(s))
.collect();
original.insert(service.clone(), servers);
}
let json = serde_json::to_string_pretty(&original)
.map_err(|e| format!("failed to serialize backup: {}", e))?;
-std::fs::write(backup_path(), json).map_err(|e| format!("failed to write backup: {}", e))?;
+std::fs::write(backup_path(), json)
.map_err(|e| format!("failed to write backup: {}", e))?;
}
// Set DNS to 127.0.0.1 and add "numa" search domain for each service
for service in &services {
@@ -795,7 +948,10 @@ fn install_macos() -> Result<(), String> {
.status();
}
-eprintln!("\n Original DNS saved to {}", backup_path().display());
+eprintln!();
if !has_useful_existing {
eprintln!(" Original DNS saved to {}", backup_path().display());
}
eprintln!(" Run 'sudo numa uninstall' to restore.\n");
Ok(())
@@ -990,14 +1146,23 @@ fn install_service_macos() -> Result<(), String> {
std::fs::write(PLIST_DEST, plist)
.map_err(|e| format!("failed to write {}: {}", PLIST_DEST, e))?;
-// Load the service first so numa is listening before DNS redirect
+// Modern launchctl API: explicitly tear down any existing in-memory
// state, then bootstrap fresh from the on-disk plist. The deprecated
// `load -w` returns exit 0 even when it cannot actually reload (label
// already in launchd state), silently leaving the daemon running a
// stale binary path after `numa install` rewrites the plist on disk —
// which is exactly what `brew upgrade numa` does.
let _ = std::process::Command::new("launchctl")
.args(["bootout", "system", PLIST_DEST])
.status();
let status = std::process::Command::new("launchctl")
-.args(["load", "-w", PLIST_DEST])
+.args(["bootstrap", "system", PLIST_DEST])
.status()
.map_err(|e| format!("failed to run launchctl: {}", e))?;
if !status.success() {
-return Err("launchctl load failed".to_string());
+return Err("launchctl bootstrap failed".to_string());
}
// Wait for numa to be ready before redirecting DNS
@@ -1010,7 +1175,7 @@ fn install_service_macos() -> Result<(), String> {
if !api_up {
// Service failed to start — don't redirect DNS to a dead endpoint
let _ = std::process::Command::new("launchctl")
-.args(["unload", PLIST_DEST])
+.args(["bootout", "system", PLIST_DEST])
.status();
return Err(
"numa service did not start (port 53 may be in use). Service unloaded.".to_string(),
@@ -1025,9 +1190,7 @@ fn install_service_macos() -> Result<(), String> {
eprintln!(" Numa will auto-start on boot and restart if killed.");
eprintln!(" Logs: /usr/local/var/log/numa.log");
eprintln!(" Run 'sudo numa uninstall' to restore original DNS.\n");
-eprintln!(" Want full DNS sovereignty? Add to numa.toml:");
-eprintln!(" [upstream]");
-eprintln!(" mode = \"recursive\"\n");
+print_recursive_hint();
Ok(())
}
@@ -1038,22 +1201,25 @@ fn uninstall_service_macos() -> Result<(), String> {
eprintln!(" warning: failed to restore system DNS: {}", e);
}
-// Remove plist first so service won't restart on boot even if unload fails
-if let Err(e) = std::fs::remove_file(PLIST_DEST) {
-if e.kind() != std::io::ErrorKind::NotFound {
-return Err(format!("failed to remove {}: {}", PLIST_DEST, e));
-}
-}
-// Unload the service
-let status = std::process::Command::new("launchctl")
-.args(["unload", "-w", PLIST_DEST])
-.status();
-if let Ok(s) = status {
-if !s.success() {
-eprintln!(
-" warning: launchctl unload returned non-zero (service may still be running)"
-);
-}
-}
+// Bootout the service from launchd's in-memory state BEFORE removing
+// the plist. The modern API needs the file path as the specifier;
+// doing this in the wrong order would leave the service loaded in
+// memory until reboot. (Deprecated `unload -w` had the same issue.)
+let bootout_status = std::process::Command::new("launchctl")
+.args(["bootout", "system", PLIST_DEST])
+.status();
+if let Ok(s) = bootout_status {
+if !s.success() {
+eprintln!(
+" warning: launchctl bootout returned non-zero (service may not have been loaded)"
+);
+}
+}
+// Remove plist so the service won't restart on boot
+if let Err(e) = std::fs::remove_file(PLIST_DEST) {
+if e.kind() != std::io::ErrorKind::NotFound {
+return Err(format!("failed to remove {}: {}", PLIST_DEST, e));
+}
+}
@@ -1132,11 +1298,31 @@ fn install_linux() -> Result<(), String> {
.map_err(|e| format!("failed to create {}: {}", parent.display(), e))?;
}
-// Back up current resolv.conf (ignore NotFound)
-match std::fs::copy(resolv, &backup) {
-Ok(_) => eprintln!(" Saved /etc/resolv.conf to {}", backup.display()),
-Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
-Err(e) => return Err(format!("failed to backup /etc/resolv.conf: {}", e)),
-}
+// Back up current resolv.conf, but never overwrite a useful existing
+// backup with a numa-managed file — that would leave uninstall with
+// nothing to restore to.
+let current = std::fs::read_to_string(resolv).ok();
+let current_is_numa_managed = current
+.as_deref()
+.map(resolv_conf_is_numa_managed)
+.unwrap_or(false);
+let existing_backup_is_useful = std::fs::read_to_string(&backup)
+.ok()
+.as_deref()
+.map(resolv_conf_has_real_upstream)
+.unwrap_or(false);
+if existing_backup_is_useful {
+eprintln!(
+" Existing resolv.conf backup preserved at {}",
+backup.display()
+);
+} else if current_is_numa_managed {
+eprintln!(" warning: /etc/resolv.conf is already numa-managed; no fresh backup written");
+} else if let Some(content) = current.as_deref() {
+std::fs::write(&backup, content)
+.map_err(|e| format!("failed to backup /etc/resolv.conf: {}", e))?;
+eprintln!(" Saved /etc/resolv.conf to {}", backup.display());
+}
if resolv
@@ -1209,9 +1395,7 @@ fn install_service_linux() -> Result<(), String> {
eprintln!(" Numa will auto-start on boot and restart if killed.");
eprintln!(" Logs: journalctl -u numa -f");
eprintln!(" Run 'sudo numa uninstall' to restore original DNS.\n");
-eprintln!(" Want full DNS sovereignty? Add to numa.toml:");
-eprintln!(" [upstream]");
-eprintln!(" mode = \"recursive\"\n");
+print_recursive_hint();
Ok(())
}
@@ -1278,14 +1462,86 @@ fn run_systemctl(args: &[&str]) -> Result<(), String> {
// --- CA trust management ---
/// One Linux trust-store backend (Debian, Fedora pki, Arch p11-kit).
#[cfg(target_os = "linux")]
struct LinuxTrustStore {
name: &'static str,
anchor_dir: &'static str,
anchor_file: &'static str,
refresh_install: &'static [&'static str],
refresh_uninstall: &'static [&'static str],
}
// If you change this table, update tests/docker/install-trust.sh to match —
// it asserts the same paths/commands against real distro images.
#[cfg(target_os = "linux")]
const LINUX_TRUST_STORES: &[LinuxTrustStore] = &[
// Debian / Ubuntu / Mint
LinuxTrustStore {
name: "debian",
anchor_dir: "/usr/local/share/ca-certificates",
anchor_file: "numa-local-ca.crt",
refresh_install: &["update-ca-certificates"],
refresh_uninstall: &["update-ca-certificates", "--fresh"],
},
// Fedora / RHEL / CentOS / SUSE (p11-kit via update-ca-trust wrapper)
LinuxTrustStore {
name: "pki",
anchor_dir: "/etc/pki/ca-trust/source/anchors",
anchor_file: "numa-local-ca.pem",
refresh_install: &["update-ca-trust", "extract"],
refresh_uninstall: &["update-ca-trust", "extract"],
},
// Arch / Manjaro (raw p11-kit)
LinuxTrustStore {
name: "p11kit",
anchor_dir: "/etc/ca-certificates/trust-source/anchors",
anchor_file: "numa-local-ca.pem",
refresh_install: &["trust", "extract-compat"],
refresh_uninstall: &["trust", "extract-compat"],
},
];
#[cfg(target_os = "linux")]
fn detect_linux_trust_store() -> Option<&'static LinuxTrustStore> {
LINUX_TRUST_STORES
.iter()
.find(|s| std::path::Path::new(s.anchor_dir).is_dir())
}
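Detection is simply "first candidate whose anchor directory exists", which makes precedence depend on table order. A generic sketch of that probe, testable without the real /etc paths (the `detect` helper and its candidate tuples are illustrative, not the crate's API):

```rust
use std::path::Path;

// Illustrative probe: the first candidate whose directory exists wins,
// mirroring detect_linux_trust_store above.
fn detect<'a>(candidates: &'a [(&'a str, &'a str)]) -> Option<&'a str> {
    candidates
        .iter()
        .find(|(_, dir)| Path::new(dir).is_dir())
        .map(|(name, _)| *name)
}

fn main() {
    // No existing anchor dir means no store is detected.
    assert_eq!(detect(&[("debian", "/no-such-dir/ca-certificates")]), None);
    // Order matters: "." always exists, so the first existing entry wins
    // even when a later candidate would also match.
    let found = detect(&[("missing", "/no-such-dir"), ("cwd", "."), ("also", ".")]);
    assert_eq!(found, Some("cwd"));
    println!("{:?}", found);
}
```

On a distro shipping more than one of these directories (some Fedora derivatives have both the pki and p11-kit paths), the table order above decides which refresh command runs, which is why the comment pins the table to the Docker trust tests.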
fn trust_ca() -> Result<(), String> {
-let ca_path = crate::data_dir().join("ca.pem");
+let ca_path = crate::data_dir().join(crate::tls::CA_FILE_NAME);
if !ca_path.exists() {
return Err("CA not generated yet — start numa first to create certificates".into());
}
#[cfg(target_os = "macos")]
-{
+let result = trust_ca_macos(&ca_path);
#[cfg(target_os = "linux")]
let result = trust_ca_linux(&ca_path);
#[cfg(windows)]
let result = trust_ca_windows(&ca_path);
#[cfg(not(any(target_os = "macos", target_os = "linux", windows)))]
let result = Err::<(), String>("CA trust not supported on this OS".to_string());
result
}
fn untrust_ca() -> Result<(), String> {
#[cfg(target_os = "macos")]
let result = untrust_ca_macos();
#[cfg(target_os = "linux")]
let result = untrust_ca_linux();
#[cfg(windows)]
let result = untrust_ca_windows();
#[cfg(not(any(target_os = "macos", target_os = "linux", windows)))]
let result = Ok::<(), String>(());
result
}
#[cfg(target_os = "macos")]
fn trust_ca_macos(ca_path: &std::path::Path) -> Result<(), String> {
let status = std::process::Command::new("security")
.args([
"add-trusted-cert",
@@ -1295,48 +1551,23 @@ fn trust_ca() -> Result<(), String> {
"-k",
"/Library/Keychains/System.keychain",
])
-.arg(&ca_path)
+.arg(ca_path)
.status()
.map_err(|e| format!("security: {}", e))?;
if !status.success() {
return Err("security add-trusted-cert failed".into());
}
eprintln!(" Trusted Numa CA in system keychain");
-}
-#[cfg(target_os = "linux")]
-{
-let dest = std::path::Path::new("/usr/local/share/ca-certificates/numa-local-ca.crt");
-std::fs::copy(&ca_path, dest).map_err(|e| format!("copy CA: {}", e))?;
-let status = std::process::Command::new("update-ca-certificates")
-.status()
-.map_err(|e| format!("update-ca-certificates: {}", e))?;
-if !status.success() {
-return Err("update-ca-certificates failed".into());
-}
-eprintln!(" Trusted Numa CA system-wide");
-}
-#[cfg(not(any(target_os = "macos", target_os = "linux")))]
-{
-Err("CA trust not supported on this OS".into())
-}
-#[cfg(any(target_os = "macos", target_os = "linux"))]
Ok(())
}
-fn untrust_ca() -> Result<(), String> {
-let ca_path = crate::data_dir().join("ca.pem");
#[cfg(target_os = "macos")]
-{
+fn untrust_ca_macos() -> Result<(), String> {
// Find all Numa CA certs by hash and delete each one
if let Ok(out) = std::process::Command::new("security")
.args([
"find-certificate",
"-c",
-"Numa Local CA",
+crate::tls::CA_COMMON_NAME,
"-a",
"-Z",
"/Library/Keychains/System.keychain",
@@ -1359,21 +1590,81 @@ fn untrust_ca() -> Result<(), String> {
}
}
eprintln!(" Removed Numa CA from system keychain");
Ok(())
}
#[cfg(target_os = "linux")]
-{
-let dest = std::path::Path::new("/usr/local/share/ca-certificates/numa-local-ca.crt");
-if dest.exists() {
-let _ = std::fs::remove_file(dest);
-let _ = std::process::Command::new("update-ca-certificates")
-.arg("--fresh")
-.status();
-eprintln!(" Removed Numa CA from system trust store");
-}
-}
+fn trust_ca_linux(ca_path: &std::path::Path) -> Result<(), String> {
+let store = detect_linux_trust_store().ok_or_else(|| {
+let names: Vec<&str> = LINUX_TRUST_STORES.iter().map(|s| s.name).collect();
+format!(
+"no supported CA trust store found (tried: {}). \
+Please report at https://github.com/razvandimescu/numa/issues",
+names.join(", ")
+)
+})?;
+let dest = std::path::Path::new(store.anchor_dir).join(store.anchor_file);
+std::fs::copy(ca_path, &dest).map_err(|e| format!("copy CA to {}: {}", dest.display(), e))?;
+run_refresh(store.name, store.refresh_install)?;
+eprintln!(" Trusted Numa CA system-wide ({})", store.name);
+Ok(())
+}
-let _ = ca_path; // suppress unused warning on other platforms
+#[cfg(target_os = "linux")]
fn untrust_ca_linux() -> Result<(), String> {
let Some(store) = detect_linux_trust_store() else {
return Ok(());
};
let dest = std::path::Path::new(store.anchor_dir).join(store.anchor_file);
match std::fs::remove_file(&dest) {
Ok(()) => {
let _ = run_refresh(store.name, store.refresh_uninstall);
eprintln!(" Removed Numa CA from system trust store ({})", store.name);
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(_) => {} // best-effort uninstall
}
Ok(())
}
#[cfg(target_os = "linux")]
fn run_refresh(store_name: &str, argv: &[&str]) -> Result<(), String> {
let (cmd, args) = argv
.split_first()
.expect("refresh command must be non-empty");
let status = std::process::Command::new(cmd)
.args(args)
.status()
.map_err(|e| format!("{} ({}): {}", cmd, store_name, e))?;
if !status.success() {
return Err(format!("{} ({}) failed", cmd, store_name));
}
Ok(())
}
#[cfg(windows)]
fn trust_ca_windows(ca_path: &std::path::Path) -> Result<(), String> {
let status = std::process::Command::new("certutil")
.args(["-addstore", "-f", "Root"])
.arg(ca_path)
.status()
.map_err(|e| format!("certutil: {}", e))?;
if !status.success() {
return Err("certutil -addstore Root failed (run as Administrator?)".into());
}
eprintln!(" Trusted Numa CA in Windows Root store");
Ok(())
}
#[cfg(windows)]
fn untrust_ca_windows() -> Result<(), String> {
let _ = std::process::Command::new("certutil")
.args(["-delstore", "Root", crate::tls::CA_COMMON_NAME])
.status();
eprintln!(" Removed Numa CA from Windows Root store");
Ok(())
}
@@ -1432,6 +1723,82 @@ Wireless LAN adapter Wi-Fi:
assert!(!result.contains("{{exe_path}}"));
}
#[test]
fn macos_backup_real_upstream_detection() {
use std::collections::HashMap;
let mut map: HashMap<String, Vec<String>> = HashMap::new();
// Empty backup → no real upstream
assert!(!backup_has_real_upstream_macos(&map));
// All-loopback backup → still no real upstream (the bug case)
map.insert("Wi-Fi".into(), vec!["127.0.0.1".into()]);
map.insert("Ethernet".into(), vec!["::1".into()]);
assert!(!backup_has_real_upstream_macos(&map));
// One real entry → useful
map.insert("Tailscale".into(), vec!["192.168.1.1".into()]);
assert!(backup_has_real_upstream_macos(&map));
}
#[test]
fn windows_backup_filters_loopback() {
use std::collections::HashMap;
let mut map: HashMap<String, WindowsInterfaceDns> = HashMap::new();
// Empty backup → no real upstream
assert!(!backup_has_real_upstream_windows(&map));
// All-loopback backup → still no real upstream (the bug case)
map.insert(
"Wi-Fi".into(),
WindowsInterfaceDns {
dhcp: false,
servers: vec!["127.0.0.1".into()],
},
);
map.insert(
"Ethernet".into(),
WindowsInterfaceDns {
dhcp: false,
servers: vec!["::1".into(), "0.0.0.0".into()],
},
);
assert!(!backup_has_real_upstream_windows(&map));
// One real entry alongside loopback → useful
map.insert(
"Ethernet 2".into(),
WindowsInterfaceDns {
dhcp: false,
servers: vec!["192.168.1.1".into()],
},
);
assert!(backup_has_real_upstream_windows(&map));
}
#[test]
fn resolv_conf_real_upstream_detection() {
let real = "nameserver 192.168.1.1\nsearch lan\n";
assert!(resolv_conf_has_real_upstream(real));
assert!(!resolv_conf_is_numa_managed(real));
let self_ref = "nameserver 127.0.0.1\nsearch numa\n";
assert!(!resolv_conf_has_real_upstream(self_ref));
assert!(resolv_conf_is_numa_managed(self_ref));
let numa_marker =
"# Generated by Numa — run 'sudo numa uninstall' to restore\nnameserver 127.0.0.1\nsearch numa\n";
assert!(resolv_conf_is_numa_managed(numa_marker));
let systemd_stub = "nameserver 127.0.0.53\noptions edns0\n";
assert!(!resolv_conf_has_real_upstream(systemd_stub));
let mixed = "nameserver 127.0.0.1\nnameserver 1.1.1.1\n";
assert!(resolv_conf_has_real_upstream(mixed));
assert!(!resolv_conf_is_numa_managed(mixed));
}
#[test]
fn parse_ipconfig_skips_disconnected() {
let sample = "\
@@ -1448,4 +1815,43 @@ Wireless LAN adapter Wi-Fi:
assert_eq!(result.len(), 1);
assert!(result.contains_key("Wi-Fi"));
}
#[test]
fn try_port53_advisory_addr_in_use() {
let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
let msg = try_port53_advisory("0.0.0.0:53", &err).expect("should advise on port 53");
assert!(msg.contains("cannot bind to"));
assert!(msg.contains("already in use"));
assert!(msg.contains("numa install"));
assert!(msg.contains("bind_addr"));
}
#[test]
fn try_port53_advisory_permission_denied() {
let err = std::io::Error::from(std::io::ErrorKind::PermissionDenied);
let msg = try_port53_advisory("0.0.0.0:53", &err).expect("should advise on port 53");
assert!(msg.contains("cannot bind to"));
assert!(msg.contains("permission denied"));
assert!(msg.contains("numa install"));
assert!(msg.contains("bind_addr"));
}
#[test]
fn try_port53_advisory_skips_non_53_ports() {
let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
assert!(try_port53_advisory("127.0.0.1:5354", &err).is_none());
assert!(try_port53_advisory("[::]:853", &err).is_none());
}
#[test]
fn try_port53_advisory_skips_unrelated_error_kinds() {
let err = std::io::Error::from(std::io::ErrorKind::NotFound);
assert!(try_port53_advisory("0.0.0.0:53", &err).is_none());
}
#[test]
fn try_port53_advisory_skips_malformed_bind_addr() {
let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
assert!(try_port53_advisory("not-an-address", &err).is_none());
}
}


@@ -5,7 +5,9 @@ use std::sync::Arc;
use log::{info, warn};
use crate::ctx::ServerCtx;
-use rcgen::{BasicConstraints, CertificateParams, DnType, IsCa, KeyPair, KeyUsagePurpose, SanType};
+use rcgen::{
+    BasicConstraints, CertificateParams, DnType, IsCa, Issuer, KeyPair, KeyUsagePurpose, SanType,
+};
use rustls::pki_types::{CertificateDer, PrivateKeyDer, PrivatePkcs8KeyDer};
use rustls::ServerConfig;
use time::{Duration, OffsetDateTime};
@@ -13,6 +15,13 @@ use time::{Duration, OffsetDateTime};
const CA_VALIDITY_DAYS: i64 = 3650; // 10 years
const CERT_VALIDITY_DAYS: i64 = 365; // 1 year
/// Common Name on Numa's local CA. Referenced by trust-store helpers
/// (`security`, `certutil`) when locating the cert for removal.
pub const CA_COMMON_NAME: &str = "Numa Local CA";
/// Filename of the CA certificate inside the data dir.
pub const CA_FILE_NAME: &str = "ca.pem";
/// Collect all service + LAN peer names and regenerate the TLS cert.
pub fn regenerate_tls(ctx: &ServerCtx) {
let tls = match &ctx.tls_config {
@@ -33,6 +42,40 @@ pub fn regenerate_tls(ctx: &ServerCtx) {
}
}
/// Advisory for TLS-setup failures caused by a non-writable data dir;
/// `None` if not applicable so the caller can fall back to the raw error.
pub fn try_data_dir_advisory(err: &crate::Error, data_dir: &Path) -> Option<String> {
let io_err = err.downcast_ref::<std::io::Error>()?;
if io_err.kind() != std::io::ErrorKind::PermissionDenied {
return None;
}
let o = "\x1b[1;38;2;192;98;58m";
let r = "\x1b[0m";
Some(format!(
"
{o}Numa{r} — HTTPS proxy disabled: cannot write TLS CA to {}.
The data directory is not writable by the current user. Numa needs
to persist a local Certificate Authority there to serve .numa over
HTTPS. DNS resolution and plain-HTTP proxy continue to work.
Fix — pick one:
1. Install Numa as the system resolver (sets up a writable data dir):
sudo numa install (on Windows, run as Administrator)
2. Point data_dir at a path you can write.
Create ~/.config/numa/numa.toml with:
[server]
data_dir = \"/path/you/can/write\"
",
data_dir.display()
))
}
/// Build a TLS config with a cert covering all provided service names.
/// Wildcards under single-label TLDs (*.numa) are rejected by browsers,
/// so we list each service explicitly as a SAN.
@@ -46,8 +89,8 @@ pub fn build_tls_config(
alpn: Vec<Vec<u8>>,
data_dir: &Path,
) -> crate::Result<Arc<ServerConfig>> {
-let (ca_cert, ca_key) = ensure_ca(data_dir)?;
-let (cert_chain, key) = generate_service_cert(&ca_cert, &ca_key, tld, service_names)?;
+let (ca_der, issuer) = ensure_ca(data_dir)?;
+let (cert_chain, key) = generate_service_cert(&ca_der, &issuer, tld, service_names)?;
// Ensure a crypto provider is installed (rustls needs one)
let _ = rustls::crypto::ring::default_provider().install_default();
@@ -65,18 +108,20 @@ pub fn build_tls_config(
Ok(Arc::new(config))
}
-fn ensure_ca(dir: &Path) -> crate::Result<(rcgen::Certificate, KeyPair)> {
+fn ensure_ca(dir: &Path) -> crate::Result<(CertificateDer<'static>, Issuer<'static, KeyPair>)> {
let ca_key_path = dir.join("ca.key");
-let ca_cert_path = dir.join("ca.pem");
+let ca_cert_path = dir.join(CA_FILE_NAME);
if ca_key_path.exists() && ca_cert_path.exists() {
let key_pem = std::fs::read_to_string(&ca_key_path)?;
let cert_pem = std::fs::read_to_string(&ca_cert_path)?;
let key_pair = KeyPair::from_pem(&key_pem)?;
-let params = CertificateParams::from_ca_cert_pem(&cert_pem)?;
-let cert = params.self_signed(&key_pair)?;
+let ca_der = rustls_pemfile::certs(&mut cert_pem.as_bytes())
+    .next()
+    .ok_or("empty CA PEM file")??;
+let issuer = Issuer::from_ca_cert_der(&ca_der, key_pair)?;
info!("loaded CA from {:?}", ca_cert_path);
-return Ok((cert, key_pair));
+return Ok((ca_der, issuer));
}
// Generate new CA
@@ -86,7 +131,7 @@ fn ensure_ca(dir: &Path) -> crate::Result<(rcgen::Certificate, KeyPair)> {
let mut params = CertificateParams::default();
params
.distinguished_name
-.push(DnType::CommonName, "Numa Local CA");
+.push(DnType::CommonName, CA_COMMON_NAME);
params.is_ca = IsCa::Ca(BasicConstraints::Unconstrained);
params.key_usages = vec![KeyUsagePurpose::KeyCertSign, KeyUsagePurpose::CrlSign];
params.not_before = OffsetDateTime::now_utc();
@@ -104,14 +149,16 @@ fn ensure_ca(dir: &Path) -> crate::Result<(rcgen::Certificate, KeyPair)> {
}
info!("generated CA at {:?}", ca_cert_path);
-Ok((cert, key_pair))
+let ca_der = cert.der().clone();
+let issuer = Issuer::new(params, key_pair);
+Ok((ca_der, issuer))
}
/// Generate a cert with explicit SANs for each service name.
/// Always regenerated at startup (~5ms) — no disk caching needed.
fn generate_service_cert(
-ca_cert: &rcgen::Certificate,
-ca_key: &KeyPair,
+ca_der: &CertificateDer<'static>,
+issuer: &Issuer<'_, KeyPair>,
tld: &str,
service_names: &[String],
) -> crate::Result<(Vec<CertificateDer<'static>>, PrivateKeyDer<'static>)> {
@@ -146,7 +193,7 @@ fn generate_service_cert(
params.not_before = OffsetDateTime::now_utc();
params.not_after = OffsetDateTime::now_utc() + Duration::days(CERT_VALIDITY_DAYS);
-let cert = params.signed_by(&key_pair, ca_cert, ca_key)?;
+let cert = params.signed_by(&key_pair, issuer)?;
info!(
"generated TLS cert for: {}",
@@ -157,9 +204,39 @@
.join(", ")
);
-let cert_der = CertificateDer::from(cert.der().to_vec());
-let ca_der = CertificateDer::from(ca_cert.der().to_vec());
+let cert_der = cert.der().clone();
+let ca_cert_der = ca_der.clone();
let key_der = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));
-Ok((vec![cert_der, ca_der], key_der))
+Ok((vec![cert_der, ca_cert_der], key_der))
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
#[test]
fn try_data_dir_advisory_permission_denied() {
let err: crate::Error =
Box::new(std::io::Error::from(std::io::ErrorKind::PermissionDenied));
let path = PathBuf::from("/usr/local/var/numa");
let msg = try_data_dir_advisory(&err, &path).expect("should advise");
assert!(msg.contains("HTTPS proxy disabled"));
assert!(msg.contains("/usr/local/var/numa"));
assert!(msg.contains("numa install"));
assert!(msg.contains("data_dir"));
}
#[test]
fn try_data_dir_advisory_skips_other_io_kinds() {
let err: crate::Error = Box::new(std::io::Error::from(std::io::ErrorKind::NotFound));
assert!(try_data_dir_advisory(&err, &PathBuf::from("/x")).is_none());
}
#[test]
fn try_data_dir_advisory_skips_non_io_errors() {
let err: crate::Error = "rcgen failure".into();
assert!(try_data_dir_advisory(&err, &PathBuf::from("/x")).is_none());
}
}

tests/docker/install-trust.sh Executable file

@@ -0,0 +1,123 @@
#!/usr/bin/env bash
#
# Cross-distro CA trust contract test for issue #35.
#
# Runs the exact shell commands `src/system_dns.rs::trust_ca_linux` would run
# on each Linux trust-store family (Debian, Fedora pki, Arch p11-kit), and
# asserts the certificate ends up in (and is removed from) the system bundle.
#
# This is a contract test, not an integration test: it doesn't drive the Rust
# code (that would need systemd-in-container). It verifies the assumptions in
# `LINUX_TRUST_STORES` against the real distro behavior. If you change that
# table in src/system_dns.rs, update the per-distro cases below to match.
#
# Requirements: docker, openssl (host).
# Usage: ./tests/docker/install-trust.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
# Self-signed CA fixture, mounted into each container as ca.pem.
# basicConstraints=CA:TRUE is required — without it, Debian's
# update-ca-certificates silently skips the cert during bundle build.
FIXTURE_DIR=$(mktemp -d)
trap 'rm -rf "$FIXTURE_DIR"' EXIT
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
-keyout "$FIXTURE_DIR/ca.key" \
-out "$FIXTURE_DIR/ca.pem" \
-subj "/CN=Numa Local CA Test $(date +%s)" \
-addext "basicConstraints=critical,CA:TRUE" \
-addext "keyUsage=critical,keyCertSign,cRLSign" >/dev/null 2>&1
# Distro bundles store certs differently — Debian writes raw PEM only,
# Fedora prepends "# CN" comment headers, Arch via extract-compat is
# raw PEM. To detect cert presence uniformly we grep for a deterministic
# substring of the base64 body (first base64 line is unique per cert).
CERT_TAG=$(sed -n '2p' "$FIXTURE_DIR/ca.pem")
PASSED=0; FAILED=0
run_case() {
local distro="$1"; shift
local image="$1"; shift
local platform="$1"; shift
local script="$1"
printf "── %s (%s) ──\n" "$distro" "$image"
if docker run --rm \
--platform "$platform" \
--security-opt seccomp=unconfined \
-e CERT_TAG="$CERT_TAG" \
-e DEBIAN_FRONTEND=noninteractive \
-v "$FIXTURE_DIR/ca.pem:/fixture/ca.pem:ro" \
"$image" bash -c "$script"; then
printf "${GREEN}✓${RESET} %s\n\n" "$distro"
PASSED=$((PASSED + 1))
else
printf "${RED}✗${RESET} %s\n\n" "$distro"
FAILED=$((FAILED + 1))
fi
}
# Debian / Ubuntu / Mint — anchor: /usr/local/share/ca-certificates/*.crt
run_case "debian" "debian:stable" "linux/amd64" '
set -e
apt-get update -qq
apt-get install -qq -y ca-certificates >/dev/null
install -m 0644 /fixture/ca.pem /usr/local/share/ca-certificates/numa-local-ca.crt
update-ca-certificates >/dev/null 2>&1
grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt
echo " install: cert present in bundle"
rm /usr/local/share/ca-certificates/numa-local-ca.crt
update-ca-certificates --fresh >/dev/null 2>&1
if grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt; then
echo " uninstall: cert STILL present (regression)" >&2
exit 1
fi
echo " uninstall: cert removed from bundle"
'
# Fedora / RHEL / CentOS / SUSE — anchor: /etc/pki/ca-trust/source/anchors/*.pem
run_case "fedora" "fedora:latest" "linux/amd64" '
set -e
dnf install -q -y ca-certificates >/dev/null
install -m 0644 /fixture/ca.pem /etc/pki/ca-trust/source/anchors/numa-local-ca.pem
update-ca-trust extract
grep -q "$CERT_TAG" /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
echo " install: cert present in bundle"
rm /etc/pki/ca-trust/source/anchors/numa-local-ca.pem
update-ca-trust extract
if grep -q "$CERT_TAG" /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem; then
echo " uninstall: cert STILL present (regression)" >&2
exit 1
fi
echo " uninstall: cert removed from bundle"
'
# Arch / Manjaro — anchor: /etc/ca-certificates/trust-source/anchors/*.pem
# archlinux:latest is x86_64-only; --platform forces emulation on Apple Silicon.
run_case "arch" "archlinux:latest" "linux/amd64" '
set -e
# pacman 7+ filters syscalls in its own sandbox; disable for Rosetta/qemu emulation.
sed -i "s/^#DisableSandboxSyscalls/DisableSandboxSyscalls/" /etc/pacman.conf
pacman -Sy --noconfirm --needed ca-certificates p11-kit >/dev/null 2>&1
install -m 0644 /fixture/ca.pem /etc/ca-certificates/trust-source/anchors/numa-local-ca.pem
trust extract-compat
grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt
echo " install: cert present in bundle"
rm /etc/ca-certificates/trust-source/anchors/numa-local-ca.pem
trust extract-compat
if grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt; then
echo " uninstall: cert STILL present (regression)" >&2
exit 1
fi
echo " uninstall: cert removed from bundle"
'
printf "── summary ──\n"
printf " ${GREEN}passed${RESET}: %d\n" "$PASSED"
printf " ${RED}failed${RESET}: %d\n" "$FAILED"
[ "$FAILED" -eq 0 ]
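The `CERT_TAG` trick above relies on one property of PEM encoding: line 1 is the `BEGIN` header, line 2 is the first base64 line of the DER body, which differs between certs built from different key material. A minimal sketch of the extraction, using a dummy PEM fragment rather than a real Numa CA:

```shell
# Dummy PEM fixture — the base64 body here is illustrative, not a real cert.
PEM='-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUc29tZS1kdW1teS1zZXJpYWwtbnVtYmVy
-----END CERTIFICATE-----'
# sed -n '2p' prints only line 2: the first base64 line of the DER body,
# a deterministic per-cert substring safe to grep for in any bundle format.
CERT_TAG=$(printf '%s\n' "$PEM" | sed -n '2p')
printf '%s\n' "$CERT_TAG"
```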

tests/docker/smoke-arch.sh Executable file

@@ -0,0 +1,147 @@
#!/usr/bin/env bash
#
# Arch Linux compatibility smoke test.
#
# Builds numa from source inside an archlinux:latest container, runs it
# in forward mode on port 5354, and verifies a single DNS query returns
# an A record. Validates the "Arch compatible" claim end-to-end before
# release announcements.
#
# Dogfooding: the test numa forwards to the host's running numa via
# host.docker.internal (Docker Desktop's host gateway). This avoids the
# Docker NAT/UDP issues with public resolvers and exercises the realistic
# numa-on-numa shape. Requires the host to be running numa on port 53.
#
# First run is slow (~8-12 min): image pull + pacman + cold cargo build.
# No caching across runs.
#
# Requirements: docker, host running numa on 0.0.0.0:53
# Usage: ./tests/docker/smoke-arch.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
# Precondition: the test numa-on-arch forwards to the host numa as its
# upstream (dogfood pattern). Fail fast with a clear error if there is
# no working DNS on the host, rather than letting the dig inside the
# container time out with "deadline has elapsed".
if ! dig @127.0.0.1 google.com A +short +time=1 +tries=1 >/dev/null 2>&1; then
printf "${RED}error:${RESET} host numa is not answering on 127.0.0.1:53\n" >&2
echo " This test forwards to the host numa via host.docker.internal." >&2
echo " Start numa on the host first (sudo numa install), then rerun." >&2
exit 1
fi
echo "── building + running numa on archlinux:latest ──"
echo " (first run is slow: image pull + pacman + cold cargo build, ~8-12 min)"
echo
docker run --rm \
--platform linux/amd64 \
--security-opt seccomp=unconfined \
-v "$PWD:/src:ro" \
-v numa-arch-cargo:/root/.cargo \
-v numa-arch-target:/work/target \
archlinux:latest bash -c '
set -e
# pacman 7+ filters syscalls in its own sandbox; disable for Rosetta/qemu
sed -i "s/^#DisableSandboxSyscalls/DisableSandboxSyscalls/" /etc/pacman.conf
echo "── pacman: installing build + runtime deps ──"
pacman -Sy --noconfirm --needed rust gcc pkgconf cmake make perl bind 2>&1 | tail -3
echo
# Copy source to a writable workdir, skipping target/ + .git so we
# do not pull in the host (macOS) build artifacts.
mkdir -p /work
tar -C /src --exclude=./target --exclude=./.git -cf - . | tar -C /work -xf -
cd /work
echo "── cargo build --release --locked ──"
cargo build --release --locked 2>&1 | tail -5
echo
# Dogfood: forward to the host numa via host.docker.internal.
# numa parses upstream.address as a literal SocketAddr, so we resolve
# the hostname to an IPv4 address first (force v4 — getent hosts may
# return IPv6 first, and IPv6 addresses need bracketed addr:port form).
HOST_IP=$(getent ahostsv4 host.docker.internal | awk "/STREAM/ {print \$1; exit}")
if [ -z "$HOST_IP" ]; then
echo " ✗ could not resolve host.docker.internal to IPv4 (not on Docker Desktop?)"
exit 1
fi
echo "── starting numa on :5354 (forward to host numa at $HOST_IP:53) ──"
# Intentionally NOT setting [server] data_dir — we want to exercise the
# default code path (data_dir() → daemon_data_dir() → /var/lib/numa) so
# the FHS-path assertion below verifies the live wiring, not just the
# unit-tested helper.
cat > /tmp/numa.toml <<EOF
[server]
bind_addr = "127.0.0.1:5354"
api_port = 5381
[upstream]
mode = "forward"
address = "$HOST_IP"
port = 53
EOF
./target/release/numa /tmp/numa.toml > /tmp/numa.log 2>&1 &
NUMA_PID=$!
# Poll for readiness — numa is ready when it answers a query
READY=0
for i in 1 2 3 4 5 6 7 8; do
sleep 1
if dig @127.0.0.1 -p 5354 google.com A +short +time=1 +tries=1 2>/dev/null \
| grep -qE "^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$"; then
READY=1
break
fi
done
if [ "$READY" -ne 1 ]; then
echo " ✗ numa did not return an A record after 8s"
echo " numa log:"
cat /tmp/numa.log
kill $NUMA_PID 2>/dev/null || true
exit 1
fi
echo "── dig @127.0.0.1 -p 5354 google.com A ──"
ANSWER=$(dig @127.0.0.1 -p 5354 google.com A +short +time=2 +tries=1)
echo "$ANSWER" | sed "s/^/ /"
kill $NUMA_PID 2>/dev/null || true
# FHS path assertion: the default data dir on Linux must be /var/lib/numa
# (not the legacy /usr/local/var/numa). The CA cert generated at startup
# is the canonical proof that numa wrote to the right place.
echo
echo "── FHS path check ──"
if [ -f /var/lib/numa/ca.pem ]; then
echo " ✓ CA cert at /var/lib/numa/ca.pem (FHS path)"
else
echo " ✗ CA cert NOT at /var/lib/numa/ca.pem"
echo " ls /var/lib/numa/:"
ls -la /var/lib/numa/ 2>&1 | sed "s/^/ /"
echo " ls /usr/local/var/numa/:"
ls -la /usr/local/var/numa/ 2>&1 | sed "s/^/ /"
exit 1
fi
if [ -e /usr/local/var/numa ]; then
echo " ✗ legacy path /usr/local/var/numa unexpectedly exists on a fresh container"
exit 1
fi
echo " ✓ legacy path /usr/local/var/numa absent (fresh install used FHS)"
echo
echo " ✓ numa built, ran, answered a forward query, and used the FHS data dir on Arch"
'
echo
printf "${GREEN}── smoke-arch passed ──${RESET}\n"
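The `HOST_IP` extraction inside the container can be sketched on canned `getent ahostsv4` output, so it runs without Docker Desktop; the address below is illustrative:

```shell
# Canned output in the shape `getent ahostsv4 host.docker.internal` produces:
# one line per socket type, IPv4 address in column 1.
SAMPLE='192.168.65.254  STREAM host.docker.internal
192.168.65.254  DGRAM
192.168.65.254  RAW'
# Take the address from the first STREAM line and stop — same awk program
# the smoke test uses (unescaped here, since we are not inside double quotes).
HOST_IP=$(printf '%s\n' "$SAMPLE" | awk '/STREAM/ {print $1; exit}')
printf '%s\n' "$HOST_IP"
```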

tests/docker/smoke-port53.sh Executable file

@@ -0,0 +1,138 @@
#!/usr/bin/env bash
#
# Port-53 conflict advisory integration test.
#
# Builds numa from source inside a debian:bookworm container, pre-binds
# port 53 with a UDP socket, then runs numa bare (default bind_addr
# 0.0.0.0:53). Verifies:
# - process exits with code 1
# - stderr contains the advisory ("cannot bind to")
# - stderr contains both fix suggestions ("numa install", "bind_addr")
#
# This is the end-to-end test for the fix in:
# src/main.rs — AddrInUse match arm → eprint advisory + process::exit(1)
#
# No systemd-resolved needed — the conflict is simulated by a Python
# UDP socket held open before numa starts.
#
# Requirements: docker
# Usage: ./tests/docker/smoke-port53.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
pass() { printf " ${GREEN}✓${RESET} %s\n" "$1"; }
fail() { printf " ${RED}✗${RESET} %s\n" "$1"; printf " %s\n" "$2"; FAILED=$((FAILED+1)); }
FAILED=0
echo "── smoke-port53: building + testing numa on debian:bookworm ──"
echo " (first run is slow: image pull + cold cargo build, ~5-8 min)"
echo
OUTPUT=$(docker run --rm \
--platform linux/amd64 \
-v "$PWD:/src:ro" \
-v numa-port53-cargo:/root/.cargo \
-v numa-port53-target:/work/target \
debian:bookworm bash -c '
set -e
apt-get update -qq && apt-get install -y -qq curl build-essential python3 2>&1 | tail -3
# Install rustup if not already in the cargo cache volume
if ! command -v cargo &>/dev/null; then
curl -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal --quiet
fi
. "$HOME/.cargo/env"
# Copy source to a writable workdir
mkdir -p /work
tar -C /src --exclude=./target --exclude=./.git -cf - . | tar -C /work -xf -
cd /work
echo "── cargo build --release --locked ──"
cargo build --release --locked 2>&1 | tail -5
echo
# Write the holder script to a file to avoid quoting hell.
# Holds port 53 until killed — no sleep race.
cat > /tmp/hold53.py << '"'"'PYEOF'"'"'
import socket, signal
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 0)
s.bind(("", 53))
signal.pause()
PYEOF
python3 /tmp/hold53.py &
HOLDER_PID=$!
# Verify the holder is actually up before proceeding
sleep 0.3
if ! kill -0 $HOLDER_PID 2>/dev/null; then
echo "holder_failed=1"
exit 1
fi
echo "── running numa with port 53 already bound ──"
# timeout 5: guards against numa not exiting (advisory not fired, bug present)
# Capture stderr to a file so the exit code is not clobbered by || or $()
set +e
timeout 5 ./target/release/numa > /tmp/numa-stderr.txt 2>&1
EXIT_CODE=$?
set -e
STDERR=$(cat /tmp/numa-stderr.txt)
kill $HOLDER_PID 2>/dev/null || true
echo "exit_code=$EXIT_CODE"
printf "%s" "$STDERR" | sed "s/^/ numa: /"
' 2>&1)
echo "$OUTPUT"
echo
echo "── assertions ──"
if echo "$OUTPUT" | grep -q "holder_failed=1"; then
echo " SETUP FAILED: could not pre-bind port 53 inside container"
exit 1
fi
EXIT_CODE=$(echo "$OUTPUT" | grep '^exit_code=' | cut -d= -f2)
if [ "${EXIT_CODE:-}" = "1" ]; then
pass "exits with code 1"
else
fail "exits with code 1" "got: exit_code=${EXIT_CODE:-<missing>}"
fi
if echo "$OUTPUT" | grep -q "cannot bind to"; then
pass "advisory printed to stderr"
else
fail "advisory printed to stderr" "stderr did not contain 'cannot bind to'"
fi
if echo "$OUTPUT" | grep -q "numa install"; then
pass "advisory offers 'sudo numa install'"
else
fail "advisory offers 'sudo numa install'" "not found in output"
fi
if echo "$OUTPUT" | grep -q "bind_addr"; then
pass "advisory offers non-privileged port alternative"
else
fail "advisory offers non-privileged port alternative" "'bind_addr' not found in output"
fi
echo
if [ "$FAILED" -eq 0 ]; then
printf "${GREEN}── smoke-port53 passed ──${RESET}\n"
exit 0
else
printf "${RED}── smoke-port53 failed ($FAILED assertion(s)) ──${RESET}\n"
exit 1
fi
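The `set +e` / `set -e` bracket around the `timeout 5 ./target/release/numa` run is the standard idiom for capturing a nonzero exit status under `set -e` without aborting the script. A minimal demonstration, with a subshell standing in for the failing numa invocation:

```shell
set -e
set +e                 # suspend errexit so a nonzero status doesn't kill us
( exit 1 )             # stand-in for `timeout 5 ./target/release/numa`
EXIT_CODE=$?           # capture immediately — the next command overwrites $?
set -e                 # restore errexit for the rest of the script
printf 'exit_code=%s\n' "$EXIT_CODE"
```

Capturing `$?` on the very next line matters: any intervening command (even `echo`) would clobber the status being asserted on.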


@@ -622,6 +622,54 @@ CONF
"10.0.0.1" \
"$($KDIG +short dot-test.example A 2>/dev/null)"
echo ""
echo "=== DNS-over-HTTPS (RFC 8484) ==="
DOH_QUERY_FILE=/tmp/numa-doh-query.bin
DOH_RESP_FILE=/tmp/numa-doh-resp.bin
# Build DNS wire-format query for dot-test.example A
printf '\x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x08dot-test\x07example\x00\x00\x01\x00\x01' > "$DOH_QUERY_FILE"
# POST valid DoH query
DOH_CODE=$(curl -sk -X POST \
--resolve "numa.numa:$PROXY_HTTPS_PORT:127.0.0.1" \
-H "Content-Type: application/dns-message" \
--data-binary @"$DOH_QUERY_FILE" \
--cacert "$CA" \
-o "$DOH_RESP_FILE" \
-w "%{http_code}" \
"https://numa.numa:$PROXY_HTTPS_PORT/dns-query")
check "DoH POST returns HTTP 200" "200" "$DOH_CODE"
# Check response contains IP 10.0.0.1 (hex: 0a000001)
DOH_HEX=$(xxd -p "$DOH_RESP_FILE" | tr -d '\n')
if echo "$DOH_HEX" | grep -q "0a000001"; then
check "DoH response resolves dot-test.example → 10.0.0.1" "found" "found"
else
check "DoH response resolves dot-test.example → 10.0.0.1" "0a000001" "$DOH_HEX"
fi
# Wrong Content-Type → 415
DOH_CT_CODE=$(curl -sk -X POST \
-H "Host: numa.numa" \
-H "Content-Type: text/plain" \
--data-binary @"$DOH_QUERY_FILE" \
-o /dev/null -w "%{http_code}" \
"https://127.0.0.1:$PROXY_HTTPS_PORT/dns-query")
check "DoH wrong Content-Type → 415" "415" "$DOH_CT_CODE"
# Wrong host → 404 (DoH only serves numa.numa)
DOH_HOST_CODE=$(curl -sk -X POST \
-H "Host: foo.numa" \
-H "Content-Type: application/dns-message" \
--data-binary @"$DOH_QUERY_FILE" \
-o /dev/null -w "%{http_code}" \
"https://127.0.0.1:$PROXY_HTTPS_PORT/dns-query")
check "DoH wrong host → 404" "404" "$DOH_HOST_CODE"
rm -f "$DOH_QUERY_FILE" "$DOH_RESP_FILE"
echo ""
echo "=== Proxy TLS works with DoT enabled ==="

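The DoH query in the test above is hand-built RFC 1035 wire format. A sketch of the byte layout, verifiable by counting the bytes the same `printf` emits (assumes a shell whose `printf` supports `\xHH` escapes, e.g. bash):

```shell
# Wire layout of the query POSTed to /dns-query:
#   bytes 0-1   ID      = 0x0001
#   bytes 2-3   flags   = 0x0100 (only RD set)
#   bytes 4-11  QDCOUNT = 1; ANCOUNT/NSCOUNT/ARCOUNT = 0
#   then QNAME as length-prefixed labels: \x08"dot-test" \x07"example" \x00
#   then QTYPE = 0x0001 (A) and QCLASS = 0x0001 (IN)
# Total: 12-byte header + 18-byte QNAME + 4 bytes QTYPE/QCLASS = 34 bytes.
printf '\x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x08dot-test\x07example\x00\x00\x01\x00\x01' | wc -c
```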

@@ -0,0 +1,94 @@
#!/usr/bin/env bash
#
# Manual macOS CA trust contract test.
#
# Mirrors src/system_dns.rs::trust_ca_macos / untrust_ca_macos by running
# the same `security` shell commands against a fixture cert with a unique
# CN. Safe to run alongside a production numa install:
#
# - Test cert CN = "Numa Local CA Test <pid-ts>", always strictly longer
# than the production CN "Numa Local CA". `security find-certificate -c`
# does substring matching, so the test's search for $TEST_CN can never
# match the production cert (the search term is longer than the prod CN).
# - All deletes use `delete-certificate -Z <hash>`, which only touches the
# cert with that exact hash. Production and test certs have different
# hashes by construction (different key material), so the delete cannot
# reach the production cert even if a CN search somehow returned both.
#
# Mutates the System keychain (briefly). Cleans up on success or interrupt.
# Requires sudo for `security add-trusted-cert` and `delete-certificate`.
#
# Usage: ./tests/manual/install-trust-macos.sh
set -euo pipefail
if [[ "$OSTYPE" != darwin* ]]; then
echo "This test is macOS-only." >&2
exit 1
fi
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
# Production constant from src/tls.rs::CA_COMMON_NAME — keep in sync.
PROD_CN="Numa Local CA"
KEYCHAIN="/Library/Keychains/System.keychain"
# Notice if production numa is already installed. We proceed regardless —
# see header for why coexistence is safe (unique CN + by-hash deletion).
if security find-certificate -c "$PROD_CN" "$KEYCHAIN" >/dev/null 2>&1; then
echo " note: production '$PROD_CN' detected — proceeding alongside (test cert can't touch it)"
echo
fi
# Unique CN ensures the test cert can never collide with production.
TEST_CN="Numa Local CA Test $$-$(date +%s)"
FIXTURE_DIR=$(mktemp -d)
cleanup() {
# Best-effort: remove any test certs by hash if still present.
if security find-certificate -c "$TEST_CN" "$KEYCHAIN" >/dev/null 2>&1; then
echo " cleanup: removing leftover test cert"
security find-certificate -c "$TEST_CN" -a -Z "$KEYCHAIN" 2>/dev/null \
| awk '/^SHA-1 hash:/ {print $NF}' \
| while read -r hash; do
sudo security delete-certificate -Z "$hash" "$KEYCHAIN" >/dev/null 2>&1 || true
done
fi
rm -rf "$FIXTURE_DIR"
}
trap cleanup EXIT
echo "── generating fixture CA ──"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
-keyout "$FIXTURE_DIR/ca.key" \
-out "$FIXTURE_DIR/ca.pem" \
-subj "/CN=$TEST_CN" \
-addext "basicConstraints=critical,CA:TRUE" \
-addext "keyUsage=critical,keyCertSign,cRLSign" >/dev/null 2>&1
echo " CN: $TEST_CN"
echo
echo "── trust step (mirrors trust_ca_macos) ──"
sudo security add-trusted-cert -d -r trustRoot -k "$KEYCHAIN" "$FIXTURE_DIR/ca.pem"
if security find-certificate -c "$TEST_CN" "$KEYCHAIN" >/dev/null 2>&1; then
printf " ${GREEN}✓${RESET} test cert found in keychain\n"
else
printf " ${RED}✗${RESET} test cert NOT found after add-trusted-cert\n"
exit 1
fi
echo
echo "── untrust step (mirrors untrust_ca_macos) ──"
security find-certificate -c "$TEST_CN" -a -Z "$KEYCHAIN" 2>/dev/null \
| awk '/^SHA-1 hash:/ {print $NF}' \
| while read -r hash; do
sudo security delete-certificate -Z "$hash" "$KEYCHAIN" >/dev/null
done
if security find-certificate -c "$TEST_CN" "$KEYCHAIN" >/dev/null 2>&1; then
printf " ${RED}✗${RESET} test cert STILL present after delete (regression)\n"
exit 1
fi
printf " ${GREEN}✓${RESET} test cert removed from keychain\n"
echo
printf "${GREEN}all checks passed${RESET}\n"