34 Commits

Author SHA1 Message Date
Razvan Dimescu
1f6bdff8f8 chore: bump version to 0.10.2 2026-04-09 22:59:10 +03:00
Razvan Dimescu
643d6b01e1 fix(linux): consult resolvectl when resolv.conf only shows the stub (#52)
On modern Arch / Ubuntu 22.04+ / Fedora desktops, NetworkManager +
systemd-resolved symlink /etc/resolv.conf to stub-resolv.conf, which
contains only:

  nameserver 127.0.0.53

The real upstream servers (router, ISP, configured DoT providers) live
inside systemd-resolved's per-link state, exposed via 'resolvectl status'.

discover_linux() was parsing /etc/resolv.conf, correctly filtering out
the stub address, and then falling through to the Quad9 DoH fallback,
because detect_dhcp_dns() is macOS-only and contributes nothing on
Linux. Net effect: on a large chunk of
Linux installs, numa silently defaulted to Quad9 instead of the user's
actual DNS — visible in Casey's AUR test banner (#33) as
'Upstream https://9.9.9.9/dns-query' despite his machine having working
router DNS the entire time.

resolvectl_dns_server() already exists — it was introduced for cloud VPC
forwarding-rule discovery and knows how to ask systemd-resolved for the
real active DNS server. This commit wires it into the default-upstream
fallback chain, between the primary resolv.conf parse and the
~/.numa/original-resolv.conf backup.
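The fallback chain this commit describes can be sketched as a pure function
(illustrative only: real_nameservers, pick_upstream, and the injected
resolvectl closure are stand-ins for discover_linux() and
resolvectl_dns_server(); the ~/.numa/original-resolv.conf backup step is
omitted for brevity):

```rust
use std::net::IpAddr;

/// Yield non-loopback nameservers from resolv.conf-style content; the
/// systemd-resolved stub 127.0.0.53 is loopback, so it is filtered too.
fn real_nameservers(resolv_conf: &str) -> Vec<IpAddr> {
    resolv_conf
        .lines()
        .filter_map(|line| {
            let mut words = line.split_whitespace();
            if words.next()? != "nameserver" {
                return None;
            }
            words.next()?.parse().ok()
        })
        .filter(|ip: &IpAddr| !ip.is_loopback())
        .collect()
}

/// Fallback chain: resolv.conf upstream, then resolvectl, then the
/// hardcoded Quad9 DoH endpoint as the last resort.
fn pick_upstream(resolv_conf: &str, resolvectl: impl Fn() -> Option<IpAddr>) -> String {
    real_nameservers(resolv_conf)
        .first()
        .copied()
        .or_else(resolvectl)
        .map(|ip| ip.to_string())
        .unwrap_or_else(|| "https://9.9.9.9/dns-query".to_string())
}
```

With the stub-only resolv.conf from above, pick_upstream now reaches the
resolvectl step instead of falling straight through to Quad9.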

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 22:32:57 +03:00
Razvan Dimescu
17c8e70aa3 fix(ci): skip prepare() in publish-aur metadata container (#51)
Follow-up to #49 and #50. With ownership and quoting fixed, the next run
([24199871832](https://github.com/razvandimescu/numa/actions/runs/24199871832))
reached makepkg and failed with:

  /pkg/PKGBUILD: line 34: cargo: command not found
  ==> ERROR: A failure occurred in prepare().

The publish job only installs 'binutils git sudo' since its sole purpose
is to regenerate .SRCINFO. 'makepkg -od' still runs prepare(), which
calls cargo. The sibling validate job avoids this by passing --noprepare
(and installs rust anyway).

Mirror that pattern: add --noprepare to the metadata-generation invocation.
pkgver() runs before prepare() in makepkg's pipeline, so .SRCINFO still
captures the computed version. Keeps the container minimal (no rust toolchain).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 19:39:28 +03:00
Razvan Dimescu
389ac09907 fix(ci): repair broken quoting in publish-aur docker heredoc (#50)
The docker block runs as '/bin/bash -c "<multi-line script>"'. A comment
inside the script contained embedded double quotes:

  # "makepkg -od" fetches the source first so pkgver() can calculate the version.

The first embedded '"' prematurely closes the outer string. Bash then
parses the remainder into extra arguments to 'bash -c', which land in
$0 and the positional parameters inside the container and are silently
discarded. Net effect: the
in-container script stops at 'git config --add safe.directory', neither
'makepkg -od' nor 'makepkg --printsrcinfo > .SRCINFO' ever run, and the
host-side 'git add PKGBUILD .SRCINFO' fails with:

  fatal: pathspec '.SRCINFO' did not match any files

This bug was masked by the earlier ownership bug fixed in #49 — once
that permission error was removed, this one surfaced.

Fix: drop the embedded double quotes from the comment.
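The argument semantics behind the failure can be reproduced from Rust
(illustrative sketch; POSIX sh shares bash's rule that, after the -c
command string, the next argument becomes $0 and the rest become
positional parameters):

```rust
use std::process::Command;

/// Run `sh -c <args...>`: args[0] is the script, args[1] becomes $0,
/// the rest become $1, $2, ... Returns captured stdout.
fn run_sh_c(args: &[&str]) -> String {
    let out = Command::new("sh")
        .arg("-c")
        .args(args)
        .output()
        .expect("sh not found");
    String::from_utf8_lossy(&out.stdout).into_owned()
}
```

Calling run_sh_c(&["echo one # rest", "of", "comment ; echo two"]) mimics
the prematurely-closed string: only `echo one` runs, and the trailing
`echo two` arrives as data in a positional parameter, never as code —
the same way makepkg never ran in the workflow.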

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:55:03 +03:00
Razvan Dimescu
5308e9648c fix(ci): reclaim aur-repo ownership after docker chown (#49)
The 'Push to AUR' step failed on run 24195384571 with:
  error: could not lock config file .git/config: Permission denied

Inside the docker block we 'chown -R builduser:builduser /pkg', which
propagates through the bind mount and transfers ownership of aur-repo/
(including .git/) to the container's builduser UID. When control returns
to the runner user, 'git config user.name' can no longer write .git/config
and the step exits 255.

Chown the directory back to the runner's UID/GID before resuming host-side
git operations.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:24:30 +03:00
Casey Labs
819614fa7d [Feature] Add GitHub Action Workflow for Arch Linux AUR Package publishing (#33)
* Feature: add GitHub Actions workflow for publishing Arch Linux AUR package

* Fix issues in Arch Linux AUR publishing process

* Add patch to fix default Arch Linux binary path location issues

* fix: PKGBUILD compatibility with numa v0.10.1, fix QEMU action SHA pin

Three small bug fixes that make this PR mergeable end-to-end against
current main, without changing the package design (still numa-git,
still pushed on every main commit, still tracking HEAD via pkgver()):

1. Simplified prepare() — drop the obsolete sed patching for
   /usr/local/bin/numa. That literal only appears in a comment
   in current main; the actual binary path is determined at
   runtime via std::env::current_exe(). Additionally, numa
   v0.10.1 ships PR #43 which makes numa FHS-compliant on Linux
   out of the box (/var/lib/numa for data dir), so no source
   patching is needed at all on Arch.

2. Fixed package() sed for the systemd unit. The previous sed
   targeted "ExecStart=/usr/local/bin/numa" but numa.service
   actually uses "{{exe_path}}" as a templating placeholder
   that's substituted at runtime by replace_exe_path() when
   `numa install` runs. The sed silently did nothing, and the
   AUR-installed unit file would have a literal "{{exe_path}}"
   that systemd cannot start. Fixed sed:

     sed 's|{{exe_path}}|/usr/bin/numa /etc/numa.toml|g' \
       numa.service > numa.service.patched

3. Fixed broken docker/setup-qemu-action SHA pin in
   publish-aur.yml. The pinned SHA
   6882732593b27c7f95a044d559b586a46371a68e doesn't exist as
   a commit in upstream docker/setup-qemu-action. Verified
   v3.0.0 SHA is 68827325e0b33c7199eb31dd4e31fbe9023e06e3.
   Without this fix the aarch64 validate job would fail to
   load the action at workflow start.

Also refreshed the stale pkgver placeholder in PKGBUILD and
.SRCINFO from 0.9.1.r0.g1234abc to 0.10.1.r0.g0000000 — purely
cosmetic, since pkgver() auto-overrides on every makepkg run,
but at least the checked-in value reflects the current era.
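The runtime substitution referenced in item 2 presumably amounts to a
string replace; a hypothetical sketch (only the name replace_exe_path and
the {{exe_path}} placeholder come from this message, the body is assumed):

```rust
/// Substitute the templating placeholder in the shipped systemd unit.
/// `numa install` is described as calling this at runtime, which is why
/// the old sed targeting a literal ExecStart path silently did nothing.
fn replace_exe_path(unit_template: &str, exe_path: &str) -> String {
    unit_template.replace("{{exe_path}}", exe_path)
}
```

The AUR package cannot rely on `numa install` running, so the PKGBUILD
performs the equivalent substitution at build time with sed instead.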

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: make AUR packaging x86_64-only and stabilize local validation

Turns out Arch Linux doesn't officially support the aarch64 architecture, so we will drop it from this AUR build process.

Changes:

- drop aarch64 from PKGBUILD, .SRCINFO, and AUR validation workflow
- keep AUR process aligned with official Arch Linux x86_64 support
- install rust directly in CI to avoid Arch cargo provider prompts
- fetch sources before running cargo audit and run the audit inside
  the fetched repo
- disable makepkg LTO for this package to avoid Arch packaging link
  failures
- mark /etc/numa.toml as a backup file
- add local AUR build scratch directory exclusion to .gitignore

* Add temporary AUR test workflow

* Update github actions checkout workflow version

* remove temporary AUR test workflow

* fix: correct AUR SSH host key fingerprint

The previously pinned ed25519 key was truncated (52 chars) and did not
match the actual aur.archlinux.org host key. Verified via ssh-keyscan.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Razvan Dimescu <ssaricu@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 17:22:38 +03:00
Razvan Dimescu
fab8b698d8 fix: human-readable advisories for TLS data_dir + port-53 EACCES (#48)
* fix: human-readable advisory when TLS data_dir is not writable

When numa runs as non-root on a system with a privileged default
data_dir (e.g. /usr/local/var/numa on macOS), TLS CA setup fails with
a raw "Permission denied (os error 13)" and HTTPS proxy is silently
disabled. The user sees a cryptic warning with no path forward.

Detect std::io::ErrorKind::PermissionDenied on the tls error, print a
diagnostic naming the data_dir and offering two fixes (install as
system resolver, or point data_dir at a writable path), and keep the
graceful-degradation behavior — DNS resolution and plain-HTTP proxy
continue to work without HTTPS.

All other TLS setup errors fall through to the existing log::warn!.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: port-53 advisory also handles EACCES (non-root privileged bind)

The original port-53 match arm only caught EADDRINUSE, so a fresh
non-root user on macOS/Linux hitting EACCES when trying to bind a
privileged port saw the raw OS error instead of the advisory.

Collapse the scoping helper and the advisory into a single
`try_port53_advisory(bind_addr, &io::Error) -> Option<String>` that
returns the formatted diagnostic when both the port is 53 and the
error kind is one we can speak to (AddrInUse or PermissionDenied),
and `None` otherwise. The two failure modes share one body with a
cause-sentence variant — no duplicated fix text.

Caller becomes a plain if-let: no match guard, no separate is_port_53
helper exposed on the public API. is_port_53 goes back to private.

Unit tests cover all branches: AddrInUse, PermissionDenied, non-53
bind_addr, unrelated ErrorKind, and malformed bind_addr.
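A sketch of that shape (the signature is from this message; the port
parsing and advisory wording are assumptions):

```rust
use std::io;

/// Return the formatted advisory only when the failed bind was on port
/// 53 AND the error kind is one we have concrete advice for; None
/// otherwise (caller falls back to the raw OS error).
fn try_port53_advisory(bind_addr: &str, err: &io::Error) -> Option<String> {
    let port: u16 = bind_addr.rsplit(':').next()?.parse().ok()?;
    if port != 53 {
        return None;
    }
    // One shared body; only the cause sentence varies per failure mode.
    let cause = match err.kind() {
        io::ErrorKind::AddrInUse => "another resolver already owns it",
        io::ErrorKind::PermissionDenied => "binding it needs elevated privileges",
        _ => return None,
    };
    Some(format!(
        "cannot bind {bind_addr}: {cause}.\n\
         Fix: run `sudo numa install`, or set bind_addr to a non-privileged port."
    ))
}
```

The caller stays a plain if-let over the Option, so no port or kind
checks leak into main.rs.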

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: move TLS error classification into tls module

main.rs no longer downcasts a boxed error to figure out whether it's
a permission-denied case. tls::try_data_dir_advisory(&err, &dir)
encapsulates the downcast + kind match and returns Some(advisory) or
None, mirroring system_dns::try_port53_advisory. main.rs becomes a
plain if-let, symmetric with the port-53 path.

Trim the docstrings on both advisory functions: they were narrating
the implementation (errno mapping) instead of stating the contract.

Add unit tests for try_data_dir_advisory covering PermissionDenied,
other io::ErrorKind, and non-io errors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 16:27:08 +03:00
Razvan Dimescu
a6f23a5ddb fix: advisory + exit(1) when port 53 is already in use (#45) (#47)
* fix: advisory + exit(1) when port 53 is already in use (#45)

Detect AddrInUse on bind, print a human-readable diagnostic explaining
systemd-resolved / Dnscache as the likely cause and offer two concrete
fixes (sudo numa install, or bind_addr on a non-privileged port), then
exit(1) instead of surfacing a raw OS error.

Adds tests/docker/smoke-port53.sh: end-to-end Docker test that
pre-binds port 53 with a Python UDP socket and asserts the advisory +
exit code.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: collapse port53 advisory to single flat path

The per-platform cause sentences were cosmetic — they didn't change
the user's actions (install, or bind_addr on a non-privileged port),
but they introduced duplicated "another process..." strings, a
dead-from-CI branch (is_systemd_resolved_active() == true is never
reached by any test), and a pub visibility bump on
is_systemd_resolved_active for a single caller.

Replace with one flat format! whose cause line mentions both
systemd-resolved and the Windows DNS Client inline. The existing
smoke test now exercises 100% of the function.

is_systemd_resolved_active reverts to private.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 15:03:58 +03:00
Razvan Dimescu
27dfaab360 ci: pass PAT to action-gh-release so release events propagate (#44)
GitHub Actions deliberately does not propagate workflow events triggered
by the default GITHUB_TOKEN — a safety feature against infinite loops.
softprops/action-gh-release falls back to GITHUB_TOKEN when no `token`
is supplied, so the resulting `release: published` event was silently
swallowed and never reached homebrew-bump.yml.

Discovered shipping v0.10.1: tag pushed cleanly, crates.io published
cleanly, GitHub release page created cleanly, but the brew tap never
auto-bumped. Had to trigger homebrew-bump.yml manually via
workflow_dispatch.

Fix: pass HOMEBREW_TAP_GITHUB_TOKEN explicitly. This is already a PAT
(used by homebrew-bump.yml to push cross-repo to razvandimescu/
homebrew-tap), so reusing it keeps the secret surface flat. PAT-authored
release events are the documented escape hatch from the GITHUB_TOKEN
no-propagation rule.

Applies to v0.10.2+. v0.10.1 was bumped manually.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:26:21 +03:00
Razvan Dimescu
b2ed2e6aec chore: bump version to 0.10.1 2026-04-08 18:05:00 +03:00
Razvan Dimescu
79ecb73d87 fix: use FHS-compliant /var/lib/numa as Linux data dir default (#43)
* fix: use FHS-compliant /var/lib/numa as Linux data dir default

numa's default system-wide data directory was hardcoded to
/usr/local/var/numa for all Unix platforms. This is the right path on
macOS (Homebrew prefix convention) but non-FHS on Linux, where Arch /
Fedora / Debian / etc. expect persistent state under /var/lib/<pkg>.
The mismatch was invisible to existing users (numa creates the dir
silently on first run) but immediately surfaces when packaging for a
distro — see PR #33 (community contribution to add an Arch AUR package)
which had to add fragile sed-based path patching at PKGBUILD build time.

The fix moves the path decision into a small helper:

  - daemon_data_dir()        — cfg-gated platform dispatch (linux/macos)
  - resolve_linux_data_dir() — pure function, takes "does X exist?"
                               as parameters, returns the right path

Linux behavior:
  - Fresh install                       → /var/lib/numa (FHS)
  - Upgrading from pre-v0.10.1 install  → /usr/local/var/numa (legacy)
  - Both paths exist                    → /var/lib/numa (FHS wins)

The legacy fallback is critical: existing v0.10.0 Linux users have
their CA cert + services.json under /usr/local/var/numa. Returning
the new path unconditionally would cause CA regeneration on upgrade,
breaking every browser that had trusted the previous CA. The fallback
is checked at startup via std::path::Path::exists, so the upgrade is
seamless and zero-config.

macOS behavior is unchanged — /usr/local/var/numa is still correct
because Homebrew's prefix is /usr/local.

Test coverage:

  - resolve_linux_data_dir is a pure function gated cfg(any(linux,test))
    so the same code path is unit-tested on every platform's CI run.
  - Four tests cover all combinations of (legacy_exists, fhs_exists),
    asserting the migration logic stays correct under future edits.

The default config in numa.toml is also updated to document the new
per-platform default paths.
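The decision table above, as a sketch of the pure helper (the signature
is assumed; the table itself is from this message):

```rust
/// FHS path wins unless ONLY the legacy dir exists, i.e. a
/// pre-v0.10.1 install being upgraded. Keeping the legacy dir in that
/// one case preserves the existing CA cert across the upgrade.
fn resolve_linux_data_dir(legacy_exists: bool, fhs_exists: bool) -> &'static str {
    match (legacy_exists, fhs_exists) {
        (true, false) => "/usr/local/var/numa", // legacy install: don't regenerate CA
        _ => "/var/lib/numa",                   // fresh install, or FHS wins
    }
}
```

Because the function takes the existence checks as plain booleans, all
four combinations are unit-testable on any platform's CI.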

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: end-to-end FHS path verification + simplify cleanup

Two related changes from a /simplify pass and a follow-up testing
finalization:

1. lib.rs cleanup (no behavior change):
   - Drop FHS_LINUX_DATA_DIR and LEGACY_LINUX_DATA_DIR consts. Both
     were used in only 4 places total and the unit tests already
     bypassed them with string literals, so they were over-engineering.
     Inline the strings in daemon_data_dir() and resolve_linux_data_dir().
   - Trim narrating doc/comments on the helper and the test bodies.
     Keep only the non-obvious WHY (the macOS Homebrew note and the
     migration-keeps-legacy rationale).

2. tests/docker/smoke-arch.sh:
   - Cherry-picked the previously-uncommitted Arch compatibility smoke
     test from feat/smoke-arch.
   - Removed the [server] data_dir = "/tmp/numa-smoke" override from
     the test config so the script now exercises the DEFAULT data dir
     code path — which is exactly what the FHS fix touches.
   - Added a path assertion after the dig succeeds: verify that
     /var/lib/numa/ca.pem exists (FHS) and /usr/local/var/numa is
     absent (no accidental dual-creation on a fresh install).

Verified end-to-end on archlinux:latest (Apple Silicon, Rosetta):

  ── building + running numa on archlinux:latest ──
  ── cargo build --release --locked ──
      Finished `release` profile [optimized] target(s) in 24.02s
  ── dig @127.0.0.1 -p 5354 google.com A ──
    142.251.38.206
  ── FHS path check ──
    ✓ CA cert at /var/lib/numa/ca.pem (FHS path)
    ✓ legacy path /usr/local/var/numa absent (fresh install used FHS)
  ── smoke-arch passed ──

This closes the testing gap where the unit tests covered the
path-decision LOGIC in isolation but nothing exercised the live
wiring on a real Linux filesystem.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:00:27 +03:00
Razvan Dimescu
bf5565ac26 fix: macOS use launchctl bootout/bootstrap instead of deprecated load (#42)
The deprecated `launchctl load -w` returns exit code 0 even when it
cannot actually reload a service whose label is already in launchd's
in-memory state. It prints `Load failed: 5: Input/output error` to
stderr but exits 0, so the install path interprets it as success and
continues — silently leaving the running daemon on whatever binary
was first loaded, even though the on-disk plist now points elsewhere.

The consequence: every macOS user running `brew upgrade numa` rewrites
the plist to point at the new binary, but launchctl never actually
loads it. They think they upgraded; they're still running the old
version. Neither #41 (cross-platform CA trust) nor #40 (self-referential
backup) would actually take effect for them until they manually run:

  sudo launchctl bootout system /Library/LaunchDaemons/com.numa.dns.plist
  sudo launchctl bootstrap system /Library/LaunchDaemons/com.numa.dns.plist

The fix uses the modern API symmetrically across all three call sites:

- install_service_macos: bootout (best-effort cleanup, no-op on first
  install) → bootstrap → wait for readiness → configure DNS
- install_service_macos rollback path: bootout instead of `unload`
- uninstall_service_macos: bootout BEFORE remove_file (the modern API
  needs the plist file path as the specifier; doing it after remove
  would leave the service in memory until reboot)

No new tests — this is a shell-call substitution with no logic to
unit-test. Verified manually on macOS: `sudo numa install` no longer
prints `Load failed`, and the daemon is correctly running the binary
the plist points at.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:54:21 +03:00
Razvan Dimescu
679b346246 fix: prevent self-referential DNS backup on re-install (#40)
* fix: prevent self-referential DNS backup on re-install

The install flow previously captured current system DNS servers
verbatim into the backup file. If numa was already installed, current
DNS was 127.0.0.1, so the "backup" recorded 127.0.0.1 as the "original"
— making a subsequent uninstall a no-op self-reference.

Reproduced 2026-04-08 during v0.10.0 brew dogfood: after
`sudo numa uninstall; sudo /opt/homebrew/bin/numa install`,
`sudo numa uninstall` printed `restored DNS for "Wi-Fi" -> 127.0.0.1`
because the brew binary's install step had overwritten the backup with
the already-stub state.

Fix (all three platforms):
- macOS/Windows: if the existing backup already contains at least one
  non-loopback/non-stub upstream, preserve it as-is. If writing a fresh
  backup, filter loopback/stub addresses first so a capture from
  already-numa-managed state isn't self-referential.
- Linux (resolv.conf fallback path): detect numa-managed or all-loopback
  resolv.conf content and skip the file copy in that case; preserve an
  existing useful backup rather than overwriting it. systemd-resolved
  path is unaffected (uses a drop-in, no backup file).

Adds three unit tests for the predicates: macOS HashMap detection,
Windows interface filter, and resolv.conf parsing (real upstream,
self-referential, numa-marker, systemd stub, mixed).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: share iter_nameservers helper and reuse resolv.conf content

Post-review simplifications on the stale-backup fix:

- Extract iter_nameservers(&str) helper used by both parse_resolv_conf
  and resolv_conf_has_real_upstream. Eliminates the duplicated
  line-by-line nameserver parsing (findings from reuse review).
- install_linux: reuse the already-read resolv.conf content via
  std::fs::write instead of a second read via std::fs::copy.
- install_macos / install_windows: flatten the conditional eprintln
  pattern — always print a blank line, conditionally print the save
  message. Equivalent output, less branching.

Net −12 lines. All 130 tests still pass, clippy clean.
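A sketch of the shared helper and the predicate it feeds (names are from
this PR, bodies are assumptions; note that 127.0.0.53 is loopback, so a
single is_loopback check covers both the systemd stub and numa's own
127.0.0.1):

```rust
use std::net::IpAddr;

/// One shared pass over resolv.conf content: yield every parseable
/// `nameserver` value. Used by both the parser and the backup predicate.
fn iter_nameservers(content: &str) -> impl Iterator<Item = IpAddr> + '_ {
    content.lines().filter_map(|line| {
        let mut words = line.split_whitespace();
        if words.next()? != "nameserver" {
            return None;
        }
        words.next()?.parse().ok()
    })
}

/// A backup capture is only worth keeping if at least one nameserver is
/// a real upstream rather than a loopback/stub self-reference.
fn resolv_conf_has_real_upstream(content: &str) -> bool {
    iter_nameservers(content).any(|ip| !ip.is_loopback())
}
```

With this predicate, a resolv.conf captured while numa is already the
system resolver (all loopback) is rejected instead of overwriting a
useful existing backup.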

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: drop redundant trim before split_whitespace

CI caught `clippy::trim_split_whitespace` on Rust 1.94: `split_whitespace()`
already skips leading/trailing whitespace, so `.trim()` first is redundant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: extract load_backup helper

Remove duplicated read+deserialize boilerplate shared by install_macos
and install_windows. The two call sites each had an identical 4-line
chain of read_to_string().ok().and_then(serde_json::from_str).ok() —
collapse into a single generic helper load_backup<T>().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Revert "refactor: extract load_backup helper"

This reverts commit a54fb99428.

* test: drop windows_backup_filters_loopback

The test inlined the 3-line filter block from install_windows rather
than calling a production helper, so it was testing stdlib Vec::retain
+ is_loopback_or_stub — both already covered elsewhere. Deleting it
removes a test that would silently pass even if install_windows stopped
filtering altogether.

The predicate logic for macOS-shaped backups stays covered by
macos_backup_real_upstream_detection (same inner Vec<String> type).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: add windows_backup_filters_loopback unit test

The PR description mentioned this test but it was missing from the
diff, leaving backup_has_real_upstream_windows untested. Mirrors the
shape of macos_backup_real_upstream_detection: empty map → false,
all-loopback (127.0.0.1, ::1, 0.0.0.0) → false, one real entry
alongside loopback → true.

Also relax the cfg gate on backup_has_real_upstream_windows from
cfg(windows) to cfg(any(windows, test)) so the test compiles
cross-platform, matching how backup_has_real_upstream_macos and
the resolv_conf helpers are gated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:38:37 +03:00
Razvan Dimescu
039254280b fix: cross-platform CA trust (Arch/Fedora + Windows) (#41)
* fix: cross-platform CA trust (Arch/Fedora + Windows)

Closes #35.

trust_ca_linux now detects which trust store the distro ships and
runs the matching refresh command, instead of hardcoding Debian's
update-ca-certificates. Detection walks a const table in priority
order, picking the first whose anchor dir exists:

  - debian: /usr/local/share/ca-certificates  (update-ca-certificates)
  - pki:    /etc/pki/ca-trust/source/anchors  (update-ca-trust extract)
  - p11kit: /etc/ca-certificates/trust-source/anchors (trust extract-compat)

Falls back with a clear error listing every backend tried.
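The priority walk can be sketched with the existence check injected
(the table contents are from this message; the signature and predicate
injection are assumptions):

```rust
/// Priority-ordered trust-store table: (name, anchor dir, refresh command).
const TRUST_BACKENDS: &[(&str, &str, &str)] = &[
    ("debian", "/usr/local/share/ca-certificates", "update-ca-certificates"),
    ("pki", "/etc/pki/ca-trust/source/anchors", "update-ca-trust extract"),
    ("p11kit", "/etc/ca-certificates/trust-source/anchors", "trust extract-compat"),
];

/// First backend whose anchor dir exists wins. The predicate is
/// injected (Path::exists in production) so the walk is testable
/// without a real filesystem.
fn detect_trust_backend(
    dir_exists: impl Fn(&str) -> bool,
) -> Option<&'static (&'static str, &'static str, &'static str)> {
    TRUST_BACKENDS.iter().find(|b| dir_exists(b.1))
}
```

When detect_trust_backend returns None, the caller can enumerate
TRUST_BACKENDS to produce the "every backend tried" error.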

Adds Windows support via certutil -addstore Root / -delstore Root,
removing the silent CA-trust gap on numa install (previously the
service installed but the trust step quietly errored, leaving every
HTTPS .numa request throwing browser warnings).

Refactor: trust_ca and untrust_ca are now thin dispatchers calling
per-platform helpers. CA_COMMON_NAME and CA_FILE_NAME are centralized
in tls.rs and reused from system_dns.rs and api.rs. untrust_ca_linux
no longer pre-checks file existence (TOCTOU) and skips the refresh
when no file was actually removed.

Test: tests/docker/install-trust.sh runs the install/uninstall
contract against debian:stable, fedora:latest, and archlinux:latest
in containers, asserting the cert lands in (and is removed from)
the system bundle. All three pass locally.

README notes the Firefox/NSS limitation (separate trust store).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: rustfmt fixes for trust_ca_linux helpers

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: macOS CA trust contract test (manual)

Adds tests/manual/install-trust-macos.sh — a sudo bash script that
mirrors trust_ca_macos / untrust_ca_macos against a fixture cert with
a unique CN. Designed to coexist with a running production numa:

- Refuses to run if a real "Numa Local CA" is already in System.keychain
  (fail-closed protection for dogfood installs)
- Uses a unique CN ("Numa Local CA Test <pid-timestamp>") so the test
  cert can never collide with production
- Mirrors the by-hash deletion loop from untrust_ca_macos
- Trap-cleanup on success or interrupt

Lives under tests/manual/ to signal "host-mutating, dev-only" — distinct
from tests/docker/install-trust.sh which is hermetic.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: relax bail-out in macOS trust test (safe alongside production)

The bail-out was overly defensive. The test cert uses a unique CN
("Numa Local CA Test <pid-ts>") that is strictly longer than the
production CN, so `security find-certificate -c $TEST_CN` cannot
substring-match the production cert. All deletes are by-hash, which
can only target the test cert's specific hash. Coexistence is
provably safe; document the reasoning in the header comment block
and replace the refusal with an informational notice.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 15:18:01 +03:00
Razvan Dimescu
1b2f682026 ci: auto-bump homebrew formula on release (#39)
Add a workflow that runs on release:published (and via manual
workflow_dispatch), fetches sha256 checksums from the published release
assets, and rewrites razvandimescu/homebrew-tap/numa.rb in place:
version, URL paths, and sha256 lines after each url. The formula's
existing on_macos/on_linux structure is preserved.

Uses HOMEBREW_TAP_GITHUB_TOKEN (already set as a repo secret) to push
directly to the tap's main branch.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 03:47:43 +03:00
Razvan Dimescu
82cc588c67 docs: explain the two DoT cert modes in README
Expands the DoT paragraph to make the trust model explicit. The
previous version said "self-signed or bring your own cert" without
explaining when to pick which or what the user experience looks like.

The two modes close numa's gap vs AdGuard Home: BYO cert mode is
functionally identical (Let's Encrypt via DNS-01 + cert_path/key_path),
and the self-signed mode is numa's advantage on LAN-only deploys.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
bc54ea930f docs: document DNS-over-TLS listener in README
Adds DoT to the four existing touchpoints in the README where the
feature naturally belongs:

- Hero paragraph: mentions DoT alongside DNSSEC as a headline feature
- Ad Blocking & Privacy section: dedicated paragraph with RFC 7858
  reference, config hint, and the ALPN strictness guarantee
- Comparison table: new "Encrypted clients (DoT listener)" row.
  Pi-hole "Needs stunnel sidecar" (verified — Pi-hole explicitly
  closed the native-DoT feature request as out of scope; community
  uses stunnel or AdGuard DNS Proxy as a TLS terminator)
- Roadmap: checks off "DNS-over-TLS listener" alongside the existing
  DoH entry

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
7001ba2e51 chore: bump version to 0.10.0
v0.10.0 ships DNS-over-TLS. Tagging v0.10.0 on main after the merge
will pick up this Cargo.toml version, keeping tag and manifest
aligned for release.yml.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
6887c8e02e refactor: move data_dir override from env var to [server] TOML field
Reverts the NUMA_DATA_DIR env var added in the previous commit and
replaces it with a [server] data_dir TOML field. Numa already has a
well-developed config system; adding a parallel env-var mechanism
for a single knob was wrong.

The principle: TOML is for application behavior configuration. Env
vars are for bootstrap values (HOME, SUDO_USER to discover paths
before config loads) and standard ecosystem conventions (RUST_LOG).
data_dir is neither — it's an app knob, so it belongs in the TOML.

Changes:
- lib.rs::data_dir() reverts to the platform-specific fallback only
- config.rs adds `data_dir: Option<PathBuf>` to ServerConfig
- main.rs resolves config.server.data_dir with fallback to
  numa::data_dir() and passes it to build_tls_config, then stores the
  resolved path on ctx.data_dir for downstream consumers
- tls.rs::build_tls_config takes `data_dir: &Path` as an explicit
  parameter instead of calling crate::data_dir() behind the caller's
  back. regenerate_tls and dot.rs self_signed_tls now pass
  &ctx.data_dir, honoring whatever path the config resolved to
- tests/integration.sh Suite 6 uses `data_dir = "$NUMA_DATA"` in its
  test TOML instead of the NUMA_DATA_DIR env var prefix
- numa.toml gains a commented-out data_dir example

No behavior change for existing production deployments (the default
path is unchanged). Test harness is now fully config-driven, and
containerized deploys can override data_dir via mount+config without
needing env var injection.

127/127 unit tests pass, Suite 6 passes end-to-end.
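The resolution rule in main.rs presumably reduces to a one-liner
(sketch; everything here is assumed except the data_dir field and the
numa::data_dir() fallback named above):

```rust
use std::path::PathBuf;

/// Config wins; platform default otherwise. `platform_default` stands
/// in for numa::data_dir() so the rule is testable in isolation.
fn resolve_data_dir(configured: Option<PathBuf>, platform_default: PathBuf) -> PathBuf {
    configured.unwrap_or(platform_default)
}
```

The resolved path is then stored on ctx.data_dir, so build_tls_config
and the DoT listener never consult the global default themselves.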

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
7f52bd8a32 test: Suite 6 — proxy + DoT coexistence, NUMA_DATA_DIR override
Adds integration test coverage for the realistic production shape
where both the HTTPS proxy and DoT are enabled simultaneously. This
was previously untested — every existing suite had either one or the
other, so the interaction path was implicit.

What Suite 6 verifies:
- Both listeners bind without panic
- DoT still resolves queries with the proxy enabled
- Proxy HTTPS handshake still works with DoT enabled
- Both certs validate against the same shared CA

To run non-root, adds a NUMA_DATA_DIR env var override to data_dir()
that lets callers point the CA/cert storage at any writable path.
Useful beyond tests: containerized deployments, CI runners, dev
testing without sudo. The fallback is the existing platform-specific
path (unix: /usr/local/var/numa, windows: %PROGRAMDATA%\numa).

Suite 6 sets NUMA_DATA_DIR=/tmp/numa-integration-data before
starting numa, then trusts the generated CA at $NUMA_DATA_DIR/ca.pem
for both kdig (DoT query) and openssl s_client (HTTPS proxy
handshake) verification.

All 6 suites, 32 checks, run non-root and pass locally.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
c98e6c3ea9 fix: install rustls crypto provider when loading user DoT cert
Adds tests/integration.sh Suite 5 (DoT via kdig + openssl) and
fixes a startup panic caught by it.

Bug: when [dot] cert_path/key_path was set AND [proxy] was disabled,
numa panicked on the first DoT handshake with "Could not
automatically determine the process-level CryptoProvider from Rustls
crate features". In normal deployments the proxy's build_tls_config
installs the default provider as a side effect, masking the missing
call in dot.rs::load_tls_config. Disable the proxy and the panic
surfaces. Fix: call
rustls::crypto::ring::default_provider().install_default() at the
top of load_tls_config (no-op if already installed).

Suite 5 exercises:
- DoT listener binds on configured port
- Resolves a local zone A record over TLS (kdig +tls)
- Persistent connection reuse (kdig +keepopen, 3 queries, 1 handshake)
- ALPN "dot" negotiation (openssl s_client -alpn dot)
- ALPN mismatch rejected with no_application_protocol (openssl -alpn h2)

Uses a pre-generated cert at /tmp so the test runs non-root.
Skips gracefully if kdig or openssl aren't installed.

Also: Dockerfile now EXPOSE 853/tcp so docker run -p 853:853 works
out of the box when users enable DoT.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
186e709373 test: verify DoT server rejects mismatched ALPN
Adds dot_rejects_non_dot_alpn to assert the rustls server enforces
ALPN strictness rather than silently accepting a mismatched
negotiation. This is the load-bearing behavior behind the cross-
protocol confusion defense — without enforcement, the ALPN "dot"
advertisement is just a sign hung on an unlocked door.

Refactors test_tls_configs to return the leaf cert DER instead of a
prebuilt client config, and adds a dot_client(cert_der, alpn) helper
so each test can build a client config with the ALPN list it needs.
The five existing DoT tests gain one line each to call dot_client
with dot_alpn(); behavior unchanged.

127/127 tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
bacc49667a fix: DoT cert needs explicit {tld}.{tld} SAN, not just *.{tld} wildcard
self_signed_tls was passing an empty service_names list, so the
generated cert only had the *.numa wildcard SAN. Strict TLS clients
(browsers, possibly some iOS versions) reject wildcards under
single-label TLDs — see the existing comment in tls.rs explaining
why the proxy lists each service explicitly.

setup-phone's mobileconfig sends ServerName "numa.numa" as SNI, so
the DoT cert must have an explicit numa.numa SAN. Pass proxy_tld
itself as a service name, mirroring how main.rs already registers
"numa" as a service for the proxy's TLS cert.

Test fixture updated to mirror the production SAN shape (*.numa +
numa.numa) and switched the client to SNI "numa.numa", so the
existing DoT test suite implicitly exercises the SNI path used by
setup-phone clients.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
7d0fe19462 style: drop narrating comments on dot_alpn and ALPN test
Both were restating what the code already said — dot_alpn's doc
narrated the function name and the test comment restated the
assertion. RFC 7858 §3.2 is already cited on self_signed_tls and
build_tls_config where the "why" actually matters.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
1632fc36f2 feat: DoT write timeout and ALPN "dot" advertisement
Two DoS/interop hardening items:

1. Bound write_framed by WRITE_TIMEOUT (10s) so a slow-reader
   attacker can't indefinitely hold a worker task and its connection
   permit. Symmetric to the existing handshake timeout.

2. Advertise ALPN "dot" per RFC 7858 §3.2. Required by some strict
   DoT clients (newer Apple stacks, some Android versions). rustls
   ServerConfig exposes alpn_protocols as a pub field so we set it
   after with_single_cert:
   - load_tls_config (user-provided cert/key): set directly
   - self_signed_tls (new, replaces fallback_tls): builds a fresh
     DoT-specific TLS config via build_tls_config with the ALPN list

   build_tls_config now takes an `alpn: Vec<Vec<u8>>` parameter so
   DoT and the proxy can pass different ALPN lists while sharing the
   same CA. Proxy callers pass Vec::new() (unchanged behavior).

   Dropped the ctx.tls_config reuse branch: we can't mutate a shared
   Arc<ServerConfig> to add DoT-specific ALPN, and reusing the proxy
   config was already quietly broken re: SAN (proxy cert covers
   *.{tld}, not the DoT server's bind hostname/IP).

Added dot_negotiates_alpn test that asserts conn.alpn_protocol()
returns Some(b"dot") after handshake. 126/126 tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
0a73cdf4db docs: add commented-out [dot] example to numa.toml
Matches the style of the other opt-in sections (blocking, dnssec, lan).
Documents all five DotConfig fields with their defaults.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
2b0c4e3d5e refactor: trim DoT listener — let-else reads, drop MIN_MSG_LEN and redundant localhost test
- Collapse two 4-arm read/timeout matches to let-else (lose one
  defensive debug log on payload-read timeout; idle timeouts are
  routine on persistent DoT connections anyway)
- Drop MIN_MSG_LEN: DnsPacket::from_buffer rejects truncated input
  on its own, and BytePacketBuffer is zero-init so buf[0..2] for
  sub-2-byte messages just yields a harmless FORMERR with id=0
- Inline ACCEPT_ERROR_BACKOFF (single use site)
- Drop the partial cert/key warning: missing one of cert_path/
  key_path silently falls back to self-signed; users see the
  self-signed cert at startup and figure it out
- Drop dot_localhost_resolution test: RFC 6761 localhost is tested
  in ctx.rs; this test only verified DoT transport, which
  dot_resolves_local_zone already covers
- Drop self-documenting comment in dot_multiple_queries_on_persistent_connection

Net -32 lines, 125/125 tests pass, no behavior change users would notice.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
357c710ec4 style: rustfmt
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
7742858b7b refactor: simplify DoT cert/key match and extract send_response helper
- Flatten 4-arm cert/key match in start_dot to 2 arms with the
  partial-config warning hoisted into a one-liner above the match.
- Extract send_response() that serializes a DnsPacket and writes it
  framed, used by both the FORMERR-on-parse-error and SERVFAIL-on-
  resolve-error paths. Removes duplicated buffer/write/log boilerplate
  and unifies the rescode logging via {:?}.

No behavior change; 126/126 tests still pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
1239ed0e72 fix: parse DoT queries up-front and echo question in SERVFAIL
Address review findings on PR #25:

- Refactor resolve_query to take a pre-parsed DnsPacket. Parse-error
  handling moves to the UDP caller, eliminating the double warn! line
  on malformed UDP queries.
- Enforce MIN_MSG_LEN=12 (DNS header) in handle_dot_connection so
  query_id extraction is always reading client-sent bytes, not the
  zeroed buffer tail.
- Parse the DoT query before calling resolve_query and retain it, so
  SERVFAIL responses can echo the original question section via
  response_from(). Parse failures send FORMERR with the client id.
- Extract write_framed() helper for length-prefix + flush, reused by
  success, SERVFAIL, and FORMERR paths.
- Back off 100ms on listener.accept() errors to avoid tight-looping
  on fd exhaustion.
- Replace the hardcoded 127.0.0.1:53 upstream in dot_nxdomain_for_unknown
  with a bound-but-unresponsive UDP socket owned by the test, making it
  independent of the host's local resolver. Test now runs in ~220ms
  (timeout lowered to 200ms) instead of 3s and asserts the question is
  echoed in the SERVFAIL response.
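The length prefix that write_framed() centralizes is the standard two-octet big-endian length DNS already uses over TCP (RFC 1035 §4.2.2), which RFC 7858 carries over to TLS unchanged. The two prefix octets for a given message size:

```shell
# The two length octets preceding a 300-byte DNS message: high byte, low byte.
len=300
printf '%02x %02x\n' $((len >> 8)) $((len & 255))
# prints: 01 2c
```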

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
cb54ab3dfc fix: harden DoT listener against slowloris and stale handshakes
- Add 10s timeout on TLS handshake — prevents clients from holding a
  semaphore permit without completing the handshake
- Add IDLE_TIMEOUT on payload read_exact — prevents slowloris after
  sending a valid length prefix then trickling bytes
- Extract accept_loop() shared between start_dot and tests — eliminates
  duplicated accept logic that could drift
- Add 5s timeout on TCP reads in recursive test mock server

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
aa8923b2c6 fix: add debug logging for DoT SERVFAIL serialization failure, TC-bit TODO
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
14efc51340 fix: send SERVFAIL on DoT resolve errors, extract shared connection handler
- Send SERVFAIL response (with correct query ID) when resolve_query
  fails, preventing DoT clients from hanging until idle timeout
- Extract handle_dot_connection() so tests use the same logic as
  production, eliminating duplicated accept/read/resolve loop
- Replace magic 4096 with named MAX_MSG_LEN constant tied to BUF_SIZE
- Add flush() after each TLS write to prevent buffered responses
- Extract fallback_tls() helper, handle partial cert/key config,
  support IPv6 bind address, remove redundant crypto provider init

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
Razvan Dimescu
e4350ae81c feat: add DNS-over-TLS (DoT) listener (RFC 7858)
Refactor handle_query into transport-agnostic resolve_query that returns
a BytePacketBuffer, keeping the UDP path zero-alloc. Add a TLS listener
on port 853 with persistent connections, idle timeout, connection limits,
and coalesced writes. Supports user-provided certs or self-signed CA
fallback. Includes 5 integration tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 02:53:43 +03:00
26 changed files with 2527 additions and 258 deletions

.SRCINFO Normal file

@@ -0,0 +1,19 @@
pkgbase = numa-git
	pkgdesc = Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS
	pkgver = 0.10.1.r0.g0000000
	pkgrel = 1
	url = https://github.com/razvandimescu/numa
	arch = x86_64
	license = MIT
	options = !lto
	makedepends = cargo
	makedepends = git
	depends = gcc-libs
	depends = glibc
	provides = numa
	conflicts = numa
	backup = etc/numa.toml
	source = numa::git+https://github.com/razvandimescu/numa.git
	sha256sums = SKIP

pkgname = numa-git

.github/workflows/homebrew-bump.yml vendored Normal file

@@ -0,0 +1,76 @@
name: Bump Homebrew Tap

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to bump (e.g. 0.10.0 or v0.10.0)'
        required: true

permissions:
  contents: read

jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Determine version
        id: ver
        run: |
          if [ "${{ github.event_name }}" = "release" ]; then
            V="${{ github.event.release.tag_name }}"
          else
            V="${{ github.event.inputs.version }}"
          fi
          V="${V#v}"
          echo "version=$V" >> "$GITHUB_OUTPUT"
      - name: Fetch sha256 checksums from release assets
        id: shas
        env:
          V: ${{ steps.ver.outputs.version }}
        run: |
          set -euo pipefail
          base="https://github.com/razvandimescu/numa/releases/download/v${V}"
          for t in macos-aarch64 macos-x86_64 linux-aarch64 linux-x86_64; do
            sha=$(curl -fsSL "${base}/numa-${t}.tar.gz.sha256" | awk '{print $1}')
            if [ -z "$sha" ]; then
              echo "ERROR: failed to fetch sha256 for $t" >&2
              exit 1
            fi
            key=$(echo "$t" | tr '[:lower:]-' '[:upper:]_')
            echo "SHA_${key}=${sha}" >> "$GITHUB_ENV"
          done
      - name: Clone homebrew-tap
        env:
          HOMEBREW_TAP_GITHUB_TOKEN: ${{ secrets.HOMEBREW_TAP_GITHUB_TOKEN }}
        run: |
          git clone "https://x-access-token:${HOMEBREW_TAP_GITHUB_TOKEN}@github.com/razvandimescu/homebrew-tap.git" tap
      - name: Update formula
        env:
          VERSION: ${{ steps.ver.outputs.version }}
        run: |
          python3 scripts/update-homebrew-formula.py tap/numa.rb
          echo "--- updated numa.rb ---"
          cat tap/numa.rb
      - name: Commit and push
        working-directory: tap
        env:
          V: ${{ steps.ver.outputs.version }}
        run: |
          if git diff --quiet; then
            echo "numa.rb already at v${V}, nothing to commit"
            exit 0
          fi
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add numa.rb
          git commit -m "chore: bump numa to v${V}"
          git push origin main
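In the "Fetch sha256 checksums from release assets" step, `tr '[:lower:]-' '[:upper:]_'` maps each target triple to an env-var-safe key (lowercase letters to uppercase, hyphens to underscores). For example:

```shell
t=macos-aarch64
key=$(echo "$t" | tr '[:lower:]-' '[:upper:]_')
echo "SHA_${key}"
# prints: SHA_MACOS_AARCH64
```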

.github/workflows/publish-aur.yml vendored Normal file

@@ -0,0 +1,159 @@
# `publish-aur.yml` - Arch Linux AUR Package Workflow
# --------------------
# This workflow automates the validation and publishing of the 'numa-git' package to the
# Arch User Repository (AUR). The AUR is a community-driven repository for Arch Linux users.
#
# Workflow Overview:
# 1. Validate: Builds and tests the package for Arch Linux x86_64 using a clean
# Arch Linux container.
# 2. Audit: Checks Rust dependencies for known security vulnerabilities using
# 'cargo-audit'.
# 3. Publish: If on the 'main' branch, it pushes the updated PKGBUILD and
# .SRCINFO to the AUR.
#
# Security Best Practices:
# - SHA Pinning: All GitHub Actions are pinned to a full-length commit SHA (e.g., v6.0.2 @ SHA)
# to ensure the code is immutable and protects against supply-chain attacks where a tag
# might be maliciously moved to a compromised commit.
# - SSH Hygiene: Uses ssh-agent to keep the private key in memory rather than on disk.
# - Audit: Runs 'cargo audit' to prevent publishing known vulnerable dependencies.
name: Publish - Arch Linux AUR Package
on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read

jobs:
  # The 'validate' job ensures that the PKGBUILD is correct and the software builds/tests
  # successfully on Arch Linux before we attempt to publish it.
  validate:
    name: Validate PKGBUILD (${{ matrix.arch }})
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        arch: [x86_64]
    steps:
      - name: Checkout code
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - name: Build and Test Package
        timeout-minutes: 60
        env:
          AUR_PKGNAME: ${{ secrets.AUR_PACKAGE_NAME }}
        run: |
          # We use a temporary directory to avoid Docker permission issues with the workspace.
          mkdir -p build-dir
          cp PKGBUILD build-dir/
          docker run --rm -v $PWD/build-dir:/pkg -w /pkg archlinux:latest /bin/bash -c "
            # ARCH LINUX SECURITY REQUIREMENT:
            # 'makepkg' (the tool that builds Arch packages) refuses to run as root for safety.
            # We must create a standard user and give them sudo access.
            # Install build-time dependencies.
            # 'base-devel' includes essential tools like gcc, make, and binutils.
            # Install 'rust' directly to avoid the interactive virtual-package
            # prompt for 'cargo' on current Arch images.
            pacman -Syu --noconfirm --needed base-devel rust git sudo cargo-audit
            useradd -m builduser
            chown -R builduser:builduser /pkg
            # Allow the build user to install dependencies during the build process.
            echo 'builduser ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/builduser
            # Fetch the source tree first so pkgver() and cargo-audit have a
            # real Cargo.lock to inspect.
            sudo -u builduser makepkg -o --nobuild --nocheck --nodeps --noprepare
            # SECURITY AUDIT:
            # Fail early if any dependencies have known security vulnerabilities.
            sudo -u builduser sh -lc 'cd /pkg/src/numa && cargo audit'
            # BUILD & TEST:
            # 'makepkg -s' will:
            #   1. Download source files (cloning this repo)
            #   2. Run prepare(), build(), and check() (running cargo test)
            #   3. Create the final .pkg.tar.zst package
            sudo -u builduser makepkg -s --noconfirm
          "

  # The 'publish' job updates the AUR repository with our latest PKGBUILD and .SRCINFO.
  publish:
    name: Publish to AUR
    needs: validate
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      # Securely configure SSH for AUR access.
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh
          # Official AUR Ed25519 fingerprint (prevents Man-in-the-Middle attacks).
          echo "aur.archlinux.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEuBKrPzbawxA/k2g6NcyV5jmqwJ2s+zpgZGZ7tpLIcN" >> ~/.ssh/known_hosts
          # Use ssh-agent to keep the private key in memory rather than writing it to disk.
          eval $(ssh-agent -s)
          echo "${{ secrets.AUR_SSH_PRIVATE_KEY }}" | tr -d '\r' | ssh-add -
          # Export the agent socket so subsequent 'git' commands can use it.
          echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" >> $GITHUB_ENV
          echo "SSH_AGENT_PID=$SSH_AGENT_PID" >> $GITHUB_ENV
      - name: Push to AUR
        env:
          AUR_PKGNAME: ${{ secrets.AUR_PACKAGE_NAME }}
          AUR_EMAIL: ${{ secrets.AUR_EMAIL }}
          AUR_USER: ${{ secrets.AUR_USERNAME }}
        run: |
          # AUR repos are managed via Git. Each package has its own repo at:
          #   ssh://aur@aur.archlinux.org/<package-name>.git
          git clone ssh://aur@aur.archlinux.org/$AUR_PKGNAME.git aur-repo
          cp PKGBUILD aur-repo/
          cd aur-repo
          # METADATA GENERATION:
          # '.SRCINFO' is a machine-readable version of the PKGBUILD.
          # We must run this as a non-root user ('builduser') inside the container.
          docker run --rm -v $(pwd):/pkg archlinux:latest /bin/bash -c "
            pacman -Syu --noconfirm --needed binutils git sudo
            useradd -m builduser
            chown -R builduser:builduser /pkg
            cd /pkg
            sudo -u builduser git config --global --add safe.directory '*'
            # makepkg -od fetches the source first so pkgver() can calculate the version.
            # --noprepare skips the prepare() function, which invokes cargo and would
            # otherwise require a full rust toolchain in this metadata-only container.
            # pkgver() runs before prepare(), so .SRCINFO still gets the correct version.
            sudo -u builduser makepkg -od --noprepare && sudo -u builduser makepkg --printsrcinfo > .SRCINFO
          "
          # Reclaim ownership: the in-container 'chown -R builduser:builduser /pkg'
          # propagates through the bind mount, leaving .git/ owned by the container's
          # builduser UID. Without this, subsequent 'git config' on the host fails with
          # "could not lock config file .git/config: Permission denied".
          sudo chown -R "$(id -u):$(id -g)" .
          # Set the commit identity using secrets for security and auditability.
          git config user.name "$AUR_USER"
          git config user.email "$AUR_EMAIL"
          # Stage and commit both the human-readable PKGBUILD and machine-readable .SRCINFO.
          git add PKGBUILD .SRCINFO
          if ! git diff --cached --quiet; then
            git commit -m "chore: update PKGBUILD to ${{ github.sha }}"
            git push origin master
          else
            echo "No changes to commit (metadata and PKGBUILD are already up-to-date)."
          fi

View File

@@ -103,98 +103,16 @@ jobs:
      - name: Create Release
        uses: softprops/action-gh-release@v2
        with:
+          # Use a PAT (not the default GITHUB_TOKEN) so the resulting
+          # `release: published` event propagates to downstream workflows
+          # like homebrew-bump.yml. Events triggered by GITHUB_TOKEN are
+          # deliberately not propagated by GitHub Actions to prevent
+          # infinite loops; PAT-authored events are the documented escape
+          # hatch. Reusing HOMEBREW_TAP_GITHUB_TOKEN (already a PAT used
+          # by homebrew-bump.yml itself) keeps the secret surface flat.
+          token: ${{ secrets.HOMEBREW_TAP_GITHUB_TOKEN }}
          generate_release_notes: true
          files: |
            *.tar.gz
            *.zip
            *.sha256
-  update-homebrew:
-    needs: release
-    runs-on: ubuntu-latest
-    steps:
-      - name: Get version from tag
-        id: version
-        run: echo "version=${GITHUB_REF_NAME#v}" >> "$GITHUB_OUTPUT"
-      - name: Download SHA256 files
-        uses: actions/download-artifact@v4
-        with:
-          merge-multiple: true
-      - name: Extract checksums
-        id: sha
-        run: |
-          echo "macos_arm=$(awk '{print $1}' numa-macos-aarch64.tar.gz.sha256)" >> "$GITHUB_OUTPUT"
-          echo "macos_x86=$(awk '{print $1}' numa-macos-x86_64.tar.gz.sha256)" >> "$GITHUB_OUTPUT"
-          echo "linux_arm=$(awk '{print $1}' numa-linux-aarch64.tar.gz.sha256)" >> "$GITHUB_OUTPUT"
-          echo "linux_x86=$(awk '{print $1}' numa-linux-x86_64.tar.gz.sha256)" >> "$GITHUB_OUTPUT"
-      - name: Update Homebrew formula
-        uses: actions/github-script@v7
-        with:
-          github-token: ${{ secrets.HOMEBREW_TAP_TOKEN }}
-          script: |
-            const version = '${{ steps.version.outputs.version }}';
-            const base = `https://github.com/razvandimescu/numa/releases/download/v${version}`;
-            const formula = `class Numa < Formula
-              desc "Portable DNS resolver with ad blocking, .numa local service proxy, and developer overrides"
-              homepage "https://github.com/razvandimescu/numa"
-              license "MIT"
-              version "${version}"
-              on_macos do
-                if Hardware::CPU.arm?
-                  url "${base}/numa-macos-aarch64.tar.gz"
-                  sha256 "${{ steps.sha.outputs.macos_arm }}"
-                else
-                  url "${base}/numa-macos-x86_64.tar.gz"
-                  sha256 "${{ steps.sha.outputs.macos_x86 }}"
-                end
-              end
-              on_linux do
-                if Hardware::CPU.arm?
-                  url "${base}/numa-linux-aarch64.tar.gz"
-                  sha256 "${{ steps.sha.outputs.linux_arm }}"
-                else
-                  url "${base}/numa-linux-x86_64.tar.gz"
-                  sha256 "${{ steps.sha.outputs.linux_x86 }}"
-                end
-              end
-              def install
-                bin.install "numa"
-              end
-              def caveats
-                <<~EOS
-                  Numa requires root to bind port 53:
-                    sudo numa # start the DNS server
-                    sudo numa install # set as system DNS
-                    sudo numa service start # run as persistent service
-                  Dashboard: http://localhost:5380
-                EOS
-              end
-              test do
-                assert_match "numa", shell_output("#{bin}/numa --version")
-              end
-            end
-            `.replace(/^ /gm, '');
-            const { data: existing } = await github.rest.repos.getContent({
-              owner: 'razvandimescu',
-              repo: 'homebrew-tap',
-              path: 'numa.rb',
-            });
-            await github.rest.repos.createOrUpdateFileContents({
-              owner: 'razvandimescu',
-              repo: 'homebrew-tap',
-              path: 'numa.rb',
-              message: `numa ${version}`,
-              content: Buffer.from(formula).toString('base64'),
-              sha: existing.sha,
-            });

.gitignore vendored

@@ -1,4 +1,5 @@
/target
+/build-dir
CLAUDE.md
docs/
site/blog/posts/

Cargo.lock generated

@@ -1143,7 +1143,7 @@ dependencies = [
[[package]]
name = "numa"
-version = "0.9.1"
+version = "0.10.2"
dependencies = [
 "arc-swap",
 "axum",
@@ -1159,6 +1159,7 @@ dependencies = [
 "reqwest",
 "ring",
 "rustls",
+ "rustls-pemfile",
 "serde",
 "serde_json",
 "socket2 0.5.10",
@@ -1546,6 +1547,15 @@ dependencies = [
 "zeroize",
]
+[[package]]
+name = "rustls-pemfile"
+version = "2.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "dce314e5fee3f39953d46bb63bb8a46d40c2f8fb7cc5a3b6cab2bde9721d6e50"
+dependencies = [
+ "rustls-pki-types",
+]
[[package]]
name = "rustls-pki-types"
version = "1.14.0"

View File

@@ -1,6 +1,6 @@
[package]
name = "numa"
-version = "0.9.1"
+version = "0.10.2"
authors = ["razvandimescu <razvan@dimescu.com>"]
edition = "2021"
description = "Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS"
@@ -29,6 +29,7 @@ rustls = "0.23"
tokio-rustls = "0.26"
arc-swap = "1"
ring = "0.17"
+rustls-pemfile = "2.2.0"
[dev-dependencies]
criterion = { version = "0.5", features = ["html_reports"] }

View File

@@ -13,5 +13,5 @@ RUN cargo build --release
FROM alpine:3.20
COPY --from=builder /app/target/release/numa /usr/local/bin/numa
-EXPOSE 53/udp 80/tcp 443/tcp 5380/tcp
+EXPOSE 53/udp 80/tcp 443/tcp 853/tcp 5380/tcp
ENTRYPOINT ["numa"]

PKGBUILD Normal file

@@ -0,0 +1,62 @@
# Maintainer: razvandimescu <razvan@dimescu.com>
pkgname=numa-git
_pkgname=numa
pkgver=0.10.1.r0.g0000000 # Placeholder — pkgver() rewrites this on each makepkg run
pkgrel=1
pkgdesc="Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS"
arch=('x86_64')
url="https://github.com/razvandimescu/numa"
license=('MIT')
options=('!lto')
depends=('gcc-libs' 'glibc')
makedepends=('cargo' 'git')
provides=("$_pkgname")
conflicts=("$_pkgname")
backup=('etc/numa.toml')
source=("$_pkgname::git+$url.git")
sha256sums=('SKIP')
pkgver() {
  cd "$srcdir/$_pkgname"
  ( set -o pipefail
    git describe --long --tags 2>/dev/null | sed 's/\([^-]*-g\)/r\1/;s/-/./g' ||
    printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
  ) | sed 's/^v//'
}

prepare() {
  cd "$srcdir/$_pkgname"
  # numa v0.10.1+ uses FHS-compliant paths on Linux by default
  # (/var/lib/numa for data, journalctl for logs), so no source
  # patching is needed. The earlier sed targeted /usr/local/bin/numa,
  # which only appears in a comment in current main.
  export RUSTUP_TOOLCHAIN=stable
  cargo fetch --locked
}

build() {
  cd "$srcdir/$_pkgname"
  export RUSTUP_TOOLCHAIN=stable
  cargo build --frozen --release
}

check() {
  cd "$srcdir/$_pkgname"
  export RUSTUP_TOOLCHAIN=stable
  cargo test --frozen
}

package() {
  cd "$srcdir/$_pkgname"
  install -Dm755 "target/release/$_pkgname" "$pkgdir/usr/bin/$_pkgname"
  # numa.service uses {{exe_path}} as a placeholder substituted by
  # `numa install` at runtime via replace_exe_path(). For an AUR
  # package install (no `numa install` step), we substitute it
  # statically here so systemd gets a real ExecStart path.
  sed 's|{{exe_path}}|/usr/bin/numa /etc/numa.toml|g' numa.service > numa.service.patched
  install -Dm644 "numa.service.patched" "$pkgdir/usr/lib/systemd/system/numa.service"
  install -Dm644 "numa.toml" "$pkgdir/etc/numa.toml"
  install -Dm644 "LICENSE" "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
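pkgver()'s sed pipeline turns `git describe --long --tags` output into a pacman-ordered version string. Applied to a made-up describe output (tag v0.10.1, 3 commits ahead, short hash abc1234):

```shell
echo "v0.10.1-3-gabc1234" \
  | sed 's/\([^-]*-g\)/r\1/;s/-/./g' \
  | sed 's/^v//'
# prints: 0.10.1.r3.gabc1234
```

The first substitution prefixes the commit-count segment with `r`, the second turns the remaining hyphens into dots (pacman forbids hyphens in pkgver), and the last strips the tag's leading `v`.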

View File

@@ -8,7 +8,7 @@
A portable DNS resolver in a single binary. Block ads on any network, name your local services (`frontend.numa`), and override any hostname with auto-revert — all from your laptop, no cloud account or Raspberry Pi required.
-Built from scratch in Rust. Zero DNS libraries. RFC 1035 wire protocol parsed by hand. Caching, ad blocking, and local service domains out of the box. Optional recursive resolution from root nameservers with full DNSSEC chain-of-trust validation. One ~8MB binary, everything embedded.
+Built from scratch in Rust. Zero DNS libraries. RFC 1035 wire protocol parsed by hand. Caching, ad blocking, and local service domains out of the box. Optional recursive resolution from root nameservers with full DNSSEC chain-of-trust validation, plus a DNS-over-TLS listener for encrypted client connections (iOS Private DNS, systemd-resolved, etc.). One ~8MB binary, everything embedded.
![Numa dashboard](assets/hero-demo.gif)
@@ -21,6 +21,9 @@ brew install razvandimescu/tap/numa
# Linux
curl -fsSL https://raw.githubusercontent.com/razvandimescu/numa/main/install.sh | sh
+# Arch Linux (AUR)
+yay -S numa-git
# Windows — download from GitHub Releases
# All platforms
cargo install numa
@@ -67,6 +70,13 @@ Three resolution modes:
DNSSEC validates the full chain of trust: RRSIG signatures, DNSKEY verification, DS delegation, NSEC/NSEC3 denial proofs. [Read how it works →](https://numa.rs/blog/posts/dnssec-from-scratch.html)
+**DNS-over-TLS listener** (RFC 7858) — accept encrypted queries on port 853 from strict clients like iOS Private DNS, systemd-resolved, or stubby. Two modes:
+- **Self-signed** (default) — numa generates a local CA automatically. `numa install` adds it to the system trust store on macOS, Linux (Debian/Ubuntu, Fedora/RHEL/SUSE, Arch), and Windows. On iOS, install the `.mobileconfig` from `numa setup-phone`. Firefox keeps its own NSS store and ignores the system one — trust the CA there manually if you need HTTPS for `.numa` services in Firefox.
+- **Bring-your-own cert** — point `[dot] cert_path` / `key_path` at a publicly-trusted cert (e.g., Let's Encrypt via DNS-01 challenge on a domain pointing at your numa instance). Clients connect without any trust-store setup — same UX as AdGuard Home or Cloudflare `1.1.1.1`.
+ALPN `"dot"` is advertised and enforced in both modes; a handshake with mismatched ALPN is rejected as a cross-protocol confusion defense.
## LAN Discovery
Run Numa on multiple machines. They find each other automatically via mDNS:
@@ -96,6 +106,7 @@ From Machine B: `curl http://api.numa` → proxied to Machine A's port 8000. Ena
| Ad blocking | Yes | Yes | — | 385K+ domains |
| Web admin UI | Full | Full | — | Dashboard |
| Encrypted upstream (DoH) | Needs cloudflared | Yes | — | Native |
+| Encrypted clients (DoT listener) | Needs stunnel sidecar | Yes | Yes | Native (RFC 7858) |
| Portable (laptop) | No (appliance) | No (appliance) | Server | Single binary, macOS/Linux/Windows |
| Community maturity | 56K stars, 10 years | 33K stars | 20 years | New |
@@ -116,6 +127,7 @@ From Machine B: `curl http://api.numa` → proxied to Machine A's port 8000. Ena
- [x] `.numa` local domains — auto TLS, path routing, WebSocket proxy
- [x] LAN service discovery — mDNS, cross-machine DNS + proxy
- [x] DNS-over-HTTPS — encrypted upstream
+- [x] DNS-over-TLS listener — encrypted client connections (RFC 7858, ALPN strict)
- [x] Recursive resolution + DNSSEC — chain-of-trust, NSEC/NSEC3
- [x] SRTT-based nameserver selection
- [ ] pkarr integration — self-sovereign DNS via Mainline DHT

View File

@@ -2,6 +2,12 @@
bind_addr = "0.0.0.0:53"
api_port = 5380
# api_bind_addr = "127.0.0.1" # default; set to "0.0.0.0" for LAN dashboard access
+# data_dir = "/var/lib/numa" # where numa stores TLS CA and cert material
+#                            # Defaults: /var/lib/numa on linux (FHS),
+#                            # /usr/local/var/numa on macos (homebrew prefix),
+#                            # %PROGRAMDATA%\numa on windows. Override for
+#                            # containerized deploys or tests that can't
+#                            # write to the system path.
# [upstream]
# mode = "forward" # "forward" (default) — relay to upstream
@@ -83,6 +89,14 @@ tld = "numa"
# enabled = false # opt-in: verify chain of trust from root KSK
# strict = false # true = SERVFAIL on bogus signatures
+# DNS-over-TLS listener (RFC 7858) — encrypted DNS on port 853
+# [dot]
+# enabled = false # opt-in: accept DoT queries
+# port = 853 # standard DoT port
+# bind_addr = "0.0.0.0" # IPv4 or IPv6; unspecified binds all interfaces
+# cert_path = "/etc/numa/dot.crt" # PEM cert; omit to use self-signed (proxy CA if available)
+# key_path = "/etc/numa/dot.key" # PEM private key; must be set together with cert_path
# LAN service discovery via mDNS (disabled by default — no network traffic unless enabled)
# [lan]
# enabled = true # discover other Numa instances via mDNS (_numa._tcp.local)

View File

@@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""Rewrite a Homebrew formula in place: bump version, URL paths, and sha256 lines.
Reads the formula path from argv[1], and the following env vars:
VERSION e.g. "0.10.0" (no leading v)
SHA_MACOS_AARCH64
SHA_MACOS_X86_64
SHA_LINUX_AARCH64
SHA_LINUX_X86_64
Assumptions about the formula:
- Has `version "X.Y.Z"` somewhere
- Has `url "...releases/download/vX.Y.Z/numa-<target>.tar.gz"` lines
- May or may not already have `sha256 "..."` lines immediately after each url
"""
import os
import re
import sys
formula_path = sys.argv[1]
version = os.environ["VERSION"].lstrip("v")
shas = {
"macos-aarch64": os.environ["SHA_MACOS_AARCH64"],
"macos-x86_64": os.environ["SHA_MACOS_X86_64"],
"linux-aarch64": os.environ["SHA_LINUX_AARCH64"],
"linux-x86_64": os.environ["SHA_LINUX_X86_64"],
}
with open(formula_path) as f:
content = f.read()
content = re.sub(r'version "[^"]*"', f'version "{version}"', content)
content = re.sub(
r"releases/download/v[\d.]+/numa-",
f"releases/download/v{version}/numa-",
content,
)
content = re.sub(r'\n[ \t]*sha256 "[^"]*"', "", content)
def add_sha(match: re.Match) -> str:
indent = match.group(1)
target = match.group(2)
if target not in shas:
return match.group(0)
return f'{match.group(0)}\n{indent}sha256 "{shas[target]}"'
content = re.sub(
r'^([ \t]+)url "[^"]*numa-([\w-]+)\.tar\.gz"',
add_sha,
content,
flags=re.MULTILINE,
)
with open(formula_path, "w") as f:
f.write(content)

View File

@@ -906,7 +906,7 @@ async fn remove_route(
}
async fn serve_ca(State(ctx): State<Arc<ServerCtx>>) -> Result<impl IntoResponse, StatusCode> {
-let ca_path = ctx.data_dir.join("ca.pem");
+let ca_path = ctx.data_dir.join(crate::tls::CA_FILE_NAME);
let bytes = tokio::task::spawn_blocking(move || std::fs::read(ca_path))
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?

View File

@@ -1,7 +1,7 @@
use std::collections::HashMap;
use std::net::Ipv4Addr;
use std::net::Ipv6Addr;
-use std::path::Path;
+use std::path::{Path, PathBuf};
use serde::Deserialize;
@@ -29,6 +29,8 @@ pub struct Config {
pub lan: LanConfig,
#[serde(default)]
pub dnssec: DnssecConfig,
#[serde(default)]
pub dot: DotConfig,
}

#[derive(Deserialize)]
@@ -39,6 +41,10 @@ pub struct ServerConfig {
pub api_port: u16,
#[serde(default = "default_api_bind_addr")]
pub api_bind_addr: String,
/// Where numa writes TLS material (CA, leaf certs, regenerated state).
/// Defaults to `crate::data_dir()` (platform-specific system path) if unset.
#[serde(default)]
pub data_dir: Option<PathBuf>,
}

impl Default for ServerConfig {
@@ -47,6 +53,7 @@ impl Default for ServerConfig {
bind_addr: default_bind_addr(),
api_port: default_api_port(),
api_bind_addr: default_api_bind_addr(),
data_dir: None,
}
}
}
@@ -370,6 +377,41 @@ pub struct DnssecConfig {
pub strict: bool,
}
#[derive(Deserialize, Clone)]
pub struct DotConfig {
#[serde(default)]
pub enabled: bool,
#[serde(default = "default_dot_port")]
pub port: u16,
#[serde(default = "default_dot_bind_addr")]
pub bind_addr: String,
/// Path to TLS certificate (PEM). If None, uses self-signed CA.
#[serde(default)]
pub cert_path: Option<PathBuf>,
/// Path to TLS private key (PEM). If None, uses self-signed CA.
#[serde(default)]
pub key_path: Option<PathBuf>,
}
impl Default for DotConfig {
fn default() -> Self {
DotConfig {
enabled: false,
port: default_dot_port(),
bind_addr: default_dot_bind_addr(),
cert_path: None,
key_path: None,
}
}
}
fn default_dot_port() -> u16 {
853
}
fn default_dot_bind_addr() -> String {
"0.0.0.0".to_string()
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -62,24 +62,21 @@ pub struct ServerCtx {
pub dnssec_strict: bool,
}
-pub async fn handle_query(
-mut buffer: BytePacketBuffer,
+/// Transport-agnostic DNS resolution. Runs the full pipeline (overrides, blocklist,
+/// cache, upstream, DNSSEC) and returns the serialized response in a buffer.
+/// Callers use `.filled()` to get the response bytes without heap allocation.
+/// Callers are responsible for parsing the incoming buffer into a `DnsPacket`
+/// (and logging parse errors) before calling this function.
+pub async fn resolve_query(
+query: DnsPacket,
src_addr: SocketAddr,
ctx: &ServerCtx,
-) -> crate::Result<()> {
+) -> crate::Result<BytePacketBuffer> {
let start = Instant::now();
-let query = match DnsPacket::from_buffer(&mut buffer) {
-Ok(packet) => packet,
-Err(e) => {
-warn!("{} | PARSE ERROR | {}", src_addr, e);
-return Ok(());
-}
-};
let (qname, qtype) = match query.questions.first() {
Some(q) => (q.name.clone(), q.qtype),
-None => return Ok(()),
+None => return Err("empty question section".into()),
};

// Pipeline: overrides -> .tld interception -> blocklist -> local zones -> cache -> upstream
@@ -306,17 +303,17 @@ pub async fn handle_query(
response.resources.len(),
);

-// Serialize response
+// TODO: TC bit is UDP-specific; DoT connections could carry up to 65535 bytes.
+// Once BytePacketBuffer supports larger buffers, skip truncation for TCP/TLS.
let mut resp_buffer = BytePacketBuffer::new();
if response.write(&mut resp_buffer).is_err() {
-// Response too large for UDP — set TC bit and send header + question only
+// Response too large — set TC bit and send header + question only
debug!("response too large, setting TC bit for {}", qname);
let mut tc_response = DnsPacket::response_from(&query, response.header.rescode);
tc_response.header.truncated_message = true;
-let mut tc_buffer = BytePacketBuffer::new();
-tc_response.write(&mut tc_buffer)?;
-ctx.socket.send_to(tc_buffer.filled(), src_addr).await?;
-} else {
-ctx.socket.send_to(resp_buffer.filled(), src_addr).await?;
+resp_buffer = BytePacketBuffer::new();
+tc_response.write(&mut resp_buffer)?;
}

// Record stats and query log
@@ -339,6 +336,30 @@ pub async fn handle_query(
dnssec,
});
Ok(resp_buffer)
}
/// Handle a DNS query received over UDP. Thin wrapper around resolve_query.
pub async fn handle_query(
mut buffer: BytePacketBuffer,
src_addr: SocketAddr,
ctx: &ServerCtx,
) -> crate::Result<()> {
let query = match DnsPacket::from_buffer(&mut buffer) {
Ok(packet) => packet,
Err(e) => {
warn!("{} | PARSE ERROR | {}", src_addr, e);
return Ok(());
}
};
match resolve_query(query, src_addr, ctx).await {
Ok(resp_buffer) => {
ctx.socket.send_to(resp_buffer.filled(), src_addr).await?;
}
Err(e) => {
warn!("{} | RESOLVE ERROR | {}", src_addr, e);
}
}
Ok(())
}

src/dot.rs · 542 lines · Normal file
View File

@@ -0,0 +1,542 @@
use std::net::{IpAddr, SocketAddr};
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
use log::{debug, error, info, warn};
use rustls::ServerConfig;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;
use tokio::sync::Semaphore;
use tokio_rustls::TlsAcceptor;
use crate::buffer::BytePacketBuffer;
use crate::config::DotConfig;
use crate::ctx::{resolve_query, ServerCtx};
use crate::header::ResultCode;
use crate::packet::DnsPacket;
const MAX_CONNECTIONS: usize = 512;
const IDLE_TIMEOUT: Duration = Duration::from_secs(30);
const HANDSHAKE_TIMEOUT: Duration = Duration::from_secs(10);
const WRITE_TIMEOUT: Duration = Duration::from_secs(10);
// Matches BytePacketBuffer::BUF_SIZE — RFC 7858 allows up to 65535 but our
// buffer would silently truncate anything larger.
const MAX_MSG_LEN: usize = 4096;
fn dot_alpn() -> Vec<Vec<u8>> {
vec![b"dot".to_vec()]
}
/// Build a TLS ServerConfig for DoT from user-provided cert/key PEM files.
fn load_tls_config(cert_path: &Path, key_path: &Path) -> crate::Result<Arc<ServerConfig>> {
// rustls needs a CryptoProvider installed before ServerConfig::builder().
// The proxy's build_tls_config also does this; we repeat it here because
// running DoT with user-provided certs while the proxy is disabled would
// otherwise panic on first handshake (no default provider).
let _ = rustls::crypto::ring::default_provider().install_default();
let cert_pem = std::fs::read(cert_path)?;
let key_pem = std::fs::read(key_path)?;
let certs: Vec<_> = rustls_pemfile::certs(&mut &cert_pem[..]).collect::<Result<_, _>>()?;
let key = rustls_pemfile::private_key(&mut &key_pem[..])?
.ok_or("no private key found in key file")?;
let mut config = ServerConfig::builder()
.with_no_client_auth()
.with_single_cert(certs, key)?;
config.alpn_protocols = dot_alpn();
Ok(Arc::new(config))
}
/// Build a self-signed DoT TLS config. Can't reuse `ctx.tls_config` (the
/// proxy's shared config) because DoT needs its own ALPN advertisement.
///
/// Pass `proxy_tld` itself as a service name so the cert gets an explicit
/// `{tld}.{tld}` SAN (e.g. "numa.numa") matching the ServerName that
/// setup-phone's mobileconfig sends as SNI. The `*.{tld}` wildcard alone
/// is rejected by strict TLS clients under single-label TLDs (per the
/// note in tls.rs::generate_service_cert).
fn self_signed_tls(ctx: &ServerCtx) -> Option<Arc<ServerConfig>> {
let service_names = [ctx.proxy_tld.clone()];
match crate::tls::build_tls_config(&ctx.proxy_tld, &service_names, dot_alpn(), &ctx.data_dir) {
Ok(cfg) => Some(cfg),
Err(e) => {
warn!(
"DoT: failed to generate self-signed TLS: {} — DoT disabled",
e
);
None
}
}
}
/// Start the DNS-over-TLS listener (RFC 7858).
pub async fn start_dot(ctx: Arc<ServerCtx>, config: &DotConfig) {
let tls_config = match (&config.cert_path, &config.key_path) {
(Some(cert), Some(key)) => match load_tls_config(cert, key) {
Ok(cfg) => cfg,
Err(e) => {
warn!("DoT: failed to load TLS cert/key: {} — DoT disabled", e);
return;
}
},
_ => match self_signed_tls(&ctx) {
Some(cfg) => cfg,
None => return,
},
};
let bind_addr: IpAddr = config
.bind_addr
.parse()
.unwrap_or(IpAddr::V4(std::net::Ipv4Addr::UNSPECIFIED));
let addr = SocketAddr::new(bind_addr, config.port);
let listener = match TcpListener::bind(addr).await {
Ok(l) => l,
Err(e) => {
warn!("DoT: could not bind {} ({}) — DoT disabled", addr, e);
return;
}
};
info!("DoT listening on {}", addr);
accept_loop(listener, TlsAcceptor::from(tls_config), ctx).await;
}
async fn accept_loop(listener: TcpListener, acceptor: TlsAcceptor, ctx: Arc<ServerCtx>) {
let semaphore = Arc::new(Semaphore::new(MAX_CONNECTIONS));
loop {
let (tcp_stream, remote_addr) = match listener.accept().await {
Ok(conn) => conn,
Err(e) => {
error!("DoT: TCP accept error: {}", e);
// Back off to avoid tight-looping on persistent failures (e.g. fd exhaustion).
tokio::time::sleep(Duration::from_millis(100)).await;
continue;
}
};
let permit = match semaphore.clone().try_acquire_owned() {
Ok(p) => p,
Err(_) => {
debug!("DoT: connection limit reached, rejecting {}", remote_addr);
continue;
}
};
let acceptor = acceptor.clone();
let ctx = Arc::clone(&ctx);
tokio::spawn(async move {
let _permit = permit; // held until task exits
let tls_stream =
match tokio::time::timeout(HANDSHAKE_TIMEOUT, acceptor.accept(tcp_stream)).await {
Ok(Ok(s)) => s,
Ok(Err(e)) => {
debug!("DoT: TLS handshake failed from {}: {}", remote_addr, e);
return;
}
Err(_) => {
debug!("DoT: TLS handshake timeout from {}", remote_addr);
return;
}
};
handle_dot_connection(tls_stream, remote_addr, &ctx).await;
});
}
}
/// Handle a single persistent DoT connection (RFC 7858).
/// Reads length-prefixed DNS queries until EOF, idle timeout, or error.
async fn handle_dot_connection<S>(mut stream: S, remote_addr: SocketAddr, ctx: &ServerCtx)
where
S: AsyncReadExt + AsyncWriteExt + Unpin,
{
loop {
// Read 2-byte length prefix (RFC 1035 §4.2.2) with idle timeout
let mut len_buf = [0u8; 2];
let Ok(Ok(_)) = tokio::time::timeout(IDLE_TIMEOUT, stream.read_exact(&mut len_buf)).await
else {
break;
};
let msg_len = u16::from_be_bytes(len_buf) as usize;
if msg_len > MAX_MSG_LEN {
debug!("DoT: oversized message {} from {}", msg_len, remote_addr);
break;
}
let mut buffer = BytePacketBuffer::new();
let Ok(Ok(_)) =
tokio::time::timeout(IDLE_TIMEOUT, stream.read_exact(&mut buffer.buf[..msg_len])).await
else {
break;
};
// Parse query up-front so we can echo its question section in SERVFAIL
// responses when resolve_query fails.
let query = match DnsPacket::from_buffer(&mut buffer) {
Ok(q) => q,
Err(e) => {
warn!("{} | PARSE ERROR | {}", remote_addr, e);
// BytePacketBuffer is zero-initialized, so buf[0..2] reads as 0x0000
// for sub-2-byte messages — harmless FORMERR with id=0.
let query_id = u16::from_be_bytes([buffer.buf[0], buffer.buf[1]]);
let mut resp = DnsPacket::new();
resp.header.id = query_id;
resp.header.response = true;
resp.header.rescode = ResultCode::FORMERR;
if send_response(&mut stream, &resp, remote_addr)
.await
.is_err()
{
break;
}
continue;
}
};
match resolve_query(query.clone(), remote_addr, ctx).await {
Ok(resp_buffer) => {
if write_framed(&mut stream, resp_buffer.filled())
.await
.is_err()
{
break;
}
}
Err(e) => {
warn!("{} | RESOLVE ERROR | {}", remote_addr, e);
// SERVFAIL that echoes the original question section.
let resp = DnsPacket::response_from(&query, ResultCode::SERVFAIL);
if send_response(&mut stream, &resp, remote_addr)
.await
.is_err()
{
break;
}
}
}
}
}
/// Serialize a DNS response and send it framed. Logs serialization failures
/// and returns Err so the caller can tear down the connection.
async fn send_response<S>(
stream: &mut S,
resp: &DnsPacket,
remote_addr: SocketAddr,
) -> std::io::Result<()>
where
S: AsyncWriteExt + Unpin,
{
let mut out_buf = BytePacketBuffer::new();
if resp.write(&mut out_buf).is_err() {
debug!(
"DoT: failed to serialize {:?} response for {}",
resp.header.rescode, remote_addr
);
return Err(std::io::Error::other("serialize failed"));
}
write_framed(stream, out_buf.filled()).await
}
/// Write a DNS message with its 2-byte length prefix, coalesced into one syscall.
/// Bounded by WRITE_TIMEOUT so a stalled reader can't indefinitely hold a worker.
async fn write_framed<S>(stream: &mut S, msg: &[u8]) -> std::io::Result<()>
where
S: AsyncWriteExt + Unpin,
{
let mut out = Vec::with_capacity(2 + msg.len());
out.extend_from_slice(&(msg.len() as u16).to_be_bytes());
out.extend_from_slice(msg);
match tokio::time::timeout(WRITE_TIMEOUT, async {
stream.write_all(&out).await?;
stream.flush().await
})
.await
{
Ok(result) => result,
Err(_) => Err(std::io::Error::other("write timeout")),
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::collections::HashMap;
use std::sync::{Mutex, RwLock};
use rcgen::{CertificateParams, DnType, KeyPair};
use rustls::pki_types::{CertificateDer, PrivateKeyDer, PrivatePkcs8KeyDer, ServerName};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::buffer::BytePacketBuffer;
use crate::header::ResultCode;
use crate::packet::DnsPacket;
use crate::question::QueryType;
use crate::record::DnsRecord;
/// Generate a self-signed DoT server config and return its leaf cert DER
/// so callers can build matching client configs with arbitrary ALPN.
fn test_tls_configs() -> (Arc<ServerConfig>, CertificateDer<'static>) {
let _ = rustls::crypto::ring::default_provider().install_default();
// Mirror production self_signed_tls SAN shape: *.numa wildcard plus
// explicit numa.numa apex (the ServerName setup-phone uses as SNI).
let key_pair = KeyPair::generate().unwrap();
let mut params = CertificateParams::default();
params
.distinguished_name
.push(DnType::CommonName, "Numa .numa services");
params.subject_alt_names = vec![
rcgen::SanType::DnsName("*.numa".try_into().unwrap()),
rcgen::SanType::DnsName("numa.numa".try_into().unwrap()),
];
let cert = params.self_signed(&key_pair).unwrap();
let cert_der = CertificateDer::from(cert.der().to_vec());
let key_der = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));
let mut server_config = ServerConfig::builder()
.with_no_client_auth()
.with_single_cert(vec![cert_der.clone()], key_der)
.unwrap();
server_config.alpn_protocols = dot_alpn();
(Arc::new(server_config), cert_der)
}
/// Build a TLS client config that trusts `cert_der` and advertises the
/// given ALPN protocols. Used by tests to vary ALPN per test case.
fn dot_client(
cert_der: &CertificateDer<'static>,
alpn: Vec<Vec<u8>>,
) -> Arc<rustls::ClientConfig> {
let mut root_store = rustls::RootCertStore::empty();
root_store.add(cert_der.clone()).unwrap();
let mut config = rustls::ClientConfig::builder()
.with_root_certificates(root_store)
.with_no_client_auth();
config.alpn_protocols = alpn;
Arc::new(config)
}
/// Spin up a DoT listener with a test TLS config. Returns the bind addr
/// and the leaf cert DER so callers can build clients with arbitrary ALPN.
/// The upstream is pointed at a bound-but-unresponsive UDP socket we own, so
/// any query that escapes to the upstream path times out deterministically
/// (SERVFAIL) regardless of what the host has running on port 53.
async fn spawn_dot_server() -> (SocketAddr, CertificateDer<'static>) {
let (server_tls, cert_der) = test_tls_configs();
let socket = tokio::net::UdpSocket::bind("127.0.0.1:0").await.unwrap();
// Bind an unresponsive upstream and leak it so it lives for the test duration.
let blackhole = Box::leak(Box::new(std::net::UdpSocket::bind("127.0.0.1:0").unwrap()));
let upstream_addr = blackhole.local_addr().unwrap();
let ctx = Arc::new(ServerCtx {
socket,
zone_map: {
let mut m = HashMap::new();
let mut inner = HashMap::new();
inner.insert(
QueryType::A,
vec![DnsRecord::A {
domain: "dot-test.example".to_string(),
addr: std::net::Ipv4Addr::new(10, 0, 0, 1),
ttl: 300,
}],
);
m.insert("dot-test.example".to_string(), inner);
m
},
cache: RwLock::new(crate::cache::DnsCache::new(100, 60, 86400)),
stats: Mutex::new(crate::stats::ServerStats::new()),
overrides: RwLock::new(crate::override_store::OverrideStore::new()),
blocklist: RwLock::new(crate::blocklist::BlocklistStore::new()),
query_log: Mutex::new(crate::query_log::QueryLog::new(100)),
services: Mutex::new(crate::service_store::ServiceStore::new()),
lan_peers: Mutex::new(crate::lan::PeerStore::new(90)),
forwarding_rules: Vec::new(),
upstream: Mutex::new(crate::forward::Upstream::Udp(upstream_addr)),
upstream_auto: false,
upstream_port: 53,
lan_ip: Mutex::new(std::net::Ipv4Addr::LOCALHOST),
timeout: Duration::from_millis(200),
proxy_tld: "numa".to_string(),
proxy_tld_suffix: ".numa".to_string(),
lan_enabled: false,
config_path: String::new(),
config_found: false,
config_dir: std::path::PathBuf::from("/tmp"),
data_dir: std::path::PathBuf::from("/tmp"),
tls_config: Some(arc_swap::ArcSwap::from(server_tls)),
upstream_mode: crate::config::UpstreamMode::Forward,
root_hints: Vec::new(),
srtt: RwLock::new(crate::srtt::SrttCache::new(true)),
inflight: Mutex::new(HashMap::new()),
dnssec_enabled: false,
dnssec_strict: false,
});
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let tls_config = Arc::clone(&*ctx.tls_config.as_ref().unwrap().load());
let acceptor = TlsAcceptor::from(tls_config);
tokio::spawn(accept_loop(listener, acceptor, ctx));
(addr, cert_der)
}
/// Open a TLS connection to the DoT server and return the stream.
/// Uses SNI "numa.numa" to mirror what setup-phone's mobileconfig sends.
async fn dot_connect(
addr: SocketAddr,
client_config: &Arc<rustls::ClientConfig>,
) -> tokio_rustls::client::TlsStream<tokio::net::TcpStream> {
let connector = tokio_rustls::TlsConnector::from(Arc::clone(client_config));
let tcp = tokio::net::TcpStream::connect(addr).await.unwrap();
connector
.connect(ServerName::try_from("numa.numa").unwrap(), tcp)
.await
.unwrap()
}
/// Send a DNS query over a DoT stream and read the response.
async fn dot_exchange(
stream: &mut tokio_rustls::client::TlsStream<tokio::net::TcpStream>,
query: &DnsPacket,
) -> DnsPacket {
let mut buf = BytePacketBuffer::new();
query.write(&mut buf).unwrap();
let msg = buf.filled();
let mut out = Vec::with_capacity(2 + msg.len());
out.extend_from_slice(&(msg.len() as u16).to_be_bytes());
out.extend_from_slice(msg);
stream.write_all(&out).await.unwrap();
let mut len_buf = [0u8; 2];
stream.read_exact(&mut len_buf).await.unwrap();
let resp_len = u16::from_be_bytes(len_buf) as usize;
let mut data = vec![0u8; resp_len];
stream.read_exact(&mut data).await.unwrap();
let mut resp_buf = BytePacketBuffer::from_bytes(&data);
DnsPacket::from_buffer(&mut resp_buf).unwrap()
}
#[tokio::test]
async fn dot_resolves_local_zone() {
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, dot_alpn());
let mut stream = dot_connect(addr, &client_config).await;
let query = DnsPacket::query(0x1234, "dot-test.example", QueryType::A);
let resp = dot_exchange(&mut stream, &query).await;
assert_eq!(resp.header.id, 0x1234);
assert!(resp.header.response);
assert_eq!(resp.header.rescode, ResultCode::NOERROR);
assert_eq!(resp.answers.len(), 1);
match &resp.answers[0] {
DnsRecord::A { domain, addr, ttl } => {
assert_eq!(domain, "dot-test.example");
assert_eq!(*addr, std::net::Ipv4Addr::new(10, 0, 0, 1));
assert_eq!(*ttl, 300);
}
other => panic!("expected A record, got {:?}", other),
}
}
#[tokio::test]
async fn dot_multiple_queries_on_persistent_connection() {
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, dot_alpn());
let mut stream = dot_connect(addr, &client_config).await;
for i in 0..3u16 {
let query = DnsPacket::query(0xA000 + i, "dot-test.example", QueryType::A);
let resp = dot_exchange(&mut stream, &query).await;
assert_eq!(resp.header.id, 0xA000 + i);
assert_eq!(resp.header.rescode, ResultCode::NOERROR);
assert_eq!(resp.answers.len(), 1);
}
}
#[tokio::test]
async fn dot_nxdomain_for_unknown() {
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, dot_alpn());
let mut stream = dot_connect(addr, &client_config).await;
let query = DnsPacket::query(0xBEEF, "nonexistent.test", QueryType::A);
let resp = dot_exchange(&mut stream, &query).await;
assert_eq!(resp.header.id, 0xBEEF);
assert!(resp.header.response);
// Query goes to the blackhole upstream which never replies → SERVFAIL.
// The SERVFAIL response echoes the question section.
assert_eq!(resp.header.rescode, ResultCode::SERVFAIL);
assert_eq!(resp.questions.len(), 1);
assert_eq!(resp.questions[0].name, "nonexistent.test");
}
#[tokio::test]
async fn dot_negotiates_alpn() {
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, dot_alpn());
let stream = dot_connect(addr, &client_config).await;
let (_io, conn) = stream.get_ref();
assert_eq!(conn.alpn_protocol(), Some(&b"dot"[..]));
}
#[tokio::test]
async fn dot_rejects_non_dot_alpn() {
// Cross-protocol confusion defense: a client that only offers "h2"
// (e.g. an HTTP/2 client mistakenly hitting :853) must not complete
// a TLS handshake with the DoT server. Verifies the rustls server
// sends `no_application_protocol` rather than silently negotiating.
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, vec![b"h2".to_vec()]);
let connector = tokio_rustls::TlsConnector::from(client_config);
let tcp = tokio::net::TcpStream::connect(addr).await.unwrap();
let result = connector
.connect(ServerName::try_from("numa.numa").unwrap(), tcp)
.await;
assert!(
result.is_err(),
"DoT server must reject ALPN that doesn't include \"dot\""
);
}
#[tokio::test]
async fn dot_concurrent_connections() {
let (addr, cert_der) = spawn_dot_server().await;
let client_config = dot_client(&cert_der, dot_alpn());
let mut handles = Vec::new();
for i in 0..5u16 {
let cfg = Arc::clone(&client_config);
handles.push(tokio::spawn(async move {
let mut stream = dot_connect(addr, &cfg).await;
let query = DnsPacket::query(0xC000 + i, "dot-test.example", QueryType::A);
let resp = dot_exchange(&mut stream, &query).await;
assert_eq!(resp.header.id, 0xC000 + i);
assert_eq!(resp.header.rescode, ResultCode::NOERROR);
assert_eq!(resp.answers.len(), 1);
}));
}
for h in handles {
h.await.unwrap();
}
}
}
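The framing used throughout this file (`write_framed` on the server side, `dot_exchange` in the tests) can be sketched standalone. This is an illustrative std-only mirror, not code from the crate: every DNS message on a TCP/TLS stream is preceded by its length as a 16-bit big-endian prefix (RFC 1035 §4.2.2, reused by RFC 7858).

```rust
// Prefix a DNS message with its 2-byte big-endian length, as write_framed does.
fn frame(msg: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(2 + msg.len());
    out.extend_from_slice(&(msg.len() as u16).to_be_bytes());
    out.extend_from_slice(msg);
    out
}

// Inverse operation: read the prefix and slice out the message body,
// returning None if the buffer is shorter than the prefix claims.
fn unframe(framed: &[u8]) -> Option<&[u8]> {
    let len = u16::from_be_bytes([*framed.first()?, *framed.get(1)?]) as usize;
    framed.get(2..2 + len)
}

fn main() {
    let framed = frame(&[0x12, 0x34, 0x01]);
    assert_eq!(framed, vec![0x00, 0x03, 0x12, 0x34, 0x01]);
    assert_eq!(unframe(&framed), Some(&[0x12, 0x34, 0x01][..]));
}
```

The round-trip property (frame then unframe yields the original message) is exactly what the oversized-message check in `handle_dot_connection` guards: a prefix larger than `MAX_MSG_LEN` is rejected before any body bytes are read.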

View File

@@ -5,6 +5,7 @@ pub mod cache;
pub mod config;
pub mod ctx;
pub mod dnssec;
pub mod dot;
pub mod forward;
pub mod header;
pub mod lan;
@@ -25,7 +26,10 @@ pub type Error = Box<dyn std::error::Error + Send + Sync>;
pub type Result<T> = std::result::Result<T, Error>;

/// Shared config directory for persistent data (services.json, etc).
-/// Unix: ~/.config/numa/ (or /usr/local/var/numa/ when running as root daemon)
+/// Unix users: ~/.config/numa/
+/// Linux root daemon: /var/lib/numa (FHS) — falls back to /usr/local/var/numa
+/// if a pre-v0.10.1 install already lives there.
+/// macOS root daemon: /usr/local/var/numa (Homebrew prefix)
/// Windows: %APPDATA%\numa
pub fn config_dir() -> std::path::PathBuf {
#[cfg(windows)]
@@ -62,11 +66,15 @@ fn config_dir_unix() -> std::path::PathBuf {
}

// Running as root daemon (launchd/systemd) — use system-wide path
-std::path::PathBuf::from("/usr/local/var/numa")
+daemon_data_dir()
}

-/// System-wide data directory for TLS certs.
-/// Unix: /usr/local/var/numa
+/// Default system-wide data directory for TLS certs. Overridable via
+/// `[server] data_dir = "..."` in numa.toml — this function only provides
+/// the fallback when the config doesn't set it.
+/// Linux: /var/lib/numa (FHS) — falls back to /usr/local/var/numa if a
+/// pre-v0.10.1 install already has data there.
+/// macOS: /usr/local/var/numa (Homebrew prefix)
/// Windows: %PROGRAMDATA%\numa
pub fn data_dir() -> std::path::PathBuf {
#[cfg(windows)]
@@ -78,6 +86,62 @@ pub fn data_dir() -> std::path::PathBuf {
}
#[cfg(not(windows))]
{
daemon_data_dir()
}
}
/// Resolve the system-wide data directory for the running platform.
/// Honors backwards compatibility with pre-v0.10.1 installs that still
/// have their CA cert + services.json under `/usr/local/var/numa`.
#[cfg(not(windows))]
fn daemon_data_dir() -> std::path::PathBuf {
#[cfg(target_os = "linux")]
{
std::path::PathBuf::from(resolve_linux_data_dir(
std::path::Path::new("/usr/local/var/numa").exists(),
std::path::Path::new("/var/lib/numa").exists(),
))
}
#[cfg(target_os = "macos")]
{
// macOS uses the Homebrew prefix convention; no FHS migration needed.
std::path::PathBuf::from("/usr/local/var/numa")
}
}
/// Extracted as a pure function so the migration logic is unit-testable
/// without touching the real filesystem.
#[cfg(any(target_os = "linux", test))]
fn resolve_linux_data_dir(legacy_exists: bool, fhs_exists: bool) -> &'static str {
if legacy_exists && !fhs_exists {
"/usr/local/var/numa"
} else {
"/var/lib/numa"
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn linux_data_dir_fresh_install_uses_fhs() {
assert_eq!(resolve_linux_data_dir(false, false), "/var/lib/numa");
}
#[test]
fn linux_data_dir_upgrading_install_keeps_legacy() {
// Migration must keep legacy so the user doesn't lose their CA on upgrade.
assert_eq!(resolve_linux_data_dir(true, false), "/usr/local/var/numa");
}
#[test]
fn linux_data_dir_after_migration_uses_fhs() {
assert_eq!(resolve_linux_data_dir(true, true), "/var/lib/numa");
}
#[test]
fn linux_data_dir_only_fhs_uses_fhs() {
assert_eq!(resolve_linux_data_dir(false, true), "/var/lib/numa");
}
}

View File

@@ -204,13 +204,30 @@ async fn main() -> numa::Result<()> {
let forwarding_rules = system_dns.forwarding_rules;
// Resolve data_dir from config, falling back to the platform default.
// Used for TLS CA storage below and stored on ServerCtx for runtime use.
let resolved_data_dir = config
.server
.data_dir
.clone()
.unwrap_or_else(numa::data_dir);
// Build initial TLS config before ServerCtx (so ArcSwap is ready at construction)
let initial_tls = if config.proxy.enabled && config.proxy.tls_port > 0 {
let service_names = service_store.names();
-match numa::tls::build_tls_config(&config.proxy.tld, &service_names) {
+match numa::tls::build_tls_config(
&config.proxy.tld,
&service_names,
Vec::new(),
&resolved_data_dir,
) {
Ok(tls_config) => Some(ArcSwap::from(tls_config)),
Err(e) => {
-log::warn!("TLS setup failed, HTTPS proxy disabled: {}", e);
+if let Some(advisory) = numa::tls::try_data_dir_advisory(&e, &resolved_data_dir) {
eprint!("{}", advisory);
} else {
log::warn!("TLS setup failed, HTTPS proxy disabled: {}", e);
}
None
}
}
@@ -218,8 +235,21 @@ async fn main() -> numa::Result<()> {
None
};
let socket = match UdpSocket::bind(&config.server.bind_addr).await {
Ok(s) => s,
Err(e) => {
if let Some(advisory) =
numa::system_dns::try_port53_advisory(&config.server.bind_addr, &e)
{
eprint!("{}", advisory);
std::process::exit(1);
}
return Err(e.into());
}
};
let ctx = Arc::new(ServerCtx {
-socket: UdpSocket::bind(&config.server.bind_addr).await?,
+socket,
zone_map: build_zone_map(&config.zones)?,
cache: RwLock::new(DnsCache::new(
config.cache.max_entries,
@@ -248,7 +278,7 @@ async fn main() -> numa::Result<()> {
config_path: resolved_config_path,
config_found,
config_dir: numa::config_dir(),
-data_dir: numa::data_dir(),
+data_dir: resolved_data_dir,
tls_config: initial_tls,
upstream_mode: resolved_mode,
root_hints,
@@ -370,6 +400,9 @@ async fn main() -> numa::Result<()> {
);
}
}
if config.dot.enabled {
row("DoT", g, &format!("tls://:{}", config.dot.port));
}
if config.lan.enabled {
row("LAN", g, "mDNS (_numa._tcp.local)");
}
@@ -477,6 +510,15 @@ async fn main() -> numa::Result<()> {
});
}
// Spawn DNS-over-TLS listener (RFC 7858)
if config.dot.enabled {
let dot_ctx = Arc::clone(&ctx);
let dot_config = config.dot.clone();
tokio::spawn(async move {
numa::dot::start_dot(dot_ctx, &dot_config).await;
});
}
// UDP DNS listener
#[allow(clippy::infinite_loop)]
loop {

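The data_dir fallback chain introduced in this hunk (explicit config value, else platform default) is small enough to sketch standalone. `resolve_data_dir` below is a hypothetical mirror of the `config.server.data_dir.clone().unwrap_or_else(numa::data_dir)` expression, not an API from the crate:

```rust
use std::path::PathBuf;

// A `[server] data_dir = "..."` set in numa.toml always wins; otherwise the
// platform default (e.g. /var/lib/numa on Linux) applies.
fn resolve_data_dir(configured: Option<PathBuf>, platform_default: PathBuf) -> PathBuf {
    configured.unwrap_or(platform_default)
}

fn main() {
    let fallback = PathBuf::from("/var/lib/numa");
    assert_eq!(resolve_data_dir(None, fallback.clone()), fallback);
    assert_eq!(
        resolve_data_dir(Some(PathBuf::from("/tmp/numa-test")), fallback),
        PathBuf::from("/tmp/numa-test")
    );
}
```

Computing this once up front, before the TLS config and `ServerCtx` are built, is what lets the same resolved path feed both `build_tls_config` and the `data_dir` field on `ServerCtx`.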
View File

@@ -870,14 +870,25 @@ mod tests {
};
let handler = handler.clone();
tokio::spawn(async move {
let timeout = std::time::Duration::from_secs(5);
// Read length-prefixed DNS query
let mut len_buf = [0u8; 2];
-if stream.read_exact(&mut len_buf).await.is_err() {
+if tokio::time::timeout(timeout, stream.read_exact(&mut len_buf))
.await
.ok()
.and_then(|r| r.ok())
.is_none()
{
return;
}
let len = u16::from_be_bytes(len_buf) as usize;
let mut data = vec![0u8; len];
if tokio::time::timeout(timeout, stream.read_exact(&mut data))
.await
.ok()
.and_then(|r| r.ok())
.is_none()
{
return;
}


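The test change above wraps each `read_exact` in `tokio::time::timeout`, which yields a nested `Result<Result<usize, io::Error>, Elapsed>` — the outer layer is "did it finish in time", the inner is the I/O result. A standalone sketch of the `.ok().and_then(|r| r.ok()).is_none()` flattening, with unit types standing in for the real error types:

```rust
// Collapse a timeout-wrapped I/O result into a single "did it fail" bool:
// either an elapsed timeout (outer Err) or an I/O error (inner Err) means
// the handler should return early.
fn read_failed<T, E1, E2>(outcome: Result<Result<T, E2>, E1>) -> bool {
    outcome.ok().and_then(|r| r.ok()).is_none()
}

fn main() {
    // both layers succeed → the read is usable
    assert!(!read_failed::<u8, (), ()>(Ok(Ok(7))));
    // outer Err (timeout elapsed) → failure
    assert!(read_failed::<u8, (), ()>(Err(())));
    // inner Err (I/O error) → failure
    assert!(read_failed::<u8, (), ()>(Ok(Err(()))));
    println!("ok");
}
```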
@@ -46,6 +46,60 @@ pub fn discover_system_dns() -> SystemDnsInfo {
}
}
/// Advisory for port-53 bind failures (EADDRINUSE or EACCES); `None`
/// if not applicable so the caller can fall back to the raw error.
pub fn try_port53_advisory(bind_addr: &str, err: &std::io::Error) -> Option<String> {
if !is_port_53(bind_addr) {
return None;
}
let (title, cause) = match err.kind() {
std::io::ErrorKind::AddrInUse => (
"port 53 is already in use",
"Another process is already bound to port 53. On Linux this is\n \
typically systemd-resolved; on Windows, the DNS Client service.",
),
std::io::ErrorKind::PermissionDenied => (
"permission denied",
"Port 53 is privileged — binding it requires root on Linux/macOS\n \
or Administrator on Windows.",
),
_ => return None,
};
let o = "\x1b[1;38;2;192;98;58m"; // bold orange
let r = "\x1b[0m";
Some(format!(
"
{o}Numa{r} — cannot bind to {bind_addr}: {title}.
{cause}
Fix — pick one:
1. Install Numa as the system resolver (frees port 53):
sudo numa install (on Windows, run as Administrator)
2. Run on a non-privileged port for testing.
Create ~/.config/numa/numa.toml with:
[server]
bind_addr = \"127.0.0.1:5354\"
api_port = 5380
Then run: numa
Test with: dig @127.0.0.1 -p 5354 example.com
"
))
}
fn is_port_53(bind_addr: &str) -> bool {
bind_addr
.parse::<SocketAddr>()
.map(|s| s.port() == 53)
.unwrap_or(false)
}
#[cfg(target_os = "macos")]
fn discover_macos() -> SystemDnsInfo {
use log::{debug, warn};
@@ -174,6 +228,9 @@ fn discover_linux() -> SystemDnsInfo {
let default_upstream = if let Some(ns) = upstream {
info!("detected system upstream: {}", ns);
Some(ns)
} else if let Some(ns) = resolvectl_dns_server() {
info!("detected system upstream via resolvectl: {}", ns);
Some(ns)
} else {
// Fallback to backup from a previous `numa install`
let backup = {
@@ -214,7 +271,18 @@ fn discover_linux() -> SystemDnsInfo {
}
}
/// Yield each `nameserver` address from resolv.conf content. No filtering —
/// callers decide what counts as a real upstream.
#[cfg(any(target_os = "linux", test))]
fn iter_nameservers(content: &str) -> impl Iterator<Item = &str> {
content.lines().filter_map(|line| {
let mut parts = line.split_whitespace();
(parts.next() == Some("nameserver")).then_some(())?;
parts.next()
})
}
/// Parse resolv.conf in a single pass, extracting the first non-loopback
/// nameserver and all search domains.
#[cfg(target_os = "linux")]
fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
@@ -222,19 +290,13 @@ fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
Ok(t) => t,
Err(_) => return (None, Vec::new()),
};
let upstream = iter_nameservers(&text)
.find(|ns| !is_loopback_or_stub(ns))
.map(str::to_string);
let mut search_domains = Vec::new();
for line in text.lines() {
let line = line.trim();
if line.starts_with("search") || line.starts_with("domain") {
for domain in line.split_whitespace().skip(1) {
search_domains.push(domain.to_string());
}
@@ -243,6 +305,21 @@ fn parse_resolv_conf(path: &str) -> (Option<String>, Vec<String>) {
(upstream, search_domains)
}
/// True if the resolv.conf *content* appears to be written by numa itself,
/// or has no real upstream — either way, it's not a safe source of truth
/// for a backup.
#[cfg(any(target_os = "linux", test))]
fn resolv_conf_is_numa_managed(content: &str) -> bool {
content.contains("Generated by Numa") || !resolv_conf_has_real_upstream(content)
}
/// True if the resolv.conf content has at least one non-loopback, non-stub
/// nameserver. An all-loopback resolv.conf is self-referential.
#[cfg(any(target_os = "linux", test))]
fn resolv_conf_has_real_upstream(content: &str) -> bool {
iter_nameservers(content).any(|ns| !is_loopback_or_stub(ns))
}
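Taken together, `iter_nameservers` and the stub filter decide whether a resolv.conf is a usable source of truth. A self-contained re-creation, runnable outside the crate — the `is_loopback_or_stub` stand-in here only checks loopback/unspecified IPs, a simplifying assumption; the real helper may recognize more stub addresses:

```rust
use std::net::IpAddr;

// Yield each `nameserver` value from resolv.conf content, no filtering.
fn iter_nameservers(content: &str) -> impl Iterator<Item = &str> {
    content.lines().filter_map(|line| {
        let mut parts = line.split_whitespace();
        (parts.next() == Some("nameserver")).then_some(())?;
        parts.next()
    })
}

// Simplified stand-in: loopback, unspecified, or unparseable → not a real upstream.
fn is_loopback_or_stub(ns: &str) -> bool {
    ns.parse::<IpAddr>()
        .map(|ip| ip.is_loopback() || ip.is_unspecified())
        .unwrap_or(true)
}

fn main() {
    // systemd-resolved stub file: no real upstream
    let stub = "nameserver 127.0.0.53\noptions edns0\n";
    assert!(iter_nameservers(stub).all(is_loopback_or_stub));
    // mixed file: the first non-stub entry wins
    let mixed = "nameserver 127.0.0.1\nnameserver 1.1.1.1\nsearch lan\n";
    let real: Vec<&str> = iter_nameservers(mixed)
        .filter(|ns| !is_loopback_or_stub(ns))
        .collect();
    assert_eq!(real, ["1.1.1.1"]);
    println!("ok");
}
```

This is exactly the shape `resolv_conf_has_real_upstream` relies on: an all-loopback file yields an empty iterator after filtering.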
/// Query resolvectl for the real upstream DNS server (e.g. VPC resolver on AWS).
#[cfg(target_os = "linux")]
fn resolvectl_dns_server() -> Option<String> {
@@ -526,9 +603,19 @@ fn enable_dnscache() {
.status();
}
/// True if the backup map has at least one real upstream (non-loopback, non-stub).
#[cfg(any(windows, test))]
fn backup_has_real_upstream_windows(
interfaces: &std::collections::HashMap<String, WindowsInterfaceDns>,
) -> bool {
interfaces
.values()
.any(|iface| iface.servers.iter().any(|s| !is_loopback_or_stub(s)))
}
#[cfg(windows)]
fn install_windows() -> Result<(), String> {
let mut interfaces = get_windows_interfaces()?;
if interfaces.is_empty() {
return Err("no active network interfaces found".to_string());
}
@@ -538,9 +625,30 @@ fn install_windows() -> Result<(), String> {
std::fs::create_dir_all(parent)
.map_err(|e| format!("failed to create {}: {}", parent.display(), e))?;
}
// Preserve an existing useful backup rather than overwriting it with
// numa-managed state (which would be self-referential after uninstall).
let existing: Option<std::collections::HashMap<String, WindowsInterfaceDns>> =
std::fs::read_to_string(&path)
.ok()
.and_then(|json| serde_json::from_str(&json).ok());
let has_useful_existing = existing
.as_ref()
.map(backup_has_real_upstream_windows)
.unwrap_or(false);
if has_useful_existing {
eprintln!(" Existing DNS backup preserved at {}", path.display());
} else {
// Filter loopback/stub addresses before saving so a fresh backup
// captured from already-numa-managed state isn't self-referential.
for iface in interfaces.values_mut() {
iface.servers.retain(|s| !is_loopback_or_stub(s));
}
let json = serde_json::to_string_pretty(&interfaces)
.map_err(|e| format!("failed to serialize backup: {}", e))?;
std::fs::write(&path, json).map_err(|e| format!("failed to write backup: {}", e))?;
}
for name in interfaces.keys() {
let status = std::process::Command::new("netsh")
@@ -570,7 +678,10 @@ fn install_windows() -> Result<(), String> {
let needs_reboot = disable_dnscache()?;
register_autostart();
eprintln!();
if !has_useful_existing {
eprintln!(" Original DNS saved to {}", path.display());
}
eprintln!(" Run 'numa uninstall' to restore.\n");
if needs_reboot {
eprintln!(" *** Reboot required. Numa will start automatically. ***\n");
@@ -754,27 +865,60 @@ fn get_dns_servers(service: &str) -> Result<Vec<String>, String> {
}
}
/// True if the backup map has at least one real upstream (non-loopback, non-stub).
/// An all-loopback backup is self-referential — restoring it is a no-op.
#[cfg(any(target_os = "macos", test))]
fn backup_has_real_upstream_macos(
servers: &std::collections::HashMap<String, Vec<String>>,
) -> bool {
servers
.values()
.any(|list| list.iter().any(|s| !is_loopback_or_stub(s)))
}
#[cfg(target_os = "macos")]
fn install_macos() -> Result<(), String> {
use std::collections::HashMap;
let services = get_network_services()?;
let dir = numa_data_dir();
std::fs::create_dir_all(&dir)
.map_err(|e| format!("failed to create {}: {}", dir.display(), e))?;
// If a useful backup already exists (at least one non-loopback upstream),
// preserve it — overwriting would destroy the original DNS state when
// re-installing on top of a numa-managed configuration.
let existing_backup: Option<HashMap<String, Vec<String>>> =
std::fs::read_to_string(backup_path())
.ok()
.and_then(|json| serde_json::from_str(&json).ok());
let has_useful_existing = existing_backup
.as_ref()
.map(backup_has_real_upstream_macos)
.unwrap_or(false);
if has_useful_existing {
eprintln!(
" Existing DNS backup preserved at {}",
backup_path().display()
);
} else {
// Capture fresh, filtering out loopback and stub addresses so we
// never record a self-referential backup.
let mut original: HashMap<String, Vec<String>> = HashMap::new();
for service in &services {
let servers: Vec<String> = get_dns_servers(service)?
.into_iter()
.filter(|s| !is_loopback_or_stub(s))
.collect();
original.insert(service.clone(), servers);
}
let json = serde_json::to_string_pretty(&original)
.map_err(|e| format!("failed to serialize backup: {}", e))?;
std::fs::write(backup_path(), json)
.map_err(|e| format!("failed to write backup: {}", e))?;
}
// Set DNS to 127.0.0.1 and add "numa" search domain for each service
for service in &services {
@@ -795,7 +939,10 @@ fn install_macos() -> Result<(), String> {
.status();
}
eprintln!();
if !has_useful_existing {
eprintln!(" Original DNS saved to {}", backup_path().display());
}
eprintln!(" Run 'sudo numa uninstall' to restore.\n");
Ok(())
@@ -990,14 +1137,23 @@ fn install_service_macos() -> Result<(), String> {
std::fs::write(PLIST_DEST, plist)
.map_err(|e| format!("failed to write {}: {}", PLIST_DEST, e))?;
// Modern launchctl API: explicitly tear down any existing in-memory
// state, then bootstrap fresh from the on-disk plist. The deprecated
// `load -w` returns exit 0 even when it cannot actually reload (label
// already in launchd state), silently leaving the daemon running a
// stale binary path after `numa install` rewrites the plist on disk —
// which is exactly what `brew upgrade numa` does.
let _ = std::process::Command::new("launchctl")
.args(["bootout", "system", PLIST_DEST])
.status();
let status = std::process::Command::new("launchctl")
.args(["bootstrap", "system", PLIST_DEST])
.status()
.map_err(|e| format!("failed to run launchctl: {}", e))?;
if !status.success() {
return Err("launchctl bootstrap failed".to_string());
}
// Wait for numa to be ready before redirecting DNS
@@ -1010,7 +1166,7 @@ fn install_service_macos() -> Result<(), String> {
if !api_up {
// Service failed to start — don't redirect DNS to a dead endpoint
let _ = std::process::Command::new("launchctl")
.args(["bootout", "system", PLIST_DEST])
.status();
return Err(
"numa service did not start (port 53 may be in use). Service unloaded.".to_string(),
@@ -1038,22 +1194,25 @@ fn uninstall_service_macos() -> Result<(), String> {
eprintln!(" warning: failed to restore system DNS: {}", e);
}
// Bootout the service from launchd's in-memory state BEFORE removing
// the plist. The modern API needs the file path as the specifier;
// doing this in the wrong order would leave the service loaded in
// memory until reboot. (Deprecated `unload -w` had the same issue.)
let bootout_status = std::process::Command::new("launchctl")
.args(["bootout", "system", PLIST_DEST])
.status();
if let Ok(s) = bootout_status {
if !s.success() {
eprintln!(
" warning: launchctl bootout returned non-zero (service may not have been loaded)"
);
}
}
// Remove plist so the service won't restart on boot
if let Err(e) = std::fs::remove_file(PLIST_DEST) {
if e.kind() != std::io::ErrorKind::NotFound {
return Err(format!("failed to remove {}: {}", PLIST_DEST, e));
}
}
@@ -1132,11 +1291,31 @@ fn install_linux() -> Result<(), String> {
.map_err(|e| format!("failed to create {}: {}", parent.display(), e))?;
}
// Back up current resolv.conf, but never overwrite a useful existing
// backup with a numa-managed file — that would leave uninstall with
// nothing to restore to.
let current = std::fs::read_to_string(resolv).ok();
let current_is_numa_managed = current
.as_deref()
.map(resolv_conf_is_numa_managed)
.unwrap_or(false);
let existing_backup_is_useful = std::fs::read_to_string(&backup)
.ok()
.as_deref()
.map(resolv_conf_has_real_upstream)
.unwrap_or(false);
if existing_backup_is_useful {
eprintln!(
" Existing resolv.conf backup preserved at {}",
backup.display()
);
} else if current_is_numa_managed {
eprintln!(" warning: /etc/resolv.conf is already numa-managed; no fresh backup written");
} else if let Some(content) = current.as_deref() {
std::fs::write(&backup, content)
.map_err(|e| format!("failed to backup /etc/resolv.conf: {}", e))?;
eprintln!(" Saved /etc/resolv.conf to {}", backup.display());
}
if resolv
@@ -1278,102 +1457,209 @@ fn run_systemctl(args: &[&str]) -> Result<(), String> {
// --- CA trust management ---
/// One Linux trust-store backend (Debian, Fedora pki, Arch p11-kit).
#[cfg(target_os = "linux")]
struct LinuxTrustStore {
name: &'static str,
anchor_dir: &'static str,
anchor_file: &'static str,
refresh_install: &'static [&'static str],
refresh_uninstall: &'static [&'static str],
}
// If you change this table, update tests/docker/install-trust.sh to match —
// it asserts the same paths/commands against real distro images.
#[cfg(target_os = "linux")]
const LINUX_TRUST_STORES: &[LinuxTrustStore] = &[
// Debian / Ubuntu / Mint
LinuxTrustStore {
name: "debian",
anchor_dir: "/usr/local/share/ca-certificates",
anchor_file: "numa-local-ca.crt",
refresh_install: &["update-ca-certificates"],
refresh_uninstall: &["update-ca-certificates", "--fresh"],
},
// Fedora / RHEL / CentOS / SUSE (p11-kit via update-ca-trust wrapper)
LinuxTrustStore {
name: "pki",
anchor_dir: "/etc/pki/ca-trust/source/anchors",
anchor_file: "numa-local-ca.pem",
refresh_install: &["update-ca-trust", "extract"],
refresh_uninstall: &["update-ca-trust", "extract"],
},
// Arch / Manjaro (raw p11-kit)
LinuxTrustStore {
name: "p11kit",
anchor_dir: "/etc/ca-certificates/trust-source/anchors",
anchor_file: "numa-local-ca.pem",
refresh_install: &["trust", "extract-compat"],
refresh_uninstall: &["trust", "extract-compat"],
},
];
#[cfg(target_os = "linux")]
fn detect_linux_trust_store() -> Option<&'static LinuxTrustStore> {
LINUX_TRUST_STORES
.iter()
.find(|s| std::path::Path::new(s.anchor_dir).is_dir())
}
fn trust_ca() -> Result<(), String> {
let ca_path = crate::data_dir().join(crate::tls::CA_FILE_NAME);
if !ca_path.exists() {
return Err("CA not generated yet — start numa first to create certificates".into());
}
#[cfg(target_os = "macos")]
let result = trust_ca_macos(&ca_path);
#[cfg(target_os = "linux")]
let result = trust_ca_linux(&ca_path);
#[cfg(windows)]
let result = trust_ca_windows(&ca_path);
#[cfg(not(any(target_os = "macos", target_os = "linux", windows)))]
let result = Err::<(), String>("CA trust not supported on this OS".to_string());
result
}
fn untrust_ca() -> Result<(), String> {
#[cfg(target_os = "macos")]
let result = untrust_ca_macos();
#[cfg(target_os = "linux")]
let result = untrust_ca_linux();
#[cfg(windows)]
let result = untrust_ca_windows();
#[cfg(not(any(target_os = "macos", target_os = "linux", windows)))]
let result = Ok::<(), String>(());
result
}
#[cfg(target_os = "macos")]
fn trust_ca_macos(ca_path: &std::path::Path) -> Result<(), String> {
let status = std::process::Command::new("security")
.args([
"add-trusted-cert",
"-d",
"-r",
"trustRoot",
"-k",
"/Library/Keychains/System.keychain",
])
.arg(ca_path)
.status()
.map_err(|e| format!("security: {}", e))?;
if !status.success() {
return Err("security add-trusted-cert failed".into());
}
eprintln!(" Trusted Numa CA in system keychain");
Ok(())
}
#[cfg(target_os = "macos")]
fn untrust_ca_macos() -> Result<(), String> {
if let Ok(out) = std::process::Command::new("security")
.args([
"find-certificate",
"-c",
crate::tls::CA_COMMON_NAME,
"-a",
"-Z",
"/Library/Keychains/System.keychain",
])
.output()
{
let stdout = String::from_utf8_lossy(&out.stdout);
for line in stdout.lines() {
if let Some(hash) = line.strip_prefix("SHA-1 hash: ") {
let hash = hash.trim();
let _ = std::process::Command::new("security")
.args([
"delete-certificate",
"-Z",
hash,
"/Library/Keychains/System.keychain",
])
.output();
}
}
}
eprintln!(" Removed Numa CA from system keychain");
Ok(())
}
#[cfg(target_os = "linux")]
fn trust_ca_linux(ca_path: &std::path::Path) -> Result<(), String> {
let store = detect_linux_trust_store().ok_or_else(|| {
let names: Vec<&str> = LINUX_TRUST_STORES.iter().map(|s| s.name).collect();
format!(
"no supported CA trust store found (tried: {}). \
Please report at https://github.com/razvandimescu/numa/issues",
names.join(", ")
)
})?;
let dest = std::path::Path::new(store.anchor_dir).join(store.anchor_file);
std::fs::copy(ca_path, &dest).map_err(|e| format!("copy CA to {}: {}", dest.display(), e))?;
run_refresh(store.name, store.refresh_install)?;
eprintln!(" Trusted Numa CA system-wide ({})", store.name);
Ok(())
}
#[cfg(target_os = "linux")]
fn untrust_ca_linux() -> Result<(), String> {
let Some(store) = detect_linux_trust_store() else {
return Ok(());
};
let dest = std::path::Path::new(store.anchor_dir).join(store.anchor_file);
match std::fs::remove_file(&dest) {
Ok(()) => {
let _ = run_refresh(store.name, store.refresh_uninstall);
eprintln!(" Removed Numa CA from system trust store ({})", store.name);
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(_) => {} // best-effort uninstall
}
Ok(())
}
#[cfg(target_os = "linux")]
fn run_refresh(store_name: &str, argv: &[&str]) -> Result<(), String> {
let (cmd, args) = argv
.split_first()
.expect("refresh command must be non-empty");
let status = std::process::Command::new(cmd)
.args(args)
.status()
.map_err(|e| format!("{} ({}): {}", cmd, store_name, e))?;
if !status.success() {
return Err(format!("{} ({}) failed", cmd, store_name));
}
Ok(())
}
#[cfg(windows)]
fn trust_ca_windows(ca_path: &std::path::Path) -> Result<(), String> {
let status = std::process::Command::new("certutil")
.args(["-addstore", "-f", "Root"])
.arg(ca_path)
.status()
.map_err(|e| format!("certutil: {}", e))?;
if !status.success() {
return Err("certutil -addstore Root failed (run as Administrator?)".into());
}
eprintln!(" Trusted Numa CA in Windows Root store");
Ok(())
}
#[cfg(windows)]
fn untrust_ca_windows() -> Result<(), String> {
let _ = std::process::Command::new("certutil")
.args(["-delstore", "Root", crate::tls::CA_COMMON_NAME])
.status();
eprintln!(" Removed Numa CA from Windows Root store");
Ok(())
}
@@ -1432,6 +1718,82 @@ Wireless LAN adapter Wi-Fi:
assert!(!result.contains("{{exe_path}}"));
}
#[test]
fn macos_backup_real_upstream_detection() {
use std::collections::HashMap;
let mut map: HashMap<String, Vec<String>> = HashMap::new();
// Empty backup → no real upstream
assert!(!backup_has_real_upstream_macos(&map));
// All-loopback backup → still no real upstream (the bug case)
map.insert("Wi-Fi".into(), vec!["127.0.0.1".into()]);
map.insert("Ethernet".into(), vec!["::1".into()]);
assert!(!backup_has_real_upstream_macos(&map));
// One real entry → useful
map.insert("Tailscale".into(), vec!["192.168.1.1".into()]);
assert!(backup_has_real_upstream_macos(&map));
}
#[test]
fn windows_backup_filters_loopback() {
use std::collections::HashMap;
let mut map: HashMap<String, WindowsInterfaceDns> = HashMap::new();
// Empty backup → no real upstream
assert!(!backup_has_real_upstream_windows(&map));
// All-loopback backup → still no real upstream (the bug case)
map.insert(
"Wi-Fi".into(),
WindowsInterfaceDns {
dhcp: false,
servers: vec!["127.0.0.1".into()],
},
);
map.insert(
"Ethernet".into(),
WindowsInterfaceDns {
dhcp: false,
servers: vec!["::1".into(), "0.0.0.0".into()],
},
);
assert!(!backup_has_real_upstream_windows(&map));
// One real entry alongside loopback → useful
map.insert(
"Ethernet 2".into(),
WindowsInterfaceDns {
dhcp: false,
servers: vec!["192.168.1.1".into()],
},
);
assert!(backup_has_real_upstream_windows(&map));
}
#[test]
fn resolv_conf_real_upstream_detection() {
let real = "nameserver 192.168.1.1\nsearch lan\n";
assert!(resolv_conf_has_real_upstream(real));
assert!(!resolv_conf_is_numa_managed(real));
let self_ref = "nameserver 127.0.0.1\nsearch numa\n";
assert!(!resolv_conf_has_real_upstream(self_ref));
assert!(resolv_conf_is_numa_managed(self_ref));
let numa_marker =
"# Generated by Numa — run 'sudo numa uninstall' to restore\nnameserver 127.0.0.1\nsearch numa\n";
assert!(resolv_conf_is_numa_managed(numa_marker));
let systemd_stub = "nameserver 127.0.0.53\noptions edns0\n";
assert!(!resolv_conf_has_real_upstream(systemd_stub));
let mixed = "nameserver 127.0.0.1\nnameserver 1.1.1.1\n";
assert!(resolv_conf_has_real_upstream(mixed));
assert!(!resolv_conf_is_numa_managed(mixed));
}
#[test]
fn parse_ipconfig_skips_disconnected() {
let sample = "\
@@ -1448,4 +1810,43 @@ Wireless LAN adapter Wi-Fi:
assert_eq!(result.len(), 1);
assert!(result.contains_key("Wi-Fi"));
}
#[test]
fn try_port53_advisory_addr_in_use() {
let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
let msg = try_port53_advisory("0.0.0.0:53", &err).expect("should advise on port 53");
assert!(msg.contains("cannot bind to"));
assert!(msg.contains("already in use"));
assert!(msg.contains("numa install"));
assert!(msg.contains("bind_addr"));
}
#[test]
fn try_port53_advisory_permission_denied() {
let err = std::io::Error::from(std::io::ErrorKind::PermissionDenied);
let msg = try_port53_advisory("0.0.0.0:53", &err).expect("should advise on port 53");
assert!(msg.contains("cannot bind to"));
assert!(msg.contains("permission denied"));
assert!(msg.contains("numa install"));
assert!(msg.contains("bind_addr"));
}
#[test]
fn try_port53_advisory_skips_non_53_ports() {
let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
assert!(try_port53_advisory("127.0.0.1:5354", &err).is_none());
assert!(try_port53_advisory("[::]:853", &err).is_none());
}
#[test]
fn try_port53_advisory_skips_unrelated_error_kinds() {
let err = std::io::Error::from(std::io::ErrorKind::NotFound);
assert!(try_port53_advisory("0.0.0.0:53", &err).is_none());
}
#[test]
fn try_port53_advisory_skips_malformed_bind_addr() {
let err = std::io::Error::from(std::io::ErrorKind::AddrInUse);
assert!(try_port53_advisory("not-an-address", &err).is_none());
}
}


@@ -13,6 +13,13 @@ use time::{Duration, OffsetDateTime};
const CA_VALIDITY_DAYS: i64 = 3650; // 10 years
const CERT_VALIDITY_DAYS: i64 = 365; // 1 year
/// Common Name on Numa's local CA. Referenced by trust-store helpers
/// (`security`, `certutil`) when locating the cert for removal.
pub const CA_COMMON_NAME: &str = "Numa Local CA";
/// Filename of the CA certificate inside the data dir.
pub const CA_FILE_NAME: &str = "ca.pem";
/// Collect all service + LAN peer names and regenerate the TLS cert.
pub fn regenerate_tls(ctx: &ServerCtx) {
let tls = match &ctx.tls_config {
@@ -24,7 +31,7 @@ pub fn regenerate_tls(ctx: &ServerCtx) {
names.extend(ctx.lan_peers.lock().unwrap().names());
let names: Vec<String> = names.into_iter().collect();
match build_tls_config(&ctx.proxy_tld, &names, Vec::new(), &ctx.data_dir) {
Ok(new_config) => {
tls.store(new_config);
info!("TLS cert regenerated for {} services", names.len());
@@ -33,20 +40,63 @@ pub fn regenerate_tls(ctx: &ServerCtx) {
}
}
/// Advisory for TLS-setup failures caused by a non-writable data dir;
/// `None` if not applicable so the caller can fall back to the raw error.
pub fn try_data_dir_advisory(err: &crate::Error, data_dir: &Path) -> Option<String> {
let io_err = err.downcast_ref::<std::io::Error>()?;
if io_err.kind() != std::io::ErrorKind::PermissionDenied {
return None;
}
let o = "\x1b[1;38;2;192;98;58m";
let r = "\x1b[0m";
Some(format!(
"
{o}Numa{r} — HTTPS proxy disabled: cannot write TLS CA to {}.
The data directory is not writable by the current user. Numa needs
to persist a local Certificate Authority there to serve .numa over
HTTPS. DNS resolution and plain-HTTP proxy continue to work.
Fix — pick one:
1. Install Numa as the system resolver (sets up a writable data dir):
sudo numa install (on Windows, run as Administrator)
2. Point data_dir at a path you can write.
Create ~/.config/numa/numa.toml with:
[server]
data_dir = \"/path/you/can/write\"
",
data_dir.display()
))
}
/// Build a TLS config with a cert covering all provided service names.
/// Wildcards under single-label TLDs (*.numa) are rejected by browsers,
/// so we list each service explicitly as a SAN.
/// `alpn` is advertised in the TLS ServerHello — pass empty for the proxy
/// (which accepts any ALPN), or `[b"dot"]` for DoT (RFC 7858 §3.2).
/// `data_dir` is where the CA material is stored — taken from
/// `[server] data_dir` in numa.toml (defaults to `crate::data_dir()`).
pub fn build_tls_config(
tld: &str,
service_names: &[String],
alpn: Vec<Vec<u8>>,
data_dir: &Path,
) -> crate::Result<Arc<ServerConfig>> {
let (ca_cert, ca_key) = ensure_ca(data_dir)?;
let (cert_chain, key) = generate_service_cert(&ca_cert, &ca_key, tld, service_names)?; let (cert_chain, key) = generate_service_cert(&ca_cert, &ca_key, tld, service_names)?;
// Ensure a crypto provider is installed (rustls needs one) // Ensure a crypto provider is installed (rustls needs one)
let _ = rustls::crypto::ring::default_provider().install_default(); let _ = rustls::crypto::ring::default_provider().install_default();
let config = ServerConfig::builder() let mut config = ServerConfig::builder()
.with_no_client_auth() .with_no_client_auth()
.with_single_cert(cert_chain, key)?; .with_single_cert(cert_chain, key)?;
config.alpn_protocols = alpn;
info!( info!(
"TLS configured for {} .{} domains", "TLS configured for {} .{} domains",
@@ -58,7 +108,7 @@ pub fn build_tls_config(tld: &str, service_names: &[String]) -> crate::Result<Ar
fn ensure_ca(dir: &Path) -> crate::Result<(rcgen::Certificate, KeyPair)> {
let ca_key_path = dir.join("ca.key");
let ca_cert_path = dir.join(CA_FILE_NAME);
if ca_key_path.exists() && ca_cert_path.exists() {
let key_pem = std::fs::read_to_string(&ca_key_path)?;
@@ -77,7 +127,7 @@ fn ensure_ca(dir: &Path) -> crate::Result<(rcgen::Certificate, KeyPair)> {
let mut params = CertificateParams::default();
params
.distinguished_name
.push(DnType::CommonName, CA_COMMON_NAME);
params.is_ca = IsCa::Ca(BasicConstraints::Unconstrained);
params.key_usages = vec![KeyUsagePurpose::KeyCertSign, KeyUsagePurpose::CrlSign];
params.not_before = OffsetDateTime::now_utc();
@@ -154,3 +204,33 @@ fn generate_service_cert(
Ok((vec![cert_der, ca_der], key_der))
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
#[test]
fn try_data_dir_advisory_permission_denied() {
let err: crate::Error =
Box::new(std::io::Error::from(std::io::ErrorKind::PermissionDenied));
let path = PathBuf::from("/usr/local/var/numa");
let msg = try_data_dir_advisory(&err, &path).expect("should advise");
assert!(msg.contains("HTTPS proxy disabled"));
assert!(msg.contains("/usr/local/var/numa"));
assert!(msg.contains("numa install"));
assert!(msg.contains("data_dir"));
}
#[test]
fn try_data_dir_advisory_skips_other_io_kinds() {
let err: crate::Error = Box::new(std::io::Error::from(std::io::ErrorKind::NotFound));
assert!(try_data_dir_advisory(&err, &PathBuf::from("/x")).is_none());
}
#[test]
fn try_data_dir_advisory_skips_non_io_errors() {
let err: crate::Error = "rcgen failure".into();
assert!(try_data_dir_advisory(&err, &PathBuf::from("/x")).is_none());
}
}
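The advisory gate above hinges on downcasting the crate's boxed error to `std::io::Error` and checking its kind. A minimal standalone sketch of that pattern — `BoxedError` and `is_permission_denied` are hypothetical stand-ins for the crate's `crate::Error` alias and `try_data_dir_advisory`'s first two checks, assuming the alias is a boxed error trait object:

```rust
use std::error::Error;
use std::io;

// Stand-in for the crate's `crate::Error` alias (assumed boxed trait object).
type BoxedError = Box<dyn Error + Send + Sync>;

// Mirrors the gate in try_data_dir_advisory: only a PermissionDenied
// io::Error qualifies; anything else falls through to the raw error.
fn is_permission_denied(err: &BoxedError) -> bool {
    err.downcast_ref::<io::Error>()
        .map(|e| e.kind() == io::ErrorKind::PermissionDenied)
        .unwrap_or(false)
}

fn main() {
    let denied: BoxedError = Box::new(io::Error::from(io::ErrorKind::PermissionDenied));
    let not_found: BoxedError = Box::new(io::Error::from(io::ErrorKind::NotFound));
    let non_io: BoxedError = "rcgen failure".into();

    assert!(is_permission_denied(&denied));
    assert!(!is_permission_denied(&not_found)); // other io kinds fall through
    assert!(!is_permission_denied(&non_io));    // non-io errors fall through
}
```

This is why the unit tests cover exactly three cases: the matching kind, a different io kind, and a non-io error.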

tests/docker/install-trust.sh Executable file

@@ -0,0 +1,123 @@
#!/usr/bin/env bash
#
# Cross-distro CA trust contract test for issue #35.
#
# Runs the exact shell commands `src/system_dns.rs::trust_ca_linux` would run
# on each Linux trust-store family (Debian, Fedora pki, Arch p11-kit), and
# asserts the certificate ends up in (and is removed from) the system bundle.
#
# This is a contract test, not an integration test: it doesn't drive the Rust
# code (that would need systemd-in-container). It verifies the assumptions in
# `LINUX_TRUST_STORES` against the real distro behavior. If you change that
# table in src/system_dns.rs, update the per-distro cases below to match.
#
# Requirements: docker, openssl (host).
# Usage: ./tests/docker/install-trust.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
# Self-signed CA fixture, mounted into each container as ca.pem.
# basicConstraints=CA:TRUE is required — without it, Debian's
# update-ca-certificates silently skips the cert during bundle build.
FIXTURE_DIR=$(mktemp -d)
trap 'rm -rf "$FIXTURE_DIR"' EXIT
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
-keyout "$FIXTURE_DIR/ca.key" \
-out "$FIXTURE_DIR/ca.pem" \
-subj "/CN=Numa Local CA Test $(date +%s)" \
-addext "basicConstraints=critical,CA:TRUE" \
-addext "keyUsage=critical,keyCertSign,cRLSign" >/dev/null 2>&1
# Distro bundles store certs differently — Debian writes raw PEM only,
# Fedora prepends "# CN" comment headers, Arch via extract-compat is
# raw PEM. To detect cert presence uniformly we grep for a deterministic
# substring of the base64 body (first base64 line is unique per cert).
CERT_TAG=$(sed -n '2p' "$FIXTURE_DIR/ca.pem")
PASSED=0; FAILED=0
run_case() {
local distro="$1"; shift
local image="$1"; shift
local platform="$1"; shift
local script="$1"
printf "── %s (%s) ──\n" "$distro" "$image"
if docker run --rm \
--platform "$platform" \
--security-opt seccomp=unconfined \
-e CERT_TAG="$CERT_TAG" \
-e DEBIAN_FRONTEND=noninteractive \
-v "$FIXTURE_DIR/ca.pem:/fixture/ca.pem:ro" \
"$image" bash -c "$script"; then
printf "${GREEN}✓${RESET} %s\n\n" "$distro"
PASSED=$((PASSED + 1))
else
printf "${RED}✗${RESET} %s\n\n" "$distro"
FAILED=$((FAILED + 1))
fi
}
# Debian / Ubuntu / Mint — anchor: /usr/local/share/ca-certificates/*.crt
run_case "debian" "debian:stable" "linux/amd64" '
set -e
apt-get update -qq
apt-get install -qq -y ca-certificates >/dev/null
install -m 0644 /fixture/ca.pem /usr/local/share/ca-certificates/numa-local-ca.crt
update-ca-certificates >/dev/null 2>&1
grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt
echo " install: cert present in bundle"
rm /usr/local/share/ca-certificates/numa-local-ca.crt
update-ca-certificates --fresh >/dev/null 2>&1
if grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt; then
echo " uninstall: cert STILL present (regression)" >&2
exit 1
fi
echo " uninstall: cert removed from bundle"
'
# Fedora / RHEL / CentOS / SUSE — anchor: /etc/pki/ca-trust/source/anchors/*.pem
run_case "fedora" "fedora:latest" "linux/amd64" '
set -e
dnf install -q -y ca-certificates >/dev/null
install -m 0644 /fixture/ca.pem /etc/pki/ca-trust/source/anchors/numa-local-ca.pem
update-ca-trust extract
grep -q "$CERT_TAG" /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
echo " install: cert present in bundle"
rm /etc/pki/ca-trust/source/anchors/numa-local-ca.pem
update-ca-trust extract
if grep -q "$CERT_TAG" /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem; then
echo " uninstall: cert STILL present (regression)" >&2
exit 1
fi
echo " uninstall: cert removed from bundle"
'
# Arch / Manjaro — anchor: /etc/ca-certificates/trust-source/anchors/*.pem
# archlinux:latest is x86_64-only; --platform forces emulation on Apple Silicon.
run_case "arch" "archlinux:latest" "linux/amd64" '
set -e
# pacman 7+ filters syscalls in its own sandbox; disable for Rosetta/qemu emulation.
sed -i "s/^#DisableSandboxSyscalls/DisableSandboxSyscalls/" /etc/pacman.conf
pacman -Sy --noconfirm --needed ca-certificates p11-kit >/dev/null 2>&1
install -m 0644 /fixture/ca.pem /etc/ca-certificates/trust-source/anchors/numa-local-ca.pem
trust extract-compat
grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt
echo " install: cert present in bundle"
rm /etc/ca-certificates/trust-source/anchors/numa-local-ca.pem
trust extract-compat
if grep -q "$CERT_TAG" /etc/ssl/certs/ca-certificates.crt; then
echo " uninstall: cert STILL present (regression)" >&2
exit 1
fi
echo " uninstall: cert removed from bundle"
'
printf "── summary ──\n"
printf " ${GREEN}passed${RESET}: %d\n" "$PASSED"
printf " ${RED}failed${RESET}: %d\n" "$FAILED"
[ "$FAILED" -eq 0 ]
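The `CERT_TAG` trick in the script — grep for line 2 of the PEM, the first base64 line of the body — works because each distro's bundle contains the raw PEM body verbatim, whatever headers it prepends. A toy sketch of the idea in Rust; `cert_tag` and the truncated `PEM` fixture are illustrative, not project code:

```rust
// Hypothetical PEM fixture — only the line structure matters here.
const PEM: &str = "-----BEGIN CERTIFICATE-----\nMIIBszCCAVmgAwIBAgIU\nAAAA\n-----END CERTIFICATE-----\n";

// Equivalent of the script's `sed -n '2p' ca.pem`: the first base64 line
// of the body, unique per certificate (it encodes the serial and subject).
fn cert_tag(pem: &str) -> &str {
    pem.lines().nth(1).unwrap_or("")
}

fn main() {
    let tag = cert_tag(PEM);
    // Debian-style bundle: raw PEM concatenation.
    let debian_bundle = PEM.to_string();
    // Fedora-style bundle: "# CN" comment header prepended before the PEM.
    let fedora_bundle = format!("# Numa Local CA\n{PEM}");
    assert!(debian_bundle.contains(tag));
    assert!(fedora_bundle.contains(tag));
}
```

Matching on a body line rather than the `BEGIN CERTIFICATE` marker is what makes the check uniform across the three trust-store families.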

tests/docker/smoke-arch.sh Executable file

@@ -0,0 +1,147 @@
#!/usr/bin/env bash
#
# Arch Linux compatibility smoke test.
#
# Builds numa from source inside an archlinux:latest container, runs it
# in forward mode on port 5354, and verifies a single DNS query returns
# an A record. Validates the "Arch compatible" claim end-to-end before
# release announcements.
#
# Dogfooding: the test numa forwards to the host's running numa via
# host.docker.internal (Docker Desktop's host gateway). This avoids the
# Docker NAT/UDP issues with public resolvers and exercises the realistic
# numa-on-numa shape. Requires the host to be running numa on port 53.
#
# First run is slow (~8-12 min): image pull + pacman + cold cargo build.
# No caching across runs.
#
# Requirements: docker, host running numa on 0.0.0.0:53
# Usage: ./tests/docker/smoke-arch.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
# Precondition: the test numa-on-arch forwards to the host numa as its
# upstream (dogfood pattern). Fail fast with a clear error if there is
# no working DNS on the host, rather than letting the dig inside the
# container time out with "deadline has elapsed".
if ! dig @127.0.0.1 google.com A +short +time=1 +tries=1 >/dev/null 2>&1; then
printf "${RED}error:${RESET} host numa is not answering on 127.0.0.1:53\n" >&2
echo " This test forwards to the host numa via host.docker.internal." >&2
echo " Start numa on the host first (sudo numa install), then rerun." >&2
exit 1
fi
echo "── building + running numa on archlinux:latest ──"
echo " (first run is slow: image pull + pacman + cold cargo build, ~8-12 min)"
echo
docker run --rm \
--platform linux/amd64 \
--security-opt seccomp=unconfined \
-v "$PWD:/src:ro" \
-v numa-arch-cargo:/root/.cargo \
-v numa-arch-target:/work/target \
archlinux:latest bash -c '
set -e
# pacman 7+ filters syscalls in its own sandbox; disable for Rosetta/qemu
sed -i "s/^#DisableSandboxSyscalls/DisableSandboxSyscalls/" /etc/pacman.conf
echo "── pacman: installing build + runtime deps ──"
pacman -Sy --noconfirm --needed rust gcc pkgconf cmake make perl bind 2>&1 | tail -3
echo
# Copy source to a writable workdir, skipping target/ + .git so we
# do not pull in the host (macOS) build artifacts.
mkdir -p /work
tar -C /src --exclude=./target --exclude=./.git -cf - . | tar -C /work -xf -
cd /work
echo "── cargo build --release --locked ──"
cargo build --release --locked 2>&1 | tail -5
echo
# Dogfood: forward to the host numa via host.docker.internal.
# numa parses upstream.address as a literal SocketAddr, so we resolve
# the hostname to an IPv4 address first (force v4 — getent hosts may
# return IPv6 first, and IPv6 addresses need bracketed addr:port form).
HOST_IP=$(getent ahostsv4 host.docker.internal | awk "/STREAM/ {print \$1; exit}")
if [ -z "$HOST_IP" ]; then
echo " ✗ could not resolve host.docker.internal to IPv4 (not on Docker Desktop?)"
exit 1
fi
echo "── starting numa on :5354 (forward to host numa at $HOST_IP:53) ──"
# Intentionally NOT setting [server] data_dir — we want to exercise the
# default code path (data_dir() → daemon_data_dir() → /var/lib/numa) so
# the FHS-path assertion below verifies the live wiring, not just the
# unit-tested helper.
cat > /tmp/numa.toml <<EOF
[server]
bind_addr = "127.0.0.1:5354"
api_port = 5381
[upstream]
mode = "forward"
address = "$HOST_IP"
port = 53
EOF
./target/release/numa /tmp/numa.toml > /tmp/numa.log 2>&1 &
NUMA_PID=$!
# Poll for readiness — numa is ready when it answers a query
READY=0
for i in 1 2 3 4 5 6 7 8; do
sleep 1
if dig @127.0.0.1 -p 5354 google.com A +short +time=1 +tries=1 2>/dev/null \
| grep -qE "^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$"; then
READY=1
break
fi
done
if [ "$READY" -ne 1 ]; then
echo " ✗ numa did not return an A record after 8s"
echo " numa log:"
cat /tmp/numa.log
kill $NUMA_PID 2>/dev/null || true
exit 1
fi
echo "── dig @127.0.0.1 -p 5354 google.com A ──"
ANSWER=$(dig @127.0.0.1 -p 5354 google.com A +short +time=2 +tries=1)
echo "$ANSWER" | sed "s/^/ /"
kill $NUMA_PID 2>/dev/null || true
# FHS path assertion: the default data dir on Linux must be /var/lib/numa
# (not the legacy /usr/local/var/numa). The CA cert generated at startup
# is the canonical proof that numa wrote to the right place.
echo
echo "── FHS path check ──"
if [ -f /var/lib/numa/ca.pem ]; then
echo " ✓ CA cert at /var/lib/numa/ca.pem (FHS path)"
else
echo " ✗ CA cert NOT at /var/lib/numa/ca.pem"
echo " ls /var/lib/numa/:"
ls -la /var/lib/numa/ 2>&1 | sed "s/^/ /"
echo " ls /usr/local/var/numa/:"
ls -la /usr/local/var/numa/ 2>&1 | sed "s/^/ /"
exit 1
fi
if [ -e /usr/local/var/numa ]; then
echo " ✗ legacy path /usr/local/var/numa unexpectedly exists on a fresh container"
exit 1
fi
echo " ✓ legacy path /usr/local/var/numa absent (fresh install used FHS)"
echo
echo " ✓ numa built, ran, answered a forward query, and used the FHS data dir on Arch"
'
echo
printf "${GREEN}── smoke-arch passed ──${RESET}\n"
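The script resolves `host.docker.internal` to an IPv4 literal before writing the config because, as the comment notes, numa parses `upstream.address` as a literal `SocketAddr`. A small sketch of that constraint using only the standard library (the addresses are examples):

```rust
use std::net::SocketAddr;

fn main() {
    // A joined "addr:port" string must be a literal socket address —
    // hostnames do not parse, which is why the script resolves first.
    assert!("192.168.65.2:53".parse::<SocketAddr>().is_ok());
    assert!("host.docker.internal:53".parse::<SocketAddr>().is_err());

    // IPv6 literals need the bracketed form; the bare form is ambiguous
    // (which ':' separates the port?) and is rejected.
    assert!("[::1]:53".parse::<SocketAddr>().is_ok());
    assert!("::1:53".parse::<SocketAddr>().is_err());
}
```

This is also why the script forces `getent ahostsv4`: an IPv6-first answer would need the bracketed `[addr]:port` form.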

tests/docker/smoke-port53.sh Executable file

@@ -0,0 +1,138 @@
#!/usr/bin/env bash
#
# Port-53 conflict advisory integration test.
#
# Builds numa from source inside a debian:bookworm container, pre-binds
# port 53 with a UDP socket, then runs numa bare (default bind_addr
# 0.0.0.0:53). Verifies:
# - process exits with code 1
# - stderr contains the advisory ("cannot bind to")
# - stderr contains both fix suggestions ("numa install", "bind_addr")
#
# This is the end-to-end test for the fix in:
# src/main.rs — AddrInUse match arm → eprint advisory + process::exit(1)
#
# No systemd-resolved needed — the conflict is simulated by a Python
# UDP socket held open before numa starts.
#
# Requirements: docker
# Usage: ./tests/docker/smoke-port53.sh
set -euo pipefail
cd "$(dirname "$0")/../.."
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
pass() { printf "  ${GREEN}✓${RESET} %s\n" "$1"; }
fail() { printf "  ${RED}✗${RESET} %s\n" "$1"; printf "    %s\n" "$2"; FAILED=$((FAILED+1)); }
FAILED=0
echo "── smoke-port53: building + testing numa on debian:bookworm ──"
echo " (first run is slow: image pull + cold cargo build, ~5-8 min)"
echo
OUTPUT=$(docker run --rm \
--platform linux/amd64 \
-v "$PWD:/src:ro" \
-v numa-port53-cargo:/root/.cargo \
-v numa-port53-target:/work/target \
debian:bookworm bash -c '
set -e
apt-get update -qq && apt-get install -y -qq curl build-essential python3 2>&1 | tail -3
# Install rustup if not already in the cargo cache volume
if ! command -v cargo &>/dev/null; then
curl -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal --quiet
fi
. "$HOME/.cargo/env"
# Copy source to a writable workdir
mkdir -p /work
tar -C /src --exclude=./target --exclude=./.git -cf - . | tar -C /work -xf -
cd /work
echo "── cargo build --release --locked ──"
cargo build --release --locked 2>&1 | tail -5
echo
# Write the holder script to a file to avoid quoting hell.
# Holds port 53 until killed — no sleep race.
cat > /tmp/hold53.py << '"'"'PYEOF'"'"'
import socket, signal
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 0)
s.bind(("", 53))
signal.pause()
PYEOF
python3 /tmp/hold53.py &
HOLDER_PID=$!
# Verify the holder is actually up before proceeding
sleep 0.3
if ! kill -0 $HOLDER_PID 2>/dev/null; then
echo "holder_failed=1"
exit 1
fi
echo "── running numa with port 53 already bound ──"
# timeout 5: guards against numa not exiting (advisory not fired, bug present)
# Capture stderr to a file so the exit code is not clobbered by || or $()
set +e
timeout 5 ./target/release/numa > /tmp/numa-stderr.txt 2>&1
EXIT_CODE=$?
set -e
STDERR=$(cat /tmp/numa-stderr.txt)
kill $HOLDER_PID 2>/dev/null || true
echo "exit_code=$EXIT_CODE"
printf "%s" "$STDERR" | sed "s/^/ numa: /"
' 2>&1)
echo "$OUTPUT"
echo
echo "── assertions ──"
if echo "$OUTPUT" | grep -q "holder_failed=1"; then
echo " SETUP FAILED: could not pre-bind port 53 inside container"
exit 1
fi
EXIT_CODE=$(echo "$OUTPUT" | grep '^exit_code=' | cut -d= -f2)
if [ "${EXIT_CODE:-}" = "1" ]; then
pass "exits with code 1"
else
fail "exits with code 1" "got: exit_code=${EXIT_CODE:-<missing>}"
fi
if echo "$OUTPUT" | grep -q "cannot bind to"; then
pass "advisory printed to stderr"
else
fail "advisory printed to stderr" "stderr did not contain 'cannot bind to'"
fi
if echo "$OUTPUT" | grep -q "numa install"; then
pass "advisory offers 'sudo numa install'"
else
fail "advisory offers 'sudo numa install'" "not found in output"
fi
if echo "$OUTPUT" | grep -q "bind_addr"; then
pass "advisory offers non-privileged port alternative"
else
fail "advisory offers non-privileged port alternative" "'bind_addr' not found in output"
fi
echo
if [ "$FAILED" -eq 0 ]; then
printf "${GREEN}── smoke-port53 passed ──${RESET}\n"
exit 0
else
printf "${RED}── smoke-port53 failed ($FAILED assertion(s)) ──${RESET}\n"
exit 1
fi
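The condition this test exercises — a second bind on an already-held UDP port failing with `AddrInUse` — is easy to reproduce in a few lines of std-only Rust (on an unprivileged port; this mirrors the shape of the `main.rs` match arm, not its actual code):

```rust
use std::io::ErrorKind;
use std::net::UdpSocket;

fn main() {
    // Holder socket, playing the role of the Python hold53.py helper.
    let holder = UdpSocket::bind("127.0.0.1:0").expect("first bind");
    let addr = holder.local_addr().unwrap();

    // Second bind on the same addr is what numa hits on port 53;
    // the advisory fires on exactly this ErrorKind.
    match UdpSocket::bind(addr) {
        Err(e) if e.kind() == ErrorKind::AddrInUse => println!("AddrInUse detected"),
        other => panic!("expected AddrInUse, got {other:?}"),
    }
}
```

The container test adds what this sketch cannot check: that the advisory text actually reaches stderr and the process exits 1.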


@@ -404,6 +404,241 @@ check "Cache flushed" \
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
sleep 1
# ---- Suite 5: DNS-over-TLS (RFC 7858) ----
echo ""
echo "╔══════════════════════════════════════════╗"
echo "║ Suite 5: DNS-over-TLS (RFC 7858) ║"
echo "╚══════════════════════════════════════════╝"
if ! command -v kdig >/dev/null 2>&1; then
printf " ${DIM}skipped — install 'knot' for kdig${RESET}\n"
elif ! command -v openssl >/dev/null 2>&1; then
printf " ${DIM}skipped — openssl not found${RESET}\n"
else
DOT_PORT=8853
DOT_CERT=/tmp/numa-integration-dot.crt
DOT_KEY=/tmp/numa-integration-dot.key
# Generate a test cert mirroring production self_signed_tls SAN shape
# (*.numa wildcard + explicit numa.numa apex).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
-keyout "$DOT_KEY" -out "$DOT_CERT" \
-subj "/CN=Numa .numa services" \
-addext "subjectAltName=DNS:*.numa,DNS:numa.numa" \
>/dev/null 2>&1
# Suite 5 uses a local zone so it's upstream-independent — the point is
# to exercise the DoT transport layer (handshake, ALPN, framing,
# persistent connections), not re-test recursive resolution.
cat > "$CONFIG" << CONF
[server]
bind_addr = "127.0.0.1:$PORT"
api_port = $API_PORT
[upstream]
mode = "forward"
address = "127.0.0.1"
port = 65535
[cache]
max_entries = 10000
[blocking]
enabled = false
[proxy]
enabled = false
[dot]
enabled = true
port = $DOT_PORT
bind_addr = "127.0.0.1"
cert_path = "$DOT_CERT"
key_path = "$DOT_KEY"
[[zones]]
domain = "dot-test.example"
record_type = "A"
value = "10.0.0.1"
ttl = 60
CONF
RUST_LOG=info "$BINARY" "$CONFIG" > "$LOG" 2>&1 &
NUMA_PID=$!
sleep 4
if ! kill -0 "$NUMA_PID" 2>/dev/null; then
FAILED=$((FAILED + 1))
printf "  ${RED}✗${RESET} DoT startup\n"
printf " ${DIM}%s${RESET}\n" "$(tail -5 "$LOG")"
else
echo ""
echo "=== Listener ==="
check "DoT bound on 127.0.0.1:$DOT_PORT" \
"DoT listening on 127.0.0.1:$DOT_PORT" \
"$(grep 'DoT listening' "$LOG")"
KDIG="kdig @127.0.0.1 -p $DOT_PORT +tls +tls-ca=$DOT_CERT +tls-hostname=numa.numa +time=5 +retry=0"
echo ""
echo "=== Queries over DoT ==="
check "DoT local zone A record" \
"10.0.0.1" \
"$($KDIG +short dot-test.example A 2>/dev/null)"
# +keepopen reuses one TLS connection for multiple queries — tests
# persistent connection handling. kdig applies options left-to-right,
# so +short and +keepopen must come before the query specs.
check "DoT persistent connection (3 queries, 1 handshake)" \
"10.0.0.1" \
"$($KDIG +keepopen +short dot-test.example A dot-test.example A dot-test.example A 2>/dev/null | head -1)"
echo ""
echo "=== ALPN ==="
# Positive case: client offers "dot", server picks it.
ALPN_OK=$(echo "" | openssl s_client -connect "127.0.0.1:$DOT_PORT" \
-servername numa.numa -alpn dot -CAfile "$DOT_CERT" 2>&1 </dev/null || true)
check "DoT negotiates ALPN \"dot\"" \
"ALPN protocol: dot" \
"$ALPN_OK"
# Negative case: client offers only "h2", server must reject the
# handshake with no_application_protocol alert (cross-protocol
# confusion defense, RFC 7858bis §3.2).
if echo "" | openssl s_client -connect "127.0.0.1:$DOT_PORT" \
-servername numa.numa -alpn h2 -CAfile "$DOT_CERT" \
</dev/null >/dev/null 2>&1; then
ALPN_MISMATCH="handshake unexpectedly succeeded"
else
ALPN_MISMATCH="rejected"
fi
check "DoT rejects non-dot ALPN" \
"rejected" \
"$ALPN_MISMATCH"
fi
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
rm -f "$DOT_CERT" "$DOT_KEY"
fi
sleep 1
# ---- Suite 6: Proxy + DoT coexistence ----
echo ""
echo "╔══════════════════════════════════════════╗"
echo "║ Suite 6: Proxy + DoT Coexistence ║"
echo "╚══════════════════════════════════════════╝"
if ! command -v kdig >/dev/null 2>&1 || ! command -v openssl >/dev/null 2>&1; then
printf " ${DIM}skipped — needs kdig + openssl${RESET}\n"
else
DOT_PORT=8853
PROXY_HTTP_PORT=8080
PROXY_HTTPS_PORT=8443
NUMA_DATA=/tmp/numa-integration-data
# Fresh data dir so we generate a fresh CA for this suite. Path is set
# via [server] data_dir in the TOML below, not an env var — numa treats
# its config file as the single source of truth for all knobs.
rm -rf "$NUMA_DATA"
mkdir -p "$NUMA_DATA"
cat > "$CONFIG" << CONF
[server]
bind_addr = "127.0.0.1:$PORT"
api_port = $API_PORT
data_dir = "$NUMA_DATA"
[upstream]
mode = "forward"
address = "127.0.0.1"
port = 65535
[cache]
max_entries = 10000
[blocking]
enabled = false
[proxy]
enabled = true
port = $PROXY_HTTP_PORT
tls_port = $PROXY_HTTPS_PORT
tld = "numa"
bind_addr = "127.0.0.1"
[dot]
enabled = true
port = $DOT_PORT
bind_addr = "127.0.0.1"
[[zones]]
domain = "dot-test.example"
record_type = "A"
value = "10.0.0.1"
ttl = 60
CONF
RUST_LOG=info "$BINARY" "$CONFIG" > "$LOG" 2>&1 &
NUMA_PID=$!
sleep 4
if ! kill -0 "$NUMA_PID" 2>/dev/null; then
FAILED=$((FAILED + 1))
printf "  ${RED}✗${RESET} Startup with proxy + DoT\n"
printf " ${DIM}%s${RESET}\n" "$(tail -5 "$LOG")"
else
echo ""
echo "=== Both listeners ==="
check "DoT listener bound" \
"DoT listening on 127.0.0.1:$DOT_PORT" \
"$(grep 'DoT listening' "$LOG")"
check "HTTPS proxy listener bound" \
"HTTPS proxy listening on 127.0.0.1:$PROXY_HTTPS_PORT" \
"$(grep 'HTTPS proxy listening' "$LOG")"
PANIC_COUNT=$(grep -c 'panicked' "$LOG" 2>/dev/null || true)
check "No startup panics in log" \
"^0$" \
"$PANIC_COUNT"
echo ""
echo "=== DoT works with proxy enabled ==="
# Proxy's build_tls_config runs first and creates the CA in
# $NUMA_DATA_DIR. DoT self_signed_tls then loads the same CA and
# issues its own leaf cert. One CA trusts both listeners.
CA="$NUMA_DATA/ca.pem"
KDIG="kdig @127.0.0.1 -p $DOT_PORT +tls +tls-ca=$CA +tls-hostname=numa.numa +time=5 +retry=0"
check "DoT local zone A (with proxy on)" \
"10.0.0.1" \
"$($KDIG +short dot-test.example A 2>/dev/null)"
echo ""
echo "=== Proxy TLS works with DoT enabled ==="
# Proxy cert has SAN numa.numa (auto-added "numa" service). A
# successful handshake validates that the proxy's separate
# ServerConfig wasn't disturbed by DoT's own cert generation.
PROXY_TLS=$(echo "" | openssl s_client -connect "127.0.0.1:$PROXY_HTTPS_PORT" \
-servername numa.numa -CAfile "$CA" 2>&1 </dev/null || true)
check "Proxy HTTPS TLS handshake succeeds" \
"Verify return code: 0 (ok)" \
"$PROXY_TLS"
fi
kill "$NUMA_PID" 2>/dev/null || true
wait "$NUMA_PID" 2>/dev/null || true
rm -rf "$NUMA_DATA"
fi
# Summary
echo ""


@@ -0,0 +1,94 @@
#!/usr/bin/env bash
#
# Manual macOS CA trust contract test.
#
# Mirrors src/system_dns.rs::trust_ca_macos / untrust_ca_macos by running
# the same `security` shell commands against a fixture cert with a unique
# CN. Safe to run alongside a production numa install:
#
# - Test cert CN = "Numa Local CA Test <pid-ts>", always strictly longer
# than the production CN "Numa Local CA". `security find-certificate -c`
# does substring matching, so the test's search for $TEST_CN can never
# match the production cert (the search term is longer than the prod CN).
# - All deletes use `delete-certificate -Z <hash>`, which only touches the
# cert with that exact hash. Production and test certs have different
# hashes by construction (different key material), so the delete cannot
# reach the production cert even if a CN search somehow returned both.
#
# Mutates the System keychain (briefly). Cleans up on success or interrupt.
# Requires sudo for `security add-trusted-cert` and `delete-certificate`.
#
# Usage: ./tests/manual/install-trust-macos.sh
set -euo pipefail
if [[ "$OSTYPE" != darwin* ]]; then
echo "This test is macOS-only." >&2
exit 1
fi
GREEN="\033[32m"; RED="\033[31m"; RESET="\033[0m"
# Production constant from src/tls.rs::CA_COMMON_NAME — keep in sync.
PROD_CN="Numa Local CA"
KEYCHAIN="/Library/Keychains/System.keychain"
# Notice if production numa is already installed. We proceed regardless —
# see header for why coexistence is safe (unique CN + by-hash deletion).
if security find-certificate -c "$PROD_CN" "$KEYCHAIN" >/dev/null 2>&1; then
echo " note: production '$PROD_CN' detected — proceeding alongside (test cert can't touch it)"
echo
fi
# Unique CN ensures the test cert can never collide with production.
TEST_CN="Numa Local CA Test $$-$(date +%s)"
FIXTURE_DIR=$(mktemp -d)
cleanup() {
# Best-effort: remove any test certs by hash if still present.
if security find-certificate -c "$TEST_CN" "$KEYCHAIN" >/dev/null 2>&1; then
echo " cleanup: removing leftover test cert"
security find-certificate -c "$TEST_CN" -a -Z "$KEYCHAIN" 2>/dev/null \
| awk '/^SHA-1 hash:/ {print $NF}' \
| while read -r hash; do
sudo security delete-certificate -Z "$hash" "$KEYCHAIN" >/dev/null 2>&1 || true
done
fi
rm -rf "$FIXTURE_DIR"
}
trap cleanup EXIT
echo "── generating fixture CA ──"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
-keyout "$FIXTURE_DIR/ca.key" \
-out "$FIXTURE_DIR/ca.pem" \
-subj "/CN=$TEST_CN" \
-addext "basicConstraints=critical,CA:TRUE" \
-addext "keyUsage=critical,keyCertSign,cRLSign" >/dev/null 2>&1
echo " CN: $TEST_CN"
echo
echo "── trust step (mirrors trust_ca_macos) ──"
sudo security add-trusted-cert -d -r trustRoot -k "$KEYCHAIN" "$FIXTURE_DIR/ca.pem"
if security find-certificate -c "$TEST_CN" "$KEYCHAIN" >/dev/null 2>&1; then
printf "  ${GREEN}✓${RESET} test cert found in keychain\n"
else
printf "  ${RED}✗${RESET} test cert NOT found after add-trusted-cert\n"
exit 1
fi
echo
echo "── untrust step (mirrors untrust_ca_macos) ──"
security find-certificate -c "$TEST_CN" -a -Z "$KEYCHAIN" 2>/dev/null \
| awk '/^SHA-1 hash:/ {print $NF}' \
| while read -r hash; do
sudo security delete-certificate -Z "$hash" "$KEYCHAIN" >/dev/null
done
if security find-certificate -c "$TEST_CN" "$KEYCHAIN" >/dev/null 2>&1; then
printf "  ${RED}✗${RESET} test cert STILL present after delete (regression)\n"
exit 1
fi
printf "  ${GREEN}✓${RESET} test cert removed from keychain\n"
echo
printf "${GREEN}all checks passed${RESET}\n"
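The coexistence argument in the header — a strictly longer search term can never substring-match the shorter production CN — is just an asymmetry of substring containment. A tiny sketch; the test CN value is illustrative, not a real run's:

```rust
fn main() {
    // `security find-certificate -c` does substring matching on the CN.
    let prod_cn = "Numa Local CA";
    let test_cn = "Numa Local CA Test 12345-1700000000"; // unique per run

    // Searching for the longer test CN can never hit the shorter prod CN...
    assert!(!prod_cn.contains(test_cn));
    // ...even though the prod CN is a prefix of the test CN, so a search
    // for the prod CN *would* match both — hence all deletes go by hash.
    assert!(test_cn.contains(prod_cn));
}
```

That second assertion is why the script never deletes by CN: a CN search for the production name matches both certs, but a `-Z <hash>` delete touches exactly one.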