33 Commits

Author SHA1 Message Date
Razvan Dimescu
db6a105f77 Merge pull request #150 from razvandimescu/fix/refresh-honors-forwarding-rules
fix(cache): refresh honors forwarding rules (#147)
2026-04-25 18:26:47 +03:00
Razvan Dimescu
bf977595b6 Merge pull request #152 from gatozee/fix_title_alignment
fix: title alignment
2026-04-25 08:23:09 +03:00
Krtek Zee
63a2d26276 fix: title alignment 2026-04-24 17:42:32 -07:00
Razvan Dimescu
cfef4f4160 fix(cache): refresh honors forwarding rules (#147)
refresh_entry unconditionally queried the default upstream, so any
domain covered by a forwarding rule got re-resolved through the public
resolver once its cache entry hit NearExpiry or Stale. The resulting
NXDOMAIN/NODATA overwrote the good answer for at least cache.min_ttl
(60s default), persisting until restart. Match the precedence from
resolve_query: forwarding rule wins over recursive/default upstream.

Extract a_record_response() helper in testutil and migrate six call
sites — two regression tests here plus four adjacent tests using the
same boilerplate.
2026-04-24 19:03:19 +03:00
Razvan Dimescu
38ddb59e00 Merge pull request #149 from razvandimescu/fix/publish-aur-detached-head
ci(aur): attach to master after clone to avoid detached HEAD
2026-04-24 18:07:21 +03:00
Razvan Dimescu
441935af5a Merge pull request #148 from razvandimescu/fix/dashboard-cache
fix(api): Cache-Control: no-cache on dashboard HTML
2026-04-24 17:59:30 +03:00
Razvan Dimescu
d090e049ec ci(aur): attach to master after clone to avoid detached HEAD
aur.archlinux.org stopped advertising the HEAD symref around 2026-04-22
(`git ls-remote --symref` returns HEAD as a raw SHA, no 'ref:' line).
Fresh clones therefore land in detached HEAD, commits do not land on
any branch, and 'git push origin master' fails with:

  error: src refspec master does not match any

Every AUR publish run since has failed for this reason. Checking out
master explicitly after clone attaches the working copy to the branch
the push targets. refs/heads/master is still present on the remote, so
no other changes are needed.
2026-04-24 17:57:51 +03:00
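The detached-HEAD failure this commit fixes is easy to reproduce and verify in a scratch repo. A minimal sketch (repo names and paths are made up for the demo, not taken from the workflow):

```shell
# Scratch demo: a clone left in detached HEAD, then the fix from the
# commit above — an explicit checkout that attaches the working copy
# to master so later commits land on the branch the push targets.
set -euo pipefail
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master origin-repo
git -C origin-repo -c user.email=a@b.c -c user.name=a \
    commit -q --allow-empty -m init
git clone -q origin-repo aur-repo
# Simulate the missing HEAD symref: detach onto the raw SHA.
git -C aur-repo checkout -q "$(git -C aur-repo rev-parse HEAD)"
git -C aur-repo checkout -q master        # the fix
git -C aur-repo symbolic-ref --short HEAD # prints: master
```

Without the explicit checkout, `symbolic-ref` fails (no branch attached) and a subsequent `git push origin master` has nothing to push.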
Razvan Dimescu
4aa91a5236 fix(api): Cache-Control: no-cache on dashboard HTML
Browsers heuristically cached the dashboard page because the response
carried no Cache-Control header, so a numa upgrade on the daemon did
not surface updated PATH_DEFS (e.g. the UPSTREAM row added in v0.14.0)
until the user hard-reloaded. Force revalidation on every load.
Closes #144.
2026-04-24 17:51:14 +03:00
Razvan Dimescu
93f0ea7501 Merge pull request #145 from razvandimescu/docs/recipes
docs: lift user-facing guides to recipes/, drop dangling docs/ refs
2026-04-24 15:22:44 +03:00
Razvan Dimescu
f7f35b3424 docs: lift user-facing guides to recipes/, drop dangling docs/ refs
docs/ is gitignored; references to docs/implementation/*.md from public
source, configs, and packaging were dead links outside the maintainer
machine. Adds four recipes (README, dnsdist-front, doh-on-lan,
odoh-upstream) under top-level recipes/ and repoints existing pointers.

- numa.toml, packaging/client/{README.md,numa.toml}: point to
  recipes/odoh-upstream.md.
- src/{bootstrap_resolver,forward,serve}.rs: reference issue #122
  directly (module scope is broader than the ODoH-specific recipe).
- src/health.rs: drop the §-ref; iOS HealthInfo remains named as the
  canonical consumer.
2026-04-24 15:09:16 +03:00
Razvan Dimescu
3913d42319 Merge pull request #137 from razvandimescu/fix/soa-compression-roundtrip
fix(packet): parse SOA natively to stop malformed replies (#128)
2026-04-24 13:59:57 +03:00
Razvan Dimescu
e702f5861b Update README.md to remove outdated listing information
Removed section about listing on the public ecosystem and DNSCrypt's canonical list.
2026-04-23 09:39:34 +03:00
Razvan Dimescu
933643f2c7 Merge pull request #139 from razvandimescu/fix/odoh-relay-doc-path
docs(config): fix ODoH relay path in numa.toml example
2026-04-23 08:58:53 +03:00
Razvan Dimescu
96cf778bea docs(config): fix ODoH relay path in numa.toml example
The example in `numa.toml` pointed at `https://odoh-relay.numa.rs/proxy`,
but the relay only serves the ODoH endpoint at `/relay` (every other
reference in the tree — `src/config.rs` docs and tests, and
`packaging/client/numa.toml` — uses `/relay`). Users who copied the
example got `404 Not Found` on every query and SERVFAIL at the client.

Reported in #138.
2026-04-23 08:53:35 +03:00
Razvan Dimescu
2274151c17 fix(packet): parse SOA natively to stop malformed replies (#128)
SOA records were stored as opaque bytes (DnsRecord::UNKNOWN), so the
RFC 1035 §3.3.13 MNAME/RNAME name-compression pointers — offsets into
the upstream packet — were re-emitted verbatim. Once Numa applied its
own compression to surrounding names, those pointers landed on garbage
and clients rejected the reply ("malformed reply packet" in kdig).

Parse SOA via read_qname and write via write_qname, matching the
NS/CNAME/MX pattern. Adds the canonical-rdata arm in dnssec.rs for
RRSIG verification. Regression test round-trips a CNAME-chain response
with a compressed SOA in authority through hickory-proto strict parse.
2026-04-23 00:36:02 +03:00
Razvan Dimescu
c787de1548 chore: bump version to 0.14.2 2026-04-22 23:57:37 +03:00
Razvan Dimescu
e6e79273b9 Revert "chore: bump version to 0.15.0"
This reverts commit 3ec3b40830.
2026-04-22 23:57:28 +03:00
Razvan Dimescu
3ec3b40830 chore: bump version to 0.15.0 2026-04-22 23:50:20 +03:00
Razvan Dimescu
90fa79bc0f Merge pull request #135 from razvandimescu/fix/hedge-default-off
fix(upstream): default hedge_ms=0 to avoid silent 2x upstream query count
2026-04-22 23:49:15 +03:00
Razvan Dimescu
b8a125b598 fix(upstream): default hedge_ms=0 to avoid silent 2x upstream query count
Hedging fires a second query against the same upstream after the hedge
delay. It rescues packet loss and handshake stalls on flaky links, but
every lookup shows up twice at the provider — silently halving the
headroom for anyone on a quota'd upstream (NextDNS free tier, Control D,
paid Quad9).

Surfaced by #134 (bcookatpcsd), who saw every query duplicated on the
NextDNS dashboard with a single-address DoT upstream. Not a bug — the
feature doing what it says on the tin — but a surprising default.

Flipping the default to 0 makes hedging explicitly opt-in. Users who
want tail-latency rescue on flaky nets add `hedge_ms = 10` (or higher).
No config migration needed; no breaking changes to the API surface.

Also tightens the numa.toml comment so the trade-off is visible at
config time, not retroactively on a provider dashboard.
2026-04-22 23:30:55 +03:00
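The trade-off this commit describes reduces to a small model. A toy sketch (not Numa code; `first_reply_ms` is an invented stand-in for a slow link) of why a non-zero hedge delay doubles provider-side query counts:

```shell
# Toy model of hedging: if no answer arrives within hedge_ms, an
# identical second query fires at the same upstream — the provider
# bills two lookups for one client question.
hedge_ms=10
sent=0
send_query() { sent=$((sent + 1)); }  # stand-in for a real upstream send
first_reply_ms=250                    # hypothetical slow answer on a flaky link
send_query                            # attempt #1
if [ "$first_reply_ms" -gt "$hedge_ms" ]; then
  send_query                          # hedge: same query, sent again
fi
echo "queries billed upstream: $sent" # prints: queries billed upstream: 2
```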
Razvan Dimescu
bc30be94e7 Merge pull request #131 from razvandimescu/feat/packaging-client-docker
feat(packaging): ODoH client Docker deploy recipe
2026-04-22 23:11:50 +03:00
Razvan Dimescu
26b1cd5917 feat(packaging): ODoH client Docker deploy
Single-container docker-compose recipe for running numa in ODoH client
mode. Ships with a starter numa.toml pointing at odoh-relay.numa.rs
paired with Cloudflare's ODoH target — two independent operators with
distinct eTLD+1s, so the default passes numa's same-operator check.

Exposes :53 UDP+TCP for LAN clients and :5380 for the dashboard + REST
API. README covers prerequisites, deploy, verification, and the ODoH
privacy boundary (relay sees IP, target sees query, neither sees both).

Advertised alongside packaging/relay/ in the main README Docker section.
2026-04-22 18:05:46 +03:00
Razvan Dimescu
77d6d89f80 Merge pull request #130 from razvandimescu/docs/numa-toml-odoh-examples
docs(config): ODoH upstream examples with relay_ip/target_ip pinning
2026-04-22 17:20:19 +03:00
Razvan Dimescu
4fdd05f284 Merge pull request #132 from razvandimescu/chore/site-live-reload
chore(site): live-reload dev server
2026-04-22 17:17:37 +03:00
Razvan Dimescu
2e461ccc0f docs(config): add ODoH upstream examples with relay_ip/target_ip pinning
Complements the bootstrap resolver fix (#122, #126) by documenting the
ODoH knobs in the commented config template. Explains relay_ip/target_ip
as the way to prevent plain-DNS leaks of the relay/target hostnames via
the bootstrap resolver on cold boot when numa is its own system DNS.
2026-04-22 17:13:13 +03:00
Razvan Dimescu
bf84c44346 Merge pull request #133 from razvandimescu/chore/cargo-audit-rustls-webpki
chore: bump rustls-webpki to 0.103.13 (RUSTSEC-2026-0104)
2026-04-22 17:03:58 +03:00
Razvan Dimescu
df2062882c chore: bump rustls-webpki to 0.103.13 for RUSTSEC-2026-0104
Advisory published 2026-04-22: reachable panic in certificate revocation
list parsing. Patch is a lockfile-only bump — transitive via rustls, no
direct dep changes. Unblocks cargo audit in CI across all open PRs.
2026-04-22 16:42:10 +03:00
Razvan Dimescu
76dda89078 Merge pull request #129 from razvandimescu/chore/gitignore-claude
chore: gitignore .claude/ harness state
2026-04-22 16:39:56 +03:00
Razvan Dimescu
640b64bf7e chore(site): live-reload dev server via chokidar + browser-sync
Replaces the plain python3 http.server + one-shot make blog with a
watcher pipeline: chokidar regenerates HTML on MD/template changes,
browser-sync serves the site and reloads the browser on rendered-asset
changes. First run downloads both via npx; subsequent runs are instant.

Preflight checks for npx and pandoc. Port arg parsing is tolerant of
legacy --drafts flag ordering (drafts are always included now, since
that's what the dev loop actually wants).

Cleanup trap kills the watcher on exit so re-runs don't leave orphans.
2026-04-22 15:50:21 +03:00
Razvan Dimescu
5ba19e04c8 chore: gitignore local Claude Code harness state
.claude/ holds per-session harness files (settings.local.json, task
locks, worktree metadata). None of it belongs in the repo.
2026-04-22 15:49:58 +03:00
Razvan Dimescu
c98afafaa1 Merge pull request #127 from razvandimescu/refactor/bootstrap-btreemap
refactor(bootstrap): BTreeMap for overrides + simplify review
2026-04-21 18:41:49 +03:00
Razvan Dimescu
5cba02a6c8 refactor(bootstrap): BTreeMap for overrides + simplify review
- Switch overrides from HashMap to BTreeMap — deterministic iteration by
  type, drops the manual sort when logging.
- Rename the flat_map closure's inner `ips` to `addrs` to stop shadowing
  the outer Vec<String>.
- Trim the Suite 8 TEST-NET-1 comment to keep the "why" and drop
  mechanism narration.
- Drop a redundant sleep 1 after wait — wait already blocks on exit.
2026-04-21 18:37:35 +03:00
Razvan Dimescu
46a95d58aa Merge pull request #126 from razvandimescu/fix/self-resolver-loop
fix(bootstrap): route numa HTTPS via IP-literal bootstrap resolver (#122)
2026-04-21 17:52:51 +03:00
26 changed files with 725 additions and 72 deletions


@@ -126,6 +126,10 @@ jobs:
# ssh://aur@aur.archlinux.org/<package-name>.git
git clone ssh://aur@aur.archlinux.org/$AUR_PKGNAME.git aur-repo
# AUR's git server no longer advertises HEAD's symref, so clone
# lands in detached HEAD. Attach to master before committing.
git -C aur-repo checkout master
cp PKGBUILD aur-repo/
cd aur-repo

.gitignore

@@ -1,6 +1,7 @@
/target
/build-dir
CLAUDE.md
.claude/
docs/
site/blog/posts/
ios/

Cargo.lock

@@ -1547,7 +1547,7 @@ dependencies = [
[[package]]
name = "numa"
version = "0.14.1"
version = "0.14.2"
dependencies = [
"arc-swap",
"axum",
@@ -2130,9 +2130,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
version = "0.103.12"
version = "0.103.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8279bb85272c9f10811ae6a6c547ff594d6a7f3c6c6b02ee9726d1d0dcfcdd06"
checksum = "61c429a8649f110dddef65e2a5ad240f747e85f7758a6bccc7e5777bd33f756e"
dependencies = [
"aws-lc-rs",
"ring",


@@ -1,6 +1,6 @@
[package]
name = "numa"
version = "0.14.1"
version = "0.14.2"
authors = ["razvandimescu <razvan@dimescu.com>"]
edition = "2021"
description = "Portable DNS resolver in Rust — .numa local domains, ad blocking, developer overrides, DNS-over-HTTPS"


@@ -125,6 +125,10 @@ docker run -d --name numa --network host \
Multi-arch: `linux/amd64` and `linux/arm64`.
Turnkey compose recipes:
- [`packaging/client/`](packaging/client/) — ODoH client mode (anonymous DNS), Numa + starter `numa.toml`.
- [`packaging/relay/`](packaging/relay/) — public ODoH relay, Numa + Caddy + ACME.
## How It Compares
| | Pi-hole | AdGuard Home | Unbound | Numa |


@@ -22,6 +22,7 @@ api_port = 5380
# [upstream]
# mode = "forward" # "forward" (default) — relay to upstream
# # "recursive" — resolve from root hints (no address needed)
# # "odoh" — Oblivious DoH (see ODoH block below)
# address = "9.9.9.9" # single upstream (plain UDP)
# address = ["192.168.1.1", "9.9.9.9:5353"] # multiple upstreams — SRTT picks fastest
# address = "https://dns.quad9.net/dns-query" # DNS-over-HTTPS (encrypted)
@@ -29,11 +30,29 @@ api_port = 5380
# fallback = ["8.8.8.8", "1.1.1.1"] # tried only when all primaries fail
# port = 53 # default port for addresses without :port
# timeout_ms = 3000
# hedge_ms = 10 # request hedging delay (ms). After this delay
# # without a response, fires a parallel request
# # to the same upstream. Rescues packet loss (UDP),
# # dispatch spikes (DoH), TLS stalls (DoT).
# # Set to 0 to disable. Default: 10
# hedge_ms = 0 # request hedging delay (ms). Default: 0 (off).
# # Set to e.g. 10 to fire a parallel upstream
# # request after 10ms of silence — rescues packet
# # loss (UDP), dispatch spikes (DoH), TLS stalls
# # (DoT). Doubles the upstream query count, so
# # leave off for quota'd providers (NextDNS,
# # Control D).
# ODoH (Oblivious DNS-over-HTTPS, RFC 9230). The relay sees your IP but
# not the question; the target sees the question but not your IP. Numa
# refuses same-operator relay+target configs by default (eTLD+1 check).
# [upstream]
# mode = "odoh"
# relay = "https://odoh-relay.numa.rs/relay"
# target = "https://odoh.cloudflare-dns.com/dns-query"
# strict = true # default: refuse to downgrade to `fallback`
# # on relay failure. Set false to allow a
# # non-oblivious fallback path.
# relay_ip = "178.104.229.30" # optional: pin IPs so numa doesn't leak the
# target_ip = "104.16.249.249" # relay/target hostnames via the bootstrap
# # resolver on cold boot when numa is its
# # own system DNS. See
# # recipes/odoh-upstream.md.
# root_hints = [ # only used in recursive mode
# "198.41.0.4", # a.root-servers.net (Verisign)
# "199.9.14.201", # b.root-servers.net (USC-ISI)


@@ -0,0 +1,72 @@
# Numa ODoH Client — Docker deploy
Single-container deploy that runs Numa as an ODoH (RFC 9230) client: every
DNS query routes through an independent relay + target so neither operator
sees both your IP and your question. See the [ODoH upstream recipe][odoh]
for the protocol details and the bootstrap-pinning trade-offs.
[odoh]: ../../recipes/odoh-upstream.md
## Prerequisites
- Docker + Docker Compose v2.
- Port 53 (UDP+TCP) free on the host — Numa listens there for DNS
clients on your LAN.
## Configure
The shipped `numa.toml` points at Numa's own public relay
(`odoh-relay.numa.rs`) paired with Cloudflare's ODoH target
(`odoh.cloudflare-dns.com`). That's two independent operators with
distinct eTLD+1s — the default configuration passes Numa's same-operator
check and works out of the box.
To use a different relay or target, edit `numa.toml` and adjust the URLs.
The `relay` and `target` must resolve to distinct operators or Numa
refuses to start.
## Deploy
```sh
docker compose up -d
docker compose logs -f numa # watch startup
```
The first query fires the bootstrap resolver + ODoH config fetch;
subsequent queries reuse the warm HTTP/2 connection.
## Point your devices at it
Set each device's DNS server to the IP of the Docker host. For a LAN-wide
rollout, set the DNS server in your router's DHCP config so every device
picks it up automatically.
Verify a query landed on the ODoH path:
```sh
dig @<host-ip> example.com
curl http://<host-ip>:5380/stats | jq '.upstream_transport.odoh'
```
`upstream_transport.odoh` should increment on each query.
## What this does NOT buy you
ODoH protects the *path*, not the content:
- **The target (Cloudflare here) still sees the question.** It just
doesn't know it's you asking. If Cloudflare logs every ODoH query, the
query is still visible — it's simply unattributed.
- **The relay is a trusted party for availability.** A malicious relay
can drop or delay queries; it just can't read them.
- **Traffic analysis defeats small relays.** If you're the only client
talking to a relay, timing alone re-identifies you. Shared, busy relays
give better anonymity sets.
See the [ODoH integration doc][odoh] for more.
## Relay operator?
If you'd rather run your own relay (same binary, different mode), see
[`../relay/`](../relay/) — that package spins up a public-facing relay
with Caddy + ACME in front of it.


@@ -0,0 +1,15 @@
services:
  numa:
    image: ghcr.io/razvandimescu/numa:latest
    command: ["/etc/numa/numa.toml"]
    ports:
      - "53:53/udp"
      - "53:53/tcp"
      - "5380:5380/tcp" # dashboard + REST API
    volumes:
      - ./numa.toml:/etc/numa/numa.toml:ro
      - numa_data:/var/lib/numa
    restart: unless-stopped
volumes:
  numa_data:


@@ -0,0 +1,23 @@
# Numa — ODoH client mode (docker-compose starter).
# Sends every DNS query through an independent relay + target pair so
# neither operator sees both your IP and your question. See
# recipes/odoh-upstream.md for the protocol details and
# packaging/client/README.md for deploy notes.
[server]
bind_addr = "0.0.0.0:53"
api_bind_addr = "0.0.0.0"
data_dir = "/var/lib/numa"
[upstream]
mode = "odoh"
# Numa's own relay (Hetzner, systemd + Caddy). Swap to any other public
# ODoH relay if you'd rather not depend on a single operator; the protocol
# tolerates it, and Numa refuses same-operator relay+target by default.
relay = "https://odoh-relay.numa.rs/relay"
target = "https://odoh.cloudflare-dns.com/dns-query"
# strict = true (default). Relay failure → SERVFAIL, never silent downgrade.
[blocking]
enabled = true
# Default blocklist (Hagezi Pro). Edit the `lists` array to taste.


@@ -39,10 +39,3 @@ curl https://<hostname>/health
Then point any ODoH client at `https://<hostname>/relay` and watch the
counters tick.
## Listing on the public ecosystem
DNSCrypt's [v3/odoh-relays.md](https://github.com/DNSCrypt/dnscrypt-resolvers/blob/master/v3/odoh-relays.md)
is the canonical list. The pruned 2025-09-16 commit shows one public ODoH
relay survived the cull — running this compose file doubles global supply.
Open a PR there once your relay has been up for ~24 hours.

recipes/README.md

@@ -0,0 +1,11 @@
# Recipes
Scenario-driven configs for common Numa deployments. Each recipe is self-contained: copy the snippet, adjust the marked fields, reload.
## Transport / encryption
- [DoH on the LAN](doh-on-lan.md) — expose Numa's built-in DNS-over-HTTPS to local clients.
- [dnsdist in front of Numa](dnsdist-front.md) — terminate public TLS externally, keep Numa on loopback.
- [ODoH upstream with bootstrap pinning](odoh-upstream.md) — oblivious DNS client mode without leaking the relay/target hostnames.
Missing a scenario? Open an issue or PR — these are plain Markdown with no build step.

recipes/dnsdist-front.md

@@ -0,0 +1,64 @@
# dnsdist in front of Numa
For public DoH with a real (ACME-signed) cert, terminate TLS outside Numa and forward plain DNS (or loopback-only DoH) to the resolver. Cert renewal, rate-limiting, and load-balancing live in the front-end; Numa stays focused on resolution.
## When to use this
- Public hostname (`dns.example.com`) with a Let's Encrypt or internal PKI cert.
- You want a dedicated front-end for DoH/DoT/DoQ while Numa stays loopback-bound.
- You plan to run multiple Numa instances behind one endpoint.
## Architecture
```
public 443/DoH ┐
public 853/DoT ├─► dnsdist ─► 127.0.0.1:53 (Numa UDP/TCP)
public 443/DoQ ┘
```
## dnsdist config
```lua
-- /etc/dnsdist/dnsdist.conf
newServer({address="127.0.0.1:53", name="numa", checkType="A", checkName="numa.rs."})
addDOHLocal(
  "0.0.0.0:443",
  "/etc/letsencrypt/live/dns.example.com/fullchain.pem",
  "/etc/letsencrypt/live/dns.example.com/privkey.pem",
  "/dns-query",
  {doTCP=true, reusePort=true}
)
addTLSLocal(
  "0.0.0.0:853",
  "/etc/letsencrypt/live/dns.example.com/fullchain.pem",
  "/etc/letsencrypt/live/dns.example.com/privkey.pem"
)
addAction(AllRule(), PoolAction("", false))
```
## Numa config
```toml
[proxy]
enabled = true # keep if you still use *.numa service routing
bind_addr = "127.0.0.1" # stays default
```
No changes to `[server]` — Numa keeps serving plain DNS on UDP/TCP 53, which dnsdist forwards.
## Caveat: client IPs
Without PROXY protocol support in Numa, the query log shows the front-end's IP on every query, not the real client. dnsdist can emit PROXY v2 (`useProxyProtocol=true` on `newServer`), but Numa doesn't yet parse it — tracked in the wish-list under #143. Until then, accept the blind spot or correlate against dnsdist's own logs.
## Verify
```bash
kdig +https @dns.example.com example.com
kdig +tls @dns.example.com example.com
```
Both should return clean answers. Numa's `/queries` API should show the request landing, sourced from the front-end IP.

recipes/doh-on-lan.md

@@ -0,0 +1,61 @@
# DoH on the LAN
Numa ships an RFC 8484 DoH endpoint (`POST /dns-query`) on the `[proxy]` HTTPS listener. By default it binds `127.0.0.1:443` with a self-signed cert — invisible to anything off the box. Three changes make it reachable from the LAN.
## When to use this
- Your phone/laptop is on the same network as Numa and you want encrypted DNS without a cloud resolver.
- You're OK installing Numa's self-signed CA on every client (one-time, via `/ca.pem` + the mobileconfig flow).
For a publicly-trusted cert, see [dnsdist in front of Numa](dnsdist-front.md) instead.
## Minimal config
```toml
[proxy]
enabled = true # default
bind_addr = "0.0.0.0" # was 127.0.0.1 — expose to LAN
tls_port = 443 # default; DoH is served here
tld = "numa" # default — self-resolving, see below
```
`tld` is the DoH gate: Numa accepts the DoH request only when the `Host` header is loopback or equals (or is a subdomain of) `tld`. Clients therefore dial `https://numa/dns-query`.
With the default `tld = "numa"`, there's no DNS bootstrap to configure: Numa already resolves `numa` and `*.numa` to its own LAN IP for remote clients (that's how the `*.numa` service-proxy feature works). Any client that uses Numa as its resolver will resolve `numa` correctly on first try.
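The gate described above can be sketched in a few lines of shell (a hypothetical approximation of the matching rule for illustration, not Numa's implementation):

```shell
# Sketch of the Host-header gate: accept loopback, the tld itself, or
# any subdomain of it; reject everything else.
host_allowed() {
  local host="$1" tld="$2"
  case "$host" in
    127.0.0.1|localhost|"$tld"|*."$tld") return 0 ;;
    *)                                   return 1 ;;
  esac
}
host_allowed numa numa         && echo "numa: allowed"         # prints: numa: allowed
host_allowed app.numa numa     && echo "app.numa: allowed"     # prints: app.numa: allowed
host_allowed evil.example numa || echo "evil.example: rejected" # prints: evil.example: rejected
```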
If you'd rather use a hostname that resolves via normal DNS (e.g. you want DoH-only clients that never talk plain DNS to Numa), set `tld = "dns.example.com"` and add a matching A record in whichever DNS your clients consult before reaching Numa.
## Trust the CA on each client
Numa generates a self-signed CA at startup. Fetch it once, import it wherever you'll run the DoH client:
```bash
curl -o numa-ca.pem http://<numa-ip>:5380/ca.pem
```
- **macOS** — `sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain numa-ca.pem`
- **iOS** — install the mobileconfig from the API (same CA, signed profile). Flip *Settings → General → About → Certificate Trust Settings* on after install.
- **Linux** — drop into `/usr/local/share/ca-certificates/` and run `sudo update-ca-certificates`.
- **Android** — requires the user-installed CA path; browsers may still refuse it for DoH. Consider the [dnsdist front](dnsdist-front.md) route instead.
## Verify
```bash
kdig +https @numa example.com
```
Without `+https` kdig uses plain DNS. With `+https` the same answers should flow over port 443.
Raw check:
```bash
curl -H 'accept: application/dns-message' \
     -H 'content-type: application/dns-message' \
     --data-binary @query.bin \
     https://numa/dns-query
```
## Gotchas
- Port 443 is privileged on Linux/macOS. Run Numa via the provided service units, or grant `CAP_NET_BIND_SERVICE` (`sudo setcap 'cap_net_bind_service=+ep' /path/to/numa`).
- Non-matching `Host` header → HTTP 404 from the proxy's fallback handler. Double-check `tld`.
- ChromeOS enrollment rejects user-installed CAs for some flows — known pain point, see issue #136.

recipes/odoh-upstream.md

@@ -0,0 +1,59 @@
# ODoH upstream with bootstrap pinning
Numa can run as an Oblivious DoH (RFC 9230) client: the relay sees your IP but not the question, the target sees the question but not your IP. Neither party alone can re-identify a query. This recipe covers the minimal config and the bootstrap leak that `relay_ip` / `target_ip` close.
## When to use this
- You want split-trust encrypted DNS without a single provider seeing both who you are and what you asked.
- Numa is your system resolver (so there's no "other" DNS to ask).
## Minimal config
```toml
[upstream]
mode = "odoh"
relay = "https://odoh-relay.numa.rs/relay"
target = "https://odoh.cloudflare-dns.com/dns-query"
strict = true # refuse to fall back to a non-oblivious path on relay failure
```
`strict = true` means a relay-level HTTPS failure returns SERVFAIL instead of silently downgrading. Set it to `false` and configure `[upstream].fallback` if you'd rather keep resolving (at the cost of the oblivious property).
## The bootstrap leak
When Numa is the system resolver and needs to reach the relay/target, *something* has to translate `odoh-relay.numa.rs` → IP. If Numa asks itself, you deadlock. If Numa asks a bootstrap resolver (1.1.1.1, 9.9.9.9), that resolver learns which ODoH endpoint you use in cleartext — it can't see your questions, but it sees the destination. That's the leak ODoH was supposed to close.
`relay_ip` and `target_ip` tell Numa the IPs directly, so it never asks anyone:
```toml
[upstream]
mode = "odoh"
relay = "https://odoh-relay.numa.rs/relay"
target = "https://odoh.cloudflare-dns.com/dns-query"
relay_ip = "178.104.229.30" # pin the relay — no hostname lookup
target_ip = "104.16.249.249" # pin the target — no hostname lookup
```
Numa still validates TLS against the hostnames in `relay` / `target`, so a hijacked IP can't masquerade — pinning skips only the DNS step.
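What pinning buys can be illustrated with a toy resolver helper (hypothetical code, not Numa's): a pinned IP short-circuits the lookup entirely, so the hostname is never asked of anyone.

```shell
# Toy illustration: a pinned IP wins over any lookup, mirroring how
# relay_ip/target_ip skip the bootstrap-resolver step.
resolve_endpoint() {
  local host="$1" pinned="${2:-}"
  if [ -n "$pinned" ]; then
    printf '%s\n' "$pinned"  # pinned: nothing is asked, nothing leaks
  else
    # cleartext bootstrap lookup — the resolver learns the hostname
    getent hosts "$host" | awk '{print $1; exit}'
  fi
}
resolve_endpoint odoh-relay.numa.rs 178.104.229.30  # prints: 178.104.229.30
```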
## Finding current IPs
```bash
dig +short odoh-relay.numa.rs
dig +short odoh.cloudflare-dns.com
```
Re-pin when an operator rotates. The community-maintained list at <https://github.com/DNSCrypt/dnscrypt-resolvers/blob/master/v3/odoh-relays.md> is a useful cross-reference.
## Verify
```bash
kdig @127.0.0.1 example.com
```
Numa's `/queries` API and startup banner should label the upstream as `odoh://`. Look for `ODoH relay returned ...` errors in the logs if routing fails.
## Known gotchas
- **Same-operator refused.** Numa's eTLD+1 check blocks configs where the relay and target belong to the same operator (pointless — same party sees both sides). Override only when testing.
- **Single relay.** Current config accepts one relay and one target. Multi-entry rotation/failover is tracked in #140.


@@ -1,14 +1,41 @@
#!/usr/bin/env bash
# Dev server for site/: regenerates drafts on each MD change, reloads the
# browser on each rendered HTML/CSS/JS change. Port is the first numeric arg
# (default 9000); any other args are ignored for back-compat.
#
# First run downloads chokidar-cli + browser-sync into the npm cache — slow
# once, instant after that.
set -euo pipefail
PORT="${1:-9000}"
if [[ "${1:-}" == "--drafts" ]] || [[ "${2:-}" == "--drafts" ]]; then
PORT="${PORT//--drafts/9000}" # default port if --drafts was first arg
make blog-drafts
else
make blog
PORT=9000
for arg in "$@"; do
  if [[ "$arg" =~ ^[0-9]+$ ]]; then
    PORT="$arg"
    break
  fi
done
echo "Serving site at http://localhost:$PORT"
cd site && python3 -m http.server "$PORT"
command -v npx >/dev/null || { echo "npx not found. Install Node.js: https://nodejs.org" >&2; exit 1; }
command -v pandoc >/dev/null || { echo "pandoc not found (required by 'make blog-drafts')." >&2; exit 1; }
# Initial render so the first page load has everything.
make blog-drafts
echo "Serving site at http://localhost:$PORT (drafts included, live reload)"
# Kill child processes on exit so re-runs don't leave orphaned watchers.
trap 'kill $(jobs -p) 2>/dev/null' EXIT INT TERM
# Regenerate HTML when MD sources or the blog template change.
npx --yes chokidar-cli \
"drafts/*.md" "blog/*.md" "site/blog-template.html" \
-c "make blog-drafts" &
# Serve + reload on rendered-asset changes.
cd site && exec npx --yes browser-sync start \
--server . \
--port "$PORT" \
--files "**/*.html,**/*.css,**/*.js" \
--no-open \
--no-notify


@@ -83,8 +83,13 @@ pub fn router(ctx: Arc<ServerCtx>) -> Router {
}
async fn dashboard() -> impl IntoResponse {
// Revalidate each load so browsers don't keep serving a stale
// dashboard across numa upgrades.
(
[(header::CONTENT_TYPE, "text/html; charset=utf-8")],
[
(header::CONTENT_TYPE, "text/html; charset=utf-8"),
(header::CACHE_CONTROL, "no-cache"),
],
DASHBOARD_HTML,
)
}
@@ -1244,6 +1249,13 @@ mod tests {
.await
.unwrap();
assert_eq!(resp.status(), 200);
assert_eq!(
resp.headers()
.get(header::CACHE_CONTROL)
.map(|v| v.to_str().unwrap()),
Some("no-cache"),
"dashboard must revalidate to avoid stale HTML across upgrades"
);
let body = axum::body::to_bytes(resp.into_body(), 100000)
.await
.unwrap();


@@ -2,8 +2,7 @@
//! relay/target, blocklist CDN). When numa is its own system resolver
//! (`/etc/resolv.conf → 127.0.0.1`, HAOS add-on, Pi-hole-style container),
//! the default `getaddrinfo` path loops back through numa before numa can
//! answer — a chicken-and-egg that deadlocks cold boot. See issue #122 and
//! `docs/implementation/bootstrap-resolver.md`.
//! answer — a chicken-and-egg that deadlocks cold boot. See issue #122.
//!
//! Resolution order per hostname:
//! 1. Per-hostname overrides (e.g. ODoH `relay_ip` / `target_ip`) → return


@@ -451,8 +451,12 @@ fn default_upstream_port() -> u16 {
fn default_timeout_ms() -> u64 {
5000
}
/// Off by default: hedging fires a second upstream query, which silently
/// doubles the count at the provider — hurts quota'd DNS (NextDNS, Control
/// D). Opt in with `hedge_ms = 10` for tail-latency rescue on flaky nets
/// or handshake-slow DoT.
fn default_hedge_ms() -> u64 {
10
0
}
#[derive(Deserialize)]


@@ -408,6 +408,33 @@ fn cache_and_parse(
/// Used for both stale-entry refresh and proactive cache warming.
pub async fn refresh_entry(ctx: &ServerCtx, qname: &str, qtype: QueryType) {
let query = DnsPacket::query(0, qname, qtype);
// Forwarding rules must win here, mirroring `resolve_query` — otherwise
// refresh re-resolves private zones through the default upstream and
// poisons the cache with NXDOMAIN.
if let Some(pool) = crate::system_dns::match_forwarding_rule(qname, &ctx.forwarding_rules) {
let mut buf = BytePacketBuffer::new();
if query.write(&mut buf).is_ok() {
if let Ok(wire) = forward_with_failover_raw(
buf.filled(),
pool,
&ctx.srtt,
ctx.timeout,
ctx.hedge_delay,
)
.await
{
ctx.cache.write().unwrap().insert_wire(
qname,
qtype,
&wire,
DnssecStatus::Indeterminate,
);
}
}
return;
}
if ctx.upstream_mode == UpstreamMode::Recursive {
if let Ok(resp) = crate::recursive::resolve_recursive(
qname,
@@ -1244,14 +1271,8 @@ mod tests {
#[tokio::test]
async fn pipeline_filter_aaaa_leaves_a_queries_alone() {
let mut upstream_resp = DnsPacket::new();
upstream_resp.header.response = true;
upstream_resp.header.rescode = ResultCode::NOERROR;
upstream_resp.answers.push(DnsRecord::A {
domain: "example.com".to_string(),
addr: Ipv4Addr::new(93, 184, 216, 34),
ttl: 300,
});
let upstream_resp =
crate::testutil::a_record_response("example.com", Ipv4Addr::new(93, 184, 216, 34), 300);
let upstream_addr = crate::testutil::mock_upstream(upstream_resp).await;
let mut ctx = crate::testutil::test_ctx().await;
@@ -1471,14 +1492,8 @@ mod tests {
#[tokio::test]
async fn pipeline_forwarding_returns_upstream_answer() {
let upstream_resp =
crate::testutil::a_record_response("internal.corp", Ipv4Addr::new(10, 1, 2, 3), 600);
let upstream_addr = crate::testutil::mock_upstream(upstream_resp).await;
let mut ctx = crate::testutil::test_ctx().await;
@@ -1505,14 +1520,8 @@ mod tests {
async fn pipeline_forwarding_fails_over_to_second_upstream() {
let dead = crate::testutil::blackhole_upstream();
let live_resp =
crate::testutil::a_record_response("internal.corp", Ipv4Addr::new(10, 9, 9, 9), 600);
let live = crate::testutil::mock_upstream(live_resp).await;
let mut ctx = crate::testutil::test_ctx().await;
@@ -1534,14 +1543,8 @@ mod tests {
#[tokio::test]
async fn pipeline_default_pool_reports_upstream_path() {
let upstream_resp =
crate::testutil::a_record_response("example.com", Ipv4Addr::new(93, 184, 216, 34), 300);
let upstream_addr = crate::testutil::mock_upstream(upstream_resp).await;
let ctx = crate::testutil::test_ctx().await;
@@ -1556,4 +1559,67 @@ mod tests {
assert_eq!(resp.header.rescode, ResultCode::NOERROR);
assert_eq!(resp.answers.len(), 1);
}
#[tokio::test]
async fn refresh_entry_honors_forwarding_rule() {
let rule_resp =
crate::testutil::a_record_response("internal.corp", Ipv4Addr::new(10, 0, 0, 42), 300);
let rule_upstream = crate::testutil::mock_upstream(rule_resp).await;
let mut ctx = crate::testutil::test_ctx().await;
ctx.forwarding_rules = vec![ForwardingRule::new(
"corp".to_string(),
UpstreamPool::new(vec![Upstream::Udp(rule_upstream)], vec![]),
)];
// Default pool points at a blackhole — if the refresh queries it
// instead of the rule, the test fails because nothing is cached.
ctx.upstream_pool
.lock()
.unwrap()
.set_primary(vec![Upstream::Udp(crate::testutil::blackhole_upstream())]);
let ctx = Arc::new(ctx);
refresh_entry(&ctx, "internal.corp", QueryType::A).await;
let cached = ctx
.cache
.read()
.unwrap()
.lookup("internal.corp", QueryType::A)
.expect("refresh must populate cache via forwarding rule");
match &cached.answers[0] {
DnsRecord::A { addr, .. } => assert_eq!(*addr, Ipv4Addr::new(10, 0, 0, 42)),
other => panic!("expected A record, got {:?}", other),
}
}
#[tokio::test]
async fn refresh_entry_prefers_forwarding_rule_over_recursive() {
let rule_resp =
crate::testutil::a_record_response("db.internal.corp", Ipv4Addr::new(10, 0, 0, 7), 300);
let rule_upstream = crate::testutil::mock_upstream(rule_resp).await;
let mut ctx = crate::testutil::test_ctx().await;
ctx.upstream_mode = UpstreamMode::Recursive;
ctx.forwarding_rules = vec![ForwardingRule::new(
"corp".to_string(),
UpstreamPool::new(vec![Upstream::Udp(rule_upstream)], vec![]),
)];
// No root_hints — recursion would fail immediately, proving that
// the rule branch fired instead.
let ctx = Arc::new(ctx);
refresh_entry(&ctx, "db.internal.corp", QueryType::A).await;
let cached = ctx
.cache
.read()
.unwrap()
.lookup("db.internal.corp", QueryType::A)
.expect("recursive-mode refresh must still consult forwarding rules");
match &cached.answers[0] {
DnsRecord::A { addr, .. } => assert_eq!(*addr, Ipv4Addr::new(10, 0, 0, 7)),
other => panic!("expected A record, got {:?}", other),
}
}
}


@@ -882,6 +882,28 @@ fn record_rdata_canonical(record: &DnsRecord) -> Vec<u8> {
rdata.extend(type_bitmap);
rdata
}
DnsRecord::SOA {
mname,
rname,
serial,
refresh,
retry,
expire,
minimum,
..
} => {
let mname_wire = name_to_wire(mname);
let rname_wire = name_to_wire(rname);
let mut rdata = Vec::with_capacity(mname_wire.len() + rname_wire.len() + 20);
rdata.extend(&mname_wire);
rdata.extend(&rname_wire);
rdata.extend(&serial.to_be_bytes());
rdata.extend(&refresh.to_be_bytes());
rdata.extend(&retry.to_be_bytes());
rdata.extend(&expire.to_be_bytes());
rdata.extend(&minimum.to_be_bytes());
rdata
}
DnsRecord::UNKNOWN { data, .. } => data.clone(),
DnsRecord::RRSIG { .. } => Vec::new(),
}


@@ -175,8 +175,7 @@ pub fn parse_upstream(
///
/// Uses the system resolver. Callers running inside `serve::run` pass the
/// shared [`crate::bootstrap_resolver::NumaResolver`] via
/// [`build_https_client_with_resolver`] to avoid the self-loop (issue #122).
pub fn build_https_client() -> reqwest::Client {
build_https_client_with_resolver(1, None)
}


@@ -7,11 +7,10 @@
//! Both handlers call [`HealthResponse::build`] to assemble the JSON
//! response from `HealthMeta` + live inputs.
//!
//! The iOS companion app's `HealthInfo` struct is the canonical consumer;
//! any change to this response must keep that struct decoding cleanly (all
//! consumed fields are optional on the Swift side, but `lan_ip` is
//! load-bearing for the pipeline).
use std::net::Ipv4Addr;
use std::path::Path;


@@ -24,6 +24,17 @@ pub enum DnsRecord {
host: String,
ttl: u32,
},
SOA {
domain: String,
mname: String,
rname: String,
serial: u32,
refresh: u32,
retry: u32,
expire: u32,
minimum: u32,
ttl: u32,
},
CNAME {
domain: String,
host: String,
@@ -100,6 +111,7 @@ impl DnsRecord {
| DnsRecord::RRSIG { domain, .. }
| DnsRecord::NSEC { domain, .. }
| DnsRecord::NSEC3 { domain, .. }
| DnsRecord::SOA { domain, .. }
| DnsRecord::UNKNOWN { domain, .. } => domain,
}
}
@@ -111,6 +123,7 @@ impl DnsRecord {
DnsRecord::NS { .. } => QueryType::NS,
DnsRecord::CNAME { .. } => QueryType::CNAME,
DnsRecord::MX { .. } => QueryType::MX,
DnsRecord::SOA { .. } => QueryType::SOA,
DnsRecord::DNSKEY { .. } => QueryType::DNSKEY,
DnsRecord::DS { .. } => QueryType::DS,
DnsRecord::RRSIG { .. } => QueryType::RRSIG,
@@ -132,6 +145,7 @@ impl DnsRecord {
| DnsRecord::RRSIG { ttl, .. }
| DnsRecord::NSEC { ttl, .. }
| DnsRecord::NSEC3 { ttl, .. }
| DnsRecord::SOA { ttl, .. }
| DnsRecord::UNKNOWN { ttl, .. } => *ttl,
}
}
@@ -172,6 +186,12 @@ impl DnsRecord {
+ next_hashed_owner.capacity()
+ type_bitmap.capacity()
}
DnsRecord::SOA {
domain,
mname,
rname,
..
} => domain.capacity() + mname.capacity() + rname.capacity(),
DnsRecord::UNKNOWN { domain, data, .. } => domain.capacity() + data.capacity(),
}
}
@@ -188,6 +208,7 @@ impl DnsRecord {
| DnsRecord::RRSIG { ttl, .. }
| DnsRecord::NSEC { ttl, .. }
| DnsRecord::NSEC3 { ttl, .. }
| DnsRecord::SOA { ttl, .. }
| DnsRecord::UNKNOWN { ttl, .. } => *ttl = new_ttl,
}
}
@@ -365,8 +386,31 @@ impl DnsRecord {
ttl,
})
}
QueryType::SOA => {
// MNAME/RNAME compressible per RFC 1035 §3.3.13 — decompress to avoid stale pointers on re-emit.
let mut mname = String::with_capacity(64);
buffer.read_qname(&mut mname)?;
let mut rname = String::with_capacity(64);
buffer.read_qname(&mut rname)?;
let serial = buffer.read_u32()?;
let refresh = buffer.read_u32()?;
let retry = buffer.read_u32()?;
let expire = buffer.read_u32()?;
let minimum = buffer.read_u32()?;
Ok(DnsRecord::SOA {
domain,
mname,
rname,
serial,
refresh,
retry,
expire,
minimum,
ttl,
})
}
_ => {
// TXT, SRV, HTTPS, SVCB, etc. — stored as opaque bytes until parsed natively
let data = buffer.get_range(buffer.pos(), data_len as usize)?.to_vec();
buffer.step(data_len as usize)?;
Ok(DnsRecord::UNKNOWN {
@@ -430,6 +474,30 @@ impl DnsRecord {
let size = buffer.pos() - (pos + 2);
buffer.set_u16(pos, size as u16)?;
}
DnsRecord::SOA {
ref domain,
ref mname,
ref rname,
serial,
refresh,
retry,
expire,
minimum,
ttl,
} => {
write_header(buffer, domain, QueryType::SOA.to_num(), ttl)?;
let rdlen_pos = buffer.pos();
buffer.write_u16(0)?;
buffer.write_qname(mname)?;
buffer.write_qname(rname)?;
buffer.write_u32(serial)?;
buffer.write_u32(refresh)?;
buffer.write_u32(retry)?;
buffer.write_u32(expire)?;
buffer.write_u32(minimum)?;
let rdlen = buffer.pos() - (rdlen_pos + 2);
buffer.set_u16(rdlen_pos, rdlen as u16)?;
}
DnsRecord::AAAA {
ref domain,
ref addr,


@@ -52,7 +52,6 @@ pub async fn run(config_path: String) -> crate::Result<()> {
// Routes numa-originated HTTPS (DoH upstream, ODoH relay/target, blocklist
// CDN) away from the system resolver so lookups don't loop back through
// numa when it's its own system DNS.
let resolver_overrides = match config.upstream.mode {
crate::config::UpstreamMode::Odoh => config
.upstream
@@ -343,12 +342,13 @@ pub async fn run(config_path: String) -> crate::Result<()> {
};
// Title row: center within the box
let tag_line = "DNS that governs itself";
let title = format!(
"{b}NUMA{r} {it}{tag_line}{r} {d}v{}{r}",
env!("CARGO_PKG_VERSION")
);
// The title contains ANSI codes; visible length is ~38 chars. Pad to fill the box.
let title_visible_len = 4 + 2 + tag_line.len() + 2 + 1 + env!("CARGO_PKG_VERSION").len() + 1;
let title_pad = w.saturating_sub(title_visible_len);
eprintln!("\n{o}{bar_top}{r}");
eprint!("{o}{r} {title}");


@@ -12,11 +12,13 @@ use crate::cache::DnsCache;
use crate::config::UpstreamMode;
use crate::ctx::ServerCtx;
use crate::forward::{Upstream, UpstreamPool};
use crate::header::ResultCode;
use crate::health::HealthMeta;
use crate::lan::PeerStore;
use crate::override_store::OverrideStore;
use crate::packet::DnsPacket;
use crate::query_log::QueryLog;
use crate::record::DnsRecord;
use crate::service_store::ServiceStore;
use crate::srtt::SrttCache;
use crate::stats::ServerStats;
@@ -67,6 +69,20 @@ pub async fn test_ctx() -> ServerCtx {
}
}
/// Build a NOERROR response containing a single A record — the shape used
/// repeatedly by pipeline/forwarding tests to seed `mock_upstream`.
pub fn a_record_response(domain: &str, addr: Ipv4Addr, ttl: u32) -> DnsPacket {
let mut pkt = DnsPacket::new();
pkt.header.response = true;
pkt.header.rescode = ResultCode::NOERROR;
pkt.answers.push(DnsRecord::A {
domain: domain.to_string(),
addr,
ttl,
});
pkt
}
/// Spawn a UDP socket that replies to the first DNS query with the given
/// response packet (patching the query ID to match). Returns the socket address.
pub async fn mock_upstream(response: DnsPacket) -> SocketAddr {


@@ -0,0 +1,115 @@
//! Regression test for issue #128: SOA with compressed MNAME/RNAME must
//! survive Numa's round-trip — compression pointers reference the upstream
//! packet's byte layout, so we have to decompress on read and re-compress
//! on write.
use numa::buffer::BytePacketBuffer;
use numa::packet::DnsPacket;
const COMPRESSION_FLAG: u16 = 0xC000;
fn upstream_packet() -> Vec<u8> {
let mut p = Vec::<u8>::new();
p.extend_from_slice(&[
0x12, 0x34, 0x81, 0x80, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x00,
]);
assert_eq!(p.len(), 12);
write_name(&mut p, &["odin", "adobe", "com"]);
p.extend_from_slice(&[0x00, 0x41, 0x00, 0x01]);
p.extend_from_slice(&[0xC0, 0x0C]);
p.extend_from_slice(&[0x00, 0x05, 0x00, 0x01, 0x00, 0x00, 0x23, 0x7F]);
let rdlen_pos_1 = p.len();
p.extend_from_slice(&[0x00, 0x00]);
let cname1_start = p.len();
write_name(&mut p, &["cdn", "adobeaemcloud", "com"]);
let rdlen_1 = (p.len() - cname1_start) as u16;
p[rdlen_pos_1..rdlen_pos_1 + 2].copy_from_slice(&rdlen_1.to_be_bytes());
p.extend_from_slice(&(COMPRESSION_FLAG | cname1_start as u16).to_be_bytes());
p.extend_from_slice(&[0x00, 0x05, 0x00, 0x01, 0x00, 0x00, 0x23, 0x7F]);
let rdlen_pos_2 = p.len();
p.extend_from_slice(&[0x00, 0x00]);
let cname2_start = p.len();
p.push(9);
p.extend_from_slice(b"adobe-aem");
let map_label_off = p.len();
p.push(3);
p.extend_from_slice(b"map");
let fastly_label_off = p.len();
p.push(6);
p.extend_from_slice(b"fastly");
p.push(3);
p.extend_from_slice(b"net");
p.push(0);
let rdlen_2 = (p.len() - cname2_start) as u16;
p[rdlen_pos_2..rdlen_pos_2 + 2].copy_from_slice(&rdlen_2.to_be_bytes());
p.extend_from_slice(&(COMPRESSION_FLAG | fastly_label_off as u16).to_be_bytes());
p.extend_from_slice(&[0x00, 0x06, 0x00, 0x01, 0x00, 0x00, 0x07, 0x08]);
let rdlen_pos_soa = p.len();
p.extend_from_slice(&[0x00, 0x00]);
let soa_rdata_start = p.len();
p.extend_from_slice(&(COMPRESSION_FLAG | map_label_off as u16).to_be_bytes());
p.extend_from_slice(&(COMPRESSION_FLAG | fastly_label_off as u16).to_be_bytes());
p.extend_from_slice(&1u32.to_be_bytes());
p.extend_from_slice(&7200u32.to_be_bytes());
p.extend_from_slice(&3600u32.to_be_bytes());
p.extend_from_slice(&1209600u32.to_be_bytes());
p.extend_from_slice(&1800u32.to_be_bytes());
let rdlen_soa = (p.len() - soa_rdata_start) as u16;
p[rdlen_pos_soa..rdlen_pos_soa + 2].copy_from_slice(&rdlen_soa.to_be_bytes());
p
}
fn write_name(p: &mut Vec<u8>, labels: &[&str]) {
for l in labels {
p.push(l.len() as u8);
p.extend_from_slice(l.as_bytes());
}
p.push(0);
}
#[test]
fn compressed_soa_survives_numa_round_trip() {
let upstream = upstream_packet();
let hickory_in = hickory_proto::op::Message::from_vec(&upstream)
.expect("hand-crafted upstream must be valid");
let soa_in_rd = hickory_in.name_servers()[0]
.data()
.clone()
.into_soa()
.expect("SOA rdata");
assert_eq!(soa_in_rd.mname().to_string(), "map.fastly.net.");
assert_eq!(soa_in_rd.rname().to_string(), "fastly.net.");
let mut in_buf = BytePacketBuffer::from_bytes(&upstream);
let pkt = DnsPacket::from_buffer(&mut in_buf).expect("numa parses upstream");
assert_eq!(pkt.answers.len(), 2);
assert_eq!(pkt.authorities.len(), 1);
let mut out_buf = BytePacketBuffer::new();
pkt.write(&mut out_buf).expect("numa writes");
let out = out_buf.filled().to_vec();
let hickory_out =
hickory_proto::op::Message::from_vec(&out).expect("numa re-emission must parse strictly");
let soa_out_rd = hickory_out.name_servers()[0]
.data()
.clone()
.into_soa()
.expect("SOA rdata on output");
assert_eq!(soa_out_rd.mname().to_string(), "map.fastly.net.");
assert_eq!(soa_out_rd.rname().to_string(), "fastly.net.");
assert_eq!(soa_out_rd.serial(), 1);
assert_eq!(soa_out_rd.refresh(), 7200);
assert_eq!(soa_out_rd.retry(), 3600);
assert_eq!(soa_out_rd.expire(), 1209600);
assert_eq!(soa_out_rd.minimum(), 1800);
}
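The two-byte pointers the fixture hand-crafts follow RFC 1035 §4.1.4: the top two bits are set and the low 14 bits hold a byte offset into the message. A minimal encoder/decoder pair (illustrative, not the crate's API):

```rust
/// Encode a compression pointer to `offset` (RFC 1035 §4.1.4).
/// The top two bits (0xC000) mark the u16 as a pointer; the low
/// 14 bits carry the byte offset into the packet.
fn compression_pointer(offset: u16) -> [u8; 2] {
    assert!(offset < 0x4000, "pointer offsets are 14-bit");
    (0xC000 | offset).to_be_bytes()
}

/// Decode the offset back out of a pointer's two wire bytes.
fn pointer_offset(bytes: [u8; 2]) -> u16 {
    u16::from_be_bytes(bytes) & 0x3FFF
}
```

This is why the fixture's `[0xC0, 0x0C]` points at offset 12 — the qname right after the 12-byte header — and why pointers into the CNAME RDATA become stale the moment the packet is re-laid-out, which is the whole reason the round-trip test exists.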