Feature Request: Enhancements for Forward and Recursive Modes #34

Closed
opened 2026-04-04 21:01:02 +08:00 by Palvef · 3 comments
Palvef commented 2026-04-04 21:01:02 +08:00 (Migrated from github.com)

Hi! I would like to propose several enhancements to improve flexibility and observability in both forward and recursive modes:

  1. Support ip:port in forward mode
    Allow specifying upstream servers with custom ports. Example:

    address = ["192.168.1.1", "192.168.1.2:5353", "[2001:db8::1]:5553"]
    
  2. Fallback mechanism in forward mode
    Introduce a fallback configuration for upstream resolvers when primary servers fail:

    fallback = ["8.8.8.8", "1.1.1.1"]
    
  3. Upstream status visibility via API
    Extend the API to expose:

    • Current forwarding target
    • Health/availability status of each upstream server
  4. Domain prefetch support
    Add a prefetch-domain feature to proactively resolve and cache specific domains.

  5. Bind source IP for recursive queries
    In recursive mode, allow binding a specific local IP address for outbound queries.
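
The `ip:port` syntax from item 1 could be parsed roughly like this sketch (the `parse_upstream` name and the default of port 53 are assumptions for illustration, not numa's actual API):

```python
# Hypothetical sketch of ip:port parsing for the proposed `address` syntax.
# Assumes the port defaults to 53 when omitted and IPv6 uses [addr]:port.

def parse_upstream(entry: str, default_port: int = 53) -> tuple[str, int]:
    if entry.startswith("["):                  # bracketed IPv6: [2001:db8::1]:5553
        host, _, rest = entry[1:].partition("]")
        port = int(rest[1:]) if rest.startswith(":") else default_port
        return host, port
    if entry.count(":") == 1:                  # IPv4 with port: 192.168.1.2:5353
        host, _, port_s = entry.partition(":")
        return host, int(port_s)
    return entry, default_port                 # bare IPv4, or bare IPv6 (many colons)

# parse_upstream("192.168.1.1")        -> ("192.168.1.1", 53)
# parse_upstream("192.168.1.2:5353")   -> ("192.168.1.2", 5353)
# parse_upstream("[2001:db8::1]:5553") -> ("2001:db8::1", 5553)
```

Bracketing the IPv6 address is what makes the syntax unambiguous, since a bare IPv6 address already contains colons.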

razvandimescu commented 2026-04-11 06:40:37 +08:00 (Migrated from github.com)

Thanks for the detailed request @Palvef!

Items 1, 2, 3 are addressed in #77:

  • address now accepts a string or array with optional per-server port ("1.2.3.4:5353", "[::1]:5553")
  • New fallback pool, tried only when all primaries fail
  • /stats exposes the preferred upstream and pool info
[upstream]
address = ["192.168.1.1", "192.168.1.2:5353"]
fallback = ["8.8.8.8", "1.1.1.1"]
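
For future readers, the "tried only when all primaries fail" semantics might be sketched like this (the `pick_upstreams` helper and the health map are hypothetical names, not numa's internals):

```python
# Sketch of the described fallback semantics: the fallback pool is consulted
# only after every primary server has failed; the pools are never mixed.

def pick_upstreams(primaries: list[str], fallbacks: list[str],
                   healthy: dict[str, bool]) -> list[str]:
    live_primaries = [s for s in primaries if healthy.get(s, True)]
    if live_primaries:
        return live_primaries          # fallbacks are not mixed in
    return [s for s in fallbacks if healthy.get(s, True)]

# With all primaries marked down, only the fallback pool is returned:
# pick_upstreams(["192.168.1.1"], ["8.8.8.8"], {"192.168.1.1": False})
# -> ["8.8.8.8"]
```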

Item 4 is addressed in #78 — cache warming:

[cache]
warm = ["google.com", "github.com"]

Resolves A + AAAA at startup, then re-resolves proactively before TTL expiry so configured domains are always one hop away.

Item 5 (bind source IP for recursive) — this is handled at the OS level via routing tables. The kernel selects the source IP based on the destination route; if you need to pin DNS traffic to a specific interface, the right place is ip route / firewall rules, not the application. Out of scope unless a concrete use case surfaces.
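
For context on what application-level binding would entail: binding the outgoing UDP socket to a local address before sending forces that source IP, overriding the kernel's route-based selection. A minimal sketch (192.0.2.10 is a placeholder local address; the OS rejects the bind if it is not assigned to an interface):

```python
import socket

# Sketch: pin the source IP of an outbound DNS query by binding the UDP
# socket to a specific local address before sending.

def query_socket(source_ip: str) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((source_ip, 0))          # port 0 = OS picks an ephemeral port
    return sock

# sock = query_socket("192.0.2.10")
# sock.sendto(dns_query_bytes, ("198.51.100.53", 53))
```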

Palvef commented 2026-04-12 17:16:35 +08:00 (Migrated from github.com)

In multi-ISP environments, relying solely on the kernel routing table is insufficient for optimizing DNS recursion.

A common deployment scenario is a single host with multiple NUMA nodes, each bound to a different ISP uplink and IP address. In this setup, the numa instance pinned to each node is expected to perform recursive resolution through its associated ISP path in order to obtain topology-aware CDN responses.

For example:

server1:
outgoing-interface: 192.168.1.1
server2:
outgoing-interface: 10.0.0.1

Resolvers such as unbound support explicit outgoing interface binding, ensuring that recursive queries egress via the intended ISP. This directly impacts CDN resolution quality, as many providers return location-sensitive IPs based on the source address of the query.

Currently, numa follows the system default route, which results in all recursive traffic potentially exiting through a single ISP. In multi-ISP scenarios, this can lead to suboptimal CDN endpoints being returned, degrading performance.

Therefore, binding the source IP (or interface) at the application level is not just a routing concern but a functional requirement for accurate DNS resolution in these environments.

razvandimescu commented 2026-04-13 06:35:09 +08:00 (Migrated from github.com)

OK, understood. Let me track this in #93. Thanks again!

Reference: dearsky/numa#34