[GH-ISSUE #2730] Question: allow response from different nameserver #1052

Open
opened 2026-03-16 01:26:51 +03:00 by kerem · 13 comments

Originally created by @JuxhinDB on GitHub (Jan 19, 2025).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/2730

What is the question?

Thanks for this project! We're using `hickory-dns` to power the name resolution for [Have I Been Squatted](https://haveibeensquatted.com/). We have many instances where we hit Google nameservers on 8.8.8.8:53 and receive responses from 8.8.4.4:53, which results in warnings such as:

```
2025-01-19T18:55:59.308868Z  WARN 334: ignoring response from 8.8.4.4:53 because it does not match name_server: 8.8.8.8:53.
```

The warning is deep in the library, so I'm not sure how feasible it is to pass some context to allow bypassing this check.

https://github.com/hickory-dns/hickory-dns/blob/0b88f271c4395546ef2cf4dbadabeff68ba041b6/crates/proto/src/udp/udp_client_stream.rs#L304-L312

Happy to get some input on it, and if so will try to open a PR to resolve it.
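
The check in question guards against off-path spoofing at the application level. An alternative worth noting is kernel-level filtering: `connect()`ing a UDP socket makes the OS itself discard datagrams from any source other than the connected peer. A localhost-only sketch using just `std` (whether hickory's UDP streams use connected sockets internally is not something this sketch claims):

```rust
use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // A "client" socket with a short read timeout so a dropped
    // datagram shows up as a recv error instead of a hang.
    let client = UdpSocket::bind("127.0.0.1:0")?;
    client.set_read_timeout(Some(Duration::from_millis(200)))?;

    // Two "servers": we connect the client to `expected` only. After
    // connect(), the kernel delivers only datagrams whose source address
    // matches the connected peer; everything else is silently dropped.
    let expected = UdpSocket::bind("127.0.0.1:0")?;
    let other = UdpSocket::bind("127.0.0.1:0")?;
    client.connect(expected.local_addr()?)?;

    // A datagram from the wrong source never reaches us: recv times out.
    other.send_to(b"spoof", client.local_addr()?)?;
    let mut buf = [0u8; 64];
    assert!(client.recv(&mut buf).is_err());

    // One from the connected peer is delivered normally.
    expected.send_to(b"real", client.local_addr()?)?;
    let n = client.recv(&mut buf)?;
    assert_eq!(&buf[..n], b"real");
    println!("kernel dropped the datagram from the unexpected source");
    Ok(())
}
```

This is only an illustration of the standard BSD-socket semantics of connected UDP sockets, not a statement about where hickory performs its check.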


@bluejekyll commented on GitHub (Jan 19, 2025):

huh, I've never seen google respond from a different IP than the original request. Is this causing resolution failures?


@djc commented on GitHub (Jan 20, 2025):

Would be good to find some documentation from Google on this behavior -- it seems quite surprising.

What version are you using? Please make sure to test this against current main or the latest 0.25.0-alpha.4.


@JuxhinDB commented on GitHub (Jan 24, 2025):

Right, apologies folks for the late reply. I added some traces around the lookup and it _seems_ to only occur when we get an `NXDOMAIN` (i.e., the domain is non-existent).

```
2025-01-24T09:59:08.599411Z DEBUG ThreadId(02): enriching domain Permutation { domain: Domain { fqdn: "metamask.aoste.it", tld: "aoste.it", domain: "metamask" }, kind: Tld }
2025-01-24T09:59:08.599436Z TRACE ThreadId(02): hickory-dns resolving metamask.aoste.it
2025-01-24T09:59:08.603809Z  WARN ThreadId(10): ignoring response from 8.8.8.8:53 because it does not match name_server: 8.8.4.4:53.
2025-01-24T09:59:08.603793Z TRACE ThreadId(02): hickory-dns resolution result for metamask-ssl.io: Err(ResolveError { kind: NoRecordsFound { query: Query { name: Name("metamask-ssl.io."), query_type: AAAA, query_class: IN }, soa: Some(Record { name_labels: Name("io."), rr_type: SOA, dns_class: IN, ttl: 1800, rdata: Some(SOA { mname: Name("a0.nic.io."), rname: Name("hostmaster.donuts.email."), serial: 1737712499, refresh: 7200, retry: 900, expire: 1209600, minimum: 3600 }) }), negative_ttl: Some(1800), response_code: NXDomain, trusted: true } })
```

This is quite a common use-case for us (in fact, one of our features). I'll try to find some time to add more introspection into hickory to see if I can narrow it down further, as I'd like to correlate exactly which queries are failing.


@bluejekyll commented on GitHub (Mar 2, 2025):

I'm not sure what to do with this. I really don't like the idea of supporting responses from different upstream servers, that could increase the chances of spoofed responses being accepted. Could this be a bug at Google? If not, I'd like to see something that supports changing the way that we trust responses in cases like this...


@JuxhinDB commented on GitHub (Mar 3, 2025):

Apologies I haven't followed up from my end. Will try to make some time this evening and see if I can recreate a minimal test-bed. As for the behaviour, I agree. And to clarify this is only occurring with 8.8.8.8/8.8.4.4.


@Undef-a commented on GitHub (Jun 8, 2025):

I've also seen this log, but in my case between 8.8.8.8 and 1.1.1.1 as well as local resolvers. The key factor seems to be having multiple DNS servers enabled at once in `/etc/resolv.conf` and configuring hickory-resolver using `read_system_conf()`.


@djc commented on GitHub (Jun 12, 2025):

@Undef-a if you have a way to (somewhat) reliably reproduce this, that would be great!


@Undef-a commented on GitHub (Jun 14, 2025):

It's not simple, and I'd say it's somewhat less than reliable, but the following code reproduces this for me in maybe 2-3 runs out of 10.

The key bits are commented in the file:

- I've only tested it on Linux, where it requires at least two, and preferably more, entries in `/etc/resolv.conf`.
- It requires parallel execution of queries, which I'm doing with Rayon.

Sorry it's not a minimal example, I started with just the first point above and kept adding until the bug triggered.
https://gist.github.com/Undef-a/3d76ca30d97944b81258635e1281340e

Output:

```
# cargo run | grep -v 'A(' 
<snip>
warning: `repro` (bin "repro") generated 2 warnings (run `cargo fix --bin "repro"` to apply 2 suggestions)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.07s
     Running `target/debug/repro`
2025-06-14T01:59:15.973540Z  WARN hickory_proto::udp::udp_client_stream: /home/user/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/hickory-proto-0.25.2/src/udp/udp_client_stream.rs:317: ignoring response from 8.8.8.8:53 because it does not match name_server: 1.1.1.1:53.
2025-06-14T01:59:21.719391Z  WARN hickory_proto::udp::udp_client_stream: /home/user/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/hickory-proto-0.25.2/src/udp/udp_client_stream.rs:317: ignoring response from 8.8.8.8:53 because it does not match name_server: 1.1.1.1:53.
2025-06-14T01:59:21.769152Z  WARN hickory_proto::udp::udp_client_stream: /home/user/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/hickory-proto-0.25.2/src/udp/udp_client_stream.rs:338: expected message id: 20571 got: 61087, dropped
```
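
The last log line shows the resolver's other sanity check: the 16-bit message ID of a response must match the query's. As a minimal illustration of that wire-format detail (plain Rust against RFC 1035 §4.1.1, not hickory's actual code): the ID is simply the first two bytes of the DNS message, big-endian.

```rust
/// Extract the 16-bit message ID from a raw DNS message.
/// Per RFC 1035 §4.1.1 it is the first two header bytes, big-endian.
fn message_id(msg: &[u8]) -> Option<u16> {
    Some(u16::from_be_bytes([*msg.first()?, *msg.get(1)?]))
}

fn main() {
    // The IDs from the log above: 20571 = 0x505B, 61087 = 0xEE9F.
    let query = [0x50, 0x5B, 0x01, 0x00]; // truncated header, id = 20571
    let response = [0xEE, 0x9F, 0x81, 0x80]; // truncated header, id = 61087
    assert_eq!(message_id(&query), Some(20571));
    assert_eq!(message_id(&response), Some(61087));
    // A client must drop such a response: the IDs do not match.
    assert_ne!(message_id(&query), message_id(&response));
}
```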

@djc commented on GitHub (Jun 14, 2025):

Can you post the debug dump of both your `sconf` and `system_opts`?

I whittled your example down to this:

```rust
use std::net::{IpAddr, Ipv4Addr};

use hickory_resolver::config::NameServerConfigGroup;
use hickory_resolver::{
    Name, TokioResolver, config::ResolverConfig, name_server::TokioConnectionProvider,
};
use rand::distr::Alphanumeric;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use rayon::prelude::{IntoParallelIterator, ParallelIterator};
use tokio::runtime::Runtime;
use tracing::info;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::{EnvFilter, fmt};

fn main() {
    tracing_subscriber::registry()
        .with(fmt::layer())
        .with(EnvFilter::from_default_env())
        .init();

    let group = NameServerConfigGroup::from_ips_clear(
        &SERVERS.iter().copied().map(IpAddr::V4).collect::<Vec<_>>(),
        53,
        true,
    );

    let config = ResolverConfig::from_parts(None, vec![], group);
    let resolver =
        TokioResolver::builder_with_config(config, TokioConnectionProvider::default()).build();

    let base_name = Name::from_utf8("example.com").unwrap();
    let mut names = vec![];
    let mut rng = StdRng::from_os_rng();
    for _ in 0..10000 {
        let random_sub = (&mut rng)
            .sample_iter(&Alphanumeric)
            .take(10)
            .map(char::from)
            .collect::<String>();
        names.push(base_name.prepend_label(random_sub).unwrap());
    }

    info!("starting resolution of {} names", names.len());
    let rt = Runtime::new().unwrap();
    names.into_par_iter().for_each(|name| {
        // Without the rayon par_iter here this doesn't reproduce the issue.
        match rt.block_on(resolver.ipv4_lookup(name.clone())) {
            Ok(ips) => println!("{:?}", ips),
            Err(_) => {}
        }
    });
}

const SERVERS: [Ipv4Addr; 3] = [
    Ipv4Addr::new(1, 1, 1, 1),
    Ipv4Addr::new(8, 8, 8, 8),
    Ipv4Addr::new(1, 0, 0, 1),
];
```

to try to make it independent of any system configuration, but on my macOS machine I could not reproduce the "ignoring response" message so far (after 5 runs).


@Undef-a commented on GitHub (Jun 24, 2025):

```
SCONF: ResolverConfig {
    domain: None,
    search: [],
    name_servers: NameServerConfigGroup {
        servers: [
            NameServerConfig {
                socket_addr: 1.1.1.1:53,
                protocol: Udp,
                tls_dns_name: None,
                http_endpoint: None,
                trust_negative_responses: false,
                bind_addr: None,
            },
            NameServerConfig {
                socket_addr: 1.1.1.1:53,
                protocol: Tcp,
                tls_dns_name: None,
                http_endpoint: None,
                trust_negative_responses: false,
                bind_addr: None,
            },
            NameServerConfig {
                socket_addr: 8.8.8.8:53,
                protocol: Udp,
                tls_dns_name: None,
                http_endpoint: None,
                trust_negative_responses: false,
                bind_addr: None,
            },
            NameServerConfig {
                socket_addr: 8.8.8.8:53,
                protocol: Tcp,
                tls_dns_name: None,
                http_endpoint: None,
                trust_negative_responses: false,
                bind_addr: None,
            },
            NameServerConfig {
                socket_addr: 1.0.0.1:53,
                protocol: Udp,
                tls_dns_name: None,
                http_endpoint: None,
                trust_negative_responses: false,
                bind_addr: None,
            },
            NameServerConfig {
                socket_addr: 1.0.0.1:53,
                protocol: Tcp,
                tls_dns_name: None,
                http_endpoint: None,
                trust_negative_responses: false,
                bind_addr: None,
            },
        ],
    },
}
SYSTEM_OPTS: ResolverOpts {
    ndots: 1,
    timeout: 5s,
    attempts: 2,
    check_names: true,
    edns0: false,
    validate: false,
    ip_strategy: Ipv4thenIpv6,
    cache_size: 32,
    use_hosts_file: Auto,
    positive_min_ttl: None,
    negative_min_ttl: None,
    positive_max_ttl: None,
    negative_max_ttl: None,
    num_concurrent_reqs: 2,
    preserve_intermediates: true,
    try_tcp_on_error: false,
    server_ordering_strategy: QueryStatistics,
    recursion_desired: true,
    avoid_local_udp_ports: {},
    os_port_selection: false,
    case_randomization: false,
    trust_anchor: None,
}
```

In my experience with the real application, I'd see this issue in NOERROR A responses; targeting another domain, especially one with wildcard DNS, may help reproduce it.

(I replied via email a week ago but it never went through. Sorry for the delay)


@ibigbug commented on GitHub (Jul 14, 2025):

Having a similar issue here, but it's about an IPv4-mapped IPv6 address: apparently hickory-proto thinks they are different addresses and refuses the response(?)

```
src/index.crates.io-1949cf8c6b5b557f/hickory-proto-0.25.2/src/udp/udp_client_stream.rs:317: ignoring response from [::ffff:1.2.4.8]:53 because it does not match name_server: 1.2.4.8:53
```

@djc commented on GitHub (Aug 13, 2025):

> Having a similar issue here, but it's about an IPv4-mapped IPv6 address: apparently hickory-proto thinks they are different addresses and refuses the response(?)
>
> ```
> src/index.crates.io-1949cf8c6b5b557f/hickory-proto-0.25.2/src/udp/udp_client_stream.rs:317: ignoring response from [::ffff:1.2.4.8]:53 because it does not match name_server: 1.2.4.8:53
> ```

That seems unrelated, please file a separate issue for it -- or better a PR, since this seems straightforward to solve?
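
For reference, the mapped-address case can be handled by normalizing addresses before comparing them. A hypothetical sketch using only `std` (`Ipv6Addr::to_ipv4_mapped`); the `same_server` helper is illustrative and not hickory's actual fix:

```rust
use std::net::{IpAddr, SocketAddr};

/// Compare two socket addresses, treating an IPv4-mapped IPv6 address
/// (::ffff:a.b.c.d) as equal to the bare IPv4 address a.b.c.d.
fn same_server(a: SocketAddr, b: SocketAddr) -> bool {
    fn canonical(ip: IpAddr) -> IpAddr {
        match ip {
            // to_ipv4_mapped returns Some only for ::ffff:0:0/96 addresses,
            // so plain IPv6 addresses are left untouched.
            IpAddr::V6(v6) => v6
                .to_ipv4_mapped()
                .map(IpAddr::V4)
                .unwrap_or(IpAddr::V6(v6)),
            v4 => v4,
        }
    }
    a.port() == b.port() && canonical(a.ip()) == canonical(b.ip())
}

fn main() {
    // The exact pair from the log in the comment above.
    let mapped: SocketAddr = "[::ffff:1.2.4.8]:53".parse().unwrap();
    let plain: SocketAddr = "1.2.4.8:53".parse().unwrap();
    assert!(same_server(mapped, plain));

    // A genuinely different server still fails the comparison.
    let other: SocketAddr = "8.8.4.4:53".parse().unwrap();
    assert!(!same_server(plain, other));
}
```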


@ibigbug commented on GitHub (Aug 15, 2025):

@djc sure https://github.com/hickory-dns/hickory-dns/pull/3207
