[GH-ISSUE #612] Query multiple nameservers in parallel #252

Closed
opened 2026-03-07 23:01:51 +03:00 by kerem · 5 comments

Originally created by @stuartnelson3 on GitHub (Nov 11, 2018).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/612

Is your feature request related to a problem? Please describe.
Querying multiple nameservers in parallel is something done by musl to improve performance, cf. https://wiki.musl-libc.org/functional-differences-from-glibc.html#Name-Resolver/DNS.

My feature request is to allow this functionality.

Describe the solution you'd like
Allow setting parallel lookups in ResolverOpts, and limiting the number of parallel lookups at a time. For example, maybe I only want to have 3 parallel requests at once (à la musl). Related to #606, I could further say to take the first response that isn't a failure, or, if all fail, return one of those failures. This requires some thought, as a response is then only as fast as its slowest failure (or timeout).
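
The "take the first response that isn't a failure, or return one of the failures" policy can be sketched in plain Rust with threads and a channel. Everything here is hypothetical illustration (the `query` stub, server names, and the `race` helper are not part of hickory-dns); it only shows the racing logic being proposed:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for a single nameserver query: Ok(answer) or Err(failure).
// Here only "ns2" is simulated to answer successfully.
fn query(ns: &str) -> Result<String, String> {
    thread::sleep(Duration::from_millis(10));
    if ns == "ns2" {
        Ok(format!("{} -> 93.184.216.34", ns))
    } else {
        Err(format!("{}: SERVFAIL", ns))
    }
}

// Query up to `parallel` nameservers at once; return the first non-failure,
// or, if all fail, one of the failures.
fn race(nameservers: &[&'static str], parallel: usize) -> Result<String, String> {
    let (tx, rx) = mpsc::channel();
    let batch = &nameservers[..parallel.min(nameservers.len())];
    for &ns in batch {
        let tx = tx.clone();
        thread::spawn(move || {
            let _ = tx.send(query(ns));
        });
    }
    drop(tx); // receiver loop ends once every worker has reported

    let mut last_err = Err("no nameservers".to_string());
    for result in rx {
        if result.is_ok() {
            return result; // first success wins; slower queries are ignored
        }
        last_err = result; // keep a failure around in case nothing succeeds
    }
    last_err
}

fn main() {
    println!("{:?}", race(&["ns1", "ns2", "ns3"], 3));
}
```

Note the caveat from the paragraph above: when every server fails, the caller waits for the slowest failure (or timeout) before getting an answer.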

Describe alternatives you've considered
If a SERVFAIL didn't abort trying other nameservers, that would be an alternative, although trying requests serially is typically not ideal.

Additional context
See #606


@bluejekyll commented on GitHub (Nov 11, 2018):

FYI, I’ve got a patch coming for #606. This one, we’ll have to consider options on. I think we might want to have a maximum on the number of parallel queries, and default to 2 or so.


@stuartnelson3 commented on GitHub (Nov 12, 2018):

Makes sense. It might be easier to say max number of nameservers. If we allow more nameservers than parallel attempts, what is the behavior if all N nameservers that get queried fail? Continue trying with the next batch, or fail and start priority-sorting nameservers?

I think musl's policy makes sense: set a max number of nameservers to be queried.


@bluejekyll commented on GitHub (Nov 12, 2018):

Perhaps both a max nameserver attempts option and a max parallel requests option would be best? A max_retries option already exists, so we should consider its relationship to these new options.

I haven't really considered exactly how to implement this yet, but we should be able to pop N nameservers from the set of M in the pool, where N is the number to execute in parallel. Then, in the case where no nameserver returned a successful result, use the existing loop to continue popping N until we've attempted all M, unless we hit the max_attempts as described here.
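
The pop-N-from-M loop described above could look roughly like the sketch below. All names (`lookup`, `query`, the server list) are hypothetical, and the per-batch queries are serialized here for brevity; in the real design each batch would go out concurrently:

```rust
// Hypothetical simulated query: Ok(answer) or Err(failure).
// Only "ns4" is simulated to answer successfully.
fn query(ns: &str) -> Result<String, String> {
    if ns == "ns4" {
        Ok(format!("{} -> 93.184.216.34", ns))
    } else {
        Err(format!("{}: SERVFAIL", ns))
    }
}

// Pop `parallel` (N) nameservers at a time from the pool of M; stop on the
// first batch yielding a success, or once `max_attempts` servers were tried.
fn lookup(pool: &[&str], parallel: usize, max_attempts: usize) -> Result<String, String> {
    let mut last_err = Err("no nameservers in pool".to_string());
    let mut tried = 0;
    for batch in pool.chunks(parallel) {
        if tried >= max_attempts {
            break; // hit the attempt cap before exhausting the pool
        }
        // In the real design this batch would be queried in parallel;
        // it is serialized here to keep the sketch short.
        for ns in batch {
            tried += 1;
            match query(ns) {
                ok @ Ok(_) => return ok, // a success ends the whole lookup
                err => last_err = err,   // remember a failure in case all fail
            }
        }
    }
    last_err
}

fn main() {
    // Four servers, batches of 2: the first batch fails, the second succeeds.
    println!("{:?}", lookup(&["ns1", "ns2", "ns3", "ns4"], 2, 4));
}
```

With `max_attempts = 2` in the same setup, the second batch is never tried and the lookup returns one of the first batch's failures, which is the interaction between the two options being discussed.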


@stuartnelson3 commented on GitHub (Nov 12, 2018):

Sounds reasonable to me


@bluejekyll commented on GitHub (Nov 14, 2018):

PR #615 is the current proposed fix for this; I think it's simple enough to incorporate. Right now it defaults to 2 concurrent requests.
