mirror of
https://github.com/hickory-dns/hickory-dns.git
synced 2026-04-25 03:05:51 +03:00
[GH-ISSUE #185] Resolver: mpsc error #383
Originally created by @NfNitLoop on GitHub (Sep 12, 2017).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/185
I'm getting this error when trying to use `Resolver::lookup()`:

I'm not creating any mpsc channels, so I'm pretty sure that's all on the Resolver. :p

Here's the code that's doing the lookup. It's called from an outer loop that just reads a "domain" (a line from stdin) and passes it to this function:
@bluejekyll commented on GitHub (Sep 12, 2017):
Yikes! Yes, there is an mpsc channel internal to the Client for tracking messages in and out. Is this on the first request or on subsequent requests? Also, what platform are you on: Linux, Windows, or macOS?

Can you enable `debug!` logging for the library and see what's going on there?
Btw, you don't "need" to put in the trailing dot; that just forces it to be an FQDN, so that only one query is attempted. Also, if you upgrade to the 0.5.0 version, I added a new method for this in particular: https://docs.rs/trust-dns-resolver/0.5.0/trust_dns_resolver/struct.Resolver.html#method.ipv4_lookup
@bluejekyll commented on GitHub (Sep 12, 2017):
Are you configuring this with the system's resolv.conf? By default that adds name server configs for both UDP and TCP to the Resolver. I wonder if you've gotten unlucky with it trying TCP first and that not working (i.e. the remote authority might not have TCP enabled). The resolver retries twice by default; this can be increased to see if that is the issue.
Another option would be to explicitly construct the `NameServerConfig` and create a `ResolverConfig` with that information, specifying only the protocol you want to use, e.g. UDP.
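The UDP-only setup suggested above might look roughly like this. This is a hedged sketch against the 0.5-era trust-dns-resolver API; struct fields, method signatures, and crate names have changed in later (Hickory) releases, and the 8.8.8.8 address is purely a placeholder:

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

use trust_dns_resolver::config::{NameServerConfig, Protocol, ResolverConfig, ResolverOpts};
use trust_dns_resolver::Resolver;

fn main() {
    // Start from an empty config so only the name server we add is used,
    // rather than the UDP + TCP pairs read from resolv.conf.
    let mut config = ResolverConfig::new();
    config.add_name_server(NameServerConfig {
        socket_addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::new(8, 8, 8, 8)), 53),
        protocol: Protocol::Udp, // UDP only: no TCP entry is registered
    });

    let mut opts = ResolverOpts::default();
    opts.attempts = 2; // leave a retry so a failed send gets a second chance

    let mut resolver = Resolver::new(config, opts).expect("resolver construction failed");
    let response = resolver.lookup_ip("example.com.").expect("lookup failed");
    for ip in response.iter() {
        println!("{}", ip);
    }
}
```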
@NfNitLoop commented on GitHub (Sep 12, 2017):
Hmm, I'm not able to reproduce it now. The network I'm on is a bit flaky; maybe it was just having issues? But it seems like a flaky network shouldn't cause channel errors.

Yep!

Ah, good to know. I'd prefer UDP-only; maybe I should manually configure it.
I've got the attempts down to 1.
It was often on subsequent requests, after having left the Resolver unused for a while. Could it be that it was using a TCP connection, and the connection was getting dropped in the meantime?
@bluejekyll commented on GitHub (Sep 12, 2017):
Yeah, this might just be poor Error messaging, i.e. I might be losing the original Error in translation. It's something I've been meaning to go clean up...
Yes, definitely do that. I think I will also add some logic to prefer UDP over TCP. This is related to another issue I've filed, #178, for promoting to TCP when responses are truncated (generally only on large packets).
resolv.conf doesn't (as far as I'm aware) have an option for disabling TCP... But I should definitely err on the side of using UDP over TCP.

Yeah, the logic right now is to work through all the Nameservers in the pool and try to determine the best one. It's weighted at the moment toward ones that haven't yet been tried, to make sure to balance the requests. So with an attempt count of 1, if it hits one that hasn't been tried before, it could fail. Any reason you don't want a retry in there?
Possibly. There could be a bug there, so I'll see about building a test case for that specific event. The logic should cause connections that fail to be closed, and then reopened. I believe I have test coverage for that, so I suspect that if that's what's happening, then the issue is that since only one attempt is being made, it won't have a chance to reconnect (for TCP) if the connection was dropped.
@NfNitLoop commented on GitHub (Sep 12, 2017):
The docs weren't 100% clear, so I assumed that the timeout duration was per attempt, and I didn't want to double that time. But to be honest, it's because I'm trying to first do a like-for-like rewrite of the C tool I'm hoping to replace, and it didn't do retries. (The ability to easily update the `attempt` config in the future is a feature I'm going to point out in Rust/trust-dns's favor, though.) 😄

@bluejekyll commented on GitHub (Sep 12, 2017):

I could definitely use help on the documentation side of things... A lot of those options are straight out of the resolv.conf definitions from POSIX systems, so some of the wording could definitely be more accurate.

@bluejekyll commented on GitHub (Sep 20, 2017):
Part of this discussion raised the issue of TCP being used instead of UDP. This was resolved in #189.
@bluejekyll commented on GitHub (Jun 12, 2018):
Closing as out-of-date