mirror of
https://github.com/hickory-dns/hickory-dns.git
synced 2026-04-25 11:15:54 +03:00
[GH-ISSUE #137] Panic when running query on SyncClient #363
Originally created by @opensourcegeek on GitHub (May 25, 2017).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/137
I'm using trust_dns as a client to resolve names on two interfaces I have (much like dig @local_interface boo.com). I get a panic on the thread, which I guess is because it cannot connect to a DNS server. I'm just trying to understand how to query correctly using trust_dns.

@bluejekyll commented on GitHub (May 26, 2017):
I will look at this more in depth later, but the SocketAddr expected when constructing the connection is the remote DNS server/resolver address, not the local host's. I'm not sure which you're using there...
@opensourcegeek commented on GitHub (May 26, 2017):
I have a slightly complex setup. There is a local DNS server (which forwards to another DNS server), and the idea is to check whether I can resolve addresses as if the requests were coming from two different VLANs. It works when the local DNS server is running, but if it isn't then I hit this problem.
I was expecting a ClientResult to come back so I could handle the case where the DNS server isn't running, but for some reason I get a panic. Thanks!
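The behavior the reporter expected can be sketched with a stand-in query function (hypothetical, not the actual trust_dns API or its ClientError type): a dead server should surface as an Err value to match on, whereas calling .unwrap() on that same Result is what produces a panic and a backtrace.

```rust
use std::fmt;

// Hypothetical error type standing in for a DNS client error;
// trust_dns's real ClientError differs.
#[derive(Debug)]
struct QueryError(String);

impl fmt::Display for QueryError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "query failed: {}", self.0)
    }
}

// Stand-in for a synchronous query against an unreachable server.
fn query(server: &str) -> Result<Vec<String>, QueryError> {
    Err(QueryError(format!("could not reach {}", server)))
}

fn main() {
    // Matching on the Result handles a dead server gracefully;
    // `query(...).unwrap()` would panic here and print a backtrace.
    match query("10.0.0.53:53") {
        Ok(records) => println!("resolved: {:?}", records),
        Err(e) => eprintln!("lookup failed: {}", e),
    }
}
```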
@bluejekyll commented on GitHub (May 26, 2017):
I'm looking at your stacktrace. It's not obvious to me whose code is panicking, but I think I have some negative tests for that case... so it should be recoverable. I don't see anything obviously wrong with your code, so it is possible you uncovered a bug, but I haven't had any other reports around the timeout logic, and I've definitely seen it in the wild myself without a panic.
Btw, it looks like you're using threads for the timeout. If you're a little more adventurous you could use the ClientFuture which is async and wouldn't require the separate thread.
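The thread-based timeout pattern mentioned above can be sketched with the standard library alone. The sleeping closure is a placeholder for a blocking synchronous query; the real alternative suggested here, ClientFuture, avoids the extra thread entirely.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Run the blocking work (a stand-in for a synchronous DNS query)
    // on its own thread so the caller can give up after a deadline.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50)); // simulate network latency
        let _ = tx.send("answer");
    });

    // recv_timeout turns "worker still running past the deadline"
    // into a recoverable error instead of blocking forever.
    match rx.recv_timeout(Duration::from_millis(500)) {
        Ok(answer) => println!("got: {}", answer),
        Err(mpsc::RecvTimeoutError::Timeout) => eprintln!("query timed out"),
        Err(mpsc::RecvTimeoutError::Disconnected) => eprintln!("worker died"),
    }
}
```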
@bluejekyll commented on GitHub (May 26, 2017):
I just reviewed my tests. I have timeout tests on ClientFuture, but not on SyncClient. I'll add one there and see what I can uncover.
@bluejekyll commented on GitHub (May 26, 2017):
Ok, so I don't see any issues with the new tests I've added. See #138 for validation.
This did help uncover a different issue, though: the initial TCP connection isn't abiding by the timeout. I will look into a fix for that.
@opensourcegeek commented on GitHub (May 26, 2017):
Thanks - it seems to happen only when I have the outer loop; if I send the same address directly it doesn't seem to panic. According to the backtrace it is on the line where I invoke client.query. I'll try to isolate it and let you know.

@opensourcegeek commented on GitHub (May 26, 2017):
Below is code with most of the threads/channels stripped out. I then noticed that even though it looks like there's been a panic, it still prints "Took 5" and "Took 10" - the same goes for the threaded runner. When I commented out the println! in the Err branch, the panic wasn't printed to the screen. I then realized it is sending Error(Timeout...) along with the backtrace! So I guess it's been working fine all along; I just saw the whole backtrace and thought the thread was panicking. Thanks again for looking into it, and apologies for wasting your time. I'll look into ClientFuture as well.

@bluejekyll commented on GitHub (May 26, 2017):
No problem! You did help me notice that there is a bug in the TCP connect with timeout logic that I need to fix. So, it's not a waste of time.