mirror of
https://github.com/hickory-dns/hickory-dns.git
synced 2026-04-25 03:05:51 +03:00
[GH-ISSUE #483] Resolver binary #203
Originally created by @msehnout on GitHub (May 22, 2018).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/483
Hi, as I mentioned on the users.rust-lang.org forum, I am interested in the standalone resolver built on top of Trust DNS. Would you like it to be a separate binary or just a configuration option in the `named` binary?

I was looking at the code, and as far as I can understand it, the resolver could be implemented using the `RequestHandler` trait. The `MessageRequest` structure contains a name, type, and options that are needed for the `LookupFuture` from the Resolver crate. This way, the catalog could be replaced with a resolver.

Would you be interested in a PR, if I tried to implement this?
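The idea of swapping the catalog for a resolver behind a common request-handling trait can be sketched as below. This is a minimal, hypothetical illustration: the trait and types here (`RequestHandler`, `Catalog`, `ForwardingResolver`, string-based queries) are simplified stand-ins, not the real trust-dns API.

```rust
use std::collections::HashMap;

/// Simplified stand-in for the server's request-handling trait.
trait RequestHandler {
    /// Answer a query for `name`, or `None` if nothing is known.
    fn handle_request(&self, name: &str) -> Option<String>;
}

/// Authoritative lookup from locally configured zone data.
struct Catalog {
    records: HashMap<String, String>,
}

impl RequestHandler for Catalog {
    fn handle_request(&self, name: &str) -> Option<String> {
        self.records.get(name).cloned()
    }
}

/// A resolver that would forward the query upstream; simulated here.
struct ForwardingResolver;

impl RequestHandler for ForwardingResolver {
    fn handle_request(&self, name: &str) -> Option<String> {
        // A real implementation would send the query to an upstream
        // server and await the answer; we fake a fixed response.
        Some(format!("forwarded answer for {name}"))
    }
}

/// The server front end only sees the trait, so either implementation
/// can sit behind it.
fn serve(handler: &dyn RequestHandler, name: &str) -> String {
    handler
        .handle_request(name)
        .unwrap_or_else(|| format!("NXDOMAIN {name}"))
}

fn main() {
    let mut records = HashMap::new();
    records.insert("example.com.".to_string(), "93.184.216.34".to_string());
    let catalog = Catalog { records };

    println!("{}", serve(&catalog, "example.com."));
    println!("{}", serve(&ForwardingResolver, "example.org."));
}
```

The point of the sketch is only the shape: if the server is generic over the handler trait, "replace the catalog with a resolver" is a one-line change at the call site.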
@bluejekyll commented on GitHub (May 22, 2018):
This is a good question. I think we may want both. The `resolver` binary would be a simple design, whereas `named` would be a little more complex. I think both would be built on top of the common server components. Making this new `resolver` may require some refactoring of the server crate, but it should be easy enough.

For `named`, we'll want to implement this as a configurable `Authority`, where we'd potentially register many forwarding agents for different zones, etc. #55 is the original forwarding feature for this. And there's `ZoneType::Forward`, https://github.com/bluejekyll/trust-dns/blob/master/server/src/authority/mod.rs#L34.

For the `resolver`, it should be easily configurable, I think: where `named` would allow separate Resolver runtimes for each forwarding zone, the `resolver` would just have a single configuration and forward all zones. This could replace the `Catalog`, as you suggest. I think all the type interfaces are generic enough to support both use cases.

There are some runtime complexities to figure out in terms of interaction with DHCP and /etc/resolv.conf, where that file would be replaced during DHCP configuration. I don't have an answer to that.

My recommendation here: start with what you're suggesting with the `Catalog`, as a separate, easily configurable `resolver` binary. Then, in the future, we'll take that integration work and build out the forwarders in the `named` binary. Hopefully all of this makes sense. I'm really excited for this, so thank you for picking this up!

@msehnout commented on GitHub (May 23, 2018):
I think this needs to be solved for each OS separately. For example in Fedora, DHCP is handled by NetworkManager in the default configuration, but you can acquire the information about DNS servers via NetworkManager hooks, or via D-Bus and then rewrite the /etc/resolv.conf file. I am experimenting with automatic DNS configuration right now (https://github.com/msehnout/client-side-dnssec/tree/master/config-dns), but it is complicated and I have to debug my Internet connection often.
Anyway, I will try to implement something useful and submit a PR. Right now I am trying to understand the relation between the resolver and the client crates and go through your blog posts.
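As an aside, the per-zone dispatch @bluejekyll describes above (many forwarders for different zones in `named`, a single catch-all in `resolver`) amounts to longest-suffix matching on the query name. A small sketch, with hypothetical names and absolute domain names (trailing dot), assuming nothing about the real trust-dns config:

```rust
/// Pick the forwarder whose zone is the longest (most specific) suffix
/// of `query_name`. Zones and names are absolute, e.g. "example.com.".
/// The root zone "." matches every name.
fn pick_zone<'a>(zones: &'a [&'a str], query_name: &str) -> Option<&'a str> {
    zones
        .iter()
        .filter(|zone| {
            **zone == "."
                || query_name == **zone
                || query_name.ends_with(&format!(".{zone}"))
        })
        // Longest matching suffix wins, so subzones shadow parent zones.
        .max_by_key(|zone| zone.len())
        .copied()
}

fn main() {
    // named-style: several forward zones plus a catch-all root.
    let zones = ["corp.example.com.", "example.com.", "."];
    assert_eq!(pick_zone(&zones, "db.corp.example.com."), Some("corp.example.com."));
    assert_eq!(pick_zone(&zones, "www.example.com."), Some("example.com."));
    assert_eq!(pick_zone(&zones, "rust-lang.org."), Some("."));

    // resolver-style: one configuration, all names forwarded.
    assert_eq!(pick_zone(&["."], "anything.at.all."), Some("."));
    println!("zone dispatch ok");
}
```

The standalone `resolver` is then just the degenerate single-zone case of the same mechanism.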
@bluejekyll commented on GitHub (May 23, 2018):
Yes, I concur. I was mainly pointing out that I have no strategy, or enough knowledge right now, to offer direction.
Originally, the Resolver was a thin wrapper on top of the Client crate. When @briansmith got involved in the project, there was some concern that it had too much going on in it. So we worked together to extract what became the Proto crate, and then reimplemented the Resolver on top of the Proto crate. The server hasn't gotten as much attention while the work on the Resolver was being done. Currently, the reason the server has a dependency on the Client crate is that the client has all the DNSSEC signing logic, whereas the Resolver only shares the DNSSEC validation code.
My blog posts don’t generally get detailed enough to describe the design of the libraries, so feel free to reach out with any questions.
@bluejekyll commented on GitHub (May 28, 2018):
@msehnout how's this going? need any help?
@msehnout commented on GitHub (May 28, 2018):
Well, I tried to glue together the `ServerFuture` from the `named` binary and the `NameServerPool` from the resolver crate, but I found out it was not the best idea... As far as I can understand `RequestHandler::handle_request`, it is supposed to return immediately, which is not ideal for the resolver, as it needs to contact the upstream resolver first, wait for the response, and only after that call the `response_handler`.
I store my progress here: github.com/msehnout/trust-dns@a5a3c7b12c

Anyway, now I am looking at `ResolverFuture` and how it could theoretically be used instead of the `ServerFuture`, but I'm still confused by all the layers of abstraction, though I think it is getting better :-D.
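The mismatch described above (a handler that must return immediately versus a forwarder that has to wait for an upstream answer) can be illustrated with a callback-style response handle: the handler hands the response path to a background task and returns at once. This is a hypothetical sketch using std threads and channels, not tokio or the real trust-dns `ResponseHandler`:

```rust
use std::sync::mpsc;
use std::thread;

/// Simplified stand-in for a response hook: instead of `handle_request`
/// returning an answer, it is given something to invoke once an answer
/// exists, so it can return immediately.
trait ResponseHandle: Send {
    fn send_response(self: Box<Self>, answer: String);
}

// Any sendable one-shot closure can serve as a response handle.
impl<F: FnOnce(String) + Send + 'static> ResponseHandle for F {
    fn send_response(self: Box<Self>, answer: String) {
        self(answer)
    }
}

/// A forwarding handler: returns at once, and answers later from a
/// worker thread that stands in for the upstream round trip.
fn handle_request(query: String, respond: Box<dyn ResponseHandle>) {
    thread::spawn(move || {
        // Pretend this blocks on the upstream server.
        let answer = format!("upstream answer for {query}");
        respond.send_response(answer);
    });
    // handle_request has already returned; the response arrives later.
}

fn main() {
    let (tx, rx) = mpsc::channel();
    handle_request(
        "example.com.".to_string(),
        Box::new(move |answer: String| tx.send(answer).unwrap()),
    );
    println!("{}", rx.recv().unwrap());
}
```

In an async design the same shape falls out naturally if `handle_request` returns a future (or takes a future-aware response handle) instead of spawning a thread, which is what the discussion below converges on.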
@bluejekyll commented on GitHub (May 28, 2018):
I think I understand. We're probably going to need to change the `ResponseHandler` into a future, though there may be some ownership issues around the results if we do that... I'll look more into your code when I have a moment.
There are definitely a lot of levels of abstraction... async is not always easy ;)
@msehnout commented on GitHub (May 28, 2018):
Looking at PR #487, I should probably base my work on it. It seems to be almost ready to merge, right?
@bluejekyll commented on GitHub (May 29, 2018):
This was merged yesterday. Not sure that it will help your use case much right now.
@msehnout commented on GitHub (Jun 1, 2018):
Sorry for my inactivity lately, I have been busy finishing my studies, but I am still planning on implementing this.
@bluejekyll commented on GitHub (Jun 1, 2018):
No worries. Based on your experience I think there are a few things that need to be generalized and changed for this to properly work. It might require a lot of reworking of the server code.
I’m working on a couple of changes now that might help move us in that direction.
@kpcyrd commented on GitHub (Jul 14, 2018):
Coming from #531, changing `ResponseHandler` to accept a future should be sufficient for my use case. In theory it should be possible to run `AsyncResolver` and `ServerFuture` inside the same core (from a select(3) point of view; I'm not that familiar with tokio internals).

@bluejekyll could you elaborate on the ownership issues that this might cause?
@bluejekyll commented on GitHub (Jul 14, 2018):
I think it depends on how far we want to go with cloning/performance. I have #492 open, which needs to be redone as it's drifted too far off of master, where I'm attempting to stream records directly from the authority. I think, as @msehnout was discovering, there are a lot of things to refactor, and it's not simple. The ownership issues are more related to the borrowing that's being done from the `Authority`, as that may make implementing other types in the `Authority` a little complicated.
On top of that, the Resolver right now passes back `Lookup`s, and its cache is keyed on the `Query` rather than on individual `Record`s, so we may need to generalize the Resolver a bit to fit into this context.
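The cache distinction in that last point can be made concrete with a toy sketch (types and names hypothetical, not the trust-dns `Lookup` or cache API): a query-keyed cache stores a whole answer set per (name, record type) question, so a hit requires an exact question match, unlike a per-record store that a server-side `Authority` might stream from.

```rust
use std::collections::HashMap;

/// Toy query key: name plus record type.
#[derive(Clone, PartialEq, Eq, Hash)]
struct Query {
    name: String,
    rtype: &'static str, // e.g. "A", "AAAA"
}

/// A cached lookup is the complete answer set for one query.
struct LookupCache {
    entries: HashMap<Query, Vec<String>>,
}

impl LookupCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn insert(&mut self, query: Query, records: Vec<String>) {
        self.entries.insert(query, records);
    }

    /// Hit only on an exact (name, type) match: an "A" entry for a
    /// name says nothing about that name's "AAAA" records.
    fn lookup(&self, query: &Query) -> Option<&[String]> {
        self.entries.get(query).map(|v| v.as_slice())
    }
}

fn main() {
    let mut cache = LookupCache::new();
    let q = Query { name: "example.com.".into(), rtype: "A" };
    cache.insert(q.clone(), vec!["93.184.216.34".into()]);

    // Same name, different type: a miss in a query-keyed cache.
    let q6 = Query { name: "example.com.".into(), rtype: "AAAA" };
    assert!(cache.lookup(&q).is_some());
    assert!(cache.lookup(&q6).is_none());
    println!("query-keyed cache ok");
}
```

Fitting such a cache behind an interface that expects to iterate individual records is the generalization the comment is pointing at.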