[GH-ISSUE #483] Resolver binary #203

Open
opened 2026-03-07 22:46:39 +03:00 by kerem · 12 comments

Originally created by @msehnout on GitHub (May 22, 2018).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/483

Hi, as I mentioned on the `users.rust-lang.org` forum, I am interested in a standalone resolver built on top of Trust DNS. Would you like it to be a separate binary, or just a configuration option in the `named` binary?

I was looking at the code, and as far as I can understand it, the resolver could be implemented using the `RequestHandler` trait. The `MessageRequest` structure contains the name, type, and options needed for the `LookupFuture` from the Resolver crate. This way, the catalog could be replaced with a resolver.

Would you be interested in a PR, if I tried to implement this?
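To make the idea concrete, here is a minimal self-contained sketch of the catalog-replacement approach. All names here (`Request`, `Response`, `RequestHandler`, `ForwardHandler`) are illustrative stand-ins, not the actual trust-dns API: a handler whose local catalog lookup is replaced by a forwarding callback.

```rust
// Hypothetical stand-ins for the server's request-handling types;
// the real trust-dns traits are more involved than this.
struct Request {
    name: String,
    query_type: &'static str,
}

struct Response {
    answers: Vec<String>,
}

trait RequestHandler {
    fn handle_request(&self, req: &Request) -> Response;
}

// Instead of looking the name up in a local zone catalog, forward it
// to an upstream resolver (stubbed out here as a closure).
struct ForwardHandler<F: Fn(&str, &str) -> Vec<String>> {
    lookup: F,
}

impl<F: Fn(&str, &str) -> Vec<String>> RequestHandler for ForwardHandler<F> {
    fn handle_request(&self, req: &Request) -> Response {
        Response {
            answers: (self.lookup)(&req.name, req.query_type),
        }
    }
}

fn main() {
    let handler = ForwardHandler {
        lookup: |name: &str, qtype: &str| {
            vec![format!("{} {} 93.184.216.34", name, qtype)]
        },
    };
    let resp = handler.handle_request(&Request {
        name: "example.com.".into(),
        query_type: "A",
    });
    assert_eq!(resp.answers.len(), 1);
    println!("{}", resp.answers[0]);
}
```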


@bluejekyll commented on GitHub (May 22, 2018):

> a separate binary or just a configuration option in the named binary?

This is a good question. I think we may want both. The `resolver` binary would be a simple design, whereas `named` would be a little more complex. I think both would be built on top of the common server components. Making this new `resolver` may require some refactoring of the server crate, but it should be easy enough.

> RequestHandler trait. The MessageRequest structure contains a name, type, and options that are needed for the LookupFuture from the Resolver crate. This way, the catalog could be replaced with a resolver.

For `named` we'll want to implement this as a configurable `Authority`, where we'd potentially want to register many forwarding agents for different zones, etc. #55 is the original forwarding feature for this. See also `ZoneType::Forward`: https://github.com/bluejekyll/trust-dns/blob/master/server/src/authority/mod.rs#L34.

For the `resolver` it should be easily configurable, I think: whereas `named` would allow separate Resolver runtimes for each forwarding zone, the `resolver` would just have a single configuration and forward all zones. This could replace the `Catalog`, as you suggest. I think all the type interfaces are generic enough to support both use cases.

There are some runtime complexities to figure out in terms of interaction with DHCP and `/etc/resolv.conf`, since that file gets replaced during DHCP configuration. I don't have an answer to that yet.

My recommendation here: start with what you're suggesting with the `Catalog`, as a separate, easily configurable `resolver` binary. Then, in the future, we'll take that integration work and build out the forwarders in the `named` binary. Hopefully all of this makes sense. I'm really excited for this, so thank you for picking this up!


@msehnout commented on GitHub (May 23, 2018):

> There are some runtime complexities to figure out in terms of interaction with DHCP and /etc/resolv.conf where that file would be replaced during DHCP configuration. I don't have an answer to that

I think this needs to be solved for each OS separately. For example, in Fedora, DHCP is handled by NetworkManager in the default configuration, but you can acquire the information about DNS servers via NetworkManager hooks or via D-Bus, and then rewrite the `/etc/resolv.conf` file. I am experimenting with automatic DNS configuration right now (https://github.com/msehnout/client-side-dnssec/tree/master/config-dns), but it is complicated and I often have to debug my Internet connection.

Anyway, I will try to implement something useful and submit a PR. Right now I am trying to understand the relationship between the resolver and the client crates, and I'm going through your blog posts.
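For reference, the `resolv.conf` interaction boils down to reading `nameserver` directives out of the file that DHCP/NetworkManager rewrites. A minimal illustrative parser (not trust-dns code; the real resolver crate has its own system-config handling) might look like:

```rust
// Extract upstream nameserver addresses from resolv.conf-style text.
// Comment lines start with '#' or ';'; other directives (search,
// options, ...) are ignored in this sketch.
fn parse_nameservers(conf: &str) -> Vec<String> {
    conf.lines()
        .map(str::trim)
        .filter(|l| !l.starts_with('#') && !l.starts_with(';'))
        .filter_map(|l| l.strip_prefix("nameserver"))
        .map(|rest| rest.trim().to_string())
        .filter(|ip| !ip.is_empty())
        .collect()
}

fn main() {
    let conf = "# Generated by NetworkManager\n\
                search example.org\n\
                nameserver 192.0.2.53\n\
                nameserver 2001:db8::53\n";
    assert_eq!(parse_nameservers(conf), vec!["192.0.2.53", "2001:db8::53"]);
}
```

In a real daemon, this would be paired with a file watcher (or a D-Bus subscription) so the upstream set refreshes when NetworkManager rewrites the file.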


@bluejekyll commented on GitHub (May 23, 2018):

> I think this needs to be solved for each OS separately.

Yes, I concur. I was mainly pointing out that I have neither a strategy nor enough knowledge right now to offer direction.

> Right now I am trying to understand the relation between the resolver and the client crates and go through your blog posts.

Originally, the Resolver was a thin wrapper on top of the Client crate. When @briansmith got involved in the project, there was some concern that it had too much going on in it, so we worked together to extract what became the Proto crate, and then reimplemented the Resolver on top of the Proto crate. The server hasn't gotten as much attention while the work on the Resolver was being done. Currently, the reason the server has a dependency on the Client crate is that the client has all the DNSSEC signing logic, whereas the Resolver only shares the DNSSEC validation code.

My blog posts don’t generally get detailed enough to describe the design of the libraries, so feel free to reach out with any questions.


@bluejekyll commented on GitHub (May 28, 2018):

@msehnout how's this going? need any help?


@msehnout commented on GitHub (May 28, 2018):

Well, I tried to glue together the `ServerFuture` from the `named` binary and the `NameServerPool` from the resolver crate, but I found out it was not the best idea... As far as I can understand `RequestHandler::handle_request`, it is supposed to return immediately, which is not ideal for the Resolver, as it needs to contact the upstream resolver first, wait for the response, and only after that call the `response_handler`.
I store my progress here: https://github.com/msehnout/trust-dns/commit/a5a3c7b12cfc48122c0b2dc8bfbb3beb266ffd80

Anyway, now I am looking at `ResolverFuture` and how it could theoretically be used instead of the `ServerFuture`, but I'm still confused by all the layers of abstraction, though I think it is getting better :-D
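The timing mismatch described above can be sketched without any trust-dns or tokio types at all: `handle_request` returns immediately, and the answer is delivered through a channel only once the (simulated) upstream lookup completes. Plain std threads stand in for futures here; this is an illustration of the shape of the problem, not the project's actual design.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Returns right away; the "upstream lookup" runs in the background and
// delivers the answer through the channel once it completes.
fn handle_request(name: String, respond: mpsc::Sender<String>) {
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10)); // simulated upstream RTT
        let answer = format!("{} A 93.184.216.34", name);
        respond.send(answer).unwrap();
    });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    handle_request("example.com.".into(), tx); // does not block
    let answer = rx.recv().unwrap(); // answer arrives asynchronously
    assert!(answer.starts_with("example.com."));
    println!("{}", answer);
}
```

With futures, the channel would be replaced by the response handler itself being driven as (or from) a future, which is exactly the `ResponseHandler` change discussed below in the thread.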


@bluejekyll commented on GitHub (May 28, 2018):

I think I understand. We're probably going to need to change the `ResponseHandler` into a future, though there may be some ownership issues around the results if we do that... I'll look more into your code when I have a moment.

There are definitely a lot of levels of abstraction... async is not always easy ;)


@msehnout commented on GitHub (May 28, 2018):

Looking at PR #487, I should probably base my work on it. It seems to be almost ready to merge, right?


@bluejekyll commented on GitHub (May 29, 2018):

This was merged yesterday. I'm not sure it will help your use case much right now.


@msehnout commented on GitHub (Jun 1, 2018):

Sorry for my inactivity lately, I have been busy finishing my studies, but I am still planning on implementing this.


@bluejekyll commented on GitHub (Jun 1, 2018):

No worries. Based on your experience, I think there are a few things that need to be generalized and changed for this to properly work. It might require a lot of reworking of the server code.

I'm working on a couple of changes now that might help move us in that direction.


@kpcyrd commented on GitHub (Jul 14, 2018):

Coming from #531, changing `ResponseHandler` to accept a future should be sufficient for my use case. In theory it should be possible to run `AsyncResolver` and `ServerFuture` inside the same core (from a select(3) point of view; I'm not that familiar with tokio internals).

@bluejekyll could you elaborate on the ownership issues that this might cause?


@bluejekyll commented on GitHub (Jul 14, 2018):

I think it depends on how far we want to go with cloning/performance. I have #492 open (it needs to be redone, as it's drifted too far from master), where I'm attempting to stream records directly from the authority. As @msehnout was discovering, there are a lot of things to refactor, and it's not simple. The ownership issues are more related to the borrowing being done from the `Authority`, as that may make implementing other types in the `Authority` a little complicated.

On top of that, the Resolver right now passes back `Lookup`s, and its cache is keyed by `Query` rather than `Record`, so we may need to generalize the Resolver a bit to fit into this context.
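The cache-granularity point can be illustrated with stand-in types (`Query` and `QueryKeyedCache` below are illustrative, not the real trust-dns structures): a query-keyed cache stores the whole answer set for a (name, type) pair, so individual records are not addressable on their own, and a different record type for the same name is a cache miss.

```rust
use std::collections::HashMap;

// Illustrative cache key: name plus record type, mirroring the idea of
// caching per Query rather than per Record.
#[derive(Hash, PartialEq, Eq, Clone, Debug)]
struct Query {
    name: String,
    record_type: &'static str,
}

// One entry per query; the value is the complete answer set.
struct QueryKeyedCache {
    entries: HashMap<Query, Vec<String>>,
}

impl QueryKeyedCache {
    fn new() -> Self {
        QueryKeyedCache { entries: HashMap::new() }
    }

    fn insert(&mut self, q: Query, answers: Vec<String>) {
        self.entries.insert(q, answers);
    }

    // A hit requires the exact same (name, type) pair.
    fn get(&self, q: &Query) -> Option<&Vec<String>> {
        self.entries.get(q)
    }
}

fn main() {
    let mut cache = QueryKeyedCache::new();
    let a = Query { name: "example.com.".into(), record_type: "A" };
    cache.insert(a.clone(), vec!["93.184.216.34".into()]);
    assert!(cache.get(&a).is_some());

    // Same name, different type: a different key, so it misses.
    let aaaa = Query { name: "example.com.".into(), record_type: "AAAA" };
    assert!(cache.get(&aaaa).is_none());
}
```

A record-keyed authority, by contrast, would need to address and serve individual records, which is why generalizing the Resolver for the server context is non-trivial.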
