[GH-ISSUE #347] Do not resolve publically-routable DNS names to private/loopback IP addresses #454

Closed
opened 2026-03-15 22:33:45 +03:00 by kerem · 47 comments
Owner

Originally created by @DemiMarie on GitHub (Feb 18, 2018).
Original GitHub issue: https://github.com/hickory-dns/hickory-dns/issues/347

A public DNS name should never resolve to a private IP address. trust-dns should enforce this.

kerem 2026-03-15 22:33:45 +03:00

@bluejekyll commented on GitHub (Feb 18, 2018):

This will require an option on the Server for forwarded zones, specifically forwarding on general resolution queries to the internet.

We can have local addresses be ignored by default, and force zone authorities or other forwarders to explicitly allow local addresses in records.

@bmisiak commented on GitHub (Mar 4, 2018):

This sort of made news this week: https://arstechnica.com/information-technology/2018/03/hackers-exploiting-rtorrent-to-install-unix-coin-miner-have-netted-4k-so-far/

@bluejekyll commented on GitHub (Mar 4, 2018):

For rebinding attacks, in the Resolver we could protect 127.0.0.1 and ::1 to only be valid for resolution from the hosts file and reject it from all remote queries.

This will only work for 127/8 and ::1 networks though, as we can’t know 192/8, 10/8 or 172/8 (I’m being overly general in those cidrs) are valid resolutions from an interior DNS configuration.

Not sure why I didn’t think about the localhost option for the Resolver separately.
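The loopback protection described above could be sketched as a post-processing pass over answers that arrived from remote servers, using std's stable `is_loopback` check. The function name and shape below are illustrative only, not the actual trust-dns API:

```rust
use std::net::IpAddr;

// Illustrative sketch, not trust-dns API: drop loopback addresses
// (127.0.0.0/8 and ::1) from answers that arrived over the network.
// Answers sourced from the hosts file would bypass this filter.
fn filter_remote_answers(answers: Vec<IpAddr>) -> Vec<IpAddr> {
    answers.into_iter().filter(|ip| !ip.is_loopback()).collect()
}
```

As discussed later in the thread, returning an empty answer versus an error for a fully filtered response is a separate design decision.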


@197g commented on GitHub (Mar 5, 2018):

Is there knowledge about the usage of publishing local addresses in the DNS? Because off the top of my head, I could think of 2 legitimate use cases:

  1. ~~An ISP providing a readable (and memorable) name entry which resolves to their customers' router.~~ This is a counter-example for private blocks
  2. A company setting up testing.company.example to resolve to their internal network for integration tests.

I don't want to argue if any of the above are smart but at least they are feasible and would break should this change be adopted rashly. Note that in both of the cases, the operating party can not necessarily act as a zone authority.


@partim commented on GitHub (Mar 5, 2018):

There is at least one application that uses 127.0.0.1/8 addresses in public A records: spam blacklists. Some of them are published via a hash of the domain name as a subdomain and encoding the status of the domain as an A record using the 127.0.0.1/8 range.

In addition, it isn’t entirely unlikely that someone will use DNS instead of host files in a virtual machine setup for inter-VM communication.
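As a hedged illustration of the DNSBL convention mentioned above: listing status is conventionally encoded in an A record inside 127.0.0.0/8, often in the last octet (e.g. `127.0.0.2` commonly means "listed"). The exact return codes vary per blacklist; this sketch is not tied to any particular service:

```rust
use std::net::Ipv4Addr;

// Illustrative: interpret a DNSBL answer. An address in 127.0.0.0/8
// signals a listing, with the status code carried in the last octet;
// anything else is treated as "not a DNSBL-style answer".
fn dnsbl_status(answer: Ipv4Addr) -> Option<u8> {
    if answer.octets()[0] == 127 {
        Some(answer.octets()[3])
    } else {
        None
    }
}
```

This is exactly the use case that a blanket "never return loopback" rule would break if applied to DNSBL lookups.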


@ssokolow commented on GitHub (Mar 5, 2018):

I also have a system set up where a Python cronscript grabs the [MVPS HOSTS](http://winhelp2002.mvps.org/hosts.htm) file once a month, applies a sanitizer to ensure it's free of non-`127.0.0.1` records, and then feeds it to the copy of [dnsmasq](https://en.wikipedia.org/wiki/Dnsmasq) on my router to override externally-specified DNS records for a certain minimum level of ad- and malware- protection at the LAN level.

I wouldn't want trust-dns to interfere with that.


@pfigel commented on GitHub (Mar 5, 2018):

> Is there knowledge about the usage of publishing local addresses in the DNS?

Another example would be Plex' HTTPS implementation, see https://support.plex.tv/articles/206225077-how-to-use-secure-server-connections/#dnsrebinding

It seems most affected resolvers allow domains to be whitelisted as a workaround.


@bluejekyll commented on GitHub (Mar 5, 2018):

Thank you for the great responses. I'll clarify this and respond to each and I think I'll change the issue title. What I've implemented only deals with loopback interfaces, 127/8 and ::1, and not the private networks in the original. We can open a separate issue for the other private networks.

@HeroicKatora :

> 1. An ISP providing a readable (and memorable) name entry which resolves to their customers' router.

This should never be on the loopback interface, i.e. in this case we're talking about the private IP space of an internal network, i.e. 192/8, 10/8 or 172/8. This change is only talking about 127/8 and ::1.

> 2. A company setting up testing.company.example to resolve to their internal network for integration tests

Similar comment. I don't expect them to return loopback addresses, but I could understand that they might.


@partim :

> Some of them are published via a hash of the domain name as a subdomain and encoding the status of the domain as an A record using the 127.0.0.1/8 range.

Interesting. I haven't started looking at how to support blacklists yet, though I expect that we'd support that through an out-of-band request, not necessarily a resolution. So I don't see that as an issue at the moment.

> In addition, it isn’t entirely unlikely that someone will use DNS instead of host files in a virtual machine setup for inter-VM communication.

Generally these bridge networks should not be setup over loopback interfaces. I often see them on a 172 network for example.


@ssokolow :

> feeds it to the copy of dnsmasq on my router to override externally-specified DNS records for a certain minimum level of ad- and malware- protection at the LAN level.

I don't think this change (localhost only) will interfere with that. In essence the change will just not return A or AAAA records that have 127/8 or ::1 addresses in their RData responses. Getting nothing seems safer than getting something routing to localhost unintentionally.


@rebasar commented on GitHub (Mar 5, 2018):

I know that there are some TLDs (I don't remember exactly which ones, but it should not be so hard to find) that return 127.0.0.1 when they cannot resolve a hostname. Making that a failure would benefit everyone by preventing this behavior; however, I think it would also make it harder to tell whether the failure comes from registry behavior or some other cause.


@bluejekyll commented on GitHub (Mar 5, 2018):

I’m open to making this a failure. My concern was that it might be inconvenient, but if that were something preferable we can do that.


@ssokolow commented on GitHub (Mar 5, 2018):

@bluejekyll :

> I don't think this change (localhost only) will interfere with that. In essence the change will just not return A or AAAA records that have 127/8 or ::1 addresses in their RData responses. Getting nothing seems safer than getting something routing to localhost unintentionally.

Safer perhaps, but not necessarily what everyone wants. Some Windows users run special minimal HTTP servers on localhost:80 designed to suppress the browser's "not found" response for ad-blocking hosts files by returning a blank white/transparent example of whatever the Accept header specifies.

(On my Linux workstations, I sometimes set Apache or nginx or whatever I'm developing with to use a custom 404 page that uses a white-on-white color scheme containing nothing but the text "404" to be revealed by Ctrl+A if I actually need the diagnostic info... though I usually have uBlock Origin installed in the browser in question, so I just set it to use the MVPS HOSTS file as a supplementary source of rules so it'll collapse such ads away.)


see #13 for the tracking issue related to blacklisting.


@quininer commented on GitHub (Mar 5, 2018):

I think it should not return an error, but should make developers aware that this may return an unexpected result.

for example:

```rust
enum IpResult {
    Global(IpAddr),
    Private(IpAddr),
    Loopback(IpAddr)
}

fn lookup(hostname: &str) -> IpResult { .. }
```

@DemiMarie commented on GitHub (Mar 5, 2018):

It should be an error. Making it an error is absolutely critical to ensuring that one cannot remotely compromise a server on localhost. Period.


@197g commented on GitHub (Mar 5, 2018):

No. Not exposing any sockets without authorization to the web browser is absolutely critical to ensuring that one cannot remotely compromise a server on localhost.


@bluejekyll commented on GitHub (Mar 5, 2018):

@quininer :

> I think it should not return an error, but should make users aware that this may return an unexpected result.

This is already available on Ipv4Addr and Ipv6Addr:

https://doc.rust-lang.org/std/net/struct.Ipv4Addr.html#method.is_loopback
https://doc.rust-lang.org/std/net/struct.Ipv4Addr.html#method.is_private
https://doc.rust-lang.org/std/net/struct.Ipv4Addr.html#method.is_global

and

https://doc.rust-lang.org/std/net/struct.Ipv6Addr.html#method.is_loopback
https://doc.rust-lang.org/std/net/struct.Ipv6Addr.html#method.is_unicast_site_local
https://doc.rust-lang.org/std/net/struct.Ipv6Addr.html#method.is_global

So if that's enough for consumers of the `lookup` API, then there's no value in TRust-DNS doing anything.
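As a sketch of what a consumer could do with just the stable subset of those methods (`is_global` and `is_unicast_site_local` were nightly-only at the time), classification along the lines quininer proposed is a few lines of std:

```rust
use std::net::IpAddr;

// Illustrative classification using only stable std helpers:
// is_loopback (IpAddr) and is_private (Ipv4Addr). Without the
// unstable is_global, anything else is merely "presumed global".
fn describe(ip: IpAddr) -> &'static str {
    match ip {
        ip if ip.is_loopback() => "loopback",
        IpAddr::V4(v4) if v4.is_private() => "private",
        _ => "presumed global",
    }
}
```

Note the IPv6 side is weaker here: stable std has no `is_private` analogue for unique local addresses, which is part of why a resolver-level policy was being discussed at all.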


@quininer commented on GitHub (Mar 5, 2018):

@bluejekyll Most SSRF vulnerabilities are caused by developers forgetting that a query may return a private address.

This is to make users aware of this.

Just like [`pointer.is_null()`](https://doc.rust-lang.org/stable/std/primitive.pointer.html#method.is_null) vs [`Option`](https://doc.rust-lang.org/stable/std/option/enum.Option.html).


@briansmith commented on GitHub (Mar 5, 2018):

As a general principle, I think that it should be possible to configure TRust-DNS to work like gethostbyname(), perhaps with a few exceptions. I think that principle applies here: If gethostbyname() doesn't do this extra check, then there should be a way to configure Trust-DNS to avoid doing this.

Also as a general principle, all differences from common built-in DNS resolver behavior (e.g. glibc's) or additional restrictions beyond what the RFCs require in the default configuration of TRust-DNS should be clearly documented. https://wiki.musl-libc.org/functional-differences-from-glibc.html is a great example of such documentation. Thus, if this feature is enabled by default, this should be documented as a divergence.


see #249, the tracking issue for documentation on resolution differences.


@quininer commented on GitHub (Mar 6, 2018):

This feature does not solve all the problems.

  1. It does not solve the rebinding attack. The adversary can still attack 192/8.
  2. It does not defend against SSRF attacks at all.

@briansmith commented on GitHub (Mar 6, 2018):

FWIW, I mostly agree with @quininer. This might be useful for avoiding some accidents but it is definitely more of a defense-in-depth thing rather than a primary defense. I think that + the extra complexity + the need for configurability vs. the benefit all need to be weighed to determine whether it makes sense to implement this.


@bluejekyll commented on GitHub (Mar 6, 2018):

@quininer thanks for your feedback.

> 1. It does not solve the rebinding attack. The adversary can still attack 192/8.

It does not; I see this as an initial step in that direction. In order to implement something general purpose for blocking all private networks, some form of whitelisting is required. Before that is built, this does limit one avenue of attack, which would be a direct remapping of some internet-routable name to localhost. We can implement the rest, but I think that feature requires more time, whereas denying localhost was a quick win (i.e. < 10 minute change). To do whitelisting, the API needs to be considered for configuration, and then we need to see if there is a file-based config method, and figure out what that would look like cross-platform.

> 2. It does not defend against SSRF attacks at all.

You are correct. For SSRF it might be interesting to come up with another feature, where the resolver could have some notion of whitelisted contexts. Something the application could essentially pass in on a per query basis. The whitelisting outlined above would be the general rule whereas these contextual resolutions would be more narrow/wide based on the context.
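The per-query "whitelisted context" idea above could look something like the following. Everything here (`ResolveOpts`, the field names, the `allow` method) is a hypothetical illustration, not trust-dns API:

```rust
use std::net::IpAddr;

// Hypothetical per-query policy: the application passes a context
// that decides which answer classes are acceptable for this lookup.
struct ResolveOpts {
    allow_loopback: bool,
    allow_private: bool,
}

impl ResolveOpts {
    fn allow(&self, ip: &IpAddr) -> bool {
        if ip.is_loopback() {
            return self.allow_loopback;
        }
        if let IpAddr::V4(v4) = ip {
            if v4.is_private() {
                return self.allow_private;
            }
        }
        true // everything else passes by default in this sketch
    }
}
```

A global configuration would then supply the default `ResolveOpts`, with individual queries able to narrow or widen it, matching the "general rule vs. contextual resolution" split described in the comment.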


@bluetech commented on GitHub (Mar 6, 2018):

It's not necessarily legitimate, but another use case for this is what dropbox does, or at least used to do:

  • User installs a local dropbox application.
  • Local dropbox application runs a web server on localhost, with SSL for domain www.dropboxlocalhost.com which resolves to localhost. The web server exposes some RPC for things that cannot be achieved using web APIs. How that certificate got issued, I don't know, might be self-signed.
  • Dropbox website connects to the local webserver using dropboxlocalhost. Since it's SSL, mixed-content warnings/restrictions are avoided.

Found [this thread](https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/eV89JXcsBC0) now which contains more examples (github, spotify) and discussion.


@quininer commented on GitHub (Mar 6, 2018):

@bluejekyll I think rudely denying localhost is not a smart solution. It should be up to the developer to decide how to deal with an unexpected address.

Configuration files, whitelists, these are not what a resolver should do.

Maybe we can provide an additional limited resolver?


@bluejekyll commented on GitHub (Mar 6, 2018):

@bluetech, thank you for posting that. I just finished reading through that thread. That puts a very different perspective on this. I can't help but feel that it's outright wrong, but those are large enough use cases that this implementation would be dangerous without some escape valve.


@bluejekyll commented on GitHub (Mar 6, 2018):

@quininer again, thank you for the feedback.

To be very clear here, this does not "rudely" deny localhost. It denies localhost via external resolution from the system. This means localhost will resolve when queried directly, as will *.localhost, to either A or AAAA as expected, to 127.0.0.1 or ::1. Conversely, from @bluetech's example, www.dropboxlocalhost.com would be blocked, as that would require a remote query to resolve it to 127.0.0.1 or ::1.

I have no intention of breaking resolution for valid use cases, even if I disagree with them.
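The rule being described could be sketched roughly as follows; the function name and shape are illustrative, not the actual trust-dns implementation:

```rust
use std::net::IpAddr;

// Illustrative: loopback answers are accepted only for `localhost`
// and `*.localhost`, which resolve locally to 127.0.0.1 / ::1.
// A remote answer mapping any other name to loopback is rejected.
fn accept_answer(name: &str, ip: &IpAddr) -> bool {
    if !ip.is_loopback() {
        return true; // non-loopback answers are unaffected by this rule
    }
    name == "localhost" || name.ends_with(".localhost")
}
```

Under this rule, the Dropbox-style `www.dropboxlocalhost.com` trick from the earlier comment fails while ordinary localhost resolution keeps working, which is the distinction the comment is drawing.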

> Configuration files, whitelists, these are not what a resolver should do.

I'm not sure I understand this comment. Is it your position that system Resolvers like glibc, musl, trust-dns, etc, should not attempt to sanitize the records returned from upstream queries?

> Maybe we can provide an additional limited resolver?

That's akin to what I was thinking with using the contextual configurations I mentioned in my earlier response. Whether it's limited, or you express the desired limitation as part of the query, I think would be open to an implementation decision.


@quininer commented on GitHub (Mar 6, 2018):

@bluejekyll

> Is it your position that system Resolvers like glibc, musl, trust-dns, etc, should not attempt to sanitize the records returned from upstream queries?

I think the system resolver should stay honest.


@197g commented on GitHub (Mar 6, 2018):

> Is it your position that system Resolvers like glibc, musl, trust-dns, etc, should not attempt to sanitize the records returned from upstream queries?

It is my position that resolvers should adhere to the standard, i.e. the RFCs. Dropping queries/answers which are not compliant, or being fault-resistant, is one thing. Not accepting a standard-compliant answer, on the other hand, leads down a very narrow path of being a maintenance nightmare and breaking inexplicably in the future.

In this specific case, I think it's arguably safer to be more explicit than required. What @quininer suggested with a wrapping interface type sounds reasonable enough to at least require some thought from direct API users about its security implications while not restricting the possibilities.


@bluejekyll commented on GitHub (Mar 6, 2018):

@HeroicKatora

Not accepting a standards-compliant answer, on the other hand, leads down a very narrow path toward becoming a maintenance nightmare and breaking inexplicably in the future.

This I agree with, and I tend to think an error would be the correct result in a rejection case. The complexity there comes from what an attacker may be trying to do: for example, interleaving private/localhost IPs with public IPs. It might be the case that some are "safe" and others are not.

wrapping interface type sounds reasonable enough to at least require some thought from direct API users about its security implications while not restricting the possibilities

The type safety here would be a win. My hesitation with it at the moment relates to a large divergence from the `std::net` lookup API; it may be cumbersome for users.


@DemiMarie commented on GitHub (Mar 7, 2018):

I think the biggest reason that this is so important is that many development tools use TCP sockets on localhost without proper authentication. We do NOT want this to cause remote code execution.



@briansmith commented on GitHub (Mar 7, 2018):

I think the biggest reason that this is so important is that many development tools use TCP sockets on localhost without proper authentication. We do NOT want this to cause remote code execution.

Consider a server that responds to everything with:

```
HTTP/1.0 301 Moved Permanently
Location: http://localhost/...
```

or

```
HTTP/1.0 301 Moved Permanently
Location: http://127.0.0.1/...
```

And consider the possibility that the HTTP client is HTTP/1.0 and so it doesn't even send the `Host:` header field.

I suspect that this would be just as bad as that website's hostname resolving to 127.0.0.1.


@bluejekyll commented on GitHub (Mar 7, 2018):

Exactly. Everything I read about SSRF also points in the same direction. This is really an application-level issue. The only strong mitigation against record spoofing in DNS is DNSSEC, and that has the same open question of what to do on a failure to verify.

The best we seem to be able to do in DNS is try to help prevent some narrow exploits, but in the end everything really comes back to requiring TLS in the application (browser, BitTorrent, etc.), and to those applications enforcing that there are no unexpected network context switches.

For localhost specifically, I would also want the inverse of this to be true (the subject of this debate), https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-00.html, but it’s already clear that there isn’t consensus on this.

It seems that the result of the `lookup_ip` operation could easily just have a filter on the returned iterator that limits the IPs to the expected scope.


@bluejekyll commented on GitHub (Mar 7, 2018):

Everyone, thank you for all the feedback on this topic. It's all greatly appreciated. Based on all of this dialogue it seems most prudent to offer consumers of the API the optional choice on how to treat returned names. I still think the concern around this area is enough that we can offer something, but not be as extreme as the original PR. Weighing all the concerns of the comments, here's what I'm thinking:

I propose changing the `LookupIp` trait (and possibly others as desirable). This is the result of the `Future` returned from `ResolverFuture` and `Resolver` after performing a `lookup_ip`, which I think is the general interface in question here. Today `LookupIp` has one method:

```rust
fn iter(&self) -> LookupIpIter
```

which returns an `Iterator` over the resultant `IpAddr`s. As @quininer astutely points out, we can leverage type safety here by adding other variants (I'm not keen on these names yet):

```rust
fn any_iter(&self) -> LookupIpIter;      // same as today's iter, no filter
fn global_iter(&self) -> LookupIpIter;   // only return external IPs
fn private_iter(&self) -> LookupIpIter;  // only return private IPs
fn loopback_iter(&self) -> LookupIpIter; // only return localhost IPs
```

I think there are some questions to answer about exactly what is allowed in private/loopback and if those should just be merged into one. I think this strikes a balance between the different perspectives on this. It doesn't allow domain whitelisting/blacklisting, but I think that's a different topic.

I'd welcome feedback on this. Thanks!


FYI: #354 has been updated to partly include these impls so that I could see what that looks like.

The impl can't land until some IPv6 options land in `std::net`, so this will be waiting for those to stabilize.
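As a minimal sketch of what such scope filtering could look like using only stable `std::net` methods: `global_only` below approximates the proposed `global_iter` by treating "global" as "neither loopback nor private" (since `IpAddr::is_global` is unstable), and the IPv6 unique-local check is my own stand-in, not anything from trust-dns itself.

```rust
use std::net::IpAddr;

// Hypothetical helper approximating the proposed `private_iter` scope:
// RFC 1918 ranges for IPv4, unique-local (fc00::/7) for IPv6.
fn is_private(ip: &IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_private(),
        IpAddr::V6(v6) => (v6.segments()[0] & 0xfe00) == 0xfc00,
    }
}

// Hypothetical equivalent of `global_iter`: keep only answers that are
// neither loopback nor private. A real implementation would also exclude
// link-local, unspecified, multicast, etc.
fn global_only(answers: &[IpAddr]) -> Vec<IpAddr> {
    answers
        .iter()
        .filter(|ip| !ip.is_loopback() && !is_private(ip))
        .cloned()
        .collect()
}

fn main() {
    let answers: Vec<IpAddr> = vec![
        "93.184.216.34".parse().unwrap(), // public
        "10.0.0.5".parse().unwrap(),      // RFC 1918 private
        "127.0.0.1".parse().unwrap(),     // loopback
    ];
    let global = global_only(&answers);
    // Only the public address survives the filter.
    assert_eq!(global, vec!["93.184.216.34".parse::<IpAddr>().unwrap()]);
    println!("global answers: {:?}", global);
}
```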


@197g commented on GitHub (Mar 7, 2018):

@DemiMarie

I think the biggest reason that this is so important is that many development tools use TCP sockets on localhost without proper authentication.

Those tools should be subject to a properly set CORS policy, at minimum, which is enforced for WebSockets and the like as well. Since the default would protect them, applications would need to actively make themselves insecure, and any such behaviour must be considered a bug.

Another big use case I just noticed is OAuth 2.0, where a public client wants to be able to catch redirects from the authorization server. Some server implementations forbid raw IP-address entries for various reasons, so to be able to receive the request on the local application, one needs a domain name resolving to the local machine.

That should also exemplify the primary function of DNS, which is to associate a stable, human-readable name with possibly dynamic, machine-readable data that the originator of the entry wants to have globally available. It should be noted that DNSSEC only aims to ensure the integrity and authenticity of the data w.r.t. its originator, but does not provide any assurance about the semantics of the response content. For that, there are other mechanisms in place, for example TLS to ensure confidentiality and the identity of the communication partner.


@DemiMarie commented on GitHub (Mar 18, 2018):

@HeroicKatora You are correct. And any website can make an AJAX request to localhost. It won’t get a response back, but by then it is too late.

What is really needed is a simple command-line tool that does the following:

  1. Creates a certificate authority
  2. Creates client and server certificates.
  3. Installs the client certificate and the CA in the browser’s per-user certificate store.
  4. Installs the server certificate and the CA in the server’s store.
  5. Sets some sort of flag to tell the browser that only the given CA is valid (HPKP?), if this is possible. If not, rely on public CA certificates being invalid for private domains.

There might also be some trick involving Kerberos, which all browsers support for the purpose of accessing websites that use Windows authentication. I do not know.


@197g commented on GitHub (Mar 19, 2018):

Generating, installing, and using a self-signed CA on every client computer is a very bad idea. Not that other people don't make similar mistakes, [like a German lawyer association](https://youtu.be/I_tyTYAVYDo?t=33m57s). Another idea: use the websocket you set up to tunnel some other authenticated protocol. This could also involve certificates, but it means there is no need to mess with the user's browser agent, and it has the advantage that you can't accidentally increase the attack surface beyond your application.

but by then it is too late.

I'm not quite sure what you mean here.


@DemiMarie commented on GitHub (Mar 19, 2018):

@HeroicKatora I am referring to the program (such as Jupyter) creating its own certificate, private key, and CA. Not signing a shared key for which the private key can be leaked. None of the private keys ever leave the machine they are generated and used on in this model.

but by then it is too late.

I'm not quite sure what you mean here.

The payload has already been delivered. The response does not matter.


@197g commented on GitHub (Mar 19, 2018):

As I said, "Not that other people don't make similar mistakes", but just because Jupyter does it does not mean it is necessarily a good idea. At least their guide kindly asks you to come up with a password, but we both know how good user-chosen passwords are.

Still, if a local service is exploitable by sending a non-interactive, unauthorized request from anywhere, then the problem is not DNS. The same service would be exploitable by an unauthorized user on the same machine, for example, essentially providing privilege escalation without involving any DNS at all. For more commentary on this sort of band-aid fix to security problems that could have been avoided by proper coding, [Linus Torvalds provides fitting comments on detecting and crashing processes asking for suspicious addresses](http://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html), essentially like DNS but for RAM.


@DemiMarie commented on GitHub (Mar 19, 2018):

@HeroicKatora Would you mind explaining how this can be exploited? That is why I specifically stated that the program would install a client certificate into the browser, which the server would then check. The server certificate would be installed into a store only accessible by the user running the program. TLS client certificates are considered the most secure means of authentication available for web applications.

Jupyter does not do this, but it would be good if it did. For me, there are no other users on my system (I run Qubes OS), but for most people that is not the case (there are often system-generated low-privilege accounts created for sandboxing purposes).


@197g commented on GitHub (Mar 19, 2018):

@DemiMarie The attack surface opened by installing a CA is not the original one, but rather something different (and arguably worse). Any program with access to the root CA's key material could intercept and adaptively modify any HTTPS connection to any website without your browser, or you, being the wiser. And that root CA is available to your own user, readily able to sign any certificate.

TLS client certificates are considered the most secure means of authentication available for web applications.

Yes, if issuing, using, and checking them are strictly separated. Otherwise, it doesn't provide any guarantees against impersonation and isn't better than going without client certificates.

Edit: Note that if no user-agent (browser) is involved, for example a native app communicating with your own server, the situation is slightly better and self-signed certificates might be defensible, but only if they are signed by the server and not locally by the client. Even then it is preferable that the implementation perform some sort of pinning.


@DemiMarie commented on GitHub (Mar 19, 2018):

In this case, the private key of the temporary CA would be stored entirely in memory, and wiped after use, so no other process could access it. In any case, even if it were swapped to disk, it would be inaccessible to any program that did not have the full permissions of the user running the program creating the CA, and such a program could do exactly the same thing anyway.



@197g commented on GitHub (Mar 19, 2018):

That may be a somewhat sensible scheme from a security standpoint¹. But at this point the problems it solves only tangentially concern the initial question for DNS; it basically reduces to the earlier conclusion that client security can be solved at the application level, and thus this weird mix of restricting one protocol to achieve only superficial security for other applications, which should not be vulnerable in the first place, can be avoided.

¹ While it might look okay, I don't condone hastily derived schemes and would only use reviewed schemes which most probably exist, after cross-checking with the actual application profile.


@197g commented on GitHub (Mar 19, 2018):

Since this seems to be drifting away from the original point, if you want to discuss further I invite you to send me mail instead.


@divergentdave commented on GitHub (Dec 5, 2024):

I think this will need to be configurable, both because different deployments may have different conceptions of which networks are private, and because of the above DNS-based filtering use cases. Unbound has implemented DNS rebinding protections; see the [private-address](https://unbound.docs.nlnetlabs.nl/en/latest/manpages/unbound.conf.html#unbound-conf-private-address) and [private-domain](https://unbound.docs.nlnetlabs.nl/en/latest/manpages/unbound.conf.html#unbound-conf-private-domain) configuration parameters for a description of the logic it uses.
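For reference, the shape of that Unbound configuration looks roughly like this (the specific networks and the domain name here are illustrative choices, not Unbound defaults):

```
server:
    # Addresses stripped from answers, unless the queried name
    # falls under a configured private-domain
    private-address: 10.0.0.0/8
    private-address: 172.16.0.0/12
    private-address: 192.168.0.0/16
    # A zone that is allowed to return private addresses
    private-domain: "home.example.org"
```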


@divergentdave commented on GitHub (Oct 16, 2025):

#3298 added a recursor configuration option for this.


@djc commented on GitHub (Oct 17, 2025):

@divergentdave why did you leave this open? What work is left here to do after this PR?


@divergentdave commented on GitHub (Oct 17, 2025):

I think there is still a desire for similar configuration on forwarders. See the second comment, for example.


@djc commented on GitHub (Oct 19, 2025):

@marcus0x62 how hard would it be to just move this handling from the recursor into the resolver?


@marcus0x62 commented on GitHub (Oct 19, 2025):

Not terribly difficult, although there will be some minor impact in doing so:

  • Bailiwick filtering will need to stay in the recursor, so we'll have at least two separate record-filtering events going on.
  • The buffer from the DnsResponse isn't used where answer filtering is done in the recursor. In the most obvious place I can think of to do this in the resolver:

https://github.com/hickory-dns/hickory-dns/blob/842b109d9207e882311c161667661ed41a2904f6/crates/resolver/src/name_server/name_server_pool.rs#L87-L94

the buffer part of the DnsResponse could well be used by the caller, so that will have to be reconstituted from the updated message, and there will be some (small) performance impact.

Other than that, AccessControlSet will need to move somewhere (proto? The recursor will still need it for name server filtering, and moving it to resolver doesn't really make much sense) and a bit of work to expose an interface for resolver users to provide their allow/deny lists.
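To illustrate the kind of allow/deny interface being discussed, here is a rough, self-contained sketch. The `AccessControlSet` name comes from the thread above, but this simplified IPv4-only shape and its deny-before-allow ordering are my own assumptions, not the actual hickory-dns implementation:

```rust
use std::net::Ipv4Addr;

/// Simplified, hypothetical stand-in for the recursor's AccessControlSet.
/// Deny rules are checked first, then allow rules; anything matching
/// neither list falls back to `default_allow`.
struct AccessControlSet {
    deny: Vec<(Ipv4Addr, u8)>,  // (network, prefix length)
    allow: Vec<(Ipv4Addr, u8)>,
    default_allow: bool,
}

/// True if `ip` falls inside the `net`/`prefix` CIDR block.
fn in_net(ip: Ipv4Addr, net: Ipv4Addr, prefix: u8) -> bool {
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    (u32::from(ip) & mask) == (u32::from(net) & mask)
}

impl AccessControlSet {
    fn permits(&self, ip: Ipv4Addr) -> bool {
        if self.deny.iter().any(|&(n, p)| in_net(ip, n, p)) {
            return false;
        }
        if self.allow.iter().any(|&(n, p)| in_net(ip, n, p)) {
            return true;
        }
        self.default_allow
    }
}

fn main() {
    // Deny 10/8 answers from a forwarder, allow everything else.
    let acs = AccessControlSet {
        deny: vec![("10.0.0.0".parse().unwrap(), 8)],
        allow: vec![],
        default_allow: true,
    };
    assert!(!acs.permits("10.1.2.3".parse().unwrap()));
    assert!(acs.permits("93.184.216.34".parse().unwrap()));
    println!("filtering ok");
}
```

A resolver-level interface would presumably accept something like these lists from the user and apply them to answer records before returning them.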
