Okay, I wondered whether my idea was too dumb to receive any
reaction at all. Thank you for at least some comment.
But I think each nameserver on the path may have a slightly different
scope. Therefore those servers do not all know the same information and
are *not* interchangeable. EDSR is an interesting piece of technology,
but it is useful only if you are redirected to an equivalent server
deployed on a different IP or network topology.
What I had in mind was a cascade of routers, virtual machines and
nested containers. They may want to know how their configured server
processes the queries it receives. I work mostly in enterprise
networks, where internal policy may require using only a specific set
of servers. For better performance, it may make sense to use a cascade
of caching resolvers instead of point-to-point TLS connections into a
huge cloud-operated service.
Consider something like what I have on my (software engineer's) laptop:
+==+ authoritative servers
  |
  +--+ [cloud-based resolver, ~10k clients, RPZ filtering of malware content]
       |
       +--+ [network-based resolver, ~500 clients]
            |
            +---+ [laptop localhost resolver, 1 client + all VMs and VPS]
                 |
                 +---+ [VM1, own localhost resolver]
                 +---+ [VM2, own localhost resolver, multiple VPS]
                      |
                      +---* [container1 in VM2]
                      +---+ [container2 in VM2, own localhost resolver]
                           |
                           +--* client APP asking for dns-traceroute
What I think is important is that there are several layers of DNS
caching. All of them improve performance for their direct clients. But
a container's localhost resolver may know some special localhost-only
names or entries. Similarly with VM2: it may know the names of other
containers spawned on VM2. And my laptop may run multiple virtual
machines with internal-only names, resolvable only on the laptop
itself. The network-based resolver does not need to know those names,
so redirecting to that server loses information. Similarly, I would
like my laptop to have reverse entries for all my spawned VMs, but my
network resolver does not need to know them all.
My impression is that most DNS-OARC people work on something like the
cloud-based resolver. On the network-based resolver EDSR makes perfect
sense, as it does on cloud-based resolvers, for choosing the best
working instance, topologically close enough or less loaded.
But if my clients are working with internal scopes, using private
addresses and non-public names (.internal or .home.arpa), throwing all
queries over TLS directly to the cloud provider does not seem workable
to me. Since most communication on the laptop happens internally, it
does not seem necessary to use DoT servers on the laptop itself. But I
would like to know whether the laptop-to-network-resolver hop is
protected. Yes, I trust that my own virtual machines are not lying to
me and need no proof of that. Similarly, the network-based resolver
might be operated by my own organization, where we can trust it won't
lie to me.
My proposal allows me to fetch the cloud provider's name even if the
firewall on the network boundary does not allow me to query it
directly. I don't know if that is best practice, but it is a
real-world configuration of our network.
More bits below.
On 11/11/2024 15:43, Ben Schwartz wrote:
I don't think we should reuse SVCB records in this way. The records
here do not have SVCB semantics, so I think it would be very confusing.
I think it uses very similar semantics. Not the same, but similar. The
records encode information about the next hop: about a connection from
another client to another server, not from myself. That does not seem
too different, except that we may want to include unencrypted protocol
usage where it applies.
In general, I have difficulty understanding the problem that is being
solved here. DNS forwarders can have arbitrarily complicated,
customized policies that we cannot hope to standardize. It's also not
clear how this information helps the client, especially since it is
unverifiable.
What I was thinking of was discovering the path hop by hop, similar to
what traceroute does at the network level. Each gateway emits an ICMP
packet. You are not allowed to connect to each gateway to ask it
directly. You use just the default route on your host, and the other
gateways on the path deliver your packets.
I think DNS can be similar. Okay, I admit the client has no way to
verify it is not being lied to. But since the immediate-hop server's
identity is known to the client, the client may extend some trust to
the service he or she is using.
The discussion of "DNS Traceroute" on a per-message basis makes a lot
more sense to me, as it avoids the need to encode the entire
forwarding policy in advance.
Except this won't work when the internal client is not allowed to make
a direct connection to the parent server itself, or to other public
servers on the internet. We have such a configuration in our internal
offices. Internal resolvers apply RPZ policy filters on content,
acting as a Protective DNS service for internal clients. Sure, the IP
source address is not strong information. Something like the CARED
draft is much better, where the machine has cryptographic
authentication of identity.
I am not proposing to encode the whole policy in a single response.
One resolver would know just its next hop. It then needs a way to
identify the "upstream" resolver to its client, giving it some name.
Would _{number}._dns-forward.resolver.arpa make more sense, where
number=2 means my parent's parent? In other words, returning to the
client what the server behind _1._dns-forward.resolver.arpa received
for its own _dns.resolver.arpa SVCB query?
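To make the numbering concrete, here is a tiny sketch of how a client might construct such hop-indexed query names. The _{number}._dns-forward.resolver.arpa scheme is only a suggestion from this mail, nothing standardized:

```python
def hop_qname(hop: int) -> str:
    """Build the QNAME for the Nth forwarding hop under the
    hypothetical _{number}._dns-forward.resolver.arpa scheme:
    hop=1 would name the configured resolver's upstream,
    hop=2 the upstream's upstream, and so on."""
    if hop < 1:
        raise ValueError("hop numbering starts at 1")
    return f"_{hop}._dns-forward.resolver.arpa."

print(hop_qname(1))  # _1._dns-forward.resolver.arpa.
print(hop_qname(2))  # _2._dns-forward.resolver.arpa.
```

A client would simply increment the hop number until a response indicates there is no further forwarding.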
Instead of this direction, I would encourage you to look at EDSR
(https://datatracker.ietf.org/doc/html/draft-ietf-add-encrypted-dns-server-redirection-01),
which can provide useful SVCB records to the client of a forwarder.
--Ben Schwartz
I thank you for that draft pointer. It is interesting, but I do not
think it is useful for my use case. It requires peer-to-peer
connectivity and a non-changing (name) scope. Today's internet is full
of various PROXYv2-like mechanisms delivering queries on behalf of
someone else. Note also that the traceroute command sends all packets
toward the destination IP via the gateway, always the same one; only
the TTL changes, returning ICMP responses from different hosts. Again,
traceroute sends every packet to my default gateway. I consider the
default gateway analogous to my host's resolver, and I expect the
ability to query for the parent's responses.
Maybe I just have a very different idea of what dns-traceroute means
from what Petr's idea was. I have described mine.
Regards,
Petr
------------------------------------------------------------------------
*From:* Petr Menšík <pemen...@redhat.com>
*Sent:* Friday, November 8, 2024 3:04 AM
*To:* dnsop@ietf.org <dnsop@ietf.org>
*Subject:* [DNSOP] Re: DNS traceroute - in-person discussion after
dnsop WG on Thursday (today)
Hi Petr,
I am unable to meet in person about this, but I was thinking about
providing some way of forwarder discovery. My use case would be a
common way to discover where and how responses are forwarded. The
primary task for me would be discovering how the next hop is
protected, if at all.
I think the DDR record _dns.server.example.com could be reused, at
least partially. An SVCB record seems like a good alternative,
although I would also need to encode plaintext forwarding in it.
For example, I would ask the localhost resolver, whatever it is:
_dns-forward.resolver.arpa SVCB?
Because a forwarding caching server knows where it forwards, it can
answer easily. It might respond with:
_dns-forward IN SVCB 1 dns.google. alpn="dot" ipv4hint="8.8.4.4"
Great, now I know the next hop is encrypted with DoT and leads to
Google. Then, in theory, I might ask the same localhost resolver for
the next hop's information:
_dns-forward.dns.google SVCB?
which my localhost resolver would forward normally. The parent might
then indicate that it uses recursive mode from there, with no further
forwarding:
_dns-forward.dns.google SVCB 1 .
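As a sketch, the client loop for such a walk could look like the following. Everything here is hypothetical: the answers are stubbed in a dictionary instead of being fetched from a real resolver, and a TargetName of "." is taken to mean "recursive from here, no further forwarding":

```python
# Stubbed answers for the hypothetical _dns-forward queries above;
# each entry maps a QNAME to (TargetName, alpn-or-None).
STUB_ANSWERS = {
    "_dns-forward.resolver.arpa.": ("dns.google.", "dot"),
    "_dns-forward.dns.google.": (".", None),
}

def trace_forwarding(start: str = "_dns-forward.resolver.arpa.") -> list:
    """Walk the forwarding chain hop by hop until a resolver
    reports that it recurses itself (TargetName ".")."""
    path, qname = [], start
    while True:
        target, alpn = STUB_ANSWERS[qname]
        if target == ".":
            path.append("(recursive)")
            return path
        path.append(f"{target} via {alpn or 'plain udp/tcp'}")
        qname = f"_dns-forward.{target}"

print(trace_forwarding())  # ['dns.google. via dot', '(recursive)']
```

In a real implementation the dictionary lookup would of course be an SVCB query sent to the configured resolver, which forwards it upstream as usual.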
A nice thing is that a similar protocol could allow asking about
redirection for a specific domain:
example.net._dns-forward.resolver.arpa SVCB?
asking where example.net queries lead. In split-horizon DNS this might
help discover differences in forwarding and present them, whichever
resolver is configured.
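A minimal sketch of building such per-domain query names, again assuming the hypothetical <domain>._dns-forward.resolver.arpa naming from this mail:

```python
def domain_forward_qname(domain: str) -> str:
    """QNAME asking the configured resolver where queries for
    `domain` are forwarded, under the hypothetical
    <domain>._dns-forward.resolver.arpa naming."""
    return f"{domain.rstrip('.')}._dns-forward.resolver.arpa."

print(domain_forward_qname("example.net"))
# example.net._dns-forward.resolver.arpa.
```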
I am not sure how to encode forwarding to bare IP addresses. A
PTR-like record under the in-addr.arpa reverse tree might be a
solution. Another question is how to encode plain UDP or TCP as the
transport, because SVCB does not specify that. Would some custom ALPN
value be okay for that, even though it would never be used in a TLS
session? Or some new parameter instead?
I think for common configurations it would be okay to share this
information whenever the query source is allowed recursion. Of course,
such a feature should allow its own ACL definition, making it possibly
stricter. I think allowing this from localhost would usually be okay.
Another question is how to encode stub zone definitions, if at all.
Do you think such an idea would help you with your traceroute problem?
Cheers,
Petr Menšík
On 07/11/2024 12:34, Petr Špaček wrote:
> Hi!
>
> Have you ever debugged DNS forwarding topology with no clear idea
> where the packets go and why?
>
> Can be something done about it?
>
> Given enough imagination, can we invent something like DNS traceroute?
>
> If you are interested in this topic catch me after dnsop session today
> and we can discuss, possibly with some drinks or food...
>
--
Petr Menšík
Software Engineer, RHEL
Red Hat, https://www.redhat.com/
PGP: DFCF908DB7C87E8E529925BC4931CA5B6C9FC5CB
_______________________________________________
DNSOP mailing list -- dnsop@ietf.org
To unsubscribe send an email to dnsop-le...@ietf.org