Hi, Joe,
Please allow me to interject, on a few different issues from this thread…

Sent from my iPhone

> On Aug 12, 2021, at 4:39 PM, Joe Abley <jab...@hopcount.ca> wrote:
> 
> Hi Paul,
> 
>> On 12 Aug 2021, at 15:48, Paul Wouters <p...@nohats.ca> wrote:
>> 
>> On Thu, 12 Aug 2021, Joe Abley wrote:
>> 
>>>> This would have been excellent to do when we did DS. It would still be
>>>> good to do this now, I agree. But it would be too late for some of the
>>>> things discussed now.
>>> 
>>> Can you talk more about why you think so?
>> 

Without referring to his presentation (literally or figuratively), and not 
having been active at that stage of DNSSEC, my take is that getting DS added to 
EPP was the one opportunity where any other additions could have been 
proposed. That was also a few thousand TLDs ago, and possibly pre-NSEC3.

>> I did a small presentation during IETF 111 DPRIVE. You can find the
>> presentation deck and the recording on the IETF site.
> 
> Thanks, I will go and dig that out at some point.
> 
>>> Support for novel interpretations of particular DS algorithms will require 
>>> support on both the provisioning and consumer side. Is it really that much 
>>> more work to specify new DS-like RRTypes?
>> 
>> It does not necessarily require support on the provisioning side, as
>> it is "just another DS record" from the provisioning point of view
>> unless lawyers insisted the TLD somehow verifies the pubkey is
>> pre-published or in an algorithm they 'support' (allow).
> 
> I think the set of acceptable algorithms is constrained sufficiently often by 
> registries and registrars that it makes little sense to consider any other 
> case. But you may have different use-cases in mind.

I’m not sure, but I think there are ICANN requirements governing which 
algorithms are mandatory to support, at least in the gTLD space? (Thus the 
reason for the draft we are discussing, I believe, i.e. how new algorithms get 
added to the IANA registry.)

> 
>>> There's truck-roll in both cases. Neither path is really going to make 
>>> these features generally available any time soon.
>> 
>> There is a huge difference between "support for a DS record with
>> unknown or unexpected content at the RRR level" and "change all
>> DNS and EPP software on the planet". The first one has like 1500
>> actors. The second has millions or billions.
> 
> I agree, but I don't think that observation is particularly helpful.

See back to my comment about what I think Paul meant. If new DS records can be 
used (with either new DNSKEY algorithms or new DS hash algorithms), the EPP 
mechanics can be used as-is with no protocol changes required (and likely no 
code changes in the majority of registries, hopefully).

There is an uptake curve for the usefulness of this mechanism (i.e. validation 
by resolvers and/or clients), but it is fully backward compatible. No truck 
roll required.

> 
> My earlier point was that any mechanism that changes the implementation of a 
> referral is going to need to be backwards compatible to validators and 
> authoritative servers that don't support it. Truck roll is required not to 
> maintain the integrity of the DNS, but to enable the new desired 
> functionality.
> 
> I think there's a lot of inertia on the provisioning side, no matter what 
> mechanism is preferred. It might seem like accepting a new DS algorithm is 
> much easier, but in practice it might also be that you only need a new RRType 
> to be supported in two or three code bases before substantially all TLDs have 
> support. My experience is that it can take years to have new algorithms 
> supported by the RR machinery, regardless of how simple they are to express 
> and consume in the DNS. It's always worth remembering that registry systems 
> are not DNS systems; they are operated and constrained very differently. 
> Without data I would suggest it's not especially helpful to predict which 
> path is more rocky.
> 
> On the resolver side, a very small handful of resolvers account for the bulk 
> of the world's validation. But regardless of what critical mass you identify 
> as important to establish, there's substantially the same weight of change 
> required on the consumer side. Whether you special-case a DS hack or 
> understand what to do with a new RRType received as part of a referral, you 
> still need code changes.

(Nodding in agreement…)

> 
>> And as I argued, even if we do this by overloading DS or NS, is that
>> overloading really something we need? As it is only required for
>> privacy to nameservers that are in-bailiwick to the domain, which in
>> itself is already pretty much a dead giveaway even when you only
>> can observe encrypted TLS traffic to an IP address of a well known
>> published nameserver.

Actually (sorry, changing who I’m replying to mid-stream), this is not accurate.

Protection of the NS name is needed for out-of-bailiwick domains too; the NS 
name is what the resolver authenticates when setting up TLS.

This is also the case when THAT domain is served by nameservers in yet another 
domain.

The first two delegations do not strictly require glue A/AAAA records, and if 
the last domain in the chain is in a different TLD, glue is not even permitted.
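
To illustrate the shape of chain I mean (all names here are hypothetical), 
with each NS name in a different TLD, so glue never applies:

```
; delegation in the example TLD; the NS name is out-of-bailiwick, no glue
private.example.      NS    ns1.dnshost.test.

; dnshost.test is itself served out of yet another TLD; glue for
; ns.provider.invalid could not appear in the example zone at all
dnshost.test.         NS    ns.provider.invalid.
```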

(Resuming stream of reply now to Joe…)

> 
> I certainly don't fully understand the degree of risk from subverting a 
> resolver to send a query to a bogus authority server.

If all domains are signed, the degree of risk is likely insignificant.

However, there are use cases where the delegation of an unsigned zone to an 
authoritative server whose name is in a signed zone results in a large change 
in risk, at least in the privacy use case. Having the delegation (NS set) 
protected via some record signed in the parent ensures that the right name 
server name is used. When that name server name is in a signed zone, other 
record types, such as TLSA records, can be obtained securely. If the NS set is 
not protected this way, it is not possible to securely obtain transport 
parameters.
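
To make that concrete, here is a hypothetical DANE chain for DNS-over-TLS (the 
names and digest are invented): once the resolver can trust the NS name, and 
that name lives in a signed zone, it can validate a TLSA record for the 
server's TLS listener on port 853:

```
; delegation whose NS name sits in the signed zone example.net
secret.example.                  NS    ns1.example.net.

; TLSA for the name server's DoT endpoint, signed within example.net
; (usage 3 = DANE-EE, selector 1 = SPKI, matching type 1 = SHA-256)
_853._tcp.ns1.example.net.       TLSA  3 1 1 ( <spki-sha256-digest> )
```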

In fact, sending the query to the wrong authoritative server always results in 
a loss of privacy.

If the delegation is not received over a secure transport, an on-path attacker 
can subvert unvalidated NS responses, and so can an off-path attacker via 
cache poisoning.

An unsigned delegation gets even worse at this point, since a successful 
poisoning attack effectively elevates an off-path attacker to an on-path 
attacker.


> I am not arguing for or against any kind of mechanism to mitigate unsigned 
> glue. So I don't have any order of preference. However:
> 
>> So my own order of preference is likely something like:
>> 
>> 1) Forget about protecting in-bailiwick nameservers
>> 2) Do it securely using DS at parent
>>  (only requires new code for validating nameservers that don't exist yet)
> 
> I don't understand what the parenthetical comment here means. You're 
> suggesting that existing validating resolvers that don't know how to 
> interpret a weird algorithm in a DS RRSet received during a referral don't 
> need to be changed?
> 

This is correct. It is specified in RFCs 4033/4034/4035, and has been 
confirmed experimentally with several major resolvers and diagnostic tools. 
Paraphrasing: treat DS records with unknown algorithms as insecure, and ignore 
the absence of matching DNSKEY records below the cut for those algorithms.

(I did a lot of experiments first before proposing any of this.)
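
As a sketch of the validator rule I'm paraphrasing (my own illustration, not 
code from any particular resolver; the algorithm numbers are just the usual 
IANA assignments):

```python
# Sketch of the RFC 4035 rule: if the parent's DS RRset for a zone contains
# no algorithm the validator supports, the zone is treated as insecure
# (not bogus), so a new DS algorithm degrades gracefully on old validators.

SUPPORTED_DNSKEY_ALGS = {8, 13, 15}  # RSASHA256, ECDSAP256SHA256, ED25519

def classify_delegation(ds_algorithms):
    """Classify a delegation from the DNSKEY algorithms seen in its DS set."""
    if not ds_algorithms:
        return "insecure"          # no DS at the parent: unsigned delegation
    if any(alg in SUPPORTED_DNSKEY_ALGS for alg in ds_algorithms):
        return "secure-candidate"  # must chase DNSKEY/RRSIGs, else bogus
    return "insecure"              # all algorithms unknown: treat as insecure
```

So a validator that has never heard of a new DS algorithm simply keeps 
treating the child as insecure, which is why no code change is needed there.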

The bigger-picture question, on which WG handles any new DS algorithms 
(DNSKEY or DS digest), is what gets placed on the parent side of a zone cut, 
for what purpose, and how it gets sent to the parent.

It seems to me that DPRIVE folks want to optimize for cold-cache use cases and 
fewer DNS queries, and to do this using inappropriate trust/security models, 
for example by placing transport signals, and potentially even public keys, on 
the parent side of a zone cut.

IMNSHO, the use of signed child zones to provide things like TLSA records is 
the only secure and trustworthy mechanism for the privacy use case (over TLS).

Other than a handful of ccTLDs, there is no CDS processing available to 
provide DS to the parent. CDS polling directly from registry to registrant is 
pretty much ruled out by the RRR model, and registrar polling still relies on 
EPP for uploading DS to the registry. There is no data integrity protection, 
and registrar-level credentials are used.

Adding new DS types is required to avoid truck rolls.

Doing this in DNSOP is how we ensure it is possible to use DANE/TLSA.

That this would also provide insecure delegations with privacy is a feature, 
not a bug.

Sorry, didn’t mean for this to be such a long message.

Brian
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
