That’s where I’m heading as well…

1) Benign collisions aren’t major headaches, except perhaps for the key manager 
(because rare events are headaches)
2) Validator resource consumption is a general issue, not tied to key tag 
collisions

What kicked this off for me was not the KeyTrap issue, a report of a 
vulnerability that could be maliciously abused.  It was the report that a TLD 
“went down in DNSSEC” because it published the wrong key in a key tag 
collision set.  That’s why I keep raising the key management angle.

From: Ted Lemon <mel...@fugue.com>
Date: Tuesday, February 20, 2024 at 09:48
To: Edward Lewis <edward.le...@icann.org>
Cc: Mark Andrews <ma...@isc.org>, Paul Wouters <p...@nohats.ca>, 
"dnsop@ietf.org" <dnsop@ietf.org>
Subject: Re: [DNSOP] Detecting, regeneration and avoidance was Re: [Ext] About 
key tags

Sorry, I did not mean that the attack isn't a serious problem. I mean that 
insisting that there be no key hash collisions in a verification attempt is not 
as hard a problem as you were suggesting. The main issue is that it would 
require a flag day, but the number of affected zones in the wild is probably 
small enough that this could be managed. My point was that the keying and 
signing of large, sensitive zones should not be an impediment to having it be 
the rule that key hash collisions aren't allowed. Like, I'm not saying there 
aren't problems, but this is not an insurmountable problem.

On Tue, Feb 20, 2024 at 9:41 AM Edward Lewis 
<edward.le...@icann.org<mailto:edward.le...@icann.org>> wrote:


From: Ted Lemon <mel...@fugue.com<mailto:mel...@fugue.com>>
Date: Tuesday, February 20, 2024 at 09:05
To: Edward Lewis <edward.le...@icann.org<mailto:edward.le...@icann.org>>
Cc: Mark Andrews <ma...@isc.org<mailto:ma...@isc.org>>, Paul Wouters 
<p...@nohats.ca<mailto:p...@nohats.ca>>, 
"dnsop@ietf.org<mailto:dnsop@ietf.org>" <dnsop@ietf.org<mailto:dnsop@ietf.org>>
Subject: Re: [DNSOP] Detecting, regeneration and avoidance was Re: [Ext] About 
key tags

>This seems like an implementation detail.

I don’t want to brush this off that quickly.

>The random likelihood of the root and com key hashes colliding seems pretty 
>small.

This is very true - in nature.  The scare raised here is that someone may 
intentionally concoct a situation, intending to cause havoc.  I do have a dose 
of skepticism when a use case is discovered academically as opposed to being 
seen in operational packet flows, but that doesn’t mean the vulnerability is 
irrelevant.  There are probably lots of holes remaining in the protocol design, 
not yet discovered; so long as they aren’t being exploited, they aren’t 
operationally impactful.

The KeyTrap issue is a resource consumption/depletion attack, and it mentions 
key tag collisions as an ingredient, which is driving the urgency of this 
discussion.  My read of the paper is that, at heart, this is a general resource 
exhaustion problem stemming from the DNS protocol’s persistence in finding an 
answer no matter how hard it is to find.  Key tag collisions help hide the 
intent of a malicious configuration by lowering the number of signature records 
needed.
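For anyone following along who hasn’t looked at how the tags are derived: the 
key tag is just a 16-bit checksum over the DNSKEY RDATA (RFC 4034, Appendix B), 
not a cryptographic hash, so collisions are structural.  A minimal sketch of 
the calculation (the sample byte strings are invented for illustration):

```python
def key_tag(rdata: bytes) -> int:
    """Compute the RFC 4034 (Appendix B) key tag over DNSKEY RDATA.

    The tag is a 16-bit checksum, not a hash: distinct keys can
    and do share a tag.
    """
    ac = 0
    for i, b in enumerate(rdata):
        # Even offsets contribute the high byte, odd offsets the low byte.
        ac += b << 8 if i % 2 == 0 else b
    ac += (ac >> 16) & 0xFFFF
    return ac & 0xFFFF

# Two different (toy) RDATA values that land on the same tag:
print(key_tag(bytes([1, 2])))        # 258
print(key_tag(bytes([0, 2, 1, 0])))  # 258
```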

>And while com is rather large, computes aren't as expensive as they were when 
>y'all invented the ritual. I suspect that if you just always pick two keys and 
>sign the zones twice, this problem becomes so improbable that we never have to 
>fall back to actually re-doing the ceremony. But if we did have to fall back 
>once in a blue moon and re-do the ceremony, that might be quite a bit cheaper 
>than allowing key hash collisions in situations where it's actually a problem. 
>I think it would be completely reasonable to insist that if there is a key 
>collision between e.g. com and fugue.com, that fugue.com could be obligated 
>to regenerate its key rather than com.

In validation, key tag collisions are a problem when there is malicious intent 
and no more than a nuisance in a benign collision.

If an operator had two active ZSKs, there would be two signatures and two keys. 
 With non-colliding key tags, it would be easy to line them up - and recall 
the rule that it only takes one successful operation to declare success.  With 
a collision, there’s a 50% chance of a misalignment on the first try, which is 
where the figure of 1.5 signature verification operations per instance comes 
from.  Given the low probability of a collision (it’s rare!), that 1.5 isn’t a 
big deal.  (No one has suggested a 3-key collision, which would be rarer still, 
especially as most operators never exceed two keys of the same role per DNS 
security algorithm.)
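The 1.5 falls out of simple expectation arithmetic, assuming the validator 
tries the two same-tag keys in arbitrary order and stops on the first success:

```python
# Two keys share a tag; the validator picks one to try first at random.
# Half the time the first verification succeeds (1 operation);
# half the time it fails and the second key is tried (2 operations).
p_first_correct = 0.5
expected_ops = p_first_correct * 1 + (1 - p_first_correct) * 2
print(expected_ops)  # 1.5
```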

Nevertheless, in a malicious case (no more need be said) … this makes me think 
the appropriate solution is for validators to implement self-protection 
(timeouts) and not to try to avoid collisions.
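A sketch of what such self-protection might look like - the budget value, the 
names, and the structure here are all illustrative assumptions on my part, not 
taken from any actual validator implementation:

```python
class ValidationBudget:
    """Illustrative cap on signature verifications per response.

    MAX_ATTEMPTS is an invented example value; a real validator would
    choose its own limit and/or pair it with a wall-clock timeout.
    """
    MAX_ATTEMPTS = 8

    def __init__(self):
        self.attempts = 0

    def spend(self) -> bool:
        """Return True if another verification may be attempted."""
        if self.attempts >= self.MAX_ATTEMPTS:
            return False
        self.attempts += 1
        return True

def validate_rrset(candidate_pairs, verify, budget):
    """Try (key, signature) pairs until one verifies or the budget runs out.

    An attacker-supplied collision set that forces many failing
    verifications hits the budget and fails closed, instead of
    grinding CPU on every combination.
    """
    for key, sig in candidate_pairs:
        if not budget.spend():
            return "budget_exceeded"
        if verify(key, sig):
            return "secure"
    return "bogus"
```

The point of the sketch is that the validator bounds its own work regardless 
of how the zone data was constructed, which addresses the malicious case 
without requiring collisions to be outlawed.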

‘Course, collisions still are a problem for the key managers, but that is a 
local problem.  Unless they publish the wrong key in the collision set.
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
