On 2/16/24, 15:05, "DNSOP on behalf of Mark Andrews" <dnsop-boun...@ietf.org on 
behalf of ma...@isc.org> wrote:

Pardon ... perhaps this issue has died down, but I've been off a few days, and 
I just saw this...

>Generating a new key is not hard to do.

That's not the issue; the issue is knowing that generating a new key would be 
the wise thing to do.

>Adding a check against the common key store is not hard to do in a 
>multi-signer scenario.  It can be completely automated.

I'm not in agreement with that.  Some keys are managed with off-net HSM 
devices, accessed only during a key ceremony.  There may be some cases where 
the key set is assembled and signed without access to the 'net.  This is a 
result of an early design rule in DNSSEC: we had to design around systems that 
air-gapped the private keys from the open network.

This does underscore the importance of coherency in the key set even in a 
multi-signer scenario.  (There was talk of trying to let each server have its 
own key set perspective.)  In order to detect key tag collisions, the managing 
entity has to be able to see the entire set.
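For reference, the key tag itself is cheap to compute from the DNSKEY RDATA 
(RFC 4034, Appendix B, for algorithms other than 1), so once the managing 
entity can see the whole set, the collision check is a few lines.  A rough 
sketch in Python (function names are mine, not from any implementation):

```python
from collections import Counter

def key_tag(rdata: bytes) -> int:
    """Key tag over the DNSKEY RDATA, per RFC 4034 Appendix B."""
    acc = 0
    for i, b in enumerate(rdata):
        # even-offset octets contribute the high byte, odd the low byte
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def colliding_tags(rdatas):
    """Return the key tags used by more than one key in the set."""
    counts = Counter(key_tag(r) for r in rdatas)
    return {tag for tag, n in counts.items() if n > 1}
```

The point stands, though: this only works when one party holds the entire 
RDATA set, which the air-gapped ceremonies above may not allow.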

>We could even use the DNS and UPDATE to do that. Records with tuples of 
>algorithm, tag and operator. Grab the current RRset. Add it as a prerequisite 
>with a update for the new tag.  

This approach leaves open a race condition.  It's possible that two signers 
simultaneously generate keys with colliding key tags and each gets to add its 
key because they don't see each other.  My point is that while this approach is 
admirable, a perfect solution is out of reach, so let's not assume we can ever 
totally avoid key tag collisions.
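To make the race concrete, a toy simulation (all names hypothetical): two 
signers each check the shared tag registry against a snapshot taken before the 
other's write, so both "succeed" and the collision lands anyway:

```python
registry = [1234]  # key tags already published in the shared RRset

def try_publish(tag: int, snapshot: set) -> bool:
    """Publish a tag if it was absent from the signer's earlier snapshot.
    The gap between taking the snapshot and writing is the race window."""
    if tag in snapshot:
        return False
    registry.append(tag)
    return True

# Both signers snapshot the registry before either writes...
snap_a, snap_b = set(registry), set(registry)
ok_a = try_publish(4242, snap_a)  # signer A publishes
ok_b = try_publish(4242, snap_b)  # signer B also publishes: collision
```

An exact-match prerequisite on a single server would serialize the two 
updates, but with independent signers there is no single serialization point.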

My thesis is this: key tag collisions are not the driver for validation 
resource consumption.  In the research paper, collisions contribute by scaling 
the impact up.  By using invalid signature values, an attacker can drain 
resources by throwing multiple "good-looking" signatures along with the data 
set and publishing many keys.  The fact that key tags can collide only means 
that I can cause multiple checks per signature, which may help hide my 
malicious tracks.
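A sketch of why collisions scale the work (the shapes here are illustrative, 
not any resolver's API): a validator must attempt every key whose algorithm 
and key tag match a signature, so S signatures over K colliding keys cost up 
to S x K crypto attempts even though none can succeed:

```python
def count_verify_attempts(sigs, keys, verify):
    """sigs and keys are (algorithm, key_tag) tuples; verify() stands in
    for the expensive crypto check.  Returns the attempts performed."""
    attempts = 0
    for sig in sigs:
        for key in keys:
            if key == sig:          # algorithm and key tag both match
                attempts += 1
                if verify(sig, key):
                    break           # a success would stop the scan
    return attempts

# 3 bogus signatures, 4 keys sharing one tag, crypto always fails:
sigs = [(13, 4242)] * 3
keys = [(13, 4242)] * 4
attempts = count_verify_attempts(sigs, keys, lambda s, k: False)
```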

And remember, the paper requires that the crypto operations always fail.  
I.e., there is no success to be missed by not trying all the combinations of 
keys and signatures.  A simple timer is all that is needed.
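That "simple timer" defense can be sketched as a per-response deadline (the 
limit value is an arbitrary assumption for illustration, not a recommendation):

```python
import time

class ValidationDeadline:
    """Stop attempting signature verifications once a wall-clock
    budget for the whole response has been spent."""
    def __init__(self, limit_seconds: float):
        self.deadline = time.monotonic() + limit_seconds

    def expired(self) -> bool:
        return time.monotonic() > self.deadline

d = ValidationDeadline(0.05)  # e.g. 50 ms per response, checked before
                              # each crypto attempt in the validator loop
```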

Key tag collisions are a pain in key management; operators that have 
experienced them have shown they will not tolerate them for long, even when 
there were no outages.  To me, whatever can be done to easily avoid them would 
be good, but trying to define an interoperable way (a standard) to eliminate 
them would prove to be overkill.  And...my original point was...don't include 
this idea in a future design.

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
