Sorry, Bob, this is just me being ignorant: my experience of zone signing and validation is largely as a consumer, not as an author of code. If this can only occur within a single zone, then I think what I said still applies, since it's hard to see how this is a serious problem in that case. To be clear, I don't mean the attack isn't a problem; I mean that forbidding collisions should be very doable, although, as I said previously, it might require some process changes.
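To make "forbidding collisions" concrete, here is the sort of check I have in mind, as a rough Python sketch. The key tag computation follows RFC 4034 Appendix B (ignoring the algorithm 1 special case); generate_dnskey_rdata() is a made-up stand-in for however a signer actually produces the wire-format DNSKEY RDATA:

    def key_tag(dnskey_rdata: bytes) -> int:
        # RFC 4034 Appendix B: fold the DNSKEY RDATA into a 16-bit accumulator.
        ac = 0
        for i, b in enumerate(dnskey_rdata):
            ac += b if i & 1 else b << 8
        ac += (ac >> 16) & 0xFFFF
        return ac & 0xFFFF

    def generate_non_colliding_key(existing_tags: set, max_tries: int = 100):
        # Regenerate until the new key's tag is unused within this zone's key set.
        for _ in range(max_tries):
            rdata = generate_dnskey_rdata()   # hypothetical helper, not a real API
            tag = key_tag(rdata)
            if tag not in existing_tags:
                return rdata, tag
        raise RuntimeError("couldn't avoid a key tag collision; redo the ceremony")

With 65536 possible tags and only a handful of keys per zone, the chance of even one retry is tiny, which is why this looks to me like a process problem rather than a protocol problem.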
On Tue, Feb 20, 2024 at 9:47 AM Bob Harold <rharo...@umich.edu> wrote:

> On Tue, Feb 20, 2024 at 9:06 AM Ted Lemon <mel...@fugue.com> wrote:
>
>> This seems like an implementation detail. The random likelihood of the
>> root and com key hashes colliding seems pretty small. And while com is
>> rather large, compute isn't as expensive as it was when y'all invented
>> the ritual. I suspect that if you just always pick two keys and sign the
>> zones twice, this problem becomes so improbable that we never have to
>> fall back to actually re-doing the ceremony. But if we did have to fall
>> back once in a blue moon and re-do the ceremony, that might be quite a
>> bit cheaper than allowing key hash collisions in situations where it's
>> actually a problem. I think it would be completely reasonable to insist
>> that if there is a key collision between e.g. com and fugue.com,
>> fugue.com could be obligated to regenerate its key rather than com.
>>
> I thought key collisions only mattered within a single zone. Anytime you
> are looking for a key tag, you already know the zone. Collisions across
> zones don't matter, unless your implementation is tracking keys only by
> tag and not by zone.
> Or am I missing something?
>
> --
> Bob Harold
>
>> On Tue, Feb 20, 2024 at 8:42 AM Edward Lewis <edward.le...@icann.org>
>> wrote:
>>
>>> On 2/16/24, 15:05, "DNSOP on behalf of Mark Andrews"
>>> <dnsop-boun...@ietf.org on behalf of ma...@isc.org> wrote:
>>>
>>> Pardon ... perhaps this issue has died down, but I've been off a few
>>> days, and I just saw this...
>>>
>>> > Generating a new key is not hard to do.
>>>
>>> That's not the issue; it's knowing that it would be the wise thing to
>>> do that is the issue.
>>>
>>> > Adding a check against the common key store is not hard to do in a
>>> > multi-signer scenario. It can be completely automated.
>>>
>>> I'm not in agreement with that. Some keys are managed with off-net HSM
>>> devices, accessed only during a key ceremony. There may be some cases
>>> where the key set is assembled and signed without access to the 'net.
>>> This is a result of an early design rule in DNSSEC: we had to design
>>> around a system that air-gapped the private keys from the open network.
>>>
>>> This does underscore the importance of coherency in the key set even in
>>> a multi-signer scenario. (There was talk of trying to let each server
>>> have its own key set perspective.) In order to detect key tag
>>> collisions, the managing entity has to be able to see the entire set.
>>>
>>> > We could even use the DNS and UPDATE to do that. Records with tuples
>>> > of algorithm, tag and operator. Grab the current RRset. Add it as a
>>> > prerequisite with an update for the new tag.
>>>
>>> This approach leaves open a race condition. It's possible that two
>>> signers simultaneously generate keys with colliding key tags and each
>>> gets to add because they don't see each other. My point: while this is
>>> admirable, achieving a perfect solution is out of reach, so let's not
>>> assume we can ever totally avoid key tag collisions.
>>>
>>> My thesis is that key tag collisions are not the driver of validation
>>> resource consumption. In the research paper, collisions do contribute
>>> by scaling the impact up. By using invalid signature values, an
>>> attacker can drain a validator's resources by sending a data set with
>>> multiple "good-looking" signatures and many keys.
>>> The fact that key tags can collide only means that I can cause
>>> multiple checks per signature, which may help hide my malicious tracks.
>>>
>>> And remember, the paper requires that the crypto operations always
>>> fail. I.e., there is no success to be missed by not trying all the
>>> combinations of keys and signatures. A simple timer is all that is
>>> needed.
>>>
>>> Key tag collisions are a pain in key management; operators that have
>>> experienced them have shown they will not tolerate them for long, even
>>> when there were no outages. To me, whatever can be done to easily avoid
>>> them would be good; trying to define an interoperable way (a standard)
>>> to eliminate them would prove to be overkill. And...my original point
>>> was...don't include this idea in a future design.
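Coming back to Mark's UPDATE suggestion that Ed quotes above, here is roughly how I'd picture it with dnspython. The registry name, the TXT layout ("algorithm tag operator"), and the server address are placeholders I made up; the interesting part is the value-dependent prerequisite, which makes the add fail if anyone changed the RRset between the read and the write:

    import dns.query
    import dns.resolver
    import dns.update

    REGISTRY = "keytags._signers.example."   # made-up registry name
    PRIMARY = "192.0.2.1"                    # made-up primary address

    # Grab the current RRset of (algorithm, tag, operator) tuples.
    answer = dns.resolver.resolve(REGISTRY, "TXT")
    current = [rdata.to_text() for rdata in answer]

    update = dns.update.Update("example.")
    # Value-dependent prerequisite: the RRset must still be exactly what we read.
    update.present(REGISTRY, "TXT", *current)
    # Register the tag of the key we just generated.
    update.add(REGISTRY, 300, "TXT", '"8 12345 operator-b"')

    response = dns.query.tcp(update, PRIMARY)
    print(response.rcode())   # NXRRSET means someone updated first: re-read, retry

A failed prerequisite just means someone else registered a tag first, so you re-read and retry. None of this helps a signer that is air-gapped during a ceremony, of course, which I take to be Ed's stronger objection.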
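And on Ed's point that collisions only multiply the checks per signature, and that a simple timer is all a validator needs: here's what I picture, as hypothetical pseudocode. verify_rrsig() and the attribute names aren't any real library's API; the only point is where the budget sits.

    import time

    MAX_VALIDATIONS = 8    # arbitrary per-response attempt budget
    MAX_SECONDS = 0.1      # or the simple timer Ed mentions

    def validate_rrset(rrset, rrsigs, dnskeys) -> bool:
        attempts = 0
        deadline = time.monotonic() + MAX_SECONDS
        for sig in rrsigs:
            # A key tag collision just means more than one candidate key here.
            for key in (k for k in dnskeys
                        if k.key_tag == sig.key_tag and k.algorithm == sig.algorithm):
                attempts += 1
                if attempts > MAX_VALIDATIONS or time.monotonic() > deadline:
                    return False              # budget exhausted: treat as bogus
                if verify_rrsig(rrset, sig, key):   # hypothetical crypto check
                    return True               # one good signature is enough
        return False

In the attack from the paper every verify_rrsig() call fails, so the budget is the only thing bounding the work; in the normal case the first good signature returns early.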