> I removed a lot of logic, as it seems dead on.  But...
> 
> >    This would allow validators to reject any DS or DNSKEY RR set that has a
> >    duplicate key tag.
> 
> "This" refers to barring keys from having duplicate key tags.  My
> knee-jerk response is that validators are already permitted to
> reject anything they want to reject.  (We used to talk about the
> catch-all "local policy" statements in the early specs.)  You don't
> have to bar duplicate key tags to allow validators to dump them,
> validators already have that "right."

A basic premise of protocol design is that parties that want the protocol to
work follow the protocol. Of course there will be random failures, and in
the case of security protocols, also attackers.

If we have a protocol where validators are allowed to discard RR sets with
duplicate key tags but we place no restriction on signers, then we have a 
protocol with a high chance of failure even if all parties follow the 
protocol.

So we have essentially two options for a successful protocol:
1) the current one, where validators tolerate key tag collisions
2) a potential one, where signers ensure that key tag collisions do not
   happen.
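The second option can be sketched concretely. A signer computes the key tag of each candidate key using the algorithm from RFC 4034 Appendix B and only admits keys whose tag is not already in use. The `accept_new_key` helper below is hypothetical, purely for illustration; only the tag computation itself comes from the RFC:

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag over DNSKEY RDATA
    (the common case; algorithm 1 has a separate historical rule)."""
    acc = 0
    for i, b in enumerate(rdata):
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def accept_new_key(existing_rdatas, new_rdata):
    """Hypothetical signer-side check: admit a new key only if its
    tag does not collide with any key already in the set."""
    existing_tags = {key_tag(r) for r in existing_rdatas}
    return key_tag(new_rdata) not in existing_tags
```

On a collision, the signer would simply regenerate the key and try again; since tags are 16 bits, a retry is rarely needed.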

If validators violate the protocol then all kinds of things can happen. They
just place themselves outside the protocol and cannot rely on the properties
of the protocol.

At the end of the day, following the protocol is voluntary. But if we want
to be able to reason about the protocol, then we have to assume that all
interested parties try to follow the protocol.

> >Duplicate key tags in RRSIGs is a harder problem
> 
> I'm not clear on what you mean.
> 
> I could have RRSIG generated by the same key (binary-ily speaking,
> not key tag-speaking) that have different, overlapping temporal
> validities.  If you want to draw a malicious use case, I could take
> an RRSIG resource record signed in January with an expiration in
> December for an address record that is changed in March, and replay
> that along with a new signature record, signed in April and valid
> in December.  One would validate and the other not.  But this isn't
> a key tag issue, it's a bad signing process issue.

Indeed. But the question is: if a validator finds both RRSIGs associated with an
RR set, and we have guarantees about the uniqueness of key tags per public key,
can the validator then discard those signatures?

> >But for the simple question, would requiring unique key tags in DNSSEC be
> >doable without significant negative effects, then I think the answer is yes.
> 
> Heh, heh, if you make the problem simpler, then solving it is
> possible.
> 
> Seriously, while I do believe in the need for a coherent DNSKEY
> resource record set, there are some multi-signer proposals that do
> not.  If the key set has to be coherent, then someone can guard
> against two keys being published with the same key tag.  The recovery
> may not be easy as you'd have to determine what key needs to be
> kicked and who does it and where (physically in HSMs or process-wise).
> I have some doubt that key tag collisions can be entirely avoided.

So now we have moved the problem away from the core DNSSEC protocol to the
realm of multi-signer protocols.

The first step is to conclude that for the core DNSSEC protocol, requiring
unique key tags is doable, even without a lot of effort (other than the usual
effort of coordinating changes to the protocol).

Then the question becomes: how hard will it be to adapt multi-signer protocols
to ensure that the effective set of DNSKEYs has unique key tags?
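One hypothetical coordination step for a multi-signer setup would be to check the union of all signers' key sets for tag collisions before publishing the combined DNSKEY RRset. A sketch, assuming each signer reports the tags of its keys (the function name is illustrative, not from any spec):

```python
from collections import Counter

def colliding_tags(per_signer_tags):
    """Given each signer's list of key tags, return the tags that
    collide anywhere in the combined (effective) DNSKEY set."""
    all_tags = [t for tags in per_signer_tags for t in tags]
    return sorted(t for t, n in Counter(all_tags).items() if n > 1)
```

Any tag this reports would require one of the signers to replace a key before the effective set could satisfy a uniqueness requirement.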

> Even if you could - you still have the probability that someone
> intentionally concocts a key tag collision.  Not everyone plays by
> the rules, especially when they don't want to.

That is not a problem. If we modify the core DNSSEC protocol and 
direct validators to just discard anything that has duplicate key tags,
then the attack would go nowhere.
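Such a validator-side rule would be cheap to implement: count the key tags in the DNSKEY RRset and treat the set as rejectable if any tag occurs twice. A minimal sketch (the function name is illustrative):

```python
from collections import Counter

def rrset_has_duplicate_tags(key_tags):
    """Return True if any key tag appears more than once in the
    DNSKEY RRset, i.e. the set would be discarded under this rule."""
    counts = Counter(key_tags)
    return any(n > 1 for n in counts.values())
```

An attacker who concocts a collision then achieves nothing but a rejected RRset, which is the zone operator's failure to avoid, not the validator's cost to absorb.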

> So - to me - it keeps coming back to - a validator has to make
> reasonable choices when it comes to using time/space/cpu to evaluate
> an answer.  No matter whether or not the protocol "bars" duplicate
> key tags and whether or not signers are instructed to avoid such
> duplication.

But the protocol also has to take reasonable measures to limit the amount
of time a validator has to spend on normal (including random exceptional)
cases.

For example, without key tags, validators would have to try all keys in
a typical DNSKEY RR set or face a high rate of random failures.
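This is exactly the selection step the key tag field enables: instead of attempting an expensive signature verification against every key, the validator narrows the candidates to keys whose tag matches the one in the RRSIG. A sketch, assuming keys are given as (tag, key) pairs:

```python
def candidate_keys(dnskeys, rrsig_key_tag):
    """Select the DNSKEYs whose tag matches the RRSIG's key tag field.

    With unique key tags this yields at most one candidate; with
    collisions it can yield several, and without the tag field every
    key would have to be tried."""
    return [key for tag, key in dnskeys if tag == rrsig_key_tag]
```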

Going a step further, we have to decide where to place complexity. Unique key
tags simplify validator code in many ways. But they increase the
complexity of signers, in particular multi-signer setups.

So the question is, does requiring unique key tags significantly reduce the
attack surface for a validator?

Are there other benefits (for example in diagnostic tools) of unique key
tags that outweigh the downside of making multi-signer protocols more
complex?

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop