> For about the hundredth time, the way you deal with any of this is
> resource limits, not trying to invent new rules about stuff we
> might have forbidden if we'd thought of it 20 years ago.

There are a number of problems with resource limits:
1) We haven't written them down (in an RFC), so we got to the point where
   many validators got it wrong. A bit of hand waving that implementations
   should do resource limiting doesn't magically make it happen.
2) Operators of validators don't want customer-facing errors due to
   resource limit constraints, so they set the limits generously enough
   that real traffic is never affected. Nobody knows what happens during
   a new attack (see the sketch after this list).
3) Some content providers are quite creative in the way they use DNS,
   so the limits need to be high enough to accommodate them.
4) Because there are no standards for those limits, we cannot really reason
   about them. 
5) It is tricky for researchers because they first have to figure out how
   popular software works in order to exploit it. And if the issue is just
   a tunable resource limit, reporting it doesn't result in a lot of credit.
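
To make points 2 and 4 concrete, here is a rough sketch (Python, with
made-up names and a made-up default; not taken from any real
implementation) of what an implementation-defined limit tends to look
like: one tunable knob, set high enough to survive real traffic, and
impossible to reason about across implementations.

    # Hypothetical per-query work budget. The default is whatever keeps
    # customers from seeing SERVFAIL on real traffic, so it creeps upward
    # and says little about what an attacker can still make us do.
    MAX_CRYPTO_OPS_PER_QUERY = 10_000   # made-up, operator-tunable

    class BudgetExceeded(Exception):
        """Raised when one query burns through its crypto budget."""

    class WorkBudget:
        def __init__(self, limit=MAX_CRYPTO_OPS_PER_QUERY):
            self.limit = limit
            self.used = 0

        def charge(self, ops=1):
            self.used += ops
            if self.used > self.limit:
                raise BudgetExceeded()  # becomes a customer-facing error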

So from a validator point of view, it is better to move some of those
resource limits into the protocol. Even if the DNSSEC spec only said that
you have to validate with at most two public keys and two signatures per
RRset, that would be a massive improvement over the vagueness in the
current specs.
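
As a strawman, such a protocol-level cap could look like the sketch below
(Python pseudocode with my own names; not from any spec or implementation).
The point is that the worst case is bounded by the spec, at most 2 x 2 = 4
signature verifications per RRset no matter how many DNSKEYs or RRSIGs a
zone publishes, so every validator can be reasoned about the same way.

    MAX_SIGS_PER_RRSET = 2   # strawman numbers from the paragraph above
    MAX_KEYS_PER_RRSET = 2

    def validate_rrset(rrset, rrsigs, dnskeys, verify):
        # Try at most 2 signatures, and for each at most 2 candidate keys
        # with a matching key tag: a fixed, spec-defined amount of work.
        for sig in rrsigs[:MAX_SIGS_PER_RRSET]:
            candidates = [k for k in dnskeys if k.key_tag == sig.key_tag]
            for key in candidates[:MAX_KEYS_PER_RRSET]:
                if verify(rrset, sig, key):
                    return True   # secure: a valid path within the cap
        return False              # bogus: nothing validated within the cap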
