On Tue, Feb 23, 2021 at 11:05 PM Brian Dickson <
brian.peter.dick...@gmail.com> wrote:
...

> My perspective is that most zone operators will only want to deploy a
> single algorithm, and improving the rate at which new algorithms are
> feasible to be adopted should be an explicit goal.
>

I don't think this is realistic.  Consider, for example, the deprecation of
TLS 1.0 [1] 15 years after it was superseded by TLS 1.1.  I see no reason
to expect that DNSSEC validator update cycles will be faster.

Thanks to secure version negotiation, the TLS ecosystem was not exposed to
all the weaknesses of TLS 1.0 and its ciphers during those years: up-to-date
peers could negotiate a newer version even while servers still offered 1.0.
Without Strict Mode, DNSSEC has no such protection.

[1] https://datatracker.ietf.org/doc/draft-ietf-tls-oldversions-deprecate/
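To make the missing protection concrete, here is a rough sketch (my own
illustration in Python, not any real validator's code) of the fallback rule
versus a hypothetical strict-mode knob.  The classify_zone() function, its
strict_mode parameter, and the algorithm number 99 are invented for the
example; the fallback itself is what RFC 4035 (Section 5.2) specifies when a
zone is signed only with algorithms the validator does not implement.

# Rough sketch of the unsupported-algorithm fallback; illustration only.
SUPPORTED_ALGORITHMS = {8, 13}  # e.g. RSASHA256, ECDSAP256SHA256

def classify_zone(ds_algorithms, strict_mode=False):
    """Classify a delegation from the algorithm numbers in its parent DS RRset."""
    if not ds_algorithms:
        return "insecure"              # unsigned delegation
    if ds_algorithms & SUPPORTED_ALGORITHMS:
        return "validate"              # at least one algorithm we can check
    # Signed, but only with algorithms this validator does not implement.
    if strict_mode:
        return "bogus"                 # hypothetical Strict Mode: fail closed
    return "insecure"                  # today's rule: fall back to unsigned

# A zone signed exclusively with a hypothetical new algorithm, number 99:
print(classify_zone({99}))                    # -> "insecure" (silent downgrade)
print(classify_zone({99}, strict_mode=True))  # -> "bogus" (validation breaks)

That fallback is exactly what keeps unmaintained validators working when a
zone switches algorithms, and it is also the downgrade that Strict Mode is
meant to close.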

> Making it feasible to use new algorithms exclusively, necessarily means
> breaking validation for unmanaged or unmaintained resolvers or forwarders.
>

Note that "newness" is not the only factor.  There are also persistent
disagreements about security levels (esp. national crypto).

> If it is possible to break validation for those devices/systems, without
> adversely affecting clients (due to improved client logic and/or
> capabilities), that would seem to be a good thing to pursue.
>

Given the experience with TLS, I don't think we can reasonably assume that
"clients" are on a much faster update cycle.

I think, at core, there's a philosophical question here.  Do we intend for
DNSSEC to actually be used for critical security in open systems?  If so,
it will have to work like TLS: a 1% failure rate will be utterly
intolerable, so zone operators, like TLS servers, will have to retain support
for the 99th percentile of awful ancient clients.

Alternatively, we can admit that DNSSEC is only intended for critical use
in closed systems, where zones and validators can be updated together.

