On Sun, 19 May 2024, Steve Crocker wrote:

[speaking as individual only]

> In my view, an algorithm moves through seven phases during its lifecycle.
>
> 1. Experimental – defined and included in the IANA registry
> 2. Adopted – begin inclusion in validation suite
> 3. Available – ok to use for signing
> 4. Mainstream – recommended for signing
> 5. Phaseout – transition to newer signing algorithm
> 6. Deprecated – signing should have stopped
> 7. Obsolete – ok to remove from validation suite

This is a very theoretical transition in an ideal world. But that's not
how things have gone in the past. GOST went from 3 to 7. SHA1 went from
5 to an unlisted state of "6, but avoid because it is broken on some OSes".
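
Purely as an illustration of that mismatch (this enum exists nowhere; it
just numbers the phases above so the jumps are explicit):

    from enum import IntEnum

    class Phase(IntEnum):
        EXPERIMENTAL = 1   # defined, in the IANA registry
        ADOPTED      = 2   # begin inclusion in validation suite
        AVAILABLE    = 3   # ok to use for signing
        MAINSTREAM   = 4   # recommended for signing
        PHASEOUT     = 5   # transition to newer signing algorithm
        DEPRECATED   = 6   # signing should have stopped
        OBSOLETE     = 7   # ok to remove from validation suite

    # Ideal: 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7, one step at a time.
    # Reality: GOST went Phase.AVAILABLE -> Phase.OBSOLETE in one jump,
    # and SHA1 is stuck between PHASEOUT and DEPRECATED depending on
    # which OS you ask.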

Also, you can't have validation without signing, so implementation-wise
2 and 3 are the same state.

There is also no single "Mainstream". For example, some crowds need to
follow FIPS, while other crowds try to avoid FIPS algorithms.

There is also a huge difference between "supported but no one uses it"
and "mainstream".

An example over at TLS/IPsec: right now, PCI DSS got rid of ffDH. Is
ffDH now no longer Mainstream, or is it still Mainstream?

I think it is better to just have humans evaluate every few years and
come up with more flexible transitions. Sometimes that means a transition
doesn't follow the above 7 steps, though in general you'd hope things
would be done along these lines.

> Each transition from one phase to another should be controlled by an
> expert group that advises IANA. (In some cases, an algorithm might be
> deprecated before reaching the "Mainstream" stage.)

There is a lot of interaction here between certifications, payment
industries, webPKI, laws, and cryptographic research.

> Comment: In the recent Red Hat snafu, I heard a comment that it was
> not possible to disable use of an algorithm for signing without also
> disabling the algorithm for validation.

Note that everything is possible. It is just that the system's crypto
parameters are controlled centrally via "crypto-policies", and overriding
them requires manual tweaking. See "man crypto-policies".

For example, from the fedora-40 version of the crypto-policies man page:

      DEFAULT
           The DEFAULT policy is a reasonable default policy for today’s
           standards. It allows the TLS 1.2, and TLS 1.3 protocols, as
           well as IKEv2 and SSH2. The Diffie-Hellman parameters are
           accepted if they are at least 2048 bits long. This policy
           provides at least 112-bit security with the exception of
           allowing SHA-1 signatures in DNSSec where they are still
           prevalent.

Older versions of crypto-policies had "sha1_in_dnssec" as an option, but
the current man page says to use the @DNSSec scope instead:

        sha1_in_dnssec: Allow SHA1 usage in DNSSec protocol even if it
                is not present in the hash and sign lists
                (recommended replacements: hash@DNSSec, sign@DNSSec).
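
Purely for illustration, assuming the module syntax described in
crypto-policies(7) (the module name below is made up, and "value+"
appends to an existing list; check the man page on your version for the
exact spelling), a local subpolicy that re-allows SHA1 only for DNSSEC
validation could look something like:

    # /etc/crypto-policies/policies/modules/SHA1-DNSSEC.pmod (hypothetical)
    hash@DNSSec = SHA1+
    sign@DNSSec = RSA-SHA1+ ECDSA-SHA1+

applied on top of the base policy with:

    update-crypto-policies --set DEFAULT:SHA1-DNSSEC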


> Hence, when Red Hat wanted to shut off use of an algorithm for signing,
> it removed it from the crypto suite and thus disabled validation.

It's more complicated. It disabled it, but the DNS software initially
just got crypto errors on the verify operations and thus returned
SERVFAIL. One had to recompile with --no-sha1 to ensure SHA1 signatures
were treated as unsigned data, skipped validation, and were returned to
the client without the AD bit set - avoiding the SERVFAIL. Then DNS
software started probing for working/non-working algorithms and handling
this more dynamically.
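
A minimal sketch of that corrected decision logic (illustrative only, not
any resolver's actual code; all names here are made up):

    from dataclasses import dataclass

    @dataclass
    class Rrsig:
        algorithm: int   # DNSSEC algorithm number, e.g. 5 = RSASHA1
        valid: bool      # stand-in for the real cryptographic check

    def validate(rrsigs, supported):
        usable = [s for s in rrsigs if s.algorithm in supported]
        if not usable:
            # Only unsupported algorithms: treat the data as unsigned
            # and return it without the AD bit set, avoiding SERVFAIL.
            return "insecure"
        if any(s.valid for s in usable):
            return "secure"   # validated, AD bit set
        return "bogus"        # a real validation failure: SERVFAIL

    # Crypto policy dropped SHA1 (alg 5) but still allows 8 and 13:
    supported = {8, 13}
    print(validate([Rrsig(5, True)], supported))   # insecure, no SERVFAIL
    print(validate([Rrsig(8, True)], supported))   # secure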

> While it's understandable that a straightforward implementation of a
> crypto suite might provide the same path to an algorithm irrespective
> of whether it will be used for signing or validation, it is possible
> to provide distinct paths for these two uses and thereby permit
> validation but not signing during phases 2 and 6 at the beginning and
> end of the algorithm's life cycle.

The problem lies in certifications. If you have "some uses allowed", it
becomes extremely complicated to prove you are compliant, and you have
to make a case to the auditors: "not doing this only makes the system
weaker, and we guarantee this extra code doesn't let different consumers
accidentally validate this".

Yes, your 7-step program is great in theory. But in practice it just
won't happen that often. People tend to ignore our advice until
something breaks, and only then will they update their
configs/OS/regulations. Signing with SHA1 has been "NOT RECOMMENDED"
since June 2019. In October 2020, RSASHA1 finally got mostly turned off.
In June 2021 there were still 2M RSASHA1-NSEC3 domains, which finally
got mostly turned off in October 2021. Are those the points where we
could have said NO to SHA1? Based on the numbers, yes, but what if a
super important domain had still been using SHA1 then?

As such, I don't see value in publishing an RFC with this lifecycle
recommendation. It is already known and taken as input. But when the
rubber meets the road, we have to act. I think one of the things we
haven't done well (with DNSSEC, IPsec and to a point TLS) is that we
didn't get rid of old things fast enough. But that is not solved by
publishing more RFCs with MUST NOT. It is an interaction with the world
to push people off these things, where at some point we can give a
final nudge and say "really, don't". Another great example here is
IKEv1, which was obsoleted by IKEv2 in 2005, but only in April 2023 did
we dare to finally say "really, don't" in RFC 9395.

Paul
