Replying to the broader thread: when I said "uncontroversial" I was thinking
more about _how_ it should be done, not _if_ it should be used.

My reply to the email below follows.

On Saturday, 16 November 2024 09:57:03 CET, D. J. Bernstein wrote:
> Watson Ladd writes:
>> Authentication is not like encryption.
>
> I presume that you're alluding to the following process: if the PQ
> signature system is broken, we revert to ECC signatures, and then the
> attacker doesn't benefit from forging the no-longer-accepted signatures
> (whereas we can't stop attackers from breaking previous ciphertexts).
>
> This process leaves computers completely exposed until they've reverted
> to ECC. Sure, some environments are fast to make changes, but some
> aren't. For comparison, using ECC+PQ in the first place avoids this
> security failure, and will make many people less hesitant to upgrade.
>
> The revert-in-case-of-disaster process also leaves computers completely
> exposed to PQ attacks that haven't come to the public's attention yet.
> Out of the 69 round-1 submissions to NIST, 33 have been publicly broken
> by now (see https://cr.yp.to/papers.html#pqsrc), with some of the
> attacks not published for years; is it so hard to imagine that
> large-scale attackers found some attacks before the public did?
>
> More broadly, conflating "no attacks have been published" with "no
> attacks are being carried out" is unjustified, an extreme form of
> availability bias. Occasionally there are leaks from attackers
> illustrating how much damage this mistake has done. Example:
>
> https://www.washingtonpost.com/world/national-security/nsa-infiltrates-links-to-yahoo-google-data-centers-worldwide-snowden-documents-say/2013/10/30/e51d661e-4166-11e3-8b74-d89d714ca4dd_story.html

All good points, and ones I agree with, but I think those are arguments
against wide deployment of pure ML-DSA, not against describing how
the algorithms should be implemented at a technical level.

The reality is that we have very tight deadlines from CNSA 2.0, with customers
actively asking for post-quantum support. For those to whom those requirements
apply, use of ML-DSA is not only uncontroversial, but mandatory.

And personally, I'd rather they use ML-DSA than LMS or XMSS...
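To make that LMS/XMSS contrast concrete: ML-DSA is stateless, so the signer
keeps no per-signature counter or one-time-key state whose loss (e.g. restoring
a key from backup) is catastrophic. A minimal sketch, assuming the liboqs-python
binding (the `oqs` module) and a build that exposes the "ML-DSA-65" identifier
(older builds only expose "Dilithium3"); not a statement about any particular
implementation:

```python
import oqs  # liboqs-python; assumed available with ML-DSA enabled

message = b"example to-be-signed bytes"

# Signer side: generate a keypair and sign. No per-signature state has to be
# persisted, unlike LMS/XMSS, where reusing a one-time key breaks security.
with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Verifier side: only the public key, the message, and the signature are needed.
with oqs.Signature("ML-DSA-65") as verifier:
    assert verifier.verify(message, signature, public_key)
```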

For the wider Internet, where we want fail-safe options, yes, hybrids are
probably better. Unfortunately, I don't think we have a rough consensus in
LAMPS on how hybrid signatures should be done just yet, and without that,
we can't standardise them for TLS.
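Since there is no LAMPS consensus on the composite encoding yet, the following
is only a sketch of the security logic, not any draft's wire format: an
AND-composition where the message is signed with both an ECC key and an ML-DSA
key, and verification fails unless both component signatures verify. It assumes
the `cryptography` package for Ed25519 and the liboqs-python `oqs` binding for
ML-DSA; key handling is deliberately simplified (fresh keys per call) for
illustration:

```python
import oqs  # liboqs-python, assumed available with ML-DSA enabled
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def hybrid_sign(message: bytes):
    """Sign with both components; returns ((ecc_pub, pq_pub), (ecc_sig, pq_sig))."""
    ecc_key = Ed25519PrivateKey.generate()
    ecc_sig = ecc_key.sign(message)
    with oqs.Signature("ML-DSA-65") as pq:
        pq_pub = pq.generate_keypair()
        pq_sig = pq.sign(message)
    return (ecc_key.public_key(), pq_pub), (ecc_sig, pq_sig)


def hybrid_verify(message: bytes, public_keys, signatures) -> bool:
    """AND-composition: the hybrid is valid only if *both* components verify."""
    ecc_pub, pq_pub = public_keys
    ecc_sig, pq_sig = signatures
    try:
        ecc_pub.verify(ecc_sig, message)  # raises InvalidSignature on failure
    except InvalidSignature:
        return False
    with oqs.Signature("ML-DSA-65") as pq:
        return pq.verify(message, pq_sig, pq_pub)


keys, sigs = hybrid_sign(b"example TBS bytes")
assert hybrid_verify(b"example TBS bytes", keys, sigs)
```

The AND rule is what gives the fail-safe property discussed above: an attacker
has to break both components to forge, so a break of the PQ component alone
doesn't leave verifiers exposed while they wait to revert.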

(That being said, I don't think ML-DSA will be completely broken overnight;
I suspect it will be weakened over time, so migration off of it won't need
to happen with high agility... but only time will tell how it plays out.)
--
Regards,
Alicja (nee Hubert) Kario
Principal Quality Engineer, RHEL Crypto team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00, Brno, Czech Republic

