Scott Fluhrer (sfluhrer) writes:
> My real question is "why is there such push-back from such a small change?"

For the same reason there would have been pushback if the KEM rollouts had done PQ instead of ECC+PQ: that would have been reckless from a security perspective.

Consider CECPQ2b, which applied ECC+SIKE to real user data. Replacing ECC+SIKE with SIKE is a very small code change to skip the "ECC+" part, but this would have had much larger security consequences against attacks known today: the change would have made the connections exploitable in seconds even _without_ a quantum computer.

Of course, that wasn't known back then. Many people were enthusiastic about SIKE (see, e.g., https://eprint.iacr.org/2021/543), praising its small keys and how secure it supposedly was. Only a few people (me, for example) were on record sounding alarm bells about SIKE. Fortunately, the cryptographic community's habitual overconfidence was overridden by common-sense security practices. Those practices include keeping an ECC layer just in case the PQ layer goes horribly wrong.
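To make the "small code change" point concrete: a hybrid key exchange feeds both shared secrets into one KDF, so skipping the "ECC+" layer is a one-line change in code while removing the entire ECC safety net. Here's a rough sketch in Python, not CECPQ2b's actual code; the function names and the HKDF-style combiner are illustrative stand-ins:

import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract step (RFC 5869): HMAC-SHA256 over the input keying material.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hybrid_shared_secret(ecc_shared: bytes, pq_shared: bytes) -> bytes:
    # Both layers feed the KDF: an attacker has to break the ECC exchange
    # AND the PQ exchange to recover the session key.
    return hkdf_extract(b"hybrid-kex", ecc_shared + pq_shared)

def pq_only_shared_secret(pq_shared: bytes) -> bytes:
    # The "small code change": skip the "ECC+" part. If the PQ scheme is
    # later broken (as SIKE was), the session key falls with it.
    return hkdf_extract(b"pq-kex", pq_shared)

# Dummy values just to show both code paths run; a real deployment would
# feed in, e.g., an X25519 shared secret and a PQ KEM shared secret.
ecc_ss, pq_ss = b"\x01" * 32, b"\x02" * 32
assert hybrid_shared_secret(ecc_ss, pq_ss) != pq_only_shared_secret(pq_ss)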
> I would understand it if there were a real security vulnerability at stake,

This is like claiming in 2021 that there's no vulnerability in SIKE, ergo might as well simplify ECC+SIKE to SIKE.

https://cr.yp.to/papers.html#qrcsp accumulates attack data for the 69 round-1 submissions to the NIST competition, and finds that 48% have been broken already. That's counting only mathematical breaks, ignoring all the implementation breaks such as KyberSlash.

> however if we believe that ML-DSA has a real security vulnerability,
> we ought to abandon it entirely

We're not talking about the extreme case of deploying something today that has already been broken. We're talking about managing _risks_ of _future_ attacks. The reason to upgrade from ECC to ECC+PQ is to simultaneously address (1) the risks of quantum attacks by year Y for various Y and (2) the risks of the PQ parts being broken.

> I don't believe that it is reasonable for the working group to demand
> that everyone make that same trade-off

Laissez-faire is a security disaster, as illustrated by the history of TLS, not to mention the broader history of security. TLS 1.3 learned from this and prohibited various things that were allowed before.

I'm not saying that more options are always worse. I'm saying that one has to look at the details, rather than resorting to generic arguments such as "more options are better" or "I've heard that someone will pay for this ergo IETF should allow it".

> allowing such differing trade-offs is just assigning a few additional
> code points

I quoted two examples of proposals of non-hybrid adoption and non-hybrid standardization. Have those proposals been withdrawn?

There are more possibilities, as ekr noted. Some are easier than others, but the WG's top priority should be security. RFC 7465, banning RC4, took more work than leaving RC4 in place, but was still the right thing to do from a security perspective.

---D. J. Bernstein