> To clarify, are you continuing to claim that there's "no damage possible
> (at least, in the TLS context) caused by PQ DSA break", despite the
> facts that (1) upgrades often take a long time and (2) attackers aren't
> going to announce their secret attacks?

For (1): I call it not an “upgrade” (i.e., to something new and often not yet well tested), but a “downgrade” – reverting to the “old, mature, and well-tested ECC code”. That shouldn’t take long at all.

For (2): why do you assume there are no secret attacks against ECC? Merely because you couldn’t find one, and nobody has announced one yet?

>> then don’t move to PQ DSA until either CRQC is announced
>
> That would be too late. It completely fails to address the large risk of
> quantum attacks happening before the first public attack demos, plus it
> leaves users vulnerable during the upgrade period.

You don’t really need PQ DSA until a CRQC is here. At this point, everybody seems to agree that there is time before a CRQC arrives. So, keep studying/exploring/attacking PQ DSA, and prepare the code and infrastructure to deploy it – but use ECC for now.
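To illustrate what “prepare the code and infrastructure, but use ECC for now” could look like, here is a minimal sketch in Go (everything in it – the Signer interface, pqSigner, enablePQ – is a hypothetical illustration of mine, not code from any real library or TLS stack): keep the PQ path built and tested, and make switching to it a one-flag change.

package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// Signer is the common interface both signing paths implement.
type Signer interface {
	Name() string
	Sign(msg []byte) ([]byte, error)
}

// eccSigner is the production path: plain ECDSA over P-256.
type eccSigner struct{ key *ecdsa.PrivateKey }

func (s eccSigner) Name() string { return "ECDSA-P256" }
func (s eccSigner) Sign(msg []byte) ([]byte, error) {
	digest := sha256.Sum256(msg)
	return s.key.Sign(rand.Reader, digest[:], crypto.SHA256)
}

// pqSigner is a stand-in for an ML-DSA implementation that is kept built,
// fuzzed, and benchmarked, but not selected by default.
type pqSigner struct{}

func (pqSigner) Name() string { return "ML-DSA (standby)" }
func (pqSigner) Sign([]byte) ([]byte, error) {
	return nil, errors.New("PQ path prepared but not enabled")
}

// enablePQ is the single switch to flip once a CRQC becomes credible.
var enablePQ = false

func activeSigner(ecc, pq Signer) Signer {
	if enablePQ {
		return pq
	}
	return ecc
}

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	s := activeSigner(eccSigner{key}, pqSigner{})
	sig, err := s.Sign([]byte("handshake transcript bytes"))
	fmt.Println(s.Name(), len(sig), err)
}

The point is only that the switch from ECC to PQ can be prepared and exercised in testing ahead of time, without actually signing anything with the PQ algorithm until a CRQC is credible.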
. . . .

> The deployment timeline for new algorithms and standards is lengthy.

Of course. But we aren’t talking about new algorithms here! Unless you consider ECC and/or RSA – which have been in deployed codebases for ages now – new?

I’m repeating my points (the toy enumeration at the end of this message spells them out):

* Hybrid only makes sense in one scenario: a CRQC is not available, and the PQ algorithm used is broken via a Classic attack. In all the other possibilities, Hybrid is useless from the security point of view and only adds unnecessary burden.
* Thus, if/when a CRQC arrives – ECC (or any other Classic algorithm) becomes useless, regardless of whether the PQ part is or is not broken.
* Until a CRQC arrives – one can safely use ECC (or another trusted Classic algorithm) for TLS signatures, and keep experimenting with and studying PQ algorithms to one’s heart’s content.

> Moreover, it may not always be feasible to easily tweak configurations to
> enable/disable algorithms dynamically when CRQCs become publicly known.
> We would like to also consider the potential impact of zero-day
> vulnerabilities, where exploits are discovered and used before
> vulnerabilities are publicly disclosed. Proactive preparation and
> deployment of hybrid signature schemes reduces the risk of being caught
> unprepared in such deployments.

When CRQC existence becomes known – hybrids would have no place anyway; at that point, tweaking is useless. Either your PQ algorithm is strong, or it is broken. If it is broken – you’re dead, regardless. If it is strong – the fact that you carried (e.g.) an ECC signature alongside the PQ one was just a waste.

As for PQ algorithm maturity (“but can we ‘really trust’ PQ? What if…?”) – let’s look back at, say, ECC:

* 1985 – invented (I probably still have the original paper by Victor Miller, then at IBM Research)
* 1998 – IEEE P1363 standard
* 1999 – NSA Suite B
* 2012 – TLS includes ECC (note that a big holding-back factor was legal: Certicom held ECC patents, so many people wanted to wait until they expired in 2015 before actually deploying ECC-using commercial code)
* 2015 – dominance

And now compare this with, e.g., Lattice-based crypto:

* 1950s – mathematical study of Lattices
* 1982 – Lenstra’s LLL algorithm to approximate lattice basis reduction
* 1996 – NTRU concept by Hoffstein, Pipher, and Silverman
* 1997 – proposal by Ajtai for a Lattice-based PK encryption scheme
* 2001 – NTRU cryptosystem formalized
* 2005 – new hardness assumptions (Oded Regev), LWE
* 2010s – Ring-LWE
* 2022 – NSA CNSA-2.0
* 2024 – NIST standards

Looking at the above time-scale, Lattice-based crypto appears to be roughly where ECC was when the crypto world decided “we’ve studied it enough; it is reasonably safe now to rely on ECC”. How long until you decide “yes, we’ve studied RLWE enough to place our bet on it”?
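To spell out the hybrid point from the bullets above, here is a toy enumeration in Go (my own framing, with deliberately simplified assumptions stated in the comments – ECC falls only to a CRQC, the PQ scheme only to a classical break):

package main

import "fmt"

func main() {
	fmt.Println("CRQC   PQ-broken | ECC    PQ     Hybrid")
	for _, crqc := range []bool{false, true} {
		for _, pqBroken := range []bool{false, true} {
			eccOK := !crqc            // ECC assumed to fall only to a CRQC
			pqOK := !pqBroken         // PQ assumed to fall only to a classical break
			hybridOK := eccOK || pqOK // forging the AND-combined hybrid needs both broken
			fmt.Printf("%-6v %-9v | %-6v %-6v %-6v\n", crqc, pqBroken, eccOK, pqOK, hybridOK)
		}
	}
	// The Hybrid column differs from the PQ-only column in exactly one row:
	// CRQC=false, PQ-broken=true -- the single scenario named in the first bullet.
}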