Here is a theoretical attack that follows from such randomness reuse, though I am not sure how practical it would be to mount in real systems.
1) An attacker A monitors all communication with a TLS server S.
2) When a victim client C connects to server S, A also attempts to connect to S.
3) If both TLS sessions are established successfully, their shared secrets (derived from the reused randomness) will be the same with probability p.
4) A is then able to decrypt the communication between S and C with probability p.

Guilin

From: Scott Fluhrer (sfluhrer) <[email protected]>
To: Deirdre Connolly <[email protected]>; Muhammad Usama Sardar <[email protected]>
Cc: [email protected]
Date: 2026-03-02 23:29:38
Subject: [TLS] Re: WG Last Call: draft-ietf-tls-mlkem-07 (Ends 2026-02-27)

Correction: it turns out that reusing randomness during encapsulation isn't quite as broken as I first thought. Now, the two clients that you encrypted to can both learn each other's shared secret (and so the MUST NOT statement is perfectly appropriate); however a third party cannot.

On 01.03.26 18:18, Scott Fluhrer (sfluhrer) wrote:

Oh, and I just noticed (and perhaps this is common knowledge): if you used the same encapsulation randomness to encapsulate to two different public keys (from the same parameter set), then it is fairly easy to recover both shared secrets (assuming access to both ciphertexts and public keys). Hence, the MUST NOT reuse encapsulation randomness statement is there for an extremely good reason.
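The collision in step 3 can be sketched with a toy deterministic KEM. This is a hypothetical model (hashes standing in for real ML-KEM encapsulation, which FIPS 203 does not define this way); the only property it shares with the real scheme is that encapsulation is deterministic once the randomness input is fixed, so a repeated randomness value against the same public key reproduces the same shared secret:

```python
import hashlib

def toy_encaps(public_key: bytes, randomness: bytes):
    """Toy deterministic encapsulation (hypothetical, not FIPS 203):
    the same public key plus the same randomness always yields the
    same ciphertext and shared secret."""
    ciphertext = hashlib.sha256(b"ct" + public_key + randomness).digest()
    shared_secret = hashlib.sha256(b"ss" + public_key + randomness).digest()
    return ciphertext, shared_secret

# Server S has one public key; both the victim C and the attacker A
# encapsulate to it.
server_pk = b"server-public-key"

# If the randomness source repeats (the flaw under discussion), C and A
# draw the same 32-byte value:
reused_rand = b"\x00" * 32

_, secret_c = toy_encaps(server_pk, reused_rand)  # victim's session
_, secret_a = toy_encaps(server_pk, reused_rand)  # attacker's session

assert secret_c == secret_a  # A now holds C's shared secret (step 4)

# With fresh randomness the secrets differ, which is why the draft's
# MUST NOT on reusing encapsulation randomness closes the attack.
_, secret_fresh = toy_encaps(server_pk, b"\x01" * 32)
assert secret_fresh != secret_c
```

The probability p in steps 3-4 is then just the probability that the flawed randomness source repeats a value across the two sessions.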
_______________________________________________ TLS mailing list -- [email protected] To unsubscribe send an email to [email protected]
