Thanks for showing the probability p when at least one of the servers uses good 
randomness.

Yes, p is negligible in this case.

The assumption for the attack scenario I described is actually that the 
randomness is maliciously reused for a while or chosen from a small range, due 
to a mistaken selection or implementation of the PRG, or a bad protocol 
specification (say, one allowing reuse of randomness across multiple 
connections within an interval of time).

In this case, the weakness may be exploited by an attacker to decrypt the 
communications in which some victims are involved.
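
To illustrate why a small randomness range is dangerous, here is a minimal, hypothetical sketch (the key bytes, range size, and sample count are all made up for illustration) where the encapsulation randomness m is drawn from only 2^16 values; shared secrets for the same public key then collide at roughly the birthday rate:

```python
import hashlib
import random

# Hypothetical sketch: encapsulation randomness drawn from a small
# range (2**16 values) instead of the full 256-bit space.
random.seed(1)  # fixed seed for reproducibility
SPACE = 2**16
ek = b"example-public-key"  # placeholder public-key bytes

def shared_secret(m: bytes, ek: bytes) -> bytes:
    # K from G(m || H(ek)); per FIPS 203, G is SHA3-512 and H is SHA3-256,
    # and the first 32 bytes of G's output are the shared secret K.
    return hashlib.sha3_512(m + hashlib.sha3_256(ek).digest()).digest()[:32]

# 2000 encapsulations to the same ek: the expected number of colliding
# pairs is about 2000**2 / (2 * SPACE) ~ 30, so a repeat is essentially
# certain, even though each individual secret is a full 256-bit value.
secrets = [shared_secret(random.randrange(SPACE).to_bytes(2, "big"), ek)
           for _ in range(2000)]
print(len(secrets) != len(set(secrets)))  # a collision occurred
```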

Sorry for not stating this assumption explicitly earlier. I thought the 
discussion in this thread already assumed this or a similar one.

Guilin

From: Scott Fluhrer (sfluhrer) <[email protected]>
To: Wang Guilin <[email protected]>; Deirdre Connolly <[email protected]>; 
Muhammad Usama Sardar <[email protected]>
Cc: [email protected] <[email protected]>; Wang Guilin <[email protected]>
Date: 2026-03-10 10:47:13
Subject: Re: [TLS] Re: WG Last Call: draft-ietf-tls-mlkem-07 (Ends 2026-02-27)

Well, it's the client that generates the ML-KEM private key, not the server.  
So, let's switch your attack around so that the client under attack connects to 
two different servers with the same ML-KEM public/private key (one of which is 
adversary controlled).

In that case, we have p \approx 2^{-255} that they generate the same ML-KEM 
shared secret (assuming one of the servers uses good random input for its 
encapsulation); the possibilities are:


  *   Both servers happen to use the same random input during the 
encapsulation process. This randomness is a 256-bit value, and if one of the 
two sides generates it honestly, the probability of a collision is 2^{-256}.
  *   Both servers use different random inputs, but they happen to generate 
the same shared secret. This shared secret comes from G(m || H(ek)), where m 
is the random input and ek is the public key (consistent for the two 
servers). G is a strong hash function, so the probability that the two 
different 'm' values generate the same 256-bit shared secret is 2^{-256}.
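
The derivation in the second bullet can be sketched as follows (a minimal illustration of the K = G(m || H(ek)) step only, not the full ML-KEM.Encaps; per FIPS 203, H is SHA3-256 and G is SHA3-512):

```python
import hashlib

def shared_secret(m: bytes, ek: bytes) -> bytes:
    # (K, r) = G(m || H(ek)): G's 64-byte output splits into the
    # 32-byte shared secret K and the encryption seed r.
    g_out = hashlib.sha3_512(m + hashlib.sha3_256(ek).digest()).digest()
    K, r = g_out[:32], g_out[32:]
    return K

# With the same ek, two honestly generated 256-bit m values yield
# independent-looking 256-bit secrets; only identical m values collide.
```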

The sum of these two probabilities is about 2^{-255} (it's not exact, because 
these two probabilities aren't precisely independent, but it's pretty close).
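
As a quick sanity check on that sum, the inclusion-exclusion computation can be done exactly with rationals (treating the two events as independent, which, as noted above, is only approximately true):

```python
from fractions import Fraction

# Two bad events, each with probability 2^{-256}:
#   E1: both servers draw the same random input m
#   E2: different m values hash to the same 256-bit shared secret
# Assuming independence, P(E1 or E2) = p1 + p2 - p1*p2.
p1 = p2 = Fraction(1, 2**256)
p = p1 + p2 - p1 * p2  # exactly 2^{-255} - 2^{-512}, just below 2^{-255}
```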

That's not a probability that concerns me.  I'm not a fan of reusing private 
keys in this context, but this isn't a valid argument against it.

________________________________
From: Wang Guilin <[email protected]>
Sent: Monday, March 9, 2026 9:28 PM
To: Scott Fluhrer (sfluhrer) <[email protected]>; Deirdre Connolly 
<[email protected]>; Muhammad Usama Sardar 
<[email protected]>
Cc: [email protected] <[email protected]>; Wang Guilin <[email protected]>
Subject: [TLS] Re: WG Last Call: draft-ietf-tls-mlkem-07 (Ends 2026-02-27)


Here is a theoretical attack that follows from such randomness reuse, though I 
am not sure how practical it would be to mount in real systems.

-1). An attacker A is monitoring all communication with a TLS server S.
-2). Once a victim C, as a client, is connecting to server S, A also tries to 
connect to S.
-3). If both TLS sessions above are established successfully, their shared 
secrets (randomness) will be the same with some probability p.
-4). So, A will be able to decrypt the communication between S and C with 
probability p.


Guilin

From: Scott Fluhrer (sfluhrer) <[email protected]>
To: Deirdre Connolly <[email protected]>; Muhammad Usama Sardar 
<[email protected]>
Cc: [email protected] <[email protected]>
Date: 2026-03-02 23:29:38
Subject: [TLS] Re: WG Last Call: draft-ietf-tls-mlkem-07 (Ends 2026-02-27)

Correction: it turns out that reusing randomness during encapsulation isn't 
quite as broken as I first thought.

Now, the two clients that you encrypted to can both learn each other's shared 
secret (and so the MUST NOT statement is perfectly appropriate); however, a 
third party cannot.

On 01.03.26 18:18, Scott Fluhrer (sfluhrer) wrote:

Oh, and I just noticed (and perhaps this is common knowledge): if you used the 
same encapsulation randomness to encapsulate to two different public keys (from 
the same parameter set), then it is fairly easy to recover both shared secrets 
(assuming access to both ciphertexts and public keys).  Hence, the MUST NOT 
reuse encapsulation randomness statement is there for an extremely good reason.
_______________________________________________
TLS mailing list -- [email protected]
To unsubscribe send an email to [email protected]
