First of all, I would say that this is excellent work, and I would support 
making this a working group item.

As for my comments (both comments on the document itself and my opinions on the
alternatives that the document lists):


  *   2.1. Negotiation of the use of hybridization in general and component 
algorithms specifically?

One point that the current doc does not address explicitly is the cost model for
the negotiation method; that is, how do we judge that solution A is better than
solution B?  From my viewpoint, the (somewhat conflicting) goals that the
solution should attempt to achieve are:

     *   Simplicity; in terms of ease of implementing it correctly (and
difficulty of accidentally getting it "almost right"), ease of testing that an
implementation is correct (in particular, that it does not accept any proposal
outside the security policy), and ease of documenting it clearly in an eventual
RFC
     *   Simplicity 2; the hybridization solution should not complicate the
protocol in the case where you don't need to do hybridization (even if what
you're proposing is a next-gen algorithm).
     *   Backwards compatibility; we'll need to live with older clients and
older servers (which don't implement this extension) for a long time; we need
the ability to either (depending on our security policy) fall back to only
traditional algorithms, or abort the negotiation cleanly.
     *   Performance; being fairly cheap in terms of the messages to be 
transmitted, and the amount of processing required
     *   Completeness; being able to negotiate the most likely security 
policies (and in a manner that is consistent with the performance goal)
     *   Extensibility; being able to handle likely future requirements (new
algorithms, etc.) without excessive redesign.
These goals are somewhat nebulous (e.g., what are the "most likely security
policies"?  What sorts of future requirements are likely?); however, I believe
we should write them down.

  *   3.2. How many component algorithms to combine?

My opinion: unless allowing three or more algorithms has significant cost (even 
if we are using only two), I would strongly prefer an alternative that could 
scale to three or more.  The whole point of this design is that we don't fully 
trust any one algorithm; I believe that there will be people who would not want 
to rely on two partially trusted algorithms, but would prefer to use three (or 
possibly more).

  *   3.3.1. (Shares-concat) Concatenate key shares

My concern with this is that there may be algorithms with a variable key share
size (I don't know of any right now, but see the extensibility goal above); if
we do this, we would want internal length markers.
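
For concreteness, here is a minimal sketch of the sort of encoding I have in
mind (purely illustrative, not from the draft): give each component share a
two-byte length prefix, in the spirit of TLS vectors.  Note that a two-byte
prefix would already be too small for the very large key shares I mention at
the end of this message.

    import struct

    def encode_shares(shares):
        # Hypothetical encoding: prefix each component key share with a
        # 2-byte big-endian length, so variable-size shares stay unambiguous.
        out = b""
        for share in shares:
            if len(share) > 0xFFFF:
                raise ValueError("share too large for a 2-byte length prefix")
            out += struct.pack("!H", len(share)) + share
        return out

    def decode_shares(blob):
        shares, i = [], 0
        while i < len(blob):
            (length,) = struct.unpack_from("!H", blob, i)
            i += 2
            if i + length > len(blob):
                raise ValueError("truncated key share")
            shares.append(blob[i:i + length])
            i += length
        return shares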

  *   3.4.2. (Comb-XOR) XOR keys and then KDF

This one makes me nervous.  Yes, for practical key agreement protocols, this is
probably safe; however, to prove it, it would appear that you'd have to make
additional assumptions about the key agreement protocols.  It seems safer to
use one of the alternatives.  And if the concern is the extra processing time
of running the KDF, I expect the KDF time to be insignificant compared to the
time taken by the key agreement protocol itself.
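
To illustrate what I mean by "one of the alternatives", here is a sketch of a
combiner that concatenates the component shared secrets (with length prefixes)
and runs them through a KDF; I've used HKDF-SHA256 as a stand-in, and the label
is made up, so treat the details as my own assumptions rather than anything the
draft specifies.

    import hashlib, hmac

    def hkdf_extract(salt, ikm):
        # RFC 5869 HKDF-Extract: PRK = HMAC-Hash(salt, IKM)
        return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

    def hkdf_expand_one_block(prk, info):
        # First output block of RFC 5869 HKDF-Expand (32 bytes suffices here)
        return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

    def combine_secrets(secrets, info=b"hybrid-combiner-example"):
        # Concatenate the component shared secrets with 2-byte length
        # prefixes (so the boundaries are unambiguous), then KDF the
        # result, instead of XORing the secrets together.
        ikm = b"".join(len(s).to_bytes(2, "big") + s for s in secrets)
        return hkdf_expand_one_block(hkdf_extract(b"", ikm), info)

The extra cost over the XOR combiner is a couple of HMAC invocations, which is
negligible next to the key agreement operations themselves.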

  *   5.1. Active security

As for whether CPA security is good enough: well, one of the overall goals of
TLS 1.3 is PFS.  If we are negotiating a fresh set of key shares each time,
then CPA works (the issue with CPA-only schemes is that an invalid key share
can allow the attacker to learn some information about the private key;
however, if we discard the private key after each handshake, the attacker gains
no benefit).  On the other hand, a valid concern would be an implementor who
decides to optimize the protocol by reusing key shares.  Also, see 'Failures'
below for another argument for CCA.

  *   5.3. Failures

Yes, some postquantum algorithms can fail between two honest parties (with
probabilities ranging from 2**-40 to 2**-256).  On such a failure, the two
sides would derive distinct session keys, the encrypted portion of the server's
handshake flight would not decrypt properly, and the handshake would simply
fail; so there is no security issue here.  Three thoughts:

  1.  Failures with sufficiently small probability might never happen in
practice at all; if we assume 2**39 TLS devices (about 100 for every person),
and each device makes an average of 2**30 TLS connections over its lifetime,
then we will probably never see even one failure anywhere over the lifetime of
the protocol, provided the failure probability is significantly less than
2**-69 (see the back-of-the-envelope sketch after this list).
  2.  TCP errors; a similar failure would happen if there were an undetected
TCP error in transmitting the key_share (actually, any part of the ClientHello
or ServerHello); if the key exchange failure rate is significantly smaller than
that, we have not affected the reliability of the protocol.
  3.  If we do go only with CCA alternatives, these always have very small
(2**-128 or less) failure rates (because a failure generally does leak some
information about the private key, and so designers are forced to select
parameters with tiny failure rates); combined with the two arguments above,
this makes it clear to me that failures are never a practical concern with CCA
algorithms.
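
For point 1, the arithmetic (using only the round numbers assumed above, which
are my own guesses rather than measurements):

    # Back-of-the-envelope: 2**39 devices times 2**30 connections each
    # gives 2**69 key exchanges over the lifetime of the protocol.
    total_exchanges = 2**39 * 2**30              # = 2**69
    for log2_p in (-40, -69, -128):              # per-handshake failure prob
        expected = total_exchanges * 2.0**log2_p
        print(f"p = 2**{log2_p:4d}: expected failures ~ {expected:.3g}")

    # Output:
    #   p = 2** -40: expected failures ~ 5.37e+08   (visible constantly)
    #   p = 2** -69: expected failures ~ 1          (maybe once, ever)
    #   p = 2**-128: expected failures ~ 1.73e-18   (never)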

  *   Another comment that came up with the corresponding IPsec proposal: do we
want to support key agreement algorithms with extremely long key shares
(motivator: Classic McEliece and NTS-KEM; they're among the most generally
trusted postquantum algorithms, but with public keys that range from roughly
300 KB to 1.4 MB)?  If we do decide to support them, how do we do so?  URL and
hash?  I'm personally OK with us deciding not to support them; I'm just raising
the question.




