On Sat, Apr 30, 2022 at 01:24:58AM +0100, Stephen Farrell wrote:
> 
> Hiya,
> 
> On 27/04/2022 16:27, Christopher Wood wrote:
> > This email commences a two week WGLC for draft-ietf-tls-hybrid-design, 
> > located here:
> > 
> >     https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/
> 
> As I guess I've said before, I think progressing this draft
> now, even with this WGLC-then-park approach, is premature.
> 
> However, I seem to be in the rough on that so can live with
> this ad-hoc process (teeth grinding mind you:-) so long as we
> park this for a sufficient period *and* are open to changing
> anything at the end of the parking-lot period.
> 
> Even so, I think this is not yet ready for any such ad-hoc
> parking-lot:
> 
> - section 1.5 implies a wish to reduce the number of
> octets sent - ECH creates a way to point from one part
> of the (encrypted, inner) ClientHello to another (the
> outer ClientHello). I don't think we want two such
> mechanisms, or one mechanism defined in ECH but none at
> all here, or even worse a second method. For me, that
> implies not "freezing" the structural work here 'till
> we see if ECH gets widespread deployment at which point
> we should consider re-use of the ECH mechanism. (Or maybe
> even consider both cases of re-using octets and invent
> another thing, but not 'till we see if ECH gets traction.)

I don't think a compression method like the one ECH uses would work here.

However, I did come up with a compression method:

1) Sub-shares in the CH may just be replaced by a group id (two octets).
   The replacements can be deduced from the length of the whole share.
2) The first sub-share copies from the first octets of the share for the
   designated group.
3) The second sub-share copies from the last octets of the share for the
   designated group.

This can be decoded regardless of whether the server knows what the
referenced groups are. The compression can also never run into a loop,
as recursive references are not allowed.


So for example, if one wants to send x25519, p256, x25519+saber and
p256+saber, one can do that as:

- x25519: <x25519 share> (32+4 octets)
- p256: <p256 share> (65+4 octets)
- x25519+saber: <x25519 id><saber share> (2+992+4 octets)
- p256+saber: <p256 id><x25519+saber id> (2+2+4 octets)

Total overhead is 22 octets: 16 for the four key share headers (4 octets
each), and 6 for the compression itself (three 2-octet references).
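
To make that concrete, here is a rough sketch of the decoding side in
Python (function and argument names are mine, not from any draft, and it
ignores the corner case where the two sub-share lengths are equal or
happen to be 2):

def decompress(key_exchange, len1, len2, shares):
    # len1/len2: expected sub-share lengths for this hybrid group.
    # shares: group id -> key_exchange octets of the other entries in
    # the same ClientHello, exactly as transmitted.
    total = len(key_exchange)
    if total == len1 + len2:
        first_ref, second_ref = False, False       # nothing compressed
    elif total == 2 + len2:
        first_ref, second_ref = True, False
    elif total == len1 + 2:
        first_ref, second_ref = False, True
    elif total == 4:
        first_ref, second_ref = True, True
    else:
        raise ValueError("bad share length")

    split = 2 if first_ref else len1
    part1, part2 = key_exchange[:split], key_exchange[split:]

    if first_ref:
        # Reference: copy the FIRST len1 octets of the referenced entry.
        part1 = shares[int.from_bytes(part1, "big")][:len1]
    if second_ref:
        # Reference: copy the LAST len2 octets of the referenced entry.
        part2 = shares[int.from_bytes(part2, "big")][-len2:]
    return part1 + part2

In the example above, decompressing p256+saber copies the first 65
octets of the p256 entry and the last 992 octets of the x25519+saber
entry as transmitted, so the copied octets are always literal share
bytes and the no-recursion rule holds.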


> - section 2: if "classic" DH were broken, and we then
> depend on a PQ-KEM, doesn't that re-introduce all the
> problems seen with duplicating RSA private keys in
> middleboxes? If not, why not? If so, I don't recall
> that discussion in the WG (and we had many mega-threads
> on RSA as abused by MITM folks so there has to be stuff
> to be said;-)

No. The private key is held by the client, and the client sends the
public key to use in its ClientHello. Furthermore, every connection
should use a different public key.
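
A minimal sketch of that flow, with a deliberately insecure toy KEM
standing in for whatever PQ KEM gets negotiated (keygen/encaps/decaps
are the generic KEM operations; nothing here is from the draft):

import os, hashlib

class ToyKEM:
    # Toy only: pk == sk so this arithmetic-free demo round-trips.
    # A real KEM derives pk from sk and keeps sk secret.
    def keygen(self):
        sk = os.urandom(32)
        return sk, sk                        # (public key, private key)
    def encaps(self, pk):
        ct = os.urandom(32)
        return ct, hashlib.sha256(pk + ct).digest()
    def decaps(self, sk, ct):
        return hashlib.sha256(sk + ct).digest()

kem = ToyKEM()

# One connection: the client generates a fresh key pair, sends pk in
# its ClientHello, the server encapsulates against it, and the client
# decapsulates and then discards sk. There is no long-term decryption
# key anywhere that could be copied onto a middlebox, unlike RSA key
# transport.
client_pk, client_sk = kem.keygen()          # client, per connection
ct, server_ss = kem.encaps(client_pk)        # server
client_ss = kem.decaps(client_sk, ct)        # client
assert client_ss == server_ss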

> - similar to the above: if PQ KEM public values are
> like RSA public keys, how does the client know what
> value to use in the initial, basic 1-RTT ClientHello?
> (sorry if that's a dim question:-) If the answer is
> to use something like a ticket (for a 2nd connection)
> then that should be defined here I'd say, if it were
> to use yet another SVCB field that also ought be
> defined (or at least hinted at:-)

Whatever public key the keygen() operation outputs. The key pair is
generated fresh for the connection, so there is no pre-existing value
the client needs to learn beforehand.

> - I'm also HRR-confused - if we don't yet know the
> details of the range of possible PQ KEM algs we want to
> allow here, how do we know that we almost always continue
> to avoid HRR in practice and thus benefit from a mixture of
> classic and PQ algs? (It's also a bit odd that HRR,
> much as I dislike it, doesn't get a mention here;-) I
> think the problem is that we don't want HRR to push a
> client back to only "classic" groups, if the client but
> not the server is worried about PQ stuff while both
> prioritise interop.

Well, avoiding HRR implies that the client is willing to bloat its
ClientHello even for servers that do not support PQ. And for clients
not willing to do that, using PQ at all requires servers to prioritize
it (send HRR even if an acceptable classical share is present).
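
In other words, the server-side choice looks roughly like this
(hypothetical helper names, with the server's preference order
deciding):

def select_group(server_prefs, client_supported_groups, client_shares):
    for group in server_prefs:                    # server preference wins
        if group in client_supported_groups:
            if group in client_shares:
                return ("accept", group)          # use the offered share
            # Ask for the share we actually want, even though e.g. an
            # x25519 share is already sitting in the ClientHello.
            return ("hello_retry_request", group)
    return ("alert", "handshake_failure")

If the PQ hybrid groups sit above the classical ones in server_prefs,
clients that only advertise them (without shares) pay one HRR round
trip; if they sit below, those clients never get PQ at all.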

> - section 4: if this cannot support all NIST finalists
> due to length limits then we're again being premature
> esp. if NIST are supposed to be picking winners soon.
> We'd look pretty dim if we didn't support a NIST winner
> for this without saying why.

Just yeet McEliece. Its keys are far too large for it to be practical
in TLS, even if they did not bust hard limits.

After removing McEliece from consideration, all the finalists and
alternates can trivially be supported (albeit FrodoKEM busts some
soft limits).
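
A quick back-of-the-envelope check, using the public key sizes I recall
from the round 3 submissions (double-check the numbers before relying
on them):

# key_exchange in KeyShareEntry is opaque key_exchange<1..2^16-1>, so
# 65535 octets is the hard structural limit; one TLS record (2^14
# octets) is a reasonable soft limit for the ClientHello.
PK_BYTES = {
    "x25519": 32,
    "saber": 992,
    "frodo1344": 21520,
    "mceliece348864": 261120,   # smallest Classic McEliece parameter set
}
HARD_LIMIT = 2**16 - 1
SOFT_LIMIT = 2**14

for pq in ("saber", "frodo1344", "mceliece348864"):
    share = PK_BYTES["x25519"] + PK_BYTES[pq]
    verdict = ("fits" if share <= SOFT_LIMIT else
               "busts soft limit" if share <= HARD_LIMIT else
               "busts hard limit")
    print("x25519+%s: %d octets, %s" % (pq, share, verdict))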

> - section 5: IMO all combined values here need to have
> recommended == "N" in IANA registries for a while and
> that needs to be in this draft before it even gets
> parked. Regardless of whether or not the WG agree with
> me on that, I think the current text is missing stuff
> in this section and don't recall the WG discussing that

I think that having recommended = Y for any combined algorithm requires
a final NIST spec for the PQ part and recommended = Y for the classical
part (which allows things like x25519 to be the classical part).

That is, using the latest spec for a NISTPQC winner is not enough. This
implies that recommended = Y for a combined algorithm is some years out
at the very least.
 
> Nits etc below:
> 
> - TLS1.1 and earlier mixed hash functions when deriving
> keys on the basis of then-suspected weaknesses in MD5, yet
> there were arguments made that that ad-hoc mixing wasn't
> useful, so we moved away from that in TLS1.2+. I don't
> see an argument or pointer in this draft to a justification
> that this (also seemingly ad-hoc?) catenation approach
> won't suffer analagous infelicity. Absent that, ISTM trying
> to finalise the structural parts of this now may be a
> cryptographically bad plan. (Even if in practice that's ok.)

The intuition is that if you concatenate the shared secrets and feed the
result to a KDF, unless the KDF is really broken, predicting the output
is going to require predicting both shared secrets.
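
As a minimal sketch of that intuition (this mirrors the general
"concatenate and feed to the key schedule" shape; the exact labels and
placement are the draft's business, not this toy's):

import hashlib, hmac

def hkdf_extract(salt, ikm, hash=hashlib.sha256):
    return hmac.new(salt, ikm, hash).digest()

ecdh_ss = b"\x11" * 32   # stand-in for the (EC)DHE shared secret
kem_ss  = b"\x22" * 32   # stand-in for the PQ KEM shared secret

combined = hkdf_extract(salt=b"\x00" * 32, ikm=ecdh_ss + kem_ss)

# Changing either input changes the HMAC input, so predicting
# 'combined' requires predicting both shared secrets, as long as the
# KDF behaves like a PRF.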

There is no similar intuition for the TLS 1.1 and earlier hash combiner.
Just running the same data through both hashes would give something that
turns out to be as strong as SHA-1, but the combiner does not do that.
It instead splits the secret into halves, feeds one half to an MD5-based
P_hash and the other half to a SHA-1-based P_hash, and XORs the results,
which is cryptographically completely unsound.

I am a bit surprised I never saw anyone claim that the TLS 1.0 and
TLS 1.1 hash combiner (IIRC, I checked that it had not been inherited
from SSLv3) was a backdoor. I have seen far less weird things called
backdoors.

> - section 2: the tendency to document APIs (e.g. "KeyGen()")
> in protocol documents seems to me a bit of a double-edged
> sword - it should help some implementers but OTOH might
> mean we fail to consider how implementations with other
> internals might perform, so I'd prefer we are more clear
> as to how those APIs are optional, but that's really a
> matter of style I guess

The keygen() operation is defined by the generic KEM model, which
NISTPQC uses. So all the candidate specifications define what the
keygen() operation does.
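
For reference, the whole generic KEM model is just three operations,
shown here as Python type signatures for concreteness (this is the
abstract shape the candidates instantiate, not any particular API):

from typing import Protocol, Tuple

class KEM(Protocol):
    def keygen(self) -> Tuple[bytes, bytes]: ...              # (pk, sk)
    def encaps(self, pk: bytes) -> Tuple[bytes, bytes]: ...   # (ct, ss)
    def decaps(self, sk: bytes, ct: bytes) -> bytes: ...      # ss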



-Ilari
