Re: [TLS] I-D Action: draft-ietf-tls-curve25519-01.txt
On Sunday 12 July 2015 16:39:37 Simon Josefsson wrote:
> Hubert Kario writes:
> > As is described in secion 5.1. of RFC 4492, and then reiterated in
> > section 2.2. of this draft - the elliptic_curves (a.k.a. supported_groups)
> > guides both the ECDH curves and curves understandable by peer for ECDSA
> > signatures.
> >
> > As Curve25519 and Curve448 can only be used for ECDHE, maybe they should
> > be defined/named in the registry as such, to remove any ambiguity[1]:
> >
> >     enum {
> >         Curve25519_ecdh(TBD1),
> >         Curve448_ecdh(TBD2),
> >     } NamedCurve;
>
> I don't care strongly.  One disadvantage with this is that if we decide
> to reuse these NamedCurve allocations to have something to do with
> Ed25519, the naming above will be confusing.  However, I believe it is
> already likely that Ed25519 will have its own NamedCurve.

Given that there certainly will be implementations that support ecdh and not
the signatures, we certainly *don't* want to reuse this codepoint for
anything else.

So unless the PKIX and TLS parts are defined at the same time, in the same
document, we definitely need to keep them apart.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Deprecate SHA1 for signatures in TLS 1.3 (was Re: TLS 1.3 draft-07 sneak peek)
On Saturday 11 July 2015 17:09:27 Dave Garrett wrote:
> On Saturday, July 11, 2015 04:48:10 pm Viktor Dukhovni wrote:
> > Largely close enough.  Feel free to borrow any text from the below
> > that you find to be an improvement.
> >
> >     Whenever possible, all certificates provided by the server
> >     SHOULD be signed by a hash/signature algorithm pair indicated
> >     by the client's supported algorithms extension (or the defaults
> >     assumed in its absence).  If the server cannot produce a
> >     certificate chain that is signed via only the indicated supported
> >     pairs, then it SHOULD continue the handshake by sending the
> >     client a certificate chain of its choice that may include
> >     hash/signature algoriths that are not known to be supported by
> >     the client.
> >
> >     The public key in the leaf certificate must of course be
> >     compatible with the chosen cipher-suite, and the subsequent
> >     ServerKeyExchange message must be signed via a mutually supported
> >     hash/signature algorithm pair.
> >
> >     If the client cannot construct a satisfactory chain using the
> >     provided certificates and decides to abort the handshake, then
> >     it MUST send an "unsupported_certificate" alert message and
> >     close the connection.
>
> The middle bit is already in existing text above the section in question.
> New version with a little rewording and a typo fix.
>
> \/
>
>     All certificates provided by the server SHOULD be signed by a
>     hash/signature algorithm pair indicated by the client's
>     "signature_algorithms" extension (or the defaults assumed in
>     its absence), where possible.  If the server cannot produce a
>     certificate chain that is signed only via the indicated supported
>     pairs, then it SHOULD continue the handshake by sending the
>     client a certificate chain of its choice that may include algorithms
>     that are not known to be supported by the client.  If the client
>     cannot construct an acceptable chain using the provided certificates
>     and decides to abort the handshake, then it MUST send an
>     "unsupported_certificate" alert message and close the connection.
>
> ====

What about the certificate chain offered by the client to the server in
response to a Certificate Request message? It is also under the limitation
of using just the signature algorithms advertised as supported by the
server.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] I-D Action: draft-ietf-tls-curve25519-01.txt
On Tuesday 14 July 2015 17:23:44 Simon Josefsson wrote:
> Hubert Kario writes:
> > On Sunday 12 July 2015 16:39:37 Simon Josefsson wrote:
> >> Hubert Kario writes:
> >> > As is described in secion 5.1. of RFC 4492, and then reiterated in
> >> > section 2.2. of this draft - the elliptic_curves (a.k.a.
> >> > supported_groups) guides both the ECDH curves and curves
> >> > understandable by peer for ECDSA signatures.
> >> >
> >> > As Curve25519 and Curve448 can only be used for ECDHE, maybe they
> >> > should be defined/named in the registry as such, to remove any
> >> > ambiguity[1]:
> >> >
> >> >     enum {
> >> >         Curve25519_ecdh(TBD1),
> >> >         Curve448_ecdh(TBD2),
> >> >     } NamedCurve;
> >>
> >> I don't care strongly.  One disadvantage with this is that if we decide
> >> to reuse these NamedCurve allocations to have something to do with
> >> Ed25519, the naming above will be confusing.  However, I believe it is
> >> already likely that Ed25519 will have its own NamedCurve.
> >
> > Given that there certainly will be implementations that support ecdh
> > and not the signatures, we certainly *don't* want to reuse this
> > codepoint for anything else.
> >
> > So unless the PKIX and TLS parts are defined at the same time, in the
> > same document, we definitely need to keep them apart.
>
> It is conceivable to reuse the NamedCurve values for TLS authentication
> without affecting the ECHDE use, nor delaying the Curve25519 ECDHE work.
>
> Compare how we "reuse" the ECDHE ciphersuite values to refer to
> Curve25519 (instead of defining new ciphersuites for Curve25519), and
> how we are "reusing" the "uncompressed" code point to refer to
> Curve25519-compressed code points (instead of defining new
> ECPointFormat).

The point is that if Ed25519 for signatures is defined, an implementation
that doesn't understand it[1] can't advertise that fact.

[1] - be it because it wasn't updated yet, or because the programmers don't
consider it important enough yet - that doesn't matter

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] I-D Action: draft-ietf-tls-curve25519-01.txt
On Monday 20 July 2015 14:39:03 Ilari Liusvaara wrote:
> On Mon, Jul 20, 2015 at 12:55:37PM +0200, Hubert Kario wrote:
> > On Tuesday 14 July 2015 17:23:44 Simon Josefsson wrote:
> > > Compare how we "reuse" the ECDHE ciphersuite values to refer to
> > > Curve25519 (instead of defining new ciphersuites for Curve25519), and
> > > how we are "reusing" the "uncompressed" code point to refer to
> > > Curve25519-compressed code points (instead of defining new
> > > ECPointFormat).
> >
> > the point is, that if Ed25519 for signatures is defined, an
> > implementation that doesn't understand it[1] can't advertise that fact
>
> Are you thinking about 1.0/1.1? In 1.2 it can: signature_algorithms
> (I'm not confident new signature algorithm would work without either
> that nor new ciphersuites).
>
> There are other shortcomings tho:
> - If Ed25519 is supported, one also needs to support Curve25519.
> - If Ed25519 and Curve448 are supported, one needs to support
>   Curve25519 and Ed448.
> - And the cross case from previous.
>
> So with the same, in TLS 1.2, the following combinations would
> be possible:
> - None at all.
> - Curve25519
> - Curve448
> - Curve25519 & Curve448
> - Curve25519 & Ed25519
> - Curve448 & Ed448
> - Curve25519 & Curve448 & Ed25519 & Ed448.

if we define separate codepoints for Curve25519 and Ed25519, yes

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] sect571r1
On Wednesday 15 July 2015 22:42:54 Dave Garrett wrote:
> On Wednesday, July 15, 2015 09:42:51 pm Dan Brown wrote:
> > What about sect571k1, a Koblitz curve, aka NIST curve K-571? (By the way
> > it has no unexplained constants...). Has it been removed already, or does
> > the question also refer K-571 too?
>
> Already dropped. That's obviously not irreversible, but it's unambiguously
> in the virtually unused camp. The initial goal was to drop all largely
> unused curves.
>
> This question is just about sect571r1, which is far closer to secp384r1 &
> secp521r1 in terms of usage, though still notably less. If you want to
> argue for going with sect571k1 and not sect571r1, I don't think the WG is
> on-board with that. Even if we continued to allow it, I doubt much would
> add support for it to be worthwhile.

This is likely just an artefact of the default OpenSSL curve order; if K-571
came first, servers would likely select it over B-571 more often.

> The scan I linked to found one; literally a single server on the entire
> Internet, _not_ a single server in the Internet,

A single server among the Alexa top 1 million websites - the scan checks
only a set of popular _websites_, not even all popular services that use
TLS, let alone the whole Internet.

> that actually supports sect571k1 for ECDHE. The stats also show
> 1575 "support" it, so I'm not sure what's going on there specifically. (if
> someone can explain this bit of those stats, please do)

The "Supported PFS" section describes what the server selects if the client
advertises the default OpenSSL order of all defined curves. The "Prefer"
lines mean that the ciphersuite the server selects by default uses this key
exchange. IOW, if a server supports FFDHE 2048 and ECDHE P-256 and prefers
ECDHE, then the server will be counted in three lines:

  DH,2048bits
  ECDH,P-256,256bits
  Prefer ECDH,P-256,256bits

The "Supported ECC curves" section describes what curves the server will use
for the ECDHE key exchange if its preferred one is not advertised by the
client (in most cases that means what happens if the client doesn't
advertise the P-256 curve). Then that curve is removed and the process is
repeated until the server picks a ciphersuite that doesn't use ECDHE, or
aborts the connection.

Feel free to ask more questions about the scans if something is still
unclear.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
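[Editor's note: to make the curve-elimination procedure described in the
message above concrete, here is a minimal sketch of such a scan loop in
Python. The probe_server() callback and the curve names are assumptions for
illustration only; this is not the actual scanner code.]

    # Sketch of the curve-elimination scan described above. probe_server() is
    # a hypothetical helper that performs one handshake offering the given
    # curves and returns the curve the server selected, or None if the server
    # picked a non-ECDHE suite or aborted the connection.
    def enumerate_server_curves(probe_server, all_curves):
        offered = list(all_curves)        # e.g. ["P-256", "P-384", "P-521"]
        supported = []
        while offered:
            selected = probe_server(offered)
            if selected is None:          # no ECDHE suite chosen, or aborted
                break
            supported.append(selected)    # record in server preference order
            offered.remove(selected)      # drop it, probe again with the rest
        return supported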
Re: [TLS] A la carte concerns from IETF 93
On Wednesday 22 July 2015 16:10:27 Dave Garrett wrote:
> Consensus was my current WIP proposal is not viable, for some of the
> following main reasons:
>
> 1) cost/benefit analysis doesn't seem to be worth it
> 2) backwards compatibility handling
> 3) some argue harder to implement; others argue easier
>
> cost:
> - change has risks of mistake at various points (implementation,
>   deployment, admin, client config, etc.) and server/client config is a
>   huge cost

Vast swaths of web servers are misconfigured; introducing a more complex
server configuration mechanism, when the existing situation is already
incomprehensible to many administrators, won't help (and even many of the
people who write the various blog posts about "how to configure SSL [sic]
in httpd" clearly haven't read the openssl ciphers(1) man page).

Any change like this will require new configuration APIs; that in turn means
that not only libraries but also applications will need to be modified to
add support for TLS 1.3 configuration - and that will slow adoption.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Relative vs absolute ServerConfiguration.expiration_date
On Wednesday 22 July 2015 19:55:58 Blake Matheny wrote:
> One of the topics of discussion at the WG discussion was whether
> ServerConfiguration.expiration_date should be an absolute or relative
> value. Subodh (CC) dug into our production data and found that nearly half
> of the TLS errors we see in production (end user to edge/origin) are due
> to date mismatch. This often occurs when people intentionally reset the
> clock on their phone, or for other various reasons.
>
> Due to the high rate of date mismatch errors we see, my preference would
> be that ServerConfiguration.expiration_date be a relative value instead of
> an absolute one. This provides the client an opportunity to correctly use
> a monotonic (or other similar) clock to minimizing exposure, without
> losing the value of the ServerConfiguration. Using an absolute value means
> that ServerConfiguration, for clients with invalid clocks, would
> essentially never be cacheable. These clients wouldn’t benefit from
> ServerConfiguration.

The lifetime hint on session tickets is already relative, so +1 on a
relative value in ServerConfiguration too.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] ban more old crap
On Thursday 23 July 2015 18:06:04 Stephen Farrell wrote:
> On 23/07/15 16:43, Dave Garrett wrote:
> > We should just get more serious about banning old crap entirely to
> > make dangerous misconfiguration impossible for TLS 1.3+
> > implementations.
> >
> > Right now, the restrictions section prohibits: RC4, SSL2/3, &
> > EXPORT/NULL entirely (via min bits) and has "SHOULD" use TLS 1.3+
> > compatible with TLS 1.2, if available
>
> A suggestion - could we remove mention of anything that
> is not a MUST or SHOULD ciphersuite from the TLS1.3 document
> and then have someone write a separate draft that adds a
> column to the registry where we can mark old crap as
> deprecated?
>
> Not sure if it'd work though.

https://tools.ietf.org/html/rfc7525 lists 4 RECOMMENDED ciphers, 6 if you
include the ECDSA versions.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] ban more old crap (was: A la carte concerns from IETF 93)
On Thursday 23 July 2015 11:43:45 Dave Garrett wrote:
> On Thursday, July 23, 2015 07:09:49 am Hubert Kario wrote:
> > vast swaths of web servers are misconfigured; introducing a more complex
> > mechanism to server configuration when the existing situation is
> > incomprehensible to many administrators won't help (and even many people
> > that write the various blog posts about "how to configure SSL [sic] in
> > httpd" clearly haven't read openssl ciphers(1) man page)
>
> We should just get more serious about banning old crap entirely to make
> dangerous misconfiguration impossible for TLS 1.3+ implementations.

There are valid use cases for both aNULL and eNULL. At the same time, 3.5%
of the Alexa top 1 million will negotiate AECDH; somehow I doubt this many
use it knowingly when ADH has just a 0.2% market share.

TLS is a universal protocol. That means that something that is a dangerous
misconfiguration in one threat model is an entirely valid and good
configuration in another. IoT and cloud computing will create a market for
an implementation that is compatible with many threat models.

> Right now, the restrictions section prohibits:
> RC4, SSL2/3, & EXPORT/NULL entirely (via min bits)
> and has "SHOULD" use TLS 1.3+ compatible with TLS 1.2, if available
>
> How about we stop being fuzzy? I'd like to make it "MUST" use AEAD with
> all TLS 1.2+ connections, or abort with a fatal error. Plus, "MUST" use
> DHE or ECDHE for ALL connections, even back to TLS 1.0, or abort with a
> fatal error. (the wrench in this is plain PSK, which should be restricted
> to resumption within a short window; IoT people who want to use
> intentionally weak security can write their own known weak spec)

Yes, it would make the situation better; the thing is, nobody would
implement this and nobody would deploy it (certainly not Red Hat). People
care more about availability of data than about confidentiality or
integrity.

> By the way, even IE6 on XP supports DHE. Windows XP, however, appears to
> be badly configured to only allow it with DSS, because missing combos from
> the cipher suite nonsense happen. If we actually have to care about IE on
> XP, we could state an exception that the only non-PFS cipher suite to be
> permitted on servers for backwards compatibility is
> TLS_RSA_WITH_3DES_EDE_CBC_SHA.

And how exactly would that prevent the server from never selecting DHE+RSA,
or the client from aborting the connection when the server selects DHE+RSA?

> Also add a requirement that all config provided by the admin must be
> validated to meet the TLS 1.3 requirements and auto-corrected if not, with
> a warning if there's an issue.
>
> This doesn't have to be a mess for admins to sort out.

But it is, and for historical reasons it will remain like this, so given the
choice I prefer my mess to be at least consistent between versions.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] ban more old crap
On Thursday 23 July 2015 14:21:15 Dave Garrett wrote:
> On Thursday, July 23, 2015 01:10:30 pm Eric Rescorla wrote:
> > On Thu, Jul 23, 2015 at 7:06 PM, Stephen Farrell wrote:
> > > A suggestion - could we remove mention of anything that
> > > is not a MUST or SHOULD ciphersuite from the TLS1.3 document
> > > and then have someone write a separate draft that adds a
> > > column to the registry where we can mark old crap as
> > > deprecated?
> >
> > I'm starting to lean towards this. I don't generally think of TLS 1.3 as
> > a vehicle for telling people how to configure use of TLS 1.2, and I think
> > it might be better to move all that stuff out.
>
> If we've learned one thing from the past year of high-profile
> vulnerabilities with names and logos, it's that TLS is not really secure
> if you don't take into account its weakest/oldest feature that's still
> possible. I don't think any responsible TLS 1.3 spec can afford to not
> acknowledge this.

And I completely agree. FREAK and Logjam wouldn't happen at all if we didn't
drag with us stuff that was considered legacy 10 years ago.

But stuff like "server MUST abort handshake if it sees export grade ciphers
in Client Hello" (or anything similar) will just get ignored. For a user, a
bad connection is better than no connection. One works and the other
doesn't; the details are voodoo witchcraft.

The way to remove all this legacy junk is to work towards sensible defaults
in libraries (RFC 7568, RFC 7465 style), not by putting antifeatures in
protocol specifications.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Let's review: draft-ietf-tls-tls13-07 (abridged)
On Thursday 16 July 2015 18:09:38 Ilari Liusvaara wrote:
> > > > 6.3.1.2. (Server Hello)
> > >
> > > Well, at least it wouldn't be backward compatiblity hazard to remove
> > > session_id_len, since it comes after server version.
> >
> > I'm sold.
>
> However, that would change ServerHello parsing.
>
> Thinking about it, if one decides to be careful with message parsing, one
> needs to assign all new/modified TLS 1.3 handshake messages new IDs.
>
> Currently there is only one offending message type w.r.t. that: 14, which
> is server_configuration in 1.3 and server_hello_done in 1.2.
>
> Some other messages share IDs, but I think those are compatible (even
> CertificateVerify, as it is just one digital signature in both).

Server Key Exchange for DHE, ADH, SRP, etc., not to mention TLS 1.1 vs
TLS 1.2 - it's not uncommon to have different parsers for the "same" message
type.

> > > Also, the record protection used for early handshake messages should
> > > be indicated.
> >
> > Can you expand on that?
>
> How does the client know what record protection algorithms are valid
> for 0RTT transmission for that server?

And how does the client know that the algorithms came from the server? We
should have a "client MUST wait for the full handshake to finish before
recording this information" or we will have a very nice cipher downgrade.
Just having it signed is likely not a good idea, as the algorithms may
depend on the ciphersuites advertised by the client.

> > > Also, with regards to complications of DSA, just dump it? :-)
> >
> > I'm fine with that if the chairs declare consensus on it.
>
> As datapoint, either the scan that was used as basis of that curve
> pruning doesn't support DSA, or there are no servers that even have
> DSA certs.
>
> I think I heard some time back that there are only 4 (or some other very
> small number) valid DSA SSL certs in the entiere public Internet.

That scan uses the Mozilla trust roots and reports only trusted servers (to
weed out unmaintained ones); the Microsoft list is a bit bigger.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] ban more old crap
On Friday 24 July 2015 12:57:42 Dave Garrett wrote:
> On Friday, July 24, 2015 06:43:17 am Hubert Kario wrote:
> > And I completely agree. FREAK and Logjam wouldn't happen at all if we
> > didn't drag with us stuff that was considered legacy 10 years ago.
> >
> > But stuff like "server MUST abort handshake if it sees export grade
> > ciphers in Client Hello" (or anything similar) will just get ignored.
> > For a user a bad connection is better than no connection. One works and
> > the other doesn't, the details are voodoo witchcraft.
>
> To be clear, the wording I have in the PR is not this broad. It only
> requires aborting if export ciphers were offered by a TLS 1.3+ client, not
> just any client.

And how can a server tell that the client is TLS 1.3-only, and not
TLS 1.0-up-to-TLS 1.3?

> The point is to ensure that all TLS 1.3 implementations
> cut this out and don't regress due to error or exploit. Applying it to
> everything would, unfortunately, be a mess. In particular, search engine
> spiders actually have a legitimate reason to have export ciphers actually
> still enabled.

And not only them - opportunistic encryption in SMTP is another example.

Technically it's already in the draft, isn't it? TLS 1.3 supports only AEAD,
and all export ciphers were either CBC or stream mode. If you intend to
accept only a TLS 1.3 reply from the server, there's no point in including
them; moreover, negotiating them is a clear bug and a protocol violation
anyway.

But if you want a "clients SHOULD NOT advertise support for ciphersuites
incompatible with TLS 1.3 if they will not accept a TLS 1.2 or lower
protocol reply from the server" as a reminder/idea for implementers, then it
certainly won't hurt.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
[TLS] No cypher overlap (was: ban more old crap)
I see one possible problem with TLS 1.3 not being a superset of TLS 1.2.

Consider the following:

A server which supports TLS 1.3 but is configured to accept only AES256
ciphers.

A client which advertises TLS 1.3, but no support for AES256-GCM. The client
also advertises CBC ciphers (both AES128 and AES256) as it wants to be able
to connect to legacy servers too.

Should such a connection end up with TLS 1.2 and an AES-CBC ciphersuite, or
should it be aborted?

I think we should go for continuing the connection with the downgraded
protocol, but explicitly say that this may not happen if the negotiated
ciphersuite would be DES, RC4, export grade... That would allow us to
reiterate in the TLS 1.3 spec that those are a big no-no, and that if you
claim support for TLS 1.3 you should never negotiate them with a similarly
modern peer.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
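[Editor's note: a minimal sketch in Python of the negotiation rule proposed
above, for illustration only. The legacy-suite substrings, the
is_tls13_suite() predicate and the exception are assumptions, not text from
the draft or any real implementation.]

    class HandshakeFailure(Exception):
        pass

    # suites containing these are never acceptable between modern peers
    FORBIDDEN_LEGACY = ("DES", "RC4", "EXPORT")

    def negotiate(common_versions, server_suites, client_suites, is_tls13_suite):
        version = max(common_versions)                  # e.g. (3, 4) == TLS 1.3
        overlap = [s for s in client_suites if s in server_suites]
        if version >= (3, 4):
            tls13_overlap = [s for s in overlap if is_tls13_suite(s)]
            if tls13_overlap:
                return version, tls13_overlap[0]
            version = (3, 3)     # no TLS 1.3-capable overlap: downgrade, don't abort
        # ...but never negotiate a suite both modern peers should know is broken
        overlap = [s for s in overlap
                   if not any(bad in s for bad in FORBIDDEN_LEGACY)]
        if not overlap:
            raise HandshakeFailure("no acceptable shared ciphersuite")
        return version, overlap[0]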
Re: [TLS] No cypher overlap (was: ban more old crap)
On Tuesday 28 July 2015 16:01:55 Viktor Dukhovni wrote:
> On Tue, Jul 28, 2015 at 05:41:58PM +0200, Hubert Kario wrote:
> > I see one possible problem with TLS1.3 not being a superset of TLS1.2.
> >
> > Consider the following:
> > Server which supports TLSv1.3 but is configured to accept only AES256
> > ciphers.
> >
> > Client which advertises TLSv1.3, but no support for AES256-GCM. The
> > client advertises also CBC ciphers (both AES128 and AES256) as it wants
> > to be able to connect to legacy servers too.
> >
> > Should such a connection end up with TLS1.2 with AES-CBC ciphersuite, or
> > should it be aborted?
>
> We already see a similar dilemma with clients that (artificially)
> support only SSLv2 ciphersuites, but advertise TLS protocol versions.
> OpenSSL will first choose a common protocol (TLS 1.x) and then fail
> to find any shared ciphers.  To complete the connection, the client
> must explicitly request only SSL 2.0.  There is at present no
> filtering out of protocol choices for lack of compatible ciphers.
>
> > I think we should go for continue connection with downgraded protocol,
> > but explicitly say that it may not happen if the negotiated ciphersuite
> > would be DES, RC4, export grade...
>
> In that case, it should be said that a client MUST NOT advertise
> TLS 1.3 unless it offers at least one of the TLS 1.3 MTI ciphers
> (or perhaps less restrictive at least one TLS 1.3 compatible cipher).

MTI does not mean Mandatory To Enable.

> Otherwise, there'll be lots of clients whose TLS libraries advertise
> 1.3 (just because they implement the protocol), but whose cipher
> configuration includes only TLS 1.2 (or lower) suites (because the
> application configuration has not been updated).

Yes, that's what I'm afraid of.

> Punishing those clients by having servers abort the handshake is
> a bad idea.  The right outcome is use of TLS 1.2, whether because
> client implementations of 1.3 need to check that adequate cipher
> suites are available, or because servers negotiate 1.2 when 1.3
> is impossible.

Neither clients nor servers are required to support (have enabled) all
ciphersuites defined for TLS 1.3, so there is always a chance of no common
cipher between them.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] No cypher overlap
On Saturday 01 August 2015 23:16:42 Florian Weimer wrote:
> * Hubert Kario:
> > On Tuesday 28 July 2015 16:01:55 Viktor Dukhovni wrote:
> >> In that case, it should be said that a client MUST NOT advertise
> >> TLS 1.3 unless it offers at least one of the TLS 1.3 MTI ciphers
> >> (or perhaps less restrictive at least one TLS 1.3 compatible cipher).
> >
> > MTI does not mean Mandatory To Enable
>
> Are you sure?  That's extremely surprising.

Yes, I'm sure. Per https://tools.ietf.org/html/rfc5246#page-65:

> 9.  Mandatory Cipher Suites
>
>    In the absence of an application profile standard specifying
>    otherwise, a TLS-compliant application MUST implement the cipher
>    suite TLS_RSA_WITH_AES_128_CBC_SHA (see Appendix A.5 for the
>    definition).

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] TLS Handshake message length too long
On Sunday 09 August 2015 16:41:19 dott...@gmail.com wrote:
> I have a question regarding the handshake message length.
>
> The 'decode_error' alert in TLS 1.2 is defined as:
>
>    decode_error
>       A message could not be decoded because some field was out of the
>       specified range or the length of the message was incorrect. (...)
>
> It says that the message "could not be decoded". What should happen
> if the specified message length is longer than needed? I.e. the message
> was successfully decoded, but the length of the message was incorrect:
> there is still some unknown data after the defined structure.

That is definitely an error, both for the case of "the length of a field is
longer than expected" and for "there's more data in the message than the
length specifies".

> For example, a Finished message has a length of 40 bytes,
> but the 'verify_data' array has 32 bytes and there are 8 unknown bytes
> remaining in the received message. The 40 bytes I talk about here
> is the length specified in the Handshake message header.
>
> Is this also a fatal error?

Yes, always.

> Should the implementation just drop those bytes and proceed?

Definitely not. It should send a fatal alert, close the connection and mark
the session as non-resumable.

> On the other hand, there is the 'illegal_parameter' alert:
>
>    illegal_parameter
>       A field in the handshake was out of range or inconsistent with
>       other fields. This message is always fatal.
>
> Is this alert suitable for the described scenario?

No, that one is for values that are explicitly bound to some specific range
(e.g. the client_random length always needs to be 32). verify_data has no
range specified (it's opaque data); the negotiated ciphersuite defines what
length it has.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
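[Editor's note: a minimal Python sketch of the strict length check described
above. The function, the exception and the 12-byte verify_data default
(typical for TLS 1.2 ciphersuites) are illustrative assumptions, not code
from any particular library.]

    class DecodeError(Exception):
        """Maps to a fatal decode_error alert."""

    def parse_finished(handshake_body, verify_data_len=12):
        if len(handshake_body) != verify_data_len:
            # extra trailing bytes (or a truncated body) are not silently
            # dropped: send a fatal decode_error alert, close the connection
            # and mark the session as non-resumable
            raise DecodeError("Finished body is %d bytes, expected %d"
                              % (len(handshake_body), verify_data_len))
        return handshake_body   # the verify_data itself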
Re: [TLS] TLS 1.3 comments
On Monday 17 August 2015 15:02:46 Ilari Liusvaara wrote:
> On Mon, Aug 17, 2015 at 06:22:04AM -0400, Yaron Sheffer wrote:
> > Below a long list of comments, generally minor. The document is
> > already very good - we're making great progress!
> >
> > The record length field is limited by encrypted-length+2048.
> > Shouldn't it be 1024? - "Each AEAD cipher MUST NOT produce an
> > expansion of greater than 1024 bytes".
>
> Actually, I think both should be 256 (256-byte expansion from AEAD
> is already quite much).
>
> (This was proposed a while back).

I don't think this adds anything, while introducing the requirement of an
additional "if" in implementations that also support TLS 1.2 and lower.

> > D.1.2: do we really need to worry about version rollback to
> > SSLv2? I suggest to remove this section.
>
> Well, it is not possible to even try to negotiate TLS 1.3 using SSLv2
> compat. hello, since that can't transmit extensions, but at least one
> extension is REQUIRED in order to successfully negotiate TLS 1.3.
>
> And the second paragraph seems to be about RSA key exchange, which
> isn't supported anymore in TLS 1.3.
>
> Yes, the section looks like it could be removed.

OTOH, TLS 1.3 is not a superset of TLS 1.2, so we need to think about
downgrade to TLS 1.2.

> But that isn't the only way rollback attacks can occur, also some
> clients can be coaxed to downgrade by selectively blocking connections.
> Unfortunately the intolerance to TLS 1.3 is so bad that many clients
> will likely be willing to perform unauthenticated downgrade to TLS
> 1.2 (and FALLBACK_SCSV is useless here).

How is it useless? A server which supports at most TLS 1.2 and receives a
TLS 1.2 ClientHello with FALLBACK_SCSV MUST continue the connection with
TLS 1.2.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
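[Editor's note: a minimal sketch of the server-side TLS_FALLBACK_SCSV rule
from RFC 7507 that the reply above relies on; the constant and exception
names here are illustrative, not from any particular implementation.]

    TLS_FALLBACK_SCSV = 0x5600

    class InappropriateFallback(Exception):
        """Maps to the fatal inappropriate_fallback alert."""

    def check_fallback_scsv(client_version, offered_suites, server_max_version):
        if (TLS_FALLBACK_SCSV in offered_suites
                and server_max_version > client_version):
            # the client fell back even though we support something newer: abort
            raise InappropriateFallback()
        # otherwise continue; a server that supports at most client_version
        # (e.g. a TLS 1.2-only server seeing a TLS 1.2 hello) just proceeds

    # example: TLS 1.2-only server, TLS 1.2 ClientHello carrying the SCSV
    # -> no error, handshake continues with TLS 1.2
    check_fallback_scsv((3, 3), [0x009C, TLS_FALLBACK_SCSV],
                        server_max_version=(3, 3))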
Re: [TLS] DSA support in TLS 1.3.
On Friday 28 August 2015 20:17:11 Geoffrey Keating wrote:
> Jeffrey Walton writes:
> > > Also, if DSA was to be supported, one would need to specify how to
> > > determine the hash function (use of fixed SHA-1 doesn't fly). And
> > > 1024-bit prime is too small.
> >
> > FIPS186-4
> > (http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf)
> > partially remediates the issue. DSA now includes 2048 and 3072
> > sizes.

It still doesn't say exactly which hash should be used with which sizes,
and, unlike RSA, the signature itself doesn't specify it either, so hash
truncation attacks are not impossible.

> This is true, but if TLS 1.3 was to specify DSA, it should require the
> 2048 or 3072 sizes (since 1024 is last century's crypto), and
> existing implementations do not necessarily support those today.

Those sizes are not really interoperable, because of the above:
https://bugzilla.redhat.com/show_bug.cgi?id=1238369
(GnuTLS takes the conservative approach, which is incompatible with the NSS
implementation)

> Which really highlights the question: who would actually use it?

Since 1024-bit is too weak, and 2048-bit and 3072-bit are underspecified for
TLS 1.2, it already isn't recommended for use (which means that the biggest
deployment of DSA - the US Government - can't really use those bigger sizes;
in fact, the Common Access Card already transitioned to RSA with the change
to 2048-bit keys).

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Should we require implementations to send alerts?
On Wednesday 16 September 2015 12:53:53 Brian Smith wrote:
> Thus, the empirical evidence from Mozilla's
> widely-deployed implementation shows that (a) the requirement to send
> alerts is difficult to conform to, and (b) it is unimportant in
> practice to send alerts.

And yet Firefox depends on them to report human-readable errors to users
when it can't connect to a server...

Making the alerts more predictable, with more pinned-down meanings, will
only _help_ the opportunistic-HTTPS and HTTPS-by-default campaigns.

Yes, we need to be careful about alerts that provide information about
secret data, but there's very little of such data during the handshake,
where the vast majority of alerts apply and where they are most useful.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] TLS Provfiles (Was: Call for consensus to remove anonymous DH)
On Thursday 17 September 2015 03:27:22 Peter Gutmann wrote:
> Viktor Dukhovni writes:
> > Explicit profiles make some sense.  They need not be defined by the
> > TLS WG per-se, it might be enough for the TLS specification to
> > reference an IANA profile registry, with the TLS-WG defining a
> > "base" profile.  Then other WGs (including the TLS WG) can define
> > additional profiles.
>
> That would be good, so the base spec could contain text like "This
> document describes every possible option that the protocol can
> support.  It is not expected that TLS applications implement every
> one of these options, since many will be inappropriate or unnecessary
> in many situations.  Profiles for specific situations like web
> browsing, secure tunnels, IoT, embedded devices, and SCADA use can be
> found at ...".

You can count on one hand the Mandatory-to-Implement ciphersuites. It's
quite obvious that if you don't support anything but non-export RSA key
exchange, you don't need to be able to parse Server Key Exchange messages...

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Should we require implementations to send alerts?
On Thursday 17 September 2015 15:30:12 Brian Smith wrote:
> Martin Thomson wrote:
> > We're not sure where we stand with version fallback and 1.3. We don't
> > know how much version intolerance 1.3 will generate. That at least
> > might not depend on alerts, though we don't know just yet.
>
> A conformant TLS 1.3 implementation cannot be version intolerant. If
> it were version intolerant then it would not be a conformant TLS 1.3
> implementation. So, conformance requirements for TLS 1.3 servers
> don't matter as far as version intolerance is concerned.

Except that a TLS 1.3 version-intolerant implementation won't show its ugly
head until TLS 1.4 gets deployed.

"Non-conformant" TLS 1.2 is in the same boat. Just because it can
interoperate (the *only* thing PHBs care about) doesn't mean it is
conformant (and that's the stuff we care about, because it means backwards
and *forwards* compatibility).

> > I don't see much support for the notion that forbidding alerts is a
> > good idea. We use alerts quite a bit for basic diagnosis. Bad
> > configurations are pretty commonplace, the most common being one
> > where there is no common cipher suite. Being able to isolate the
> > error that is pretty useful.
>
> I still think it is better to recommend to never send alerts. But, at
> least there are good reasons (which I gave much earlier in the
> thread) for why a server would choose not to send alerts, e.g. out of
> an abundance of caution. So, "MUST send" is clearly too far.

Sorry, but there are no good reasons not to send them. Not sending them may
cause interoperability issues in the future, so an implementation, if at all
possible, should send them. That makes them a MUST.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Should we require implementations to send alerts?
On Friday 18 September 2015 00:58:19 Martin Rex wrote:
> Easier troubleshooting is IMO a sufficient rationale to justify
> existence of the alert mechanism and a "SHOULD send the alert before
> closing the network connection".
>
> A "MUST send fatal alert" requirement, however, would be silly (and
> will be void in face of rfc2119 section 6 anyway). What would be
> the semantics of such a requirement anyway?

That's true only if you ignore the situation when TLS 1.4 or TLS 2.0 is
deployed. So yes, it's not a direct interoperability issue, but it will
become one in the future - the same way as the TLS protocol version in the
Client Hello.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Should we require implementations to send alerts?
On Friday 18 September 2015 15:13:37 Bill Frantz wrote:
> On 9/18/15 at 4:27 AM, hka...@redhat.com (Hubert Kario) wrote:
> > except that a TLS1.3 version intolerant implementation won't
> > show its ugly head until TLS1.4 gets deployed
>
> Is there a reason a test suite can't offer TLS 1.4, even if we
> don't know what it is?

There is no reason. In fact, any test suite should basically start with this
(it being one of the very first fields the server needs to handle).

> The TLS implementation under test should
> gracefully step back to TLS 1.3.

Correct.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
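[Editor's note: a minimal sketch of such a version-tolerance test in Python:
offer a version higher than anything defined (here (3, 5), a stand-in for
"TLS 1.4") and verify that the server negotiates down instead of failing.
The send_client_hello() helper and the returned object are hypothetical,
used only to illustrate the check.]

    def check_version_tolerance(send_client_hello, offered=(3, 5)):
        # send_client_hello() performs one handshake attempt and returns the
        # ServerHello, or None if the handshake failed or the connection dropped
        server_hello = send_client_hello(client_version=offered)
        if server_hello is None:
            return "intolerant: handshake aborted or connection dropped"
        if server_hello.server_version <= (3, 4):      # at most TLS 1.3
            return "tolerant: negotiated %r" % (server_hello.server_version,)
        return "broken: server echoed a version it cannot actually support"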
Re: [TLS] Should we require implementations to send alerts?
On Friday 18 September 2015 13:24:33 Brian Smith wrote:
> On Fri, Sep 18, 2015 at 4:36 AM, Hubert Kario wrote:
> > On Friday 18 September 2015 00:58:19 Martin Rex wrote:
> > > Easier troubleshooting is IMO a sufficient rationale to justify
> > > existence of the alert mechanism and a "SHOULD send the alert before
> > > closing the network connection".
> > >
> > > A "MUST send fatal alert" requirement, however, would be silly (and
> > > will be void in face of rfc2119 section 6 anyway). What would be
> > > the semantics of such a requirement anyway?
> >
> > That's true only if you ignore the situation when TLS 1.4 or TLS 2.0
> > is deployed.
> >
> > So yes, it's no a direct interoperability issue, but it will become one
> > in the future.
>
> Given a *conformant* TLS 1.3 implementation, that kind of
> interoperability problem could only happen if the TLS working group
> specifically designed it to happen. In particular, a conformant TLS
> 1.3 implementation must accept larger values of
> ClientHello.client_version.

Given that there is no *conformant* TLS 1.2 implementation that is widely
deployed[1], I won't hold my breath for there being many TLS 1.3 ones
either.

We don't live in an ideal world; let's build protocols that can handle
breakage. Let's make specifications that have the sticks we can hit
developers with when they do wrong.

[1] - NSS, SChannel and OpenSSL all ignore some MUSTs in TLS 1.2

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Fwd: New Version Notification for draft-whyte-qsh-tls13-01.txt
On Monday 21 September 2015 00:20:21 Dave Garrett wrote:
> On Sunday, September 20, 2015 10:59:58 pm William Whyte wrote:
> > might be worth increasing the maximum extension size to 2^24-1 for
> > TLS 1.3.
>
> No, I don't think the limit can be raised. The general ClientHello
> format has to stay frozen for interoperability with other versions,
> and unless I'm misreading things, the size of the length of a vector
> can't change. A separate message seems like what would be needed to
> have a larger first-flight payload. (and any new messages would need
> to be signaled via an extension, though it could have a 0-length
> payload)

We would still need to wait for the server to reply before we could send
them, so no way to do 1-RTT.

> > Is there a strong reason for keeping the maximum size at 2^24-1,
> > other than saving one byte on all the relevant length fields?
>
> Typo? Did you mean "keeping the maximum size at 2^16-1"?
>
> A strong reason is it not being possible to change due to the need for
> TLS 1.3 clients to be able to connect to TLS 1.2 servers that won't
> understand a format change. Even if it were technically possible, I
> wouldn't expect all implementations to safely handle it.

The TLS 1.2 standard says that the ClientHello MUST match either the
extension-less or the extension-present format, and the server MUST check
that the overall length of the message matches the processed data, so we
can't have extensions-after-extensions (which theoretically could have a
3-byte length field).

That limitation has been present since RFC 3546 [Extensions], which
explicitly says:

   This overrides the "Forward compatibility note" in [TLS].

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
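[Editor's note: a minimal Python sketch of where the 2^16-1 limit discussed
above comes from. The ClientHello extensions block is prefixed by a single
2-byte length field (and each extension body likewise), so it simply cannot
describe more data; the encoder below is illustrative, not library code.]

    import struct

    def encode_extensions(extensions):
        # extensions: list of (ext_type, ext_data) pairs; struct.pack with "H"
        # will also refuse any individual ext_data longer than 65535 bytes
        body = b"".join(struct.pack(">HH", ext_type, len(data)) + data
                        for ext_type, data in extensions)
        if len(body) > 0xFFFF:
            raise ValueError("extensions block does not fit the 2-byte length field")
        return struct.pack(">H", len(body)) + body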
Re: [TLS] Fwd: New Version Notification for draft-whyte-qsh-tls13-01.txt
On Monday 21 September 2015 15:04:17 Dave Garrett wrote:
> On Monday, September 21, 2015 07:22:03 am Hubert Kario wrote:
> > On Monday 21 September 2015 00:20:21 Dave Garrett wrote:
> > > A strong reason is it not being possible to change due to the need
> > > for TLS 1.3 clients to be able to connect to TLS 1.2 servers that
> > > won't understand a format change. Even if it were technically
> > > possible, I wouldn't expect all implementations to safely handle
> > > it.
> >
> > the TLS1.2 standard says that the ClientHello MUST match either
> > extension-less or an extension-present format and server MUST check
> > that the overall length of message matches the processed data, so
> > we can't have extensions-after-extensions (which theoretically
> > could have 3 byte length field).
>
> Yeah, adding a second extensions vector in addition to the existing
> one was my other thought, but it's too messy for me to think it'd be
> worth trying. Looks like that's not even possible. I think we're
> stuck with the current TLS extension size limits forever.
>
> I doubt anyone would really want to use any keys in the megabyte range
> anyway. Post-quantum crypto research/experimentation for TLS & other
> network protocols should really focus on systems with smaller keys.
> Even if a giant-key scheme was ideal, you'll have a very hard time
> convincing people to actually use it, no matter how much they might
> need it. :/

True. That being said, I can see the 64 KiB total being limiting for
different stuff in the future, and while sending 2 MiB packets as "just a
hello" is unlikely, I can see us sending 64 KiB or 128 KiB packets...

Maybe we should reintroduce the forwards-compatibility clause for the
ClientHello? It won't help us now, but when TLS 1.2 gets broken we'll be
able to move forward with larger sizes for extensions (whenever that
happens).

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] TRON workshop
On Thursday 08 October 2015 22:20:42 Stephen Farrell wrote:
> Hiya,
>
> First, thanks all for all your ongoing work on TLS1.3. I'm sure we're
> all aware that this is important stuff that needs to be, and is being,
> done carefully with due attention to security analysis.
>
> Early in the process we had some brief discussion of pausing towards
> the end of the work to give folks a chance to do analyses of the
> security and other properties of TLS1.3 just before publication of
> the RFC.
>
> Chatting with the chairs in Prague and with various others since, we
> think we've reached the point where we need to start executing that
> bit of the plan, since doing such analyses also takes time and we
> don't want to add a big delay if we can avoid it. So we're organising
> a workshop on just that topic to be co-located with NDSS in San Diego
> in late February 2016.

Aren't we still missing the 0-RTT mode?

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] banning SHA-1 in TLS 1.3, a new attempt
On Sunday 11 October 2015 19:13:03 Dave Garrett wrote:
> On Sunday, October 11, 2015 05:58:59 pm Viktor Dukhovni wrote:
> > Pointless restrictions lead to fallback to even worse choices.
>
> And no restrictions lead to horrible fallback choices. Running under
> the assumption that some population of implementors is willing to do
> something stupid to maintain inertia (e.g. insecure version fallback
> dance), sending SHA1 certs is a risk. Yes, the server doesn't know
> whether the client can or cannot deal with it safely, but we, from
> the perspective of the spec, should be assuming bad scenarios when
> designing things.

So what we need is:

1. servers SHOULD NOT send certificates with SHA-1 signatures, except as a
   last resort (maybe even add a recommendation that implementers should
   warn the user when such certificates are configured)

2. clients MUST NOT trust certificates which derive their authenticity
   through SHA-1 (or weaker) signatures

But saying that the server MUST NOT send SHA-1 (or other) certs is, as
Viktor said, an overreach.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
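[Editor's note: a minimal sketch of the client-side rule in point 2 above.
The certificate objects, the signature_hash_algorithm attribute and the
trust-anchor comparison are hypothetical; the point is only that the
self-signature on a trust anchor is not what authenticates the chain, so it
is excluded from the check.]

    WEAK_HASHES = {"md2", "md5", "sha1"}

    def chain_is_acceptable(chain, trust_anchors):
        # chain[0] is the end-entity cert; each cert's own signature field
        # records the hash its issuer used to sign it
        for cert in chain:
            if cert in trust_anchors:
                break                  # signatures above the anchor are irrelevant
            if cert.signature_hash_algorithm in WEAK_HASHES:
                return False           # authenticity derives from a weak hash
        return True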
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Wednesday 14 October 2015 16:06:00 Martin Thomson wrote:
> On 14 October 2015 at 15:43, Matt Caswell wrote:
> > "highly dangerous idea"
>
> Wrong Martin. I agree that there is a need for caution, but in
> reality, it's not like you can use renegotiation to hand-off to
> someone else entirely. The person you are talking to hasn't changed.
> What is dangerous is making assertions about *new* things that the
> renegotiation introduces.

Also, we're talking with a peer that does implement RFC 5746, so we can be
*sure* that we're still talking to the same peer.

So the problem happens when the application queries the library for
connection information (certificates mainly) and gets info from the new
connection while still actually receiving application data from the old
context.

The problem is that we can verify the handshake only after we receive the
Finished message; until then, the server can present any certificate it
wants and the client has no way of verifying it (for *DH the server can even
receive information sent by the client after the client's Finished message).
For the server it's nicer, as the certificate can be verified much more
quickly (in the same flight), but the window still exists.

That makes it dangerous when going from a low- to a high-security context,
not so much the other way round.

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Friday 16 October 2015 09:16:01 Watson Ladd wrote:
> On Thu, Oct 15, 2015 at 9:12 AM, Matt Caswell wrote:
> > On 15/10/15 14:00, Martin Rex wrote:
> > > Is the particular interop problem that you want to address
> > > caused by a necessity to really process application data and
> > > handshake data with arbitrary interleave,
> > >
> > > or is it rather a problem of getting back into half-duplex operation,
> > > i.e. a client being able to continue receiving application data
> > > up to a ServerHello when it has sent out ClientHello, or a server
> > > being able to continue receiving application data up to a
> > > ClientHello (or warning level no-renegotiation alert) after the
> > > server has sent a ClientHelloRequest?
> >
> > The former. The existing code should cope with the half-duplex
> > issue. In the reported problem we (OpenSSL) are running as a server
> > and we have received application data from the Client *after* we
> > have sent our ServerHelloDone.
>
> After thinking about this a bit this should be okay so long as you
> properly present the authentication state associated with the data.
> The hypothetical problem is using this to evade the protection of the
> secure renegotiation extension. As a solution the new authentication
> state should only be made visible to application code after receiving
> a CSS/Finished. This is supposed to have exactly the same semantics as
> pretending that the application data was sent before any handshake
> data.
>
> Unfortunately I don't know how to verify this. Can miTLS cover this
> case?

You mean you want an implementation that can insert application data at any
place in the handshake? We've been using my project for that:
https://github.com/tomato42/tlsfuzzer

The specific test cases are:
https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-interleaved-application-data-and-fragmented-handshakes-in-renegotiation.py
https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-interleaved-application-data-in-renegotiation.py
https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-openssl-3712.py

You can run them with:

  pip install tlslite-ng
  git clone https://github.com/tomato42/tlsfuzzer.git
  cd tlsfuzzer
  PYTHONPATH=. python scripts/test-openssl-3712.py

(they do expect an HTTP server on the other side)

--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] RFC 7685 on A Transport Layer Security (TLS) ClientHello Padding Extension
On Wednesday 21 October 2015 20:17:31 Dave Garrett wrote:
> Congrats on releasing an RFC that has day one 100% server support. :p

Oh, I'm sure there's at least one server out there that is intolerant to
this one specific extension ]:->

--
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Controlling use of SHA-1
On Thursday 22 October 2015 14:49:47 Bill Frantz wrote:
> On 10/23/15 at 2:02 PM, ynir.i...@gmail.com (Yoav Nir) wrote:
> > That is true only if your application’s client component and
> > server component are using the same library. That is not
> > guaranteed in a protocol. Specifically that is not the case
> > with the web.
> >
> > There are some version intolerant servers out there that will
> > choke on seeing a TLS 1.3 ClientHello. If the client uses some
> > library (like OpenSSL) and you upgrade to OpenSSL 1.2.0 that
> > has TLS 1.3. All of the sudden your application is broken. On
> > the web this means that some websites don’t work.
>
> This incompatibility cuts both ways. Another way of looking at
> it is that all of a sudden your website has lost viewers and you
> should fix your problem. Perhaps I am unusual, but if I go the a
> website that doesn't work, I usually conclude that I don't need
> to see that web site. My problem is too little time, meaning I
> don't want to bleep with things that don't work, not extra time
> to futz with different browsers to get things working.

Until you have to get a refund on a $500 purchase through such a broken web
server...

--
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] I-D Action: draft-ietf-tls-rfc4492bis-05.txt
On Tuesday 03 November 2015 19:05:11 internet-dra...@ietf.org wrote:
> There's also a htmlized version available at:
> https://tools.ietf.org/html/draft-ietf-tls-rfc4492bis-05

Typo:

   MUST still be included, and contain exactly one value: the uncomptessed
   point format (0).

"uncomptessed" should be "uncompressed".

--
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
Re: [TLS] Application data during renegotiation handshake
On Wednesday 11 November 2015 18:39:51 Mike Bishop wrote: > Per the TLS 1.2 spec, that's permitted, but if > it's not been done before, I'm afraid we may be hitting less-tested > code paths. It's also something that Java does and what NSS supports. But indeed it is problematic: https://rt.openssl.org/Ticket/Display.html?id=3712&user=guest&pass=guest -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] PR#345: IANA Considerations
On Monday 16 November 2015 15:16:50 Eric Rescorla wrote: > PR: https://github.com/tlswg/tls13-spec/pull/345 > > Per discussion in Yokohama, I have rewritten the IANA considerations > section so that the 16-bit code spaces are "Specification Required" > and they have a "Recommended" column. > > 1. The Cipher Suites "Recommended" column was populated based on > the Standards Track RFCs listed in the document (and I removed the > others). > > 2. The Extensions "Recommended"column was populated by taking all > the Standards Track RFCs and marking them "Yes" and marking > others "No". I recognize that this probably marks a bunch of > extensions which we actually don't love as "Yes" (and perhaps others > as "No") and if people want to move some from one column to another, > that seems like a great mailing list discussion which I will let the > chairs drive. Why is max_fragment_length [RFC6066] not to be supported in TLSv1.3? https://tools.ietf.org/html/draft-ietf-dice-profile-17#section-15 states that it is a MUST for the IoT TLS profile. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Extensions "supported_groups" and "key_share" in TLS 1.3
On Friday 27 November 2015 10:50:40 Xuelei Fan wrote: > > On Thursday, November 26, 2015 09:12:14 pm Xuelei Fan wrote: > > > Can key_share offers two shares for the same group? > > > > It's currently worded "Clients MUST NOT offer multiple KeyShareEntry > > values for the same parameters", which is a little ambiguous, but I > > interpret this as one share per group. I don't know why you'd need > > to offer more than one, anyway. > > > Need no more than one. Then, it may be more simple that key_share > does > not define the preference order. The preference order is covered by > supported_groups. What would then be the expected behaviour of the server if the first group in the supported_groups does not have an associated key share? That is, I advertise support for secp384r1, secp256r1 or ffdhe2048, but I provide only a secp256r1 key share, as it's the one that's most widely supported. Should the server ask me to provide a secp384r1 key share, or should it just proceed with secp256r1? I think that specifying *both* in preference order, and recommending that servers first inspect key shares and then supported_groups (if there is no intersection between what the server supports and the key shares the client provided), would end up with more predictable behaviour and cleaner code. That being said, we probably should say that clients MUST advertise support for all groups for which they send key shares, and that servers MUST abort the connection with something like illegal_parameter if they receive a key share for a group that was not advertised. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
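A minimal sketch of the server-side selection order argued for in the message above, assuming plain Python structures for the client's lists; the group names, return values and retry-request fallback are illustrative only, not taken from the draft:

    # Sketch of the recommended order: look at the key shares first, fall back
    # to supported_groups alone only if none of the offered shares is usable.
    def select_group(supported_groups, key_shares, server_groups):
        # sanity check: every key share must be for an advertised group,
        # otherwise abort with something like illegal_parameter
        if not set(key_shares) <= set(supported_groups):
            raise ValueError("illegal_parameter: key share for unadvertised group")

        # prefer a group the client already sent a share for
        for group in supported_groups:
            if group in key_shares and group in server_groups:
                return group, key_shares[group]

        # no usable share: pick from supported_groups and ask the client for a
        # new share (retry request), at the cost of an extra round trip
        for group in supported_groups:
            if group in server_groups:
                return group, None

        raise ValueError("handshake_failure: no group in common")

    # client prefers secp384r1 but only provided a secp256r1 share:
    print(select_group(["secp384r1", "secp256r1", "ffdhe2048"],
                       {"secp256r1": b"<client share>"},
                       ["secp384r1", "secp256r1"]))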
Re: [TLS] Extensions "supported_groups" and "key_share" in TLS 1.3
On Friday 27 November 2015 20:33:46 Xuelei Fan wrote: > On Fri, Nov 27, 2015 at 8:12 PM, Hubert Kario wrote: > > On Friday 27 November 2015 10:50:40 Xuelei Fan wrote: > > > > On Thursday, November 26, 2015 09:12:14 pm Xuelei Fan wrote: > > > > > Can key_share offers two shares for the same group? > > > > > > > > It's currently worded "Clients MUST NOT offer multiple > > > > KeyShareEntry > > > > values for the same parameters", which is a little ambiguous, > > > > but I > > > > interpret this as one share per group. I don't know why you'd > > > > need > > > > to offer more than one, anyway. > > > > > > Need no more than one. Then, it may be more simple that key_share > > > does > > > not define the preference order. The preference order is covered > > > by > > > supported_groups. > > > > What would then be the expected behaviour of the server if the first > > group in the supported_groups does not have a associated key share? > > > Try the next group in the supported_groups until find an associated > key > share. > > > I think that specifying *both* in preference order, and recommending > > the servers to first inspect key shares and then supported_groups > > (if no intersect between what server supports and what key shares > > client provided) would end up with more predictable behaviour and > > cleaner code. > > > But if the orders are not consistent, the logic get annoyed. It's a > good > practice to keep the order consistent, but it would be better if the > preference order is unique and specified in one place. that means that the code needs to keep references to two arrays at the same time and either create a hash table for lookups in key shares or iterate over key shares for every try - this makes code and logic more complex, not less > > That being said, we probably should say that clients MUST advertise > > support for all groups for which they send key shares and servers > > MUST abort connection with something like illegal_parameter if that > > happens > This adds additional checking on both client and server. Personally, > I would prefer to use one preference order in order to avoid any > order conflict. not the first, and certainly not the last checks that need to be done to implement TLS securely... -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Extensions "supported_groups" and "key_share" in TLS 1.3
On Friday 27 November 2015 21:20:24 Xuelei Fan wrote: > > > > I think that specifying *both* in preference order, and > > > > recommending > > > > the servers to first inspect key shares and then > > > > supported_groups > > > > (if no intersect between what server supports and what key > > > > shares > > > > client provided) would end up with more predictable behaviour > > > > and > > > > cleaner code. > > > > > > But if the orders are not consistent, the logic get annoyed. It's > > > a > > > good > > > practice to keep the order consistent, but it would be better if > > > the > > > preference order is unique and specified in one place. > > > > that means that the code needs to keep references to two arrays at > > the same time and either create a hash table for lookups in key > > shares or iterate over key shares for every try - this makes code > > and logic more complex, not less > > I did not get the idea, can the complex above be avoided if keeping > both? Does one preference order just get ignored? the idea is that if there is a key share acceptable for the server, the supported_groups can be ignored but to make sure that clients don't start putting complete garbage there, we need to tell servers to check key shares against supported_groups > If the orders are not consistent, if I can choose from two options: > continue or alter, I would choose the continue option. alter what? > Alert message > is expensive in practice. Note that this alert will never be sent to a client that is behaving according to specification unless the packets were modified by the network. It's a sanity check. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On Monday 30 November 2015 10:58:48 Bryan A Ford wrote: > On 11/30/15 2:40 AM, Peter Gutmann wrote: > > Nikos Mavrogiannopoulos writes: > >> I believe your proposal is a nice example of putting the cart > >> before the horse. Before proposing something it should be clear > >> what do you want to protect from, what is the threat? > > > > Exactly. If you want to thwart traffic analysis, you need to do > > something like what's done by designs like Aqua ("Towards Efficient > > Traffic-analysis Resistant Anonymity Networks"), or ideas from any > > of the other anti-traffic- analysis work that's emerged in the past > > decade or two. > > I'm well aware of Aqua and "the other anti-traffic-analysis work > that's emerged in the past decade or two": in fact I led one of the > major recent systematic projects in that space. See for example: > > http://dedis.cs.yale.edu/dissent/ > http://cacm.acm.org/magazines/2015/10/192387-seeking-anonymity-in-an-> > internet-panopticon/fulltext > > You get traffic > > analysis resistance by, for example, breaking data into fixed-length > > packets, using cover traffic, and messing with packet timings, not > > by > > encrypting TLS headers. > > Packet padding and header encryption are both important, complementary > security measures: you get security benefits from each that you don't > get from the other. Yes, you need padding to obtain systematic > protection from traffic analysis - when for whatever reason not all > implementations are always padding to the exact same standardized > record length, header encryption makes padded streams less trivially > distinguishable from unpadded streams, and makes streams with > different record sizes less trivially distinguishable from each > other. the header contains only one piece of information, and it is public already - the amount of data transmitted* If you want to hide how much data was transmitted, you need to establish a tunnel that transmits data constantly, at the exact same rate for the whole duration of connection. that means that you need to know a). what bandwidth the client has, b). what bandwidth the server can spare and c). how much data the user wants to get or send to the server (I really don't want to transmit 1GiB of data over a 100KiB/s stream if I have a 100Mbps link...). this goes well past the TLS WG charter, if only because it requires very close cooperation with the application layer so while the padding mechanism should be there, we really can't describe how it needs to be used, as it can't be made universal nor is it necessary for all use cases * - sure, the record layer boundaries can tell something about the data being transmitted, but so can the presence of data transmission taking place in the first place (think of a station sending reports only when it detects something while keeping connection open the whole time) > One thing that would greatly help Tor and all similar, > padded protocols is if they could "blend in" even just a little bit > better with the vast bulk of ordinary TLS-encrypted Web traffic, and > that's one of the big opportunities we're talking about here. the initial message in handshake in TLS MUST stay the same thus it is impossible to make it look like Tor. Not to thwart the Pervasive Monitoring threat of TLA agencies. 
> If you think it is practical for the TLS 1.3 standard to specify a > single, fixed record size that all implementations of TLS 1.3 must use > (i.e., explicitly freeze not only the version field but the length > field), then that would be great for traffic analysis protection and > on that basis I would support that proposal. But that frankly seems > to me likely a bit too much to ask given the diversity of TLS > implementations and use-cases. Tell me if you believe otherwise. That will just round the transmitted data sizes up to a multiple of 256 bytes. Hardly an improvement over the current 16-byte blocks. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
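To make that last point concrete, here is a trivial calculation of what padding to a fixed boundary does to the observable lengths; the 256-byte boundary is just the figure from the message above:

    # padding record payloads to a multiple of a fixed boundary still leaks
    # the approximate plaintext length; 256 is the figure from the discussion
    def padded_length(plaintext_len, boundary=256):
        return -(-plaintext_len // boundary) * boundary   # round up

    for size in (1, 100, 257, 1500, 16384):
        print(size, "->", padded_length(size))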
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On Tuesday 01 December 2015 00:14:14 Jacob Appelbaum wrote: > On 11/30/15, Hubert Kario wrote: > > On Monday 30 November 2015 10:58:48 Bryan A Ford wrote: > >> On 11/30/15 2:40 AM, Peter Gutmann wrote: > >> > Nikos Mavrogiannopoulos writes: > >> >> I believe your proposal is a nice example of putting the cart > >> >> before the horse. Before proposing something it should be clear > >> >> what do you want to protect from, what is the threat? > >> > > >> > Exactly. If you want to thwart traffic analysis, you need to do > >> > something like what's done by designs like Aqua ("Towards > >> > Efficient > >> > Traffic-analysis Resistant Anonymity Networks"), or ideas from > >> > any > >> > of the other anti-traffic- analysis work that's emerged in the > >> > past > >> > decade or two. > >> > >> I'm well aware of Aqua and "the other anti-traffic-analysis work > >> that's emerged in the past decade or two": in fact I led one of the > >> > >> major recent systematic projects in that space. See for example: > >>http://dedis.cs.yale.edu/dissent/ > >>http://cacm.acm.org/magazines/2015/10/192387-seeking-anonymity-in-> >> > >> an->>> > >> internet-panopticon/fulltext > >> > >> > You get traffic > >> > analysis resistance by, for example, breaking data into > >> > fixed-length > >> > packets, using cover traffic, and messing with packet timings, > >> > not > >> > by > >> > encrypting TLS headers. > >> > >> Packet padding and header encryption are both important, > >> complementary security measures: you get security benefits from > >> each that you don't get from the other. Yes, you need padding to > >> obtain systematic protection from traffic analysis - when for > >> whatever reason not all implementations are always padding to the > >> exact same standardized record length, header encryption makes > >> padded streams less trivially distinguishable from unpadded > >> streams, and makes streams with different record sizes less > >> trivially distinguishable from each other. > > > > the header contains only one piece of information, and it is public > > already - the amount of data transmitted* > > I'm pretty sure TLS has a lot more data... in TLS v1.3 no, it doesn't. All encrypted packets must have the plaintext record type set to Application Data[1], same for version - it's frozen now at 3.1 (TLS1.0)[2], both are used just as a magic value. > > this goes well past the TLS WG charter, if only because it requires > > very close cooperation with the application layer > > > > so while the padding mechanism should be there, we really can't > > describe how it needs to be used, as it can't be made universal nor > > is it necessary for all use cases > > I think it should be described how it needs to be used... Yes, I misspoke. What I meant is that we can't mandate its use as it is an application layer issue. We definitely should describe how it needs to be used on TLS level. > > * - sure, the record layer boundaries can tell something about the > > data> > > being transmitted, but so can the presence of data transmission > > taking place in the first place (think of a station sending reports > > only when it detects something while keeping connection open the > > whole time) > Yes, they tell something and that something is better removed. then we need Best Current Practice for applications describing to them how TLS needs to be used, e.g. make sure that they are doing writes as big as possible, checking if timing of responses doesn't leak much information, etc. 
Forcing TLS implementation to combine writes will easily cause serious problems with interactivity of sessions... > >> One thing that would greatly help Tor and all similar, > >> padded protocols is if they could "blend in" even just a little bit > >> better with the vast bulk of ordinary TLS-encrypted Web traffic, > >> and > >> that's one of the big opportunities we're talking about here. > > > > the initial message in handshake in TLS MUST stay the same thus it > > is > > impossible to make it look like Tor. Not to thwart the Pervasive > > Monitoring threat of TLA agencies. > > That Tor claim is strange and seemingly false in any case. Also, I've > said it before quoting t
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On Wednesday 02 December 2015 12:59:12 Jacob Appelbaum wrote: > On 12/2/15, Yoav Nir wrote: > >> On 2 Dec 2015, at 1:38 PM, Jacob Appelbaum > >> wrote:>> > >> On 12/1/15, Yoav Nir wrote: > >>>> Which would those be? And what is the definition of > >>>> capital-intensive > >>>> for those watching on the sidelines? > >>> > >>> Firewall, IPS/IDS devices. Boxes that attempt to perform > >>> sanity-check on protocols to make sure that the stuff going over > >>> TCP port 443 is really HTTPS rather than an attempt at tunneling. > >>> There are some attacks such the > >>> the code that protects against them needs to follow TLS record > >>> sizes. > >>> For > >>> the most part these are not-so-interesting attacks, causing > >>> certain > >>> versions > >>> of certain browsers to hang, and they are expensive for the > >>> firewall to protect against, so for the most part these > >>> protections are turned off. But > >>> it’s not everywhere. > >> > >> Could you be more specific? Which devices are we saying will break? > >> Do you have model numbers? Are those vendors on this list? Do they > >> agree that this will break and do we agree that they are a > >> relevant stakeholder who has a user's security in mind? > > > > I am no expert on middleboxes. I know a little about those that my > > employer (Check Point) makes. I only know a little, because I’m on > > the VPN side of things, not the IDS/IPS/next generation firewall > > side. > > I don't think we should worry about breaking poor little Check Point's > traffic analysis devices. Allow me to shift the overton window: their > device is a problem and we should treat it as a problem on the > network. TLS should mitigate as many of the advantages that they use > to harm end users. We should make those devices use as much RAM and > as much disk space and as much CPU time as possible. In the words of > a Google engineer who discovered the NSA had been doing traffic > analysis on his backbone... Problem is that users care for the cat macros and wedding pictures on their social network of choice. If the old version of browser works or an other browser works then it /obviously/ is the new browsers fault that the connection fails so it's the /new/ browser that is broken. So the browser vendor implements out-of-protocol fallback to old protocol version so that it continues to work. That's a Bad Thing. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Friday 16 October 2015 22:36:10 Kurt Roeckx wrote: > On Fri, Oct 16, 2015 at 04:05:34PM +0200, Hubert Kario wrote: > > On Friday 16 October 2015 09:16:01 Watson Ladd wrote: > > > Unfortunately I don't know how to verify this. Can miTLS cover > > > this > > > case? > > > > you mean, you want an implementation that can insert application > > data in any place of the handshake? > > Have you tried running any of your tests against miTLS? Yes, I finally did. miTLS does accept Application Data when it is sent between Client Hello and Client Key Exchange, and rejects it when it is sent between Change Cipher Spec and Finished. Though I will need to modify tlsfuzzer a bit more before I will be able to publish an automated test case for that* * - miTLS writes HTTP responses on a line-by-line basis, making handling of its responses a bit more complex -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fresh results
On Friday 04 December 2015 00:52:08 Hanno Böck wrote: > On Thu, 3 Dec 2015 18:45:14 -0500 > > Watson Ladd wrote: > > On Tue, Dec 1, 2015 at 3:02 PM, Hanno Böck wrote: > > > So as long as you make sure you implement all the proper > > > countermeasures against that you should be fine. (Granted: This is > > > tricky, as has been shown by previous results, even the OpenSSL > > > implementation was lacking proper countermeasures not that long > > > ago, > > > but it's not impossible) > > > > Can you describe the complete set of required countermeasures, and > > prove they work comprehensively? What if the code is running on > > shared hosting, where much better timing attacks are possible? > > What's shocking is that this has been going on for well over a > > decade: the right solution is to use robust key exchanges, and yet > > despite knowing that this is possible, we've decided to throw patch > > onto patch on top of a fundamentally broken idea. There is no fix > > for PKCS 1.5 encryption, just dirty hacks rooted in accidents of > > TLS. > > No disagreement here. > > The thing is, we have a bunch of difficult options to choose from: > > * Fully deprecate RSA key exchange. > The compatibility costs of this one are high. They are even higher > considering the fact that chrome wants to deprecate dhe and use rsa as > their fallback for hosts not doing ecdhe. ecdhe implementations > weren't widespred until quite recently. A lot of patent foo has e.g. > stopped some linux distros from shipping it. Then maybe Chrome should reconsider. I think we're overstating the compatibility costs. very few widely deployed implementations (with the exception of the long deprecated Windows XP) lack support for DHE_RSA *and* ECDHE_RSA at the same time -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Saturday 05 December 2015 23:54:25 Peter Gutmann wrote: > Hubert Kario writes: > >miTLS does accept Application Data when it is send between Client > >Hello and Client Key Exchange and rejects it when it is sent between > >Change Cipher Spec and Finished. > > Given that miTLS is a formally verified implementation, would this > imply that there's a problem with the verification? "Beware of bugs > in the above code; I have only proved it correct, not tried it"? This behaviour is dictated by the TLS 1.2 RFC, although partially indirectly:
- the acceptance of application data during subsequent handshakes is explicit
- the prohibition of application data between CCS and Finished is implicit, as it is only stated that the Finished MUST be the next message directly following the CCS. And since CCS and Finished have different content types, that means that the limitation is cross-content type, unlike for other handshake messages
So on the face of it, the behaviour of miTLS is correct. Now, as we've discussed on the OpenSSL bug tracker, this does cause problems if we have certificate-based client authentication and the TLS library returns client authentication data from the *new* handshake while it still has not received and processed the Finished message. If that is the case, then the attacker may force the server to process messages under an authority it has not yet verified. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
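A minimal sketch of the cross-content-type rule described above; the numeric constants follow the TLS content-type and handshake-type registries, but the record-loop structure itself is illustrative and not taken from any particular library:

    # after a ChangeCipherSpec record, the only acceptable next record is a
    # handshake record that starts with Finished; anything else (including
    # application data) should be rejected
    CHANGE_CIPHER_SPEC, HANDSHAKE, APPLICATION_DATA = 20, 22, 23
    FINISHED = 20  # handshake message type

    def check_order(records):
        # records: list of (content_type, first_handshake_type or None) tuples
        expect_finished = False
        for content_type, handshake_type in records:
            if expect_finished and (content_type != HANDSHAKE or
                                    handshake_type != FINISHED):
                raise ValueError("unexpected_message: Finished must follow CCS")
            expect_finished = (content_type == CHANGE_CIPHER_SPEC)

    try:
        check_order([(HANDSHAKE, 16), (CHANGE_CIPHER_SPEC, None),
                     (APPLICATION_DATA, None)])
    except ValueError as e:
        print(e)   # application data between CCS and Finished is rejected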
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Saturday 05 December 2015 19:20:11 Watson Ladd wrote: > On Sat, Dec 5, 2015 at 6:54 PM, Peter Gutmann wrote: > > Hubert Kario writes: > >>miTLS does accept Application Data when it is send between Client > >>Hello and Client Key Exchange and rejects it when it is sent > >>between Change Cipher Spec and Finished. > >> > > Given that miTLS is a formally verified implementation, would this > > imply that there's a problem with the verification? "Beware of > > bugs in the above code; I have only proved it correct, not tried > > it"? > > Are you saying there is a security flaw with the behavior described? > Because I don't believe there is after one adopts Extended Master > Secret. (Someone more familiar with the security should check this) Extended Master Secret doesn't come into play here at all. The attack requires just passive observation of a legitimate exchange for the attacker to have enough information to fake its identity, provided that the TLS library returns data to the application from a new handshake in renegotiation before the renegotiation has finished. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Sunday 06 December 2015 02:48:33 Peter Gutmann wrote: > Watson Ladd writes: > >please cite the sentence of the TLS RFC which prohibits accepting > >application data records during the handshake. > > Please cite the sentence of the TLS RFC which prohibits accepting SSH > messages during the handshake. > > Please cite the sentence of the TLS RFC which prohibits executing > /usr/games/hack during the handshake. > > Please cite the sentence of the TLS RFC which prohibits reformatting > the user's hard drive during the handshake. > > (This debate is pointless and probably annoying everyone else, so I'll > bow out now). Peter, I think you should go back to the beginning of the thread. (I'm sorry the necromancy makes that a bit hard, but there was a direct question aimed at me that I didn't have time to answer earlier, and I don't think we arrived at a conclusion before.) To summarise, RFC 5246 Section 6.2.1 states:
"Recipients MUST receive and process interleaved application layer traffic during handshakes subsequent to the first one on a connection."
At the same time, sections like 7.4.7 state:
"It [Client Key Exchange message] MUST immediately follow the client certificate message, if it is sent."
or, in Section 7.4.9:
"A Finished message is always sent immediately after a change cipher spec message."
The question is: which one takes precedence? -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fwd: Clarification on interleaving app data and handshake records
On Sunday 06 December 2015 02:33:39 Peter Gutmann wrote: > > No matter how you colour it, accepting > Application Data after a Client Hello is wrong. Is there any random, > non-formally-verified implementation that would do that? The discussion is about renegotiated handshakes, and yes, there is one: the Java implementation of TLS can send Application Data during subsequent handshakes. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] chacha/poly interop?
On Thursday 10 December 2015 01:02:49 Salz, Rich wrote: > OpenSSL just landed our chacha/poly implementation into master. We > pass the RFC test vectors, looking for other implementations to test > against. Thanks. I have an implementation in pure Python here: https://github.com/tomato42/tlslite-ng/blob/master/tlslite/utils/chacha20_poly1305.py There's also support for the obsolete draft-00 of the TLS ciphersuites. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
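For anyone who wants a quick independent cross-check, a round trip with the ChaCha20Poly1305 AEAD from the pyca/cryptography package (assuming a build of the package that exposes it) can be compared against the pure-Python code linked above or against the RFC test vectors:

    # round-trip sanity check with an independent ChaCha20-Poly1305 AEAD
    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = os.urandom(32)     # 256-bit key
    nonce = os.urandom(12)   # 96-bit nonce, as used by the TLS cipher suites
    aad = b"header"
    plaintext = b"interop test"

    ct = ChaCha20Poly1305(key).encrypt(nonce, plaintext, aad)
    assert ChaCha20Poly1305(key).decrypt(nonce, ct, aad) == plaintext
    print(len(ct))           # plaintext length plus 16-byte Poly1305 tag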
Re: [TLS] Data volume limits
On Tuesday 15 December 2015 20:01:58 Bill Frantz wrote: > So we have to trade off the risks of too much data vs. the risks > of a complex rekey protocol vs. the risks having the big data > applications build new connections every 2**36 or so bytes. > > If we don't have rekeying, then the big data applications are > the only ones at risk. If we do, it may be a wedge which can > compromise all users. If the rekey doesn't allow the application to change authentication tokens (as it now stands), then rekey is much more secure than renegotiation was in TLS <= 1.2. So if we include rekeying in TLS, I'd suggest setting its limit to something fairly low for big data transfers, that is gigabytes, not terabytes; otherwise we'll be introducing code that is simply not tested for interoperability (with AES-NI you can easily transfer gigabytes in just a few minutes). -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
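Back-of-the-envelope numbers for why a limit in the terabyte range would rarely be exercised in testing; the 2 GB/s figure is only an assumed bulk AES-GCM rate on an AES-NI machine, not a measurement:

    # rough numbers only; 2 GB/s is an assumed AES-GCM rate on an AES-NI CPU
    limit = 2**36                 # bytes, the figure mentioned above
    rate = 2 * 10**9              # bytes per second
    print("2**36 bytes = %d GiB" % (limit // 2**30))
    print("time to reach the limit: about %d seconds" % (limit // rate))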
Re: [TLS] PRF digest function for ChaCha20-Poly1305 cipher suites
On Monday 21 December 2015 14:54:23 Brian Smith wrote: > Eric Rescorla wrote: > > Sorry, I'm still confused TLS 1.2 uses a specific PRF. TLS 1.3 uses > > HKDF. Are you suggesting TLS 1.2 use the TLS 1.2 PRF with SHA-512 > > and that TLS 1.2 use SHA-512 with HKDF, or something different? > > I mean that TLS 1.2 should use SHA-512 with the TLS 1.2 PRF and that > TLS 1.3 should use SHA-512 with HKDF. > > > Nobody should pay attention to what the MTI cipher suite for TLS 1.2 > > is,> > >> because it's obsolete; in fact, one would be making a huge mistake > >> to > >> deploy it now if one's application didn't have legacy backward > >> compatibility concerns. And, we should change the MTI cipher suite > >> for TLS 1.3 to the ChaCha20-Poly1305 ones, because they solve a > >> lot of problems. For example, they remove any question of any need > >> to implement rekeying, they avoid the weird IV construction hacks > >> that are necessary for 128-bit cipher suites like AES-GCM, and > >> they can be implemented efficiently in a safe way, unlike AES-GCM. > > > > This seems like a separate question. > > You are the one that brought the MTI stuff into this, not me. > > > SHA-256-using cipher suites are widely deployed and not going away > > any time soon, so what resource are you trying to conserve here? > I'm trying to minimize the number of algorithms (amount of code) > necessary to implement ChaCha20-Poly1305 using x25519 for key > agreement and Ed25519 for signatures. The different between needing > or not needing SHA-256 matters most for very small computers (AVR and > Cortex-M0), but doesn't really matter much for larger computers where > SHA-256 has an advantage. > > In particular, since there seems to be a notable amount of hardware > that is or will soon be released that optimized for > ChaCha20-Poly1305+x25519+Ed25519, because of Apple HomeKit, it would > be nice to take advantage of that for TLS. > > Besides that, the inconsistency regarding why these new > 256-bit-encryption-key cipher suites are currently defined to use > SHA-256 in the PRF whereas all the existing 256-bit-encryption-key > cipher suites use SHA-384 seems strange. Even if an application wants > to use AES-GCM cipher suites, it would be able to avoid needing > SHA-256 if it implemented the AES256-GCM cipher suites instead of > AES128-GCM. I'm not convinced about SHA-512, but yes, they probably should use SHA-384 at the very least. And given that the algorithm for SHA-384 and SHA-512 is essentially the same, using just different IVs, that should be usable even for highly restricted hardware, shouldn't it? I would be against SHA-512, as that would be the very first cipher to use a SHA-512 PRF in TLS 1.2, making its addition/implementation much more invasive to the underlying library; OTOH, we have multiple ciphers which use a SHA-384 PRF. I just need to point out the delay with which NSS added support for SHA-384, compared to when the AES-128-GCM TLS ciphers were introduced... -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
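The observation that SHA-384 and SHA-512 share the same core algorithm can be checked directly with hashlib: SHA-384 differs only in its initial values and in truncating the output, so code or hardware that already has SHA-512 gets SHA-384 essentially for free:

    # SHA-384 and SHA-512 use the same 64-bit-word compression function;
    # SHA-384 just starts from different initial values and truncates the
    # digest to 48 bytes
    import hashlib

    msg = b"abc"
    print(hashlib.sha512(msg).hexdigest())   # 64-byte digest
    print(hashlib.sha384(msg).hexdigest())   # 48-byte digest, same core
    print(hashlib.sha384(msg).digest_size, hashlib.sha512(msg).digest_size)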
Re: [TLS] Data volume limits
On Monday 28 December 2015 21:08:10 Florian Weimer wrote: > On 12/21/2015 01:41 PM, Hubert Kario wrote: > > if the rekey doesn't allow the application to change authentication > > tokens (as it now stands), then rekey is much more secure than > > renegotiation was in TLS <= 1.2 > > You still have the added complexity that during rekey, you need to > temporarily switch from mere sending or receiving to at least > half-duplex interaction. This situation already happens in the initial handshake, so the implementation needs to support it. I don't see how rekey adds complexity here... -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Data volume limits
On Monday 04 January 2016 13:02:57 Florian Weimer wrote: > On 01/04/2016 12:59 PM, Hubert Kario wrote: > > On Monday 28 December 2015 21:08:10 Florian Weimer wrote: > >> On 12/21/2015 01:41 PM, Hubert Kario wrote: > >>> if the rekey doesn't allow the application to change > >>> authentication > >>> tokens (as it now stands), then rekey is much more secure than > >>> renegotiation was in TLS <= 1.2 > >> > >> You still have the added complexity that during rekey, you need to > >> temporarily switch from mere sending or receiving to at least > >> half-duplex interaction. > > > > this situation already happens in initial handshake so the > > implementation needs to support that > > But after and the handshake and without real re-key, sending and > receiving operations exactly match what the application requests. If > you need to switch directions against the application's wishes, you > end up with an API like OpenJDK's SSLEngine (or a callback variant > which is equivalent in complexity). For renegotiation, yes, but rekey doesn't need any input from the application, so there is no need for any callbacks. > Dealing with this during the initial handshake is fine. But > supporting direction-switching after that is *really* difficult. Yes, this is a bit more problematic, especially for one-sided transfers. For example, when one side is just sending a multi-gigabyte transfer as a reply to a single command, there may be megabytes transferred before the other side reads our request for rekey and then our "CCS" message. I thought you just meant the need to keep two cipher contexts in memory at the same time (the current one and the newly negotiated one). -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] A small detail in HMAC key generation for Finished message
On Thursday 24 December 2015 01:04:59 Christian Huitema wrote: > On Wednesday, December 23, 2015 3:05 PM, Eric Rescorla wrote: > >> Similarly, in the HKDF-Expand-Label, do we assume a final null byte > >> for the "label"?> > > No. I wonder if we should instead add the '\0' explicitly in the > > 4.8.1 for maximal clarity. > Either that, or just remove the trailing 00 from the binary > description. The 0-byte is a C-ism that looks like a wart to me. Neither of the previous TLS versions used null-terminated C-style strings, so why should TLS 1.3? Especially in just one place? -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] A small detail in HMAC key generation for Finished message
On Monday 04 January 2016 09:44:57 Eric Rescorla wrote: > On Mon, Jan 4, 2016 at 9:22 AM, Hubert Kario wrote: > > On Thursday 24 December 2015 01:04:59 Christian Huitema wrote: > > > On Wednesday, December 23, 2015 3:05 PM, Eric Rescorla wrote: > > > >> Similarly, in the HKDF-Expand-Label, do we assume a final null > > > >> byte > > > >> for the "label"?> > > > > > > > > No. I wonder if we should instead add the '\0' explicitly in the > > > > 4.8.1 for maximal clarity. > > > > > > Either that, or just remove the trailing 00 from the binary > > > description. > > > > the 0-byte is a C-ism that looks like a wart to me > > > > neither of the previous TLS versions used null-terminated C-style > > strings so why TLS1.3 should? Especially in just one place > > The idea is to make this prefix-free. I added it as an explicit byte > but would > be ok with a different separator as long as we banned it from the > context strings. Explicitly calling it a separator would be less confusing. Advising implementers to check the other values passed in for it, and to abort if it is detected, would be even better. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
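A toy illustration of the separator idea being discussed; this is not the exact HkdfLabel encoding from the draft, only the prefix-freeness property and the check suggested above:

    # a fixed separator byte keeps terminated labels prefix-free, provided
    # implementations refuse labels that contain the separator
    SEPARATOR = b"\x00"

    def terminate_label(label):
        if SEPARATOR in label:
            raise ValueError("separator byte not allowed inside a label")
        return label + SEPARATOR

    # "key" is no longer a prefix of "key expansion" once terminated
    print(terminate_label(b"key").hex())
    print(terminate_label(b"key expansion").hex())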
Re: [TLS] MD5 diediedie (was Re: Deprecating TLS 1.0, 1.1 and SHA1 signature algorithms)
On Tuesday 12 January 2016 05:32:08 Viktor Dukhovni wrote: > On Mon, Jan 11, 2016 at 10:42:45PM -0500, Dave Garrett wrote: > > No sane person disputes that MD5 needs to be eradicated ASAP. We're > > keeping MD5||SHA1 in old TLS for compatibility and we are well > > aware that needs to go eventually too. Thus, I suggest we publish > > an MD5 diediedie standards track RFC to prohibit ALL standalone MD5 > > use in ALL IETF > > protocols/standards. > > With some exceptions, for example: > > * As you note in your last comment, X.509 self-signatures via > MD5 may continue to be ignored, once MD5 is "banned" in the same > way that they should have been ignored before it was "banned". > > * S/MIME parsers may continue to parse old S/MIME messages with > MD/5 signatures. More generally, Encrypted data at rest may > need support for MD5 for the lifetime of the data (until > re-encrypted, ...). In the case of digital signatures, that means the "lifetime of the data"; you can't expect it to be possible to re-sign them. So it must not completely forbid the use of MD5 in implementations of things like PAdES-A, though it should strongly recommend allowing its use in only *very* specific circumstances. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Deprecating TLS 1.0, 1.1 and SHA1 signature algorithms
On Tuesday 12 January 2016 14:24:31 Martin Rex wrote: > Tony Arcieri wrote: > [ Charset UTF-8 unsupported, converting... ] > > > Peter Gutmann wrote: > >> The vulnerabilities shown in the SLOTH paper were based on the fact > >> that implementations still allow MD5 for authentication/integrity > >> protection, even if (for example) it's explicitly disabled in the > >> config. So the problem wasn't a fault in the protocol, it's buggy > >> implementations (as it was for ones that allowed 512-bit keys, > >> non-prime primes,>> > >> and so on). Throwing out TLS 1.1 based on this seems rather > >> premature. > Actually no, the TLSv1.2 made a few terribly braindead design choices > - newly introduce raw md5RSA digital signatures into TLSv1.2 in 2008 > where all prior TLS protocol versions, including SSLv3 had been using > the concatenation SHA-1||MD5 > - making the sha1RSA rather than sha256RSA digital signature > algorithm the default and mandatory-to-implement algorithm for use > with TLSv1.2(!!) although it was well-known weaker than the algorithm > (SHA-1||MD5) in all earlier TLS protocol versions, including SSLv3, > and in spite of SHA-1 already being officially scheduled for > end-of-life 2 years later (NIST, SP800-57 pt.1 rev2) > This is ridiculous considering that SHA-256 is mandatory-to-use > in the TLSv1.2 PRF. > - failing to adjust the truncation of the HMAC output in the > TLSv1.2 Finished handshake message to be at least half the size of > the underlying hash function (SHA-256), see RFC 2104 Section 5: > > https://tools.ietf.org/html/rfc2104#section-5 The problem stems from the fact that the same field is used for announcing support for signatures in the ServerKeyExchange *and* for the certificates provided by the server. While SKE signatures could easily have been made mandatory to be at least SHA-256, the deprecation of SHA-1 signatures for certificates certainly wasn't possible at the time - only now are we getting close to migrating away from them. So, it was a _bad_ decision, but calling it a "braindead" one is a bit over the top, sorry. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Deprecating TLS 1.0, 1.1 and SHA1 signature algorithms
On Monday 11 January 2016 17:28:33 Bill Frantz wrote: > On 1/11/16 at 4:32 PM, watsonbl...@gmail.com (Watson Ladd) wrote: > >Do the RFCs require the relevant checks or not? And given that > >implementations frequently get these sorts of things wrong, how do we > >make the standard robust against it? > > The best way I can think of is to test to see if the checks are > being done. For example, if a implementation is supposed to > check if a number is prime, send a non-prime and see if it takes > the correct action. > > Publicly available test suites would be a good step toward > implementing this strategy. shameful plug: https://github.com/tomato42/tlsfuzzer and the underlying https://github.com/tomato42/tlslite-ng -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
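As a concrete example of the kind of check being discussed (a peer verifying that an offered DH modulus really is prime), a probabilistic Miller-Rabin test is the usual tool; the sketch below is generic Python, not code taken from tlsfuzzer or tlslite-ng:

    # generic Miller-Rabin probabilistic primality test; a negative test
    # would offer a composite "prime" and expect the peer to refuse it
    import random

    def is_probable_prime(n, rounds=40):
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:          # write n-1 as d * 2**r with d odd
            d //= 2
            r += 1
        for _ in range(rounds):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    print(is_probable_prime(2**127 - 1))   # True: a well-known prime
    print(is_probable_prime(2**127 - 3))   # False: divisible by 5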
Re: [TLS] Fixing TLS
On Tuesday 12 January 2016 15:14:13 Dave Garrett wrote: > On Tuesday, January 12, 2016 03:03:42 pm Andrei Popov wrote: > > On Tuesday, January 12, 2016 02:39:15 pm Dave Garrett wrote: > > > I hope that Google's efforts to get QUIC as-is specced out go > > > quickly and smoothly, and that it can be used as a basis to > > > develop an official total TCP/TLS replacement.> > > If this were the path forward (and I doubt that it is), I would very > > much prefer Peter Gutman's evolutionary TLS 1.3. > I was just chatting a bit off-list, and apparently I wasn't aware of > QUIC's latest plans, so it's not as clear as I previously said. > Unfortunately, it seems that they have yet to actually write anything > down (a too frequent pattern with QUIC), so I can't really comment on > what I'd like to see happen in this realm anymore. > > In any case, ~whatever~ comes after TLS 1.3 will hopefully have some > major changes. I have no idea what that will be, but TLS 1.3 comes > first. That's a discussion for a future time. I remember this one quote, not sure who it is attributed to, but it goes something like this: "I don't know what will replace Ethernet, but I'm sure it will also be called Ethernet" -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fixing TLS
On Tuesday 12 January 2016 17:31:34 Watson Ladd wrote: > On Tue, Jan 12, 2016 at 5:12 PM, Peter Gutmann > > wrote: > > Yoav Nir writes: > > To expand on this, I'll take Ilari Liusvaara's comments: > >>Bleeding edge ideas? They essentially re-invented SIGMA, which is > >>over 10 years old. The basic framework for doing 0-RTT is the > >>obvious one. The only new algorithm prsent since TLS 1.2 is HKDF, > >>which is just 5 years old. > >> > >>So I don't see anything "experimential" ideas, mechanisms or > >>algorithms in there > >> > > When SSLv3 was introduced, it also used ideas that were 10-20 years > > old (DH, RSA, DES, etc, only SHA-1 was relatively new). They were > > mature algorithms, lots of research had been published on them, and > > yet we're still fixing issues with them 20 years later (DH = 1976, > > SSLv3 = 1996, Logjam = 2015). > We all understand that the security of a protocol is not a function > not of the primitives but of the way the protocol works. The confusion > between export and nonexport DH shares was noted almost immediately > in SSLv3. Furthermore, 512 bit DH is weak: I don't know how this is a > discovery in 2015, given that the reasons for this were all worked > out in the early 90's. So no, Logjam is not a result of unknown > issues appearing after 20 years, but ignoring known issues. > > > TLS 2.0-called-1.3 will roll back the 20 years of experience we have > > with all the things that can go wrong and start again from scratch. > > SIGMA, at ten years old, is a relative newcomer to DH's 20 years > > when it was used in SSLv3, but in either case we didn't discover > > all the problems with it until after the protocol that used it was > > rolled out. We currently have zero implementation and deployment > > experience with 2.0-called-1.3 [0], which means we're likely to > > have another 10-20 years of patching holes ahead of us. This is > > what I meant by "experimental, bleeding-edge". > > There is an old joke about the resume with one years experience > repeated 20 times. All of the problems in TLS have been known for > decades, as I've repeatedly demonstrated on this list. All of them > were known to cryptographers at the time TLS was being designed and > deployed. It does not take deployment to trigger analysis. Exactly this: BEAST and Lucky 13 "possible" problem was described in the RFC itself. Same thing for the "new" Bicycle attack - described in the RFC for TLS 1.0 and repeated in each version since. So lets not repeat those mistakes - if there are possible issues, lets fix those, now. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fixing TLS
On Wednesday 13 January 2016 15:11:47 Dmitry Belyavsky wrote: > Hello Hubert, > > On Wed, Jan 13, 2016 at 2:52 PM, Hubert Kario wrote: > > On Tuesday 12 January 2016 17:31:34 Watson Ladd wrote: > > > On Tue, Jan 12, 2016 at 5:12 PM, Peter Gutmann > > > > > > wrote: > > > > Yoav Nir writes: > > > > > > > > To expand on this, I'll take Ilari Liusvaara's comments: > > > >>Bleeding edge ideas? They essentially re-invented SIGMA, which > > > >>is > > > >>over 10 years old. The basic framework for doing 0-RTT is the > > > >>obvious one. The only new algorithm prsent since TLS 1.2 is > > > >>HKDF, > > > >>which is just 5 years old. > > > >> > > > >>So I don't see anything "experimential" ideas, mechanisms or > > > >>algorithms in there > > > >> > > > > When SSLv3 was introduced, it also used ideas that were 10-20 > > > > years > > > > old (DH, RSA, DES, etc, only SHA-1 was relatively new). They > > > > were > > > > mature algorithms, lots of research had been published on them, > > > > and > > > > yet we're still fixing issues with them 20 years later (DH = > > > > 1976, > > > > SSLv3 = 1996, Logjam = 2015). > > > > > > We all understand that the security of a protocol is not a > > > function > > > not of the primitives but of the way the protocol works. The > > > confusion between export and nonexport DH shares was noted almost > > > immediately in SSLv3. Furthermore, 512 bit DH is weak: I don't > > > know how this is a discovery in 2015, given that the reasons for > > > this were all worked out in the early 90's. So no, Logjam is not > > > a result of unknown issues appearing after 20 years, but ignoring > > > known issues. > > > > > > > TLS 2.0-called-1.3 will roll back the 20 years of experience we > > > > have > > > > with all the things that can go wrong and start again from > > > > scratch. > > > > > > > > SIGMA, at ten years old, is a relative newcomer to DH's 20 > > > > years > > > > > > > > when it was used in SSLv3, but in either case we didn't discover > > > > all the problems with it until after the protocol that used it > > > > was > > > > rolled out. We currently have zero implementation and > > > > deployment > > > > experience with 2.0-called-1.3 [0], which means we're likely to > > > > have another 10-20 years of patching holes ahead of us. This is > > > > what I meant by "experimental, bleeding-edge". > > > > > > There is an old joke about the resume with one years experience > > > repeated 20 times. All of the problems in TLS have been known for > > > decades, as I've repeatedly demonstrated on this list. All of them > > > were known to cryptographers at the time TLS was being designed > > > and > > > deployed. It does not take deployment to trigger analysis. > > > > Exactly this: BEAST and Lucky 13 "possible" problem was described in > > the RFC itself. Same thing for the "new" Bicycle attack - described > > in the RFC for TLS 1.0 and repeated in each version since. > > > > So lets not repeat those mistakes - if there are possible issues, > > lets fix those, now. > > But we should leave the description of the fixed problems somewhere to > avoid them in future. yes, decisions and recommendations should have rationale attached to them. And especially for recommendations, I don't see why we couldn't incorporate them in the RFC - at least for one, if the rationale is proven wrong it will be easier to explain why the recommendation should be disregarded. 
-- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Fixing TLS
On Wednesday 13 January 2016 12:32:05 Peter Gutmann wrote: > Hubert Kario writes: > >So lets not repeat those mistakes > > Exactly, there are more than enough new ones for 2.0-called-1.3 to > make that we don't (necessarily) have to repeat existing ones > (although I'm sure we will in some cases). > > And that's exactly my point, we're throwing away 20 years of refining > TLS 1.x and more or less starting again with 2.0-called-1.3, with a > whole new set of mistakes to make. I really don't want to spend the > next 20 years patching all the holes that will be found in > 2.0-called-1.3, I've already had enough of that for the 1.x version. The only things I saw in the "TLS 1.2.1" proposal that aren't already available are the longer Finished hash and a new signature type - something that an extension can easily fix; the rest is just a matter of setting a policy *and following it* with respect to the extensions and settings used. If you want to patch it up like this, please do. But TLS 1.3 fixes more problems. > TLS needs an LTS version that you can just push out and leave to its > own devices, for the same reason that other products also have LTS > versions, that lots of people have better things to do with their > life than playing bugfix whack-a-mole for the duration of it. You're asking for the impossible. The problems mentioned were not introduced into the protocols intentionally to make them obsolete; they are there because they weren't seen as big enough to fix. That's the mistake I say we should not repeat - "no issue left behind, no matter how small". -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Simplifying signature algorithm negotiation
On Friday 15 January 2016 17:13:29 Brian Smith wrote: > David Benjamin wrote: > > (Whether such certificates exist on the web is probably answerable > > via CT logs, but I haven't checked.) > > Me neither, and I think that's the key thing that would need to be > checked to see if my suggestion is viable. > > 3. You get better interoperability with TLS 1.2's NSA Suite B profile > [1]. > >> (I don't have any particular affinity for that profile other than > >> it seems to have made choices that have historically been shown to > >> be above average, and it might be a good idea to avoid interop > >> failure with other implementations that might have a special > >> affinity for it.) > > > > What interop faliures are you worried about here? > > The way I proposed things to work for TLS 1.3 is what the Suite B > profile does for TLS 1.2. A Suite B client cannot describe the Suite > B profile policy with the signature_algorithms extension as-is, so in > theory if a Suite B profile client even exists, it would work better > if servers assumed that ecdsa_sha256 implies P-256 and ecdsa_sha384 > implies P-384. I don't know if any such "Suite B client" actually > exists, though. OpenSSL since version 1.0.2 has a setting to enforce strict Suite B compliance -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Simplifying signature algorithm negotiation
On Friday 15 January 2016 20:45:34 David Benjamin wrote: > Hi folks, > > This is a proposal for revising SignatureAlgorithm/HashAlgorithm. In > TLS 1.2, signature algorithms are spread across the handshake. We > have SignatureAlgorithm, NamedGroup/Curve (for ECDSA), and > HashAlgorithm, all in independent registries. NamedGroup is sent in > one list, also used for (EC)DH, while the other two are sent as a > pair of (HashAlgorithm, SignatureAlgorithm) tuples but live in > separate registries. > > This is a lot of moving parts. Signature negotiation in TLS 1.2 tends > to be messy to implement. Client certificate keys may be in > smartcards via OS-specific APIs, so a lot of time is spent transiting > new preference shapes across API boundaries in order to discover > smartcard bugs. Sometimes I think people deploy client certs because > they hate me and want to cause me pain… :-) > > Anyway, the new CFRG curves also bind signature curve and hash > together. The current draft represents this as eddsa_ed25519 and > eddsa_ed448 NamedGroups and eddsa SignatureAlgorithm. But this > doesn’t capture that EdDSA + Ed25519 + SHA-256 is illegal. (Or ECDSA > + FF3072.) > > I propose we fold the negotiable parameters under one name. Think of > how we’ve all settled on AEADs being a good named primitive with a > common type signature[1]. Specifically: > > 1. Drop eddsa_ed25519(31) and eddsa_ed448(32) from NamedGroup. From > now on, NamedGroup is only used for (EC)DH. > > 2. Remove HashAlgorithm, SignatureAlgorithm, SignatureAndHashAlgorithm > as they are. Introduce a new SignatureAlgorithm u16 type and > negotiate that instead. (Or maybe a different name to not collide.) > u8 is a little tight to allocate eddsa_ed25519 and eddsa_ed448 > separately, but u16 is plenty. > > 3. Allocate values for SignatureAlgorithm wire-compatibly with TLS 1.2 > by (ab)using the old (HashAlgorithm, SignatureAlgorithm) tuples. > 0x0401 becomes rsa_pkcs1_sha256, etc. Reserve ranges consistently > with HashAlgorithm from TLS 1.2. Note this does not introduce new > premultiplications on the wire. Just in the spec and registry. > > 4. Deprecate ecdsa_sha256, etc., in favor of new > ecdsa_{p256,p384,p521}_{sha256,sha384,sha512} allocations. The old > ecdsa_* values are for TLS 1.2 compatibility but ignored in TLS 1.3. > Although this introduces new premultiplications, it’s only 9 values > with the pruned TLS 1.3 lists. I think this is worth 9 values to keep > NamedGroups separate. > > 5. Add new allocations for eddsa_ed25519, eddsa_ed448, and > rsapss_{sha256,sha384,sha512}. These come with the signature algorithm > and curve pre-specified. (See [2] at the bottom for full list of > allocations.) > > Thoughts? > > David > > [1] We’re stuck with RSA-PSS's generality, so that'll need some > mapping to a subset of X.509's RSA-PSS. We'll just not bother with > RSA-PSS with hashAlgorithm SHA-256, maskGenAlgorithm > MGF-7-v3.0-SHA-334-saltLengthQuotient-5/7, saltLength 87, trailerField > 14. And RSA key generation still has size parameter. Hopefully future > things can look more like Ed25519. > > [2] > 0x-0x06ff - Reserved range for TLS 1.2 compatibility values. Note > this is wire-compatible with TLS 1.2. > - 0x0101 - rsa_pkcs1_md5 > - 0x0201 - rsa_pkcs1_sha1 > - 0x0301 - rsa_pkcs1_sha224 > - 0x0401 - rsa_pkcs1_sha256 > - 0x0501 - rsa_pkcs1_sha334 > - 0x0601 - rsa_pkcs1_sha512 > - 0x{01-06}02 - dsa_md5, etc. Ignored in TLS 1.3. > - 0x{01-06}03 - ecdsa_md5, etc. Advertised for TLS 1.2 compatibility > but ignored in TLS 1.3. 
> > 0x0700-0xfdff - Allocate new values here. Optionally avoid 0x??0[0-3] > to avoid colliding with existing signature algorithms, but I don’t > think that’s necessary[3]. > - rsapss_sha256 > - rsapss_sha384 > - rsapss_sha512 > - ecdsa_p256_sha256 > - ecdsa_p256_sha384 > - ecdsa_p256_sha512 > - ecdsa_p384_sha256 > - ecdsa_p384_sha384 > - ecdsa_p384_sha512 > - ecdsa_p521_sha256 > - ecdsa_p521_sha384 > - ecdsa_p521_sha512 > - eddsa_ed25519 > - eddsa_ed448 Then what ECDHE share gets signed? if the same as the curve, what about FFDHE, what about ECDHE-RSA? why no - rsapss_dh2048_sha256 - rsapss_dh3072_sha256 - rsapss_dh4096_sha384 - (etc.) - rsapss_p256_sha256 - rsapss_p384_sha384 - (etc.) If it does not specify the DH share signed, it doesn't really change anything... -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
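As a concrete illustration of the wire-compatibility point in item 3 of the proposal, here is a minimal Python sketch (not part of the proposal itself) that reads the same 16-bit value both as a TLS 1.2 (HashAlgorithm, SignatureAlgorithm) pair and as one of the proposed single-name allocations; the names follow the draft text above.

HASHES = {1: "md5", 2: "sha1", 3: "sha224", 4: "sha256", 5: "sha384", 6: "sha512"}
SIGS = {1: "rsa_pkcs1", 2: "dsa", 3: "ecdsa"}

def as_tls12_pair(code):
    # TLS 1.2 reading: high byte is HashAlgorithm, low byte is SignatureAlgorithm.
    hash_id, sig_id = code >> 8, code & 0xFF
    return "{}_{}".format(SIGS.get(sig_id, "unknown"), HASHES.get(hash_id, "unknown"))

def as_proposed_name(code):
    # Proposed reading: the whole 16-bit value is a single named algorithm.
    names = {0x0401: "rsa_pkcs1_sha256", 0x0501: "rsa_pkcs1_sha384", 0x0601: "rsa_pkcs1_sha512"}
    return names.get(code, "unallocated (new-style values would live above 0x06ff)")

for code in (0x0401, 0x0501, 0x0601):
    print(hex(code), as_tls12_pair(code), "==", as_proposed_name(code))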
Re: [TLS] chacha/poly for http/2
On Wednesday 13 January 2016 17:48:37 Salz, Rich wrote: > We (OpenSSL) have already tested interop of chacha/poly with other > browsers and TLS stacks, and now it all works. (The official IETF > version, not the QUIC version). I was able to confirm interoperability between tlslite-ng[1] and current OpenSSL master (0e76014e584ba7), using draft-ietf-tls-chacha20- poly1305-04 implementation. 1 - https://github.com/tomato42/tlslite-ng/tree/chacha-ecdhe -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
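A rough way to run a similar client-side interop check is with Python's ssl module (an OpenSSL wrapper); the host name is a placeholder and this is not the tlslite-ng test setup referenced above.

import socket
import ssl

ctx = ssl.create_default_context()
# Pin the TLS 1.2 cipher list to the ChaCha20-Poly1305 suites so the
# handshake fails if the server does not support them.
ctx.set_ciphers("ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305")
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.cipher())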
Re: [TLS] Simplifying signature algorithm negotiation
On Tuesday 19 January 2016 16:50:18 David Benjamin wrote: > On Mon, Jan 18, 2016 at 6:48 AM Hubert Kario wrote: > > On Friday 15 January 2016 20:45:34 David Benjamin wrote: > > > Hi folks, > > > > > > This is a proposal for revising SignatureAlgorithm/HashAlgorithm. > > > In > > > TLS 1.2, signature algorithms are spread across the handshake. We > > > have SignatureAlgorithm, NamedGroup/Curve (for ECDSA), and > > > HashAlgorithm, all in independent registries. NamedGroup is sent > > > in > > > one list, also used for (EC)DH, while the other two are sent as a > > > pair of (HashAlgorithm, SignatureAlgorithm) tuples but live in > > > separate registries. > > > > > > This is a lot of moving parts. Signature negotiation in TLS 1.2 > > > tends > > > to be messy to implement. Client certificate keys may be in > > > smartcards via OS-specific APIs, so a lot of time is spent > > > transiting > > > new preference shapes across API boundaries in order to discover > > > smartcard bugs. Sometimes I think people deploy client certs > > > because > > > they hate me and want to cause me pain… :-) > > > > > > Anyway, the new CFRG curves also bind signature curve and hash > > > together. The current draft represents this as eddsa_ed25519 and > > > eddsa_ed448 NamedGroups and eddsa SignatureAlgorithm. But this > > > doesn’t capture that EdDSA + Ed25519 + SHA-256 is illegal. (Or > > > ECDSA > > > + FF3072.) > > > > > > I propose we fold the negotiable parameters under one name. Think > > > of > > > how we’ve all settled on AEADs being a good named primitive with a > > > common type signature[1]. Specifically: > > > > > > 1. Drop eddsa_ed25519(31) and eddsa_ed448(32) from NamedGroup. > > > From > > > now on, NamedGroup is only used for (EC)DH. > > > > > > 2. Remove HashAlgorithm, SignatureAlgorithm, > > > SignatureAndHashAlgorithm as they are. Introduce a new > > > SignatureAlgorithm u16 type and negotiate that instead. (Or maybe > > > a different name to not collide.) u8 is a little tight to > > > allocate eddsa_ed25519 and eddsa_ed448 separately, but u16 is > > > plenty. > > > > > > 3. Allocate values for SignatureAlgorithm wire-compatibly with TLS > > > 1.2 by (ab)using the old (HashAlgorithm, SignatureAlgorithm) > > > tuples. 0x0401 becomes rsa_pkcs1_sha256, etc. Reserve ranges > > > consistently with HashAlgorithm from TLS 1.2. Note this does not > > > introduce new premultiplications on the wire. Just in the spec > > > and registry. > > > > > > 4. Deprecate ecdsa_sha256, etc., in favor of new > > > ecdsa_{p256,p384,p521}_{sha256,sha384,sha512} allocations. The old > > > ecdsa_* values are for TLS 1.2 compatibility but ignored in TLS > > > 1.3. > > > Although this introduces new premultiplications, it’s only 9 > > > values > > > with the pruned TLS 1.3 lists. I think this is worth 9 values to > > > keep > > > NamedGroups separate. > > > > > > 5. Add new allocations for eddsa_ed25519, eddsa_ed448, and > > > rsapss_{sha256,sha384,sha512}. These come with the signature > > > algorithm and curve pre-specified. (See [2] at the bottom for > > > full list of allocations.) > > > > > > Thoughts? > > > > > > David > > > > > > [1] We’re stuck with RSA-PSS's generality, so that'll need some > > > mapping to a subset of X.509's RSA-PSS. We'll just not bother with > > > RSA-PSS with hashAlgorithm SHA-256, maskGenAlgorithm > > > MGF-7-v3.0-SHA-334-saltLengthQuotient-5/7, saltLength 87, > > > trailerField 14. And RSA key generation still has size parameter. > > > Hopefully future things can look more like Ed25519. 
> > > > > > [2] > > > 0x-0x06ff - Reserved range for TLS 1.2 compatibility values. > > > Note > > > this is wire-compatible with TLS 1.2. > > > - 0x0101 - rsa_pkcs1_md5 > > > - 0x0201 - rsa_pkcs1_sha1 > > > - 0x0301 - rsa_pkcs1_sha224 > > > - 0x0401 - rsa_pkcs1_sha256 > > > - 0x0501 - rsa_pkcs1_sha334 > > > - 0x0601 - rsa_pkcs1_sha512 > > > - 0x{01-06}02 - dsa_md5, etc. Ignored in TLS 1.3. > > > - 0x{01-06}03 - ecdsa_md5, etc. Advertised for TLS 1.2 > > > compatibility > > > but ignored in TLS 1.3. > > > > > > 0x0700-0xfdff - Allocate new v
Re: [TLS] Case for negotiation of PKCS#1.5 RSASSA-PKCS1-v1_5 in TLS 1.3
On Thursday 21 January 2016 18:25:00 Andrey Jivsov wrote: > Current draft of TLS 1.3 [1] mandates RSA-PSS in TLS handshake by the > following language in sec 4.8.1 > > > In RSA signing, the opaque vector contains the signature > > generated > > using the RSASSA-PSS signature scheme defined in [RFC3447 > > <http://tools.ietf.org/html/rfc3447>] with MGF1. The digest > > used in the mask generation function MUST be the same as the > > digest which is being signed (i.e., what appears in > > algorithm.signature). The length of the salt MUST be equal to > > the > > length of the digest output. Note that previous versions of TLS > > used > > RSASSA-PKCS1-v1_5, not RSASSA-PSS. > > The > > >struct { > > > > SignatureAndHashAlgorithm algorithm; > > opaque signature<0..2^16-1>; > > > >} DigitallySigned; > > defines RSA PKCS#1 1.5 and RSA PSS as "rsa" and "rsapss", see sec A.3.1.1: > >enum { > > > >rsa(1), > >dsa(2), > >ecdsa(3), > >rsapss(4), > >eddsa(5), > >(255) > > > >} SignatureAlgorithm; > > since draft -09 (posted Oct 2015). "rsa" applies to X.509 certificates > only. > > > Many implementers of TLS 1.3 expressed desire for the TLS 1.3 to be as > frictionless as possible regarding the upgrade of existing TLS > installations to TLS 1.3. We should expect that all TLS 1.3 servers > and clients will have support for older versions of TLS on the same > node. Ideally, it should be possible to upgrade the software / > firmware to add TLS 1.3 support on existing hardware with minimal > penalty. The transition to TLS 1.3 is not urgent matter. Making sure that it is as robust as possible is of higher importance than "making it easy to implement for existing TLS1.2 implementations". That's right there in the charter: https://datatracker.ietf.org/wg/tls/charter/ > The current list of FIPS 140 products that support RSA shows twice as > many products that support RSASSA-PKCS1_V1_5 than these that support > RSASSA-PSS [4]. There is greater than 50% chance to lose FIPS > certification with TLS 1.3, factoring client auth and servers. You also need a FIPS certified implementation of HKDF. So yes, it most likely will require new certifications. > The only solution that's available at this point is conditioning TLS > 1.3 support on appropriate hardware. For this reason TLS 1.3 it > probably won't be enabled by default in the product I work on. I > would prefer for TLS 1.3 to be enabled by default and write the code > to decide whether it does PSS or falls back to RSA PKCS1 1.5. Yes, it would be nice. But PKCS#1 v1.5 had it long coming. Not cutting it off now would be negligent. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
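For reference, a minimal sketch of a signature with the parameters the quoted draft text mandates — RSASSA-PSS with MGF1, the MGF1 digest equal to the signed digest, and salt length equal to the digest length — using the pyca/cryptography package; the message bytes are a placeholder.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
digest = hashes.SHA256()
pss = padding.PSS(mgf=padding.MGF1(digest), salt_length=digest.digest_size)

message = b"placeholder for the data covered by the signature"
signature = key.sign(message, pss, digest)
key.public_key().verify(signature, message, pss, digest)  # raises on failure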
Re: [TLS] Case for negotiation of PKCS#1.5 RSASSA-PKCS1-v1_5 in TLS 1.3
On Friday 22 January 2016 10:39:26 Andrey Jivsov wrote: > On 01/22/2016 03:14 AM, Hubert Kario wrote: > >> The only solution that's available at this point is conditioning > >> TLS > >> 1.3 support on appropriate hardware. For this reason TLS 1.3 it > >> probably won't be enabled by default in the product I work on. I > >> would prefer for TLS 1.3 to be enabled by default and write the > >> code > >> to decide whether it does PSS or falls back to RSA PKCS1 1.5. > > > > Yes, it would be nice. But PKCS#1 v1.5 had it long coming. Not > > cutting it off now would be negligent. > > You mean for HS only, while leaving it for X.509 certs? If we don't do it for HS in TLS first, we'll never get rid of it in X.509 certs. We need to start somewhere, and it's more reasonable to expect that hardware with support for new protocols will get updated for RSA-PSS handling than that libraries and hardware will suddenly start implementing it in droves just in anticipation of the time when CAs _maybe_ will start issuing certificates signed with RSA-PSS. > More importantly, note that while I understand the intent to increase > security by mandating PSS in TLS 1.3, in practice it doesn't work. We had to wait for XP SP2 to be over 10 years old for the CA's to even _consider_ using anything but SHA-1 for signatures. And most of them did that only on explicit request when requesting certificates. If new signature types can't be deployed for new protocols they will never get deployed. There always will be implementations that do the absolute minimum to get interoperability *right now* and not a single extra step. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Require deterministic ECDSA
On Sunday 24 January 2016 02:04:28 Dave Garrett wrote: > On Saturday, January 23, 2016 07:47:11 pm Michael StJohns wrote: > > 1) A receiver of a deterministic ECDSA signature verifies it > > EXACTLY > > like they would a non-deterministic signature. > > 2) A receiver of an ECDSA signature cannot determine whether or not > > the signer did a deterministic signature. > > 3) A TLS implementation has no way (absent repeating signatures over > > identical data) of telling whether or not a given signature using > > the > > client or server private key is deterministic. > > > > All that suggests that this is a completely unenforceable > > requirement > > with respect to TLS. > > We can have unverifiable & unenforceable MUSTs. A SHOULD might be more > appropriate, however, if we want to acknowledge this limitation to > some degree. A MUST is only necessary if you are not sure about your RNG, or simply know that it is broken; if you're doing an HSM implementation, you know that your RNG is good, so you can just use it. And while we can have unverifiable MUSTs, it just looks silly to do so, especially when the other way of doing things is just as interoperable, and just as secure (if implemented properly), as the mandated one... A SHOULD with an explanation of why it's there is definitely the better approach. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
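A small sketch of the only black-box check mentioned above — repeating a signature over identical data — using the pyca/cryptography package. It needs access to the signing key, which is exactly why a TLS peer cannot perform it.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
data = b"identical data"

sig1 = key.sign(data, ec.ECDSA(hashes.SHA256()))
sig2 = key.sign(data, ec.ECDSA(hashes.SHA256()))

# This library picks the nonce at random, so the two signatures differ;
# an RFC 6979 (deterministic) signer would return identical ones.
print("deterministic" if sig1 == sig2 else "randomized")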
Re: [TLS] Case for negotiation of PKCS#1.5 RSASSA-PKCS1-v1_5 in TLS 1.3
On Monday 25 January 2016 10:29:18 Benjamin Kaduk wrote: > On 01/22/2016 01:14 PM, Hubert Kario wrote: > > On Friday 22 January 2016 10:39:26 Andrey Jivsov wrote: > >> On 01/22/2016 03:14 AM, Hubert Kario wrote: > >>>> The only solution that's available at this point is conditioning > >>>> TLS > >>>> 1.3 support on appropriate hardware. For this reason TLS 1.3 it > >>>> probably won't be enabled by default in the product I work on. I > >>>> would prefer for TLS 1.3 to be enabled by default and write the > >>>> code > >>>> to decide whether it does PSS or falls back to RSA PKCS1 1.5. > >>> > >>> Yes, it would be nice. But PKCS#1 v1.5 had it long coming. Not > >>> cutting it off now would be negligent. > >> > >> You mean for HS only, while leaving it for X.509 certs? > > > > If we don't do it for HS in TLS first, we'll never get rid of it in > > X.509 certs. > > > > We need to start somewhere, and it's more reasonable to expect that > > hardware with support for new protocols will get updated for RSA-PSS > > handling than that libraries and hardware will suddenly start > > implementing it in droves just in anticipation of the time when CAs > > _maybe_ will start issuing certificates signed with RSA-PSS. > > Isn't it more a matter of TLS being a consumer of external PKIX > infrastructure, the web PKI, etc.? They are out of the reach of the > IETF TLS working group; any requirements we attempted to impose would > be unenforceable, even if there was an Internet Police (which there > is not). TLS will happily use PKCS#1 v1.5 signed X.509 certificates, so how exactly is creating a side effect of increasing the deployment rate of RSA-PSS _in TLS implementations_ an "overreach"?! -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Case for negotiation of PKCS#1.5 RSASSA-PKCS1-v1_5 in TLS 1.3
On Monday 25 January 2016 19:32:57 Andrey Jivsov wrote: > On 01/25/2016 03:11 PM, Russ Housley wrote: > > On Jan 25, 2016, at 2:43 PM, Hubert Kario wrote: > >> On Monday 25 January 2016 10:29:18 Benjamin Kaduk wrote: > >>> On 01/22/2016 01:14 PM, Hubert Kario wrote: > >>>> On Friday 22 January 2016 10:39:26 Andrey Jivsov wrote: > >>>>> On 01/22/2016 03:14 AM, Hubert Kario wrote: > >>>>>>> The only solution that's available at this point is > >>>>>>> conditioning > >>>>>>> TLS > >>>>>>> 1.3 support on appropriate hardware. For this reason TLS 1.3 > >>>>>>> it > >>>>>>> probably won't be enabled by default in the product I work on. > >>>>>>> I > >>>>>>> would prefer for TLS 1.3 to be enabled by default and write > >>>>>>> the > >>>>>>> code > >>>>>>> to decide whether it does PSS or falls back to RSA PKCS1 1.5. > >>>>>> > >>>>>> Yes, it would be nice. But PKCS#1 v1.5 had it long coming. Not > >>>>>> cutting it off now would be negligent. > >>>>> > >>>>> You mean for HS only, while leaving it for X.509 certs? > >>>> > >>>> If we don't do it for HS in TLS first, we'll never get rid of it > >>>> in > >>>> X.509 certs. > >>>> > >>>> We need to start somewhere, and it's more reasonable to expect > >>>> that > >>>> hardware with support for new protocols will get updated for > >>>> RSA-PSS > >>>> handling than that libraries and hardware will suddenly start > >>>> implementing it in droves just in anticipation of the time when > >>>> CAs > >>>> _maybe_ will start issuing certificates signed with RSA-PSS. > >>> > >>> Isn't it more a matter of TLS being a consumer of external PKIX > >>> infrastructure, the web PKI, etc.? They are out of the reach of > >>> the > >>> IETF TLS working group; any requirements we attempted to impose > >>> would > >>> be unenforceable, even if there was an Internet Police (which > >>> there > >>> is not). > >> > >> TLS will happily use PKCS#1 v1.5 signed X.509 certificates, so how > >> exactly is creating a side effect of increasing the deployment rate > >> of RSA-PSS _in TLS implementations_ an "overreach"?! > > > > I have been a supporter of PSS for a very long time -- see RFC 4055. > > > > We have many algorithm transition issues, but this is one place > > where we have seen very little progress. I would like to see > > support for PSS in the protocol, even if we need to support PKCS > > v1.5 for certificate signatures for a long time. > Is there evidence that hard-wiring {PSS} in HS and {PSS, PKCS#1 1.5} > with X.509 certs will lead to better PSS adoption than if {PSS, PKCS#1 > 1.5} were available in both HS and X.509 certs? Because if PKCS#1 1.5 is available for HS, many implementations still won't implement PSS and we won't move one step. OTOH, if the PSS is mandatory, they have a clear need to add this support. 8% of servers in Alexa top 1 million websites still won't sign the Server Key Exchange with anything but SHA-1[1], despite the fact that you need to have an implementation of SHA-256 to implement TLSv1.2 in the first place. > The underlying reasons why CAs can't sign with PSS v.s. TLS server or > client are probably overlapping in many cases: FIPS 140, HSM, > hardware. The all-or-nothing approach to PSS sin HS eems inconsistent > with traditional feature negotiation in TLS HS. PSS in X.509 is not usable now because only fraction of clients and servers support it. That's why CA's don't sign certificates with it - to minimize support costs and reissue rates (in cases the customer finds out that he needs the "legacy" certificate). 
1 - https://securitypitfalls.wordpress.com/2015/12/07/november-2015-scan-results/ -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic signature.asc Description: This is a digitally signed message part. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Possible TLS 1.3 erratum
On Monday, 19 July 2021 14:06:41 CEST, Peter Gutmann wrote: Ilari Liusvaara writes: Actually, I think this is quite a messy issue: It certainly is. Signature schemes 0x0403, 0x0503 and 0x0603 alias signature algorithm 3 hash 4, 5 and 6. However, those two things are not the same, because the former have curve restriction, but the latter do not. That and the 25519/448 values are definitely the weirdest of the lot. In particular the value 0x03 means P256 when used with SHA256, P384 when used with SHA384, and P521 when used with SHA512. So one algorithm one could use is: - Handle anything with signature 0-3/224-255 and hash 0-6/224-255 as signature/hash pair. - Display schemes 0x0840 and 0x0841 specially. - Handle anything else as signature scheme. I think an easier, meaning with fewer special cases, way to handle it is for a TLS 1.2 implementation to treat the values defined in 5246 as { hash, signature } pairs and for TLS 1.3 and newer implementations to treat all values as 16-bit cipher suites, combined with a reworking of the definitions, e.g. to define the "ed25519" suite in terms of the curve and hash algorithm, not just "Ed25519 and you're supposed to know the rest". The reason is that some TLS implementations have a very hard time supporting RSA-PSS certificates. But why should the TLS layer care about what OID is used to represent an RSA key in a certificate? The signature at the TLS level is either a PSS signature or it isn't, it doesn't matter which OID is used in the certificate that carries the key. It only doesn't matter if you don't want to verify the certificate... It's one thing to be able to verify an RSA-PSS signature on the TLS level, it's entirely another to be able to properly handle all the different RSA-PSS limitations when using it in the SPKI in X.509. More to the point, the TLS layer may have no way to determine which OID is used in the certificate, it's either an RSA key or not, not "it's an RSA key with OID A" or "it's an RSA key with OID B". So I think for bis the text should rename rsa_pss_rsae_xxx to just rsa_pss_xxx and drop rsa_pss_pss_xxx, which I assume has never been used anyway because I don't know of any public CA that'll issue a certificate with a PSS OID. That's because browsers don't have the code to handle RSA-PSS certificates. But that doesn't mean that there is no code that can do that. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
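A direct transcription of the three-way classification sketched above into Python, purely for illustration; the return strings are mine.

def classify(code):
    # High byte was HashAlgorithm, low byte was SignatureAlgorithm in TLS 1.2.
    hash_id, sig_id = code >> 8, code & 0xFF
    if code in (0x0840, 0x0841):
        return "display specially"
    if (sig_id <= 3 or sig_id >= 224) and (hash_id <= 6 or hash_id >= 224):
        return "signature/hash pair"
    return "signature scheme"

for code in (0x0403, 0x0603, 0x0804, 0x0807, 0x0840):
    print(hex(code), classify(code))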
Re: [TLS] Possible TLS 1.3 erratum
On Monday, 19 July 2021 21:37:08 CEST, Peter Gutmann wrote: Hubert Kario writes: It only doesn't matter if you don't want to verify the certificate... It's one thing to be able to verify an RSA-PSS signature on the TLS level, it's entirely another to be able to properly handle all the different RSA-PSS limitations when using it in the SPKI in X.509. Is there anything that's jumped through all the hoops to implement the complex mess that is PSS but then not added the few lines of code you need to verify it in certificates? And if so, why? I suggest you go back to the RFCs and check exactly what is needed for proper handling of the RSA-PSS Subject Public Key type in X.509. Specifically when the "parameters" field is present. You definitely won't be able to implement it in just a "few lines". In any case it's still encoding a minor implementation artefact of the certificate library being used into the TLS protocol, where it has absolutely no place. You either do PSS or you don't, and the TLS layer doesn't need to know what magic number you use to identify it in certificates. 1. It's not minor 2. "What certificates can the peer accept" is totally within the purview of TLS. It's like that for Raw keys, it's like that for GPG certificates, it's like that for RSA vs ECDSA vs DSA certificates, and now it's also for RSA-PSS. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Possible TLS 1.3 erratum
On Tuesday, 20 July 2021 16:18:38 CEST, Peter Gutmann wrote: Hubert Kario writes: I suggest you go back to the RFCs and check exactly what is needed for proper handling of RSA-PSS Subject Public Key type in X.509. Specifically when the "parameters" field is present. Looking at the code I'm using, it's four lines of extra code for PSS when reading sigs and four lines extra when writing (OK, technically seven if you include the "if" statement and curly braces lines). And that code will reject a SHA-512 signature if it was made by a certificate with hash algorithm of SHA-256? What about MGF? Salt length? Will it reject PKCS#1 v1.5 signatures made with such a key? It's one thing to be able to read a certificate with those parameters, it's completely different to actually implement the standard. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
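To make the point concrete, here is a hypothetical sketch of the consistency rules being alluded to, as I read RFC 4055: an RSA-PSS key with a parameters field pins the hash, the MGF1 hash, and a minimum salt length, and rules out PKCS#1 v1.5 signatures entirely. The names are made up for illustration, not any library's API.

from dataclasses import dataclass

@dataclass
class PssParams:
    hash_alg: str      # e.g. "sha256"
    mgf1_hash: str     # digest used inside MGF1
    salt_length: int   # bytes

def signature_allowed_by_key(key_params, sig_scheme, sig_params=None):
    if key_params is None:
        return True  # SPKI without parameters imposes no restrictions
    if sig_scheme != "rsassa-pss":
        return False  # a restricted RSA-PSS key must not make PKCS#1 v1.5 signatures
    return (sig_params.hash_alg == key_params.hash_alg
            and sig_params.mgf1_hash == key_params.mgf1_hash
            and sig_params.salt_length >= key_params.salt_length)

key = PssParams("sha256", "sha256", 32)
print(signature_allowed_by_key(key, "rsassa-pss", PssParams("sha512", "sha512", 64)))  # False
print(signature_allowed_by_key(key, "rsassa-pkcs1-v1_5"))                              # False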
Re: [TLS] I-D Action: draft-ietf-tls-md5-sha1-deprecate-08.txt
On Friday, 3 September 2021 18:00:12 CEST, internet-dra...@ietf.org wrote: A New Internet-Draft is available from the on-line Internet-Drafts directories. This draft is a work item of the Transport Layer Security WG of the IETF. Title : Deprecating MD5 and SHA-1 signature hashes in (D)TLS 1.2 Authors : Loganaden Velvindron Kathleen Moriarty Alessandro Ghedini Filename: draft-ietf-tls-md5-sha1-deprecate-08.txt Pages : 6 Date: 2021-09-03 Abstract: The MD5 and SHA-1 hashing algorithms are increasingly vulnerable to attack and this document deprecates their use in TLS 1.2 digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. This document updates RFC 5246. The IETF datatracker status page for this draft is: https://datatracker.ietf.org/doc/draft-ietf-tls-md5-sha1-deprecate/ There is also an htmlized version available at: https://datatracker.ietf.org/doc/html/draft-ietf-tls-md5-sha1-deprecate-08 Servers SHOULD NOT include MD5 and SHA-1 in CertificateRequest messages. Clients MUST NOT include MD5 and SHA-1 in CertificateVerify messages. If a server receives a CertificateVerify message with MD5 or SHA-1 it MUST abort the connection with handshake_failure or insufficient_security alert. As written, this would make already existing implementations not RFC compliant when they are configured to not support SHA-1. RFC5246 requires the server to abort with illegal_parameter if the CV included an algorithm that wasn't advertised in CR. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
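A sketch of the server-side decision at issue, with alert names taken from RFC 5246 and the draft; the function and its inputs are illustrative only.

def alert_for_certificate_verify(cv_hash, advertised_hashes):
    if cv_hash not in advertised_hashes:
        # RFC 5246: the algorithm must be one the server offered in CertificateRequest.
        return "illegal_parameter"
    if cv_hash in ("md5", "sha1"):
        # The deprecation draft's requirement, for servers that still advertise them.
        return "handshake_failure"
    return None  # continue the handshake

print(alert_for_certificate_verify("sha1", {"sha256", "sha384"}))  # illegal_parameter
print(alert_for_certificate_verify("sha1", {"sha1", "sha256"}))    # handshake_failure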
Re: [TLS] tls-flags: abort on malformed extension
On Thursday, 7 October 2021 20:37:22 CEST, Yoav Nir wrote: OK, so now my response: I agree with the first and second comments. About the third, what I meant was that a supported flag that is supposed to appear only in CH appears instead in CR, or more likely, a flag that should appear in EE appears in SH instead. But I think the best way to resolve the issue is to remove the bullet point list and the last sentence before them, IOW: remove the examples. Clients aborting on unrecognised flags in SH or EE is expected, that's what already happens for normal extensions, but yes, it's too strong for CH. Maybe also NST. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [Uta] OCSP in RFC7525bis
On Friday, 21 January 2022 05:51:22 CET, Ryan Sleevi wrote: On Thu, Jan 20, 2022 at 10:31 PM Daniel Kahn Gillmor wrote: This sounds a lot like a "SHOULD BUT WE KNOW YOU WONT". Why would a client deliberately fail a connection when the problem might be a flaw with an unrelated network service or a client-specific routing failure? I think we can definitely explicitly recommend: A) clients MUST require valid stapled OCSP response when encountering a certificate with "must staple" extension. (this is just following the specs, but i don't think it's as widely supported as it should be; maybe we need some public naming/shaming?) Isn't this also a "MUST, BUT WE KNOW YOU WON'T AND PROBABLY SHOULDN'T"? There are good reasons that clients have not, and potentially will never, support Must-Staple, whether it be for the technical reasons that many servers are unfit to support it, or for policy reasons, such as wanting to be careful about the security policies of their products, and how much of that is outsourced to CAs. The choice about whether to require stapling or not _is_ a policy decision relevant not only to server operators, but also relying parties, and can be easily abused by CAs if given that lever. Given the concerning practices already seen with respect to revocation, which are detrimental to the security goals of both server operators and end users, a full-throated MUST seems a bit incompatible with the notion of allowing policy flexibility. For example, in a world where a client delivers revocation information out of band, as nearly every major web browser does today (as one example), "must staple" is of questionable benefit. Browsers are the only software that use browser's implementation of certificate verification and revocation. And while they are significant users of TLS, they're definitely not the only important users of TLS. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [Uta] OCSP in RFC7525bis
On Monday, 31 January 2022 21:18:52 CET, Ryan Sleevi wrote: On Mon, Jan 31, 2022 at 12:08 PM Hubert Kario wrote: Browsers are the only software that use a browser's implementation of certificate verification and revocation. And while they are significant users of TLS, they're definitely not the only important users of TLS. In the context of the thread, it’s hopefully clear I was not trying to argue they are the only important user, but rather, a demonstration of a practical alternative to deliver this information. That said, on platforms like Apple’s *OS family (mac/i/tv), and, to a lesser extent, Windows and Android, such distribution _is_ system wide, and TLS-using applications, including non-browser, don’t need to take any special action. I'm not aware of any OneCRL-like functionality in Windows... Do you have some pointers for that? Or are you talking just about the fact that Windows downloads and stores CRLs system wide? It’s really only in Linux that there isn’t some form of system-wide capability available, and although Linux remains significant in this space, it shouldn’t be used to preclude more holistic approaches. The CA store used by OpenSSL as the -CAdir or X509_LOOKUP_hash_dir[1] can store CRLs too, making sort-of system-wide certificate revocation without the need for OCSP possible too (NSS also supports a system-wide CRL store, I think only GnuTLS doesn't). 1 - https://www.openssl.org/docs/man1.1.1/man3/X509_load_cert_crl_file.html -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
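A sketch of the directory-based CRL checking described above, through Python's ssl module (another OpenSSL wrapper); the capath value and host are placeholders, and the handshake will fail unless a CRL for the leaf's issuer is actually present in the directory.

import socket
import ssl

ctx = ssl.create_default_context()
ctx.load_verify_locations(capath="/etc/pki/tls/certs")  # hashed CA certs and CRLs
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF           # require a CRL for the leaf's issuer

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("revocation checked against the local CRL store:", tls.version())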
Re: [TLS] OCSP and browsers
On Friday, 16 September 2022 17:42:08 CEST, Salz, Rich wrote: I think this is of general interest, so I’m posting here rather than poking friends I know. Browsers are phasing out doing OCSP queries themselves. The common justification, which makes sense to me, is that there are privacy concerns about leaking where a user is surfing. My question is, what are browsers doing, and planning, on doing about OCSP stapled responses? I think there are three possibilities: No stapled response A stapled, valid, “good” response A stapled, expired or “bad” response I can imagine two possibilities, proceeding or popping up a warning page. I haven’t seen the warning when there is no OCSP response, but maybe that does happen. We’re still going to staple good responses, when we have them, but I am wondering if long-term we should still bother? 1. there is the RFC 7633 2. as long as certificates with long life-times (year or more) are common, I think it's useful The problem is that OCSP and OCSP stapling is not a feature making headlines, next to nobody will be deploying a self-compiled NGINX or Apache just to get support for OCSP stapling. So in practice, for OCSP stapling to become common, the implementations of those need to filter down to long-term supported distributions... -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
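For completeness, a sketch of checking a certificate for the RFC 7633 "must staple" marker with the pyca/cryptography package; the file name is a placeholder.

from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    ext = cert.extensions.get_extension_for_class(x509.TLSFeature)
    must_staple = x509.TLSFeatureType.status_request in ext.value
except x509.ExtensionNotFound:
    must_staple = False

print("must staple" if must_staple else "stapling optional")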
Re: [TLS] OCSP and browsers
On Sunday, 2 October 2022 15:13:31 CEST, Salz, Rich wrote: Now we have ACME, why not move to 3 day certs issued daily and avoid the need for revocation entirely? Not all CA's in use on the WebPKI support ACME. Automating a single-host to renew every 48 hours (have to allow for faults and retries) is okay, as long as you are confident your site will not be down during the "get new cert" window. As you scale up to millions of sites and/or thousands of locations, it's much less simple. But I'm still looking for an answer about what browsers and OCSP see as their future. The same thing they did for the past 30 years: try to ignore it. It's just that we now have the OneCRL for the "Too Big To Fail" websites (/s). -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
I oppose deprecation. Given that we're still a ways off from standardised post-quantum key exchanges, use of FFDHE with large key sizes is the best protection against store-and-decrypt-later attacks (buying likely years of additional protection). I think the deprecation is premature. While FFDHE is far from perfect, in practical deployments none of the proposed attacks against it are practical (yes, static FFDH is vulnerable in TLSv1.2 but it's still a harder attack than against static RSA with Bleichenbacher-like attacks). Thus the deprecation of it is a matter of taste, not cryptographic necessity. If anything, RSA key exchange should be deprecated first. RFC 8446 deprecated only the DSA ciphersuites, not RSA. On Tuesday, 13 December 2022 15:46:29 CET, Sean Turner wrote: During the tls@IETF 115 session topic covering draft-ietf-tls-deprecate-obsolete-kex, the sense of the room was that there was support to deprecate all FFDHE cipher suites including well-known groups. This message starts the process to judge whether there is consensus to deprecate all FFDHE cipher suites including those well-known groups. Please indicate whether you do or do not support deprecation of FFDHE cipher suites by 2359UTC on 6 January 2023. If you do not support deprecation, please indicate why. NOTE: We had an earlier consensus call on this topic when adopting draft-ietf-tls-deprecate-obsolete-kex, but the results were inconclusive. If necessary, we will start consensus calls on other issues in separate threads. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Tuesday, 20 December 2022 19:37:14 CET, Rob Sayre wrote: On Tue, Dec 20, 2022 at 4:53 AM Hubert Kario wrote: Thus the deprecation of it is a matter of taste, not cryptographic necessity. I'm sorry if I'm being dense here, but isn't all of this a SHOULD NOT in RFC 9325? https://www.rfc-editor.org/rfc/rfc9325.html#name-recommendations-cipher-suit Maybe I'm misreading that RFC, but given that it's a BCP, it seems like deprecation is a natural step that reflects IETF consensus. That RFC marks both TLS_RSA_* and TLS_DHE_* as "SHOULD NOT". Given that the former is still being exploited close to 25 years after the Bleichenbacher attack was discovered, while the latter is basically unexploitable with properly behaving hosts in TLSv1.2, I don't think it's correct to consider them at the same level. Yes, if you have ECDHE available, you SHOULD NOT use DHE in TLSv1.2. But if everything you have is either TLS_RSA_* or TLS_DHE_*, then you're far better off with TLS_DHE_*. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Tuesday, 20 December 2022 23:56:22 CET, Martin Thomson wrote: On Tue, Dec 20, 2022, at 23:52, Hubert Kario wrote: use of FFDHE with large key sizes is the best protection against store-and-decrypt-later attacks This doesn't deprecate use of FFDHE in TLS 1.3, for which we have some ludicrously large named groups. Is that not enough? Not everybody has migrated to TLS 1.3 yet. Not everybody has migrated to ECC. For the people that still have the only options of RSA key exchange or FFDHE key exchange, both in TLS 1.2, we need to be crystal clear that they should pick FFDHE. Telling people that they shouldn't use the only things they can use means that the advice is unactionable, thus will be ignored. If anything, RSA key exchange should be deprecated first. RFC 8446 deprecated only the DSA ciphersuites, not RSA. This is an odd statement. TLS 1.3 ciphersuites no longer include the concept of key exchange or signing. Ciphersuites, yes. Protocol itself, no. It still performs a key exchange. And TLS 1.3 explicitly deprecates DSA, see below. If you are talking about the signing part, both were sort of deprecated. RSASSA-PKCS1_v1.5 (ugh, I hate typing that) is only usable within the certificate chain, not in the protocol. PSS was added back. There's a difference between saying that a TLS 1.3 client MUST NOT advertise client hello with TLS_RSA_* ciphersuites listed, and just having TLS 1.3 not supporting RSA key exchange. Both of them can be called "deprecated", but one is a clearer and stronger condemnation than the other. DSA is effectively treated with the former "deprecation": RFC8446 Section 4.2.3: They MUST NOT be offered or negotiated by any implementation. In particular, MD5 [SLOTH], SHA-224, and DSA MUST NOT be used. RSA key exchange has nothing like it. For me "deprecated" means "You really shouldn't use it", not "You should stop using it at the earliest convenience". I.e. MUST NOT vs SHOULD NOT. However, for key exchange, which is more relevant to this conversation, RSA was indeed removed. And the draft we're discussing does indeed say that RSA key exchange in TLS 1.2 is deprecated. Can you help me better understand the scope of your objection? I guess my primary objection is with the subject of this thread: "deprecate all FFDHE cipher suites". That I don't agree with. As far as the "draft-ietf-tls-deprecate-obsolete-kex-01" text goes, I would tweak some things, but the general description of FFDHE state I do agree with: "that you shouldn't use it in TLSv1.2, but if you have to, there are simple things to do to make sure you're relatively secure". -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] sslkeylogfile
On Thursday, 24 November 2022 11:37:02 CET, John Mattsson wrote: Hi, Two high level comments: - OLD: "though use of earlier versions is strongly discouraged [RFC8996]" That is not what RFC 8996 says. RFC 8996 says - "TLS 1.1 MUST NOT be used." - "TLS 1.1 MUST NOT be used." Please change to something that aligns with RFC 8996 such as NEW: "though use of earlier versions is forbidden [RFC8996]" - "Access to the content of a file in SSLKEYLOGFILE allows an attacker to break the confidentiality protection on any TLS connections that are included in the file." This is true but does not at all reflect the implications of the existence of a file for long-term storage of keys like this. Storing any of the keying material like this completely breaks the stated forward secrecy property of TLS 1.3 as it creates new long-term keys. It does not matter how well the file is protected i.e., "Ensuring adequate access control on these files therefore becomes very important." is not enough. The theoretical security properties are still broken badly. I think this draft is problematic, but I can understand the need to standardize this existing format. I think the fact that SSLKEYLOGFILE breaks the security properties of TLS 1.3 needs to very clearly described. As a consequence, I think the only allowed use case standardized by TLS WG should be limited to non-production debugging. If governments and companies wanting visibility do other things, that would be outside of IETFs control. This file doesn't have any extra information than what would be in a serialised session data used for session resumption. Something plenty of software already does. Cheers, John From: TLS on behalf of Martin Thomson Date: Wednesday, 26 October 2022 at 02:18 To: Peter Gutmann , tls@ietf.org Subject: Re: [TLS] sslkeylogfile On Tue, Oct 25, 2022, at 16:48, Peter Gutmann wrote: But it's not the same thing, it only seems to cover some TLS 1.3 extensions. Thus my suggestion to call it "Extensions to the SSLKEYLOGFILE Format for TLS 1.3". That's not the intent. Section 3.2 covers all you need for TLS 1.2. I did not describe the (deprecated) "RSA" key, is that in common usage? Or, are there things that I have missed? I got everything from https://firefox-source-docs.mozilla.org/security/nss/legacy/key_log_format/index.html but maybe that is no longer the best reference. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
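For reference, a sketch of how such a key log is produced from Python (which defers to OpenSSL's keylog callback); the paths and host are placeholders and, per the discussion above, this belongs in debugging setups only.

import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/sslkeylog.txt"  # same format as honouring SSLKEYLOGFILE

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        tls.recv(1024)

# Each non-comment line is "<LABEL> <client_random_hex> <secret_hex>",
# e.g. CLIENT_HANDSHAKE_TRAFFIC_SECRET for TLS 1.3.
with open("/tmp/sslkeylog.txt") as log:
    print({line.split()[0] for line in log if line.strip() and not line.startswith("#")})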
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Wednesday, 21 December 2022 19:13:36 CET, Rob Sayre wrote: On Wed, Dec 21, 2022 at 5:59 AM Hubert Kario wrote: Telling people that they shouldn't use the only things they can use means... Well, I'd be curious to know what the use cases are. The stuff Uri Blumenthal already mentioned: software and hardware that has lifetimes measured in decades. But I would also say this might be enough: https://www.rfc-editor.org/rfc/rfc9325#name-cipher-suites-for-tls-12 The IETF already says using this is not best current practice, so that's enough for me. A deprecation draft (which I do favor) would just be another document that makes the point. Rough consensus, as they say. I'm fine with "SHOULD NOT", I'm opposed to "MUST NOT". I also have no problems with saying that "servers MUST NOT reuse key shares" and with "servers MUST NOT use parameters smaller than 2048 bit". I even don't have a problem with "servers SHOULD use well known parameters or safe primes as FFDHE parameters". What I'm against is blanket forbidding of FFDHE in TLSv1.2. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Thursday, 22 December 2022 23:26:26 CET, Carrick Bartle wrote: the latter is basically unexploitable with properly behaving hosts in TLSv1.2 Well, right, that's the trick. The issue that people have pointed out with FFDHE is that it's very easy to have a host that is not properly behaving (see RFC 7919, which is referenced in our draft). It's also easy and quick to verify that the server *is* behaving correctly and thus is not exploitable. On Wed, Dec 21, 2022 at 5:14 AM Hubert Kario wrote: On Tuesday, 20 December 2022 19:37:14 CET, Rob Sayre wrote: On Tue, Dec 20, 2022 at 4:53 AM Hubert Kario wrote: Thus the deprecation of it is a matter of taste, not cryptographic necessity. I'm sorry if I'm being dense here, but isn't all of this a SHOULD NOT in RFC 9325? https://www.rfc-editor.org/rfc/rfc9325.html#name-recommendations-cipher-suit Maybe I'm misreading that RFC, but given that it's a BCP, it seems like deprecation is a natural step that reflects IETF consensus. that RFC marks both TLS_RSA_* and TLS_DHE_* as "SHOULD NOT". Given that the former is still being exploited close to 25 years after the Bleichenbacher attack was discovered, while the latter is basically unexploitable with properly behaving hosts in TLSv1.2, I don't think it's correct to consider them at the same level. Yes, if you have ECDHE available, you SHOULD NOT use DHE in TLSv1.2. But if everything you have is either TLS_RSA_* and TLS_DHE_*, then you're far better of with TLS_DHE_*. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Saturday, 24 December 2022 02:10:08 CET, Rob Sayre wrote: Maybe it would help if the chairs could clarify the difference between "deprecated" and "prohibited" / "forbidden". I think these words have straightforward definitions, and I find many responses to be disrespectful in insisting that "deprecated" means something it does not. But maybe this is an honest misunderstanding, and just down to translations and different first languages. The problem is not the language, but how the word "deprecated" is used by different regulatory bodies. We have RFC2119, so I think we should stick to it. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Monday, 2 January 2023 17:32:26 CET, Nimrod Aviram wrote: It's also easy and quick to verify that the server *is* behaving correctly and thus is not exploitable. Could you please help me understand how you propose to verify this? For example, assuming an SMTP server that presents a (presumably) custom-generated safe prime. My understanding is that this would require two primality tests, for the modulus p and for (p-1)/2. Several folks here have argued that these primality tests would be prohibitively expensive for TLS clients to perform per-handshake (and this is also my general understanding of the cost of primality tests). For every connection? Yes, it would probably be prohibitively expensive. As part of an audit of networking infrastructure, at the same time as you test which ciphersuites the server has actually enabled, if the certificates don't expire soon, etc.? No, it would be a simple test and just another checkbox or two on the report ("uses safe primes", "doesn't reuse public key shares"). That being said, NIST has required for FIPS mode that only well known groups have to be supported, and the USA didn't stop. So having deployments where the clients actually do verify that the group used by the server is actually secure aren't just possible, but actual day to day reality. Could you please elaborate what client behavior you propose, and how you envision clients to bear the cost? (To concede a point in advance: It is obviously possible to _assume_ that if the server presents a modulus, then it "must" be a safe prime, or it meets some desired security notion that the server operator has deemed sufficient for the connection. However, experience shows that this is not necessarily the case in practice; see e.g. the "Small Subgroups" paper referenced in the draft. Since you used the word "verify", I'm assuming that's not what you meant?) At the end of the day the client has to trust the server to some degree. There's nothing in the protocol that will stop the server from sending the master secret straight to KGB^W GRU in Moscow. Irrespective of the TLS version and key exchange parameters used. On Mon, 2 Jan 2023 at 15:52, Hubert Kario wrote: On Thursday, 22 December 2022 23:26:26 CET, Carrick Bartle wrote: the latter is basically unexploitable with properly behaving hosts in TLSv1.2 Well, right, that's the trick. The issue that people have pointed out with FFDHE is that it's very easy to have a host that is not properly behaving (see RFC 7919, which is referenced in our draft). It's also easy and quick to verify that the server *is* behaving correctly and thus is not exploitable. On Wed, Dec 21, 2022 at 5:14 AM Hubert Kario wrote: On Tuesday, 20 December 2022 19:37:14 CET, Rob Sayre wrote: On Tue, Dec 20, 2022 at 4:53 AM Hubert Kario wrote: Thus the deprecation of it is a matter of taste, not cryptographic necessity. I'm sorry if I'm being dense here, but isn't all of this a SHOULD NOT in RFC 9325? https://www.rfc-editor.org/rfc/rfc9325.html#name-recommendations-cipher-suit Maybe I'm misreading that RFC, but given that it's a BCP, it seems like deprecation is a natural step that reflects IETF consensus. that RFC marks both TLS_RSA_* and TLS_DHE_* as "SHOULD NOT". Given that the former is still being exploited close to 25 years after the Bleichenbacher attack was discovered, while the latter is basically unexploitable with properly behaving hosts in TLSv1.2, I don't think it's correct to consider them at the same level. 
Yes, if you have ECDHE available, you SHOULD NOT use DHE in TLSv1.2. But if everything you have is either TLS_RSA_* and TLS_DHE_*, then you're far better of with TLS_DHE_*. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
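A sketch of the audit-time checks described above: whether the server's prime is of adequate size and (probably) a safe prime, and whether the server reuses its public key share across connections. The inputs would come from captured ServerKeyExchange messages; Miller-Rabin is probabilistic, which is more than adequate for an audit tool.

import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def audit_ffdhe(prime, server_shares):
    # prime: the server's DH modulus; server_shares: public key shares seen
    # across several handshakes with the same server.
    return {
        "prime is at least 2048 bits": prime.bit_length() >= 2048,
        "uses safe prime": is_probable_prime(prime) and is_probable_prime((prime - 1) // 2),
        "doesn't reuse key shares": len(set(server_shares)) == len(server_shares),
    }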
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
On Tuesday, 3 January 2023 11:33:39 CET, Peter Gutmann wrote: Hubert Kario writes: It's also easy and quick to verify that the server *is* behaving correctly and thus is not exploitable. It's also a somewhat silly issue to raise, if we're worried about a server using deliberately broken FFDHE parameters then why aren't we worried about the server leaking its private key through the server random, or posting it to Pastebin, or sending a copy of the session plaintext to virusbucket.ru? If the server's broken it's broken and there's not much a client can do about it. Because there are software stacks that allow configuration of arbitrary parameters for FFDH (see GnuTLS, OpenSSL), and there are software stacks that generate one public key share and reuse it for a long time, or allow configuration of this kind of behaviour (see old OpenSSL, NSS for ECDHE). So this kind of server behaviour may be a result of misconfiguration, not malicious behaviour. Misconfiguration that might have been caused by bad advice or a desire to optimise performance, and then just cargo-culted to this day. In short: because this kind of behaviour may be a result of an error rather than malice. So it's worth checking for when auditing server configuration. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Merkle Tree Certificates
Why not rfc7924? On Friday, 10 March 2023 23:09:10 CET, David Benjamin wrote: Hi all, I've just uploaded a draft, below, describing several ideas we've been mulling over regarding certificates in TLS. This is a draft-00 with a lot of moving parts, so think of it as the first pass at some of ideas that we think fit well together, rather than a concrete, fully-baked system. The document describes a new certificate format based on Merkle Trees, which aims to mitigate the many signatures we send today, particularly in applications that use Certificate Transparency, and as post-quantum signature schemes get large. Four signatures (two SCTs, two X.509 signatures) and an intermediate CA's public key gets rather large, particularly with something like Dilithium3's 3,293-byte signatures. This format uses a single Merkle Tree inclusion proof, which we estimate at roughly 600 bytes. (Note that this proposal targets certificate-related signatures but not the TLS handshake signature.) As part of this, it also includes an extensibility and certificate negotiation story that we hope will be useful beyond this particular scheme. This isn't meant to replace existing PKI mechanisms. Rather, it's an optional optimization for connections that are able to use it. Where they aren't, you negotiate another certificate. I work on a web browser, so this has browsers and HTTPS over TLS in mind, but we hope it, or some ideas in it, will be more broadly useful. That said, we don't expect it's for everyone, and that's fine! With a robust negotiation story, we don't have to limit ourselves to a single answer for all cases at once. Even within browsers and the web, it cannot handle all cases, so we're thinking of this as one of several sorts of PKI mechanisms that might be selected via negotiation. Thoughts? We're very eager to get feedback on this. David On Fri, Mar 10, 2023 at 4:38 PM wrote: A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt has been successfully submitted by David Benjamin and posted to the IETF repository. Name: draft-davidben-tls-merkle-tree-certs Revision: 00 Title: Merkle Tree Certificates for TLS Document date: 2023-03-10 Group: Individual Submission Pages: 45 URL: https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.txt Status: https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/ Html: https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.html Htmlized: https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-certs Abstract: This document describes Merkle Tree certificates, a new certificate type for use with TLS. A relying party that regularly fetches information from a transparency service can use this certificate type as a size optimization over more conventional mechanisms with post- quantum signatures. Merkle Tree certificates integrate the roles of X.509 and Certificate Transparency, achieving comparable security properties with a smaller message size, at the cost of more limited applicability. The IETF Secretariat -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Merkle Tree Certificates
On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote: I don't think flattening is the right way to look at it. See my other reply for a discussion about flattening, and how this does a bit more than that. (It also handles SCTs.) As for RFC 7924, in this context you should think of it as a funny kind of TLS resumption. In clients that talk to many servers[0], the only plausible source of cached information is a previous TLS exchange. Cached info is then: if I previously connected to you and I am willing to correlate that previous connection to this new one, we can re-connect more efficiently. It's a bit more flexible than resumption---it doesn't replace authentication, so we could conceivably use larger lifetimes. But it's broadly the same w.r.t. when it can be used. It doesn't help the first connection to a service, or a service that was connected long enough ago that it's fallen off the cache. And it doesn't help across contexts where we don't want correlation. Within a web browser, things are a bit more partitioned these days, see https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md and https://github.com/privacycg/storage-partitioning. Sorry, but as long as the browsers are willing to perform session resumption I'm not buying the "cached info is a privacy problem". It also completely ignores the encrypted client hello Browser doesn't have to cache the certs since the beginning of time to be of benefit, a few hours or even just current boot would be enough: 1. if it's a page visited once then all the tracking cookies and javascript will be an order of magnitude larger download anyway 2. if it's a page visited many times, then optimising for the subsequent connections is of higher benefit anyway In comparison, this design doesn't depend on this sort of per-destination state and can apply to the first time you talk to a server. it does depend on complex code instead, that effectively duplicates the functionality of existing code David [0] If you're a client that only talks to one or two servers, you could imagine getting this cached information pushed out-of-band, similar to how this document pushes some valid tree heads out-of-band. But that doesn't apply to most clients, certainly not a web browser. web browser could get a list of most commonly accessed pages/cert pairs, randomised to some degree by addition of not commonly accessed pages to hide if the connection is new or not, and make inference about previous visits worthless On Tue, Mar 14, 2023 at 9:46 AM Kampanakis, Panos wrote: Hi Hubert, I am not an author of draft-davidben-tls-merkle-tree-certs, but I had some feedback on this question: RFC7924 was a good idea but I don’t think it got deployed. It has the disadvantage that it allows for connection correlation and it is also challenging to demand a client to either know all its possible destination end-entity certs or be able to have a caching mechanism that keeps getting updated. Given these challenges and that CAs are more static and less (~1500 in number) than leaf certs, we have proposed suppressing the ICAs in the chain (draft-kampanakis-tls-scas-latest which replaced draft-thomson-tls-sic ) , but not the server cert. I think draft-davidben-tls-merkle-tree-certs is trying to achieve something similar by introducing a Merkle tree structure for certs signed by a CA. To me it seems to leverage a Merkle tree structure which "batches the public key + identities" the CA issues. 
Verifiers can just verify the tree and thus assume that the public key of the peer it is talking to is "certified by the tree CA". The way I see it, this construction flattens the PKI structure, and issuing CA's are trusted now instead of a more limited set of roots. This change is not trivial in my eyes, but the end goal is similar, to shrink the amount of auth data. -Original Message- From: TLS On Behalf Of Hubert Kario Sent: Monday, March 13, 2023 11:08 AM To: David Benjamin Cc: ; Devon O'Brien Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe. Why not rfc7924? On Friday, 10 March 2023 23:09:10 CET, David Benjamin wrote: Hi all, I've just uploaded a draft, below, describing several ideas we've been mulling over regarding certificates in TLS. This is a draft-00 with a lot of moving parts, so think of it as the first pass at some of ideas that we think fit well together, rather than a concrete, fully-baked system. The document describes a new certificate format based on Merkle Trees, which aims to mitigate the many signatures we send today, particularly in applications that use Cert
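For reference, this is roughly what the RFC 7924 mechanism being discussed carries on the wire: a minimal sketch, assuming the CachedObject/CachedInformation structures as defined in that RFC, with the server's Certificate message as the cached item.

    import hashlib
    import struct

    CERT = 1       # CachedInformationType.cert per RFC 7924
    CERT_REQ = 2   # CachedInformationType.cert_req

    def cached_info_extension_body(cached_certificate_msg: bytes) -> bytes:
        # CachedObject = type (1 byte) + hash_value<1..255> (SHA-256 of the
        # cached message); CachedInformation = list of CachedObject with a
        # 2-byte length prefix.
        hash_value = hashlib.sha256(cached_certificate_msg).digest()
        cached_object = struct.pack("!BB", CERT, len(hash_value)) + hash_value
        return struct.pack("!H", len(cached_object)) + cached_object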
Re: [TLS] Merkle Tree Certificates
On Tuesday, 21 March 2023 17:06:54 CET, David Benjamin wrote: On Tue, Mar 21, 2023 at 8:01 AM Hubert Kario wrote: On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote: I don't think flattening is the right way to look at it. See my other reply for a discussion about flattening, and how this does a bit more than that. (It also handles SCTs.) As for RFC 7924, in this context you should think of it as a funny kind of TLS resumption. In clients that talk to many ... https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md and https://github.com/privacycg/storage-partitioning. Sorry, but as long as the browsers are willing to perform session resumption I'm not buying the "cached info is a privacy problem". I'm not seeing where this quote comes from. I said it had analogous properties to resumption, not that it was a privacy problem in the absolute. I meant it as a summary not as a quote. The privacy properties of resumption and cached info on the situation. If you were okay correlating the two connections, both are okay in this regard. If not, then no. rfc8446bis discusses this: https://tlswg.org/tls13-spec/draft-ietf-tls-rfc8446bis.html#appendix-C.4 In browsers, the correlation boundaries (across *all* state, not just TLS) were once browsing-profile-wide, but they're shifting to this notion of "site". I won't bore the list with the web's security model, but roughly the domain part of the top-level (not the same as destination!) URL. See the links above for details. That equally impacts resumption and any hypothetical deployment of cached info. So, yes, within those same bounds, a browser could deploy cached info. Whether it's useful depends on whether there are many cases where resumption wouldn't work, but cached info would. (E.g. because resumption has different security properties than cached info.) The big difference is that tickets generally should be valid only for a day or two, while cached info, just like cookies, can be valid for many months if not years. Now, a privacy focused user may decide to clear the cookies and cached info daily, while others may prefer the slightly improved performance on first visit after a week or month break. It also completely ignores the encrypted client hello ECH helps with outside observers correlating your connections, but it doesn't do anything about the server correlating connections. In the context of correlation boundaries within a web browser, we care about the latter too. How's that different from cookies? Which don't correlate, but cryptographically prove previous visit? Browser doesn't have to cache the certs since the beginning of time to be of benefit, a few hours or even just current boot would be enough: 1. if it's a page visited once then all the tracking cookies and javascript will be an order of magnitude larger download anyway 2. if it's a page visited many times, then optimising for the subsequent connections is of higher benefit anyway I don't think that's quite the right dichotomy. There are plenty of reasons to optimize for the first connection, time to first bytes, etc. Indeed, this WG did just that with False Start and TLS 1.3 itself. (Prior to those, TLS 1.2 was 2-RTT for the first connection and 1-RTT for resumption.) In my opinion time to first byte is a metric that's popular because it's easy to measure, not because it's representative. Feel free to point me to double blind studies with representative sample sizes showing otherwise. 
Yes, reducing round-trips is important as latency of connection is not correlated with bandwidth available. But when a simple page is a megabyte of data, and anything non trivial is multiple megabytes, looking at when the first byte arrives it is completely missing the bigger picture. Especially when users are trained to not interact with the page until it fully loads (2018 Hawaii missile alert joke explanation): https://gfycat.com/queasygrandiriomotecat Neither cached data nor Merkle tree certificates reduce round-trips I suspect a caching for a few hours would not justify cached info because you may as well use resumption at that point. In comparison, this design doesn't depend on this sort of per-destination state and can apply to the first time you talk to a server. it does depend on complex code instead, that effectively duplicates the functionality of existing code David [0] If you're a client that only talks to one or two servers, you could imagine getting this cached information pushed out-of-band, similar to how this document pushes some valid tree heads out-of-band. But that doesn't apply to most clients, ... web browser could get a list of most commonly accessed pages/cert pairs, randomised to some degree by addition of not commonly accessed pages to hide if the conne
Re: [TLS] Merkle Tree Certificates
On Thursday, 23 March 2023 03:00:53 CET, Kampanakis, Panos wrote: Hi Hubert, I totally agree on your points about time-to-first-byte vs time-to-last-byte. We (some of my previous work too) have been focusing on time-to-first byte which makes some of these handshakes look bad for the tails of the 80-95th percentiles. But in reality, the time-to-last-byte or time-to-some-byte-that-makes-the-user-think-there-is-progress would be the more accurate measurement to assess these connections. Neither cached data nor Merkle tree certificates reduce round-trips Why is that? Assuming Dilithium WebPKI and excluding CDNs, QUIC sees 2 extra round-trips (amplification, initcwnd) and TLS sees 1 (initcwnd). Trimming down the "auth data" will at least get rid of the initcwnd extra round-trip. I think the Merkle tree cert approach fits in the default QUIC amplification window too so it would get rid of that round-trip in QUIC as well. I meant it on TLS level. Sure, on TCP level the less data you need to send the less problem you have with congestion window. But even there, I don't see insurmountable problems with it; even with 3 Dilithium certs, with 2 SCTs each, we're talking about 22kB of data; that's half of what cloudflare found to be an inflection point for extra data: https://blog.cloudflare.com/sizing-up-post-quantum-signatures/ So I'm very unconvinced that for good general web browsing experience Merkle Tree Certs will be qualitatively better than cached info. -----Original Message- From: Hubert Kario Sent: Wednesday, March 22, 2023 8:46 AM To: David Benjamin Cc: Kampanakis, Panos ; ; Devon O'Brien Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe. On Tuesday, 21 March 2023 17:06:54 CET, David Benjamin wrote: On Tue, Mar 21, 2023 at 8:01 AM Hubert Kario wrote: On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote: ... I'm not seeing where this quote comes from. I said it had analogous properties to resumption, not that it was a privacy problem in the absolute. I meant it as a summary not as a quote. The privacy properties of resumption and cached info on the situation. If you were okay correlating the two connections, both are okay in this regard. If not, then no. rfc8446bis discusses this: https://tlswg.org/tls13-spec/draft-ietf-tls-rfc8446bis.html#appendix-C.4 In browsers, the correlation boundaries (across *all* state, not just TLS) were once browsing-profile-wide, but they're shifting to this notion of "site". I won't bore the list with the web's security model, but roughly the domain part of the top-level (not the same as destination!) URL. See the links above for details. That equally impacts resumption and any hypothetical deployment of cached info. So, yes, within those same bounds, a browser could deploy cached info. Whether it's useful depends on whether there are many cases where resumption wouldn't work, but cached info would. (E.g. because resumption has different security properties than cached info.) The big difference is that tickets generally should be valid only for a day or two, while cached info, just like cookies, can be valid for many months if not years. Now, a privacy focused user may decide to clear the cookies and cached info daily, while others may prefer the slightly improved performance on first visit after a week or month break. 
It also completely ignores the encrypted client hello ECH helps with outside observers correlating your connections, but it doesn't do anything about the server correlating connections. In the context of correlation boundaries within a web browser, we care about the latter too. How's that different from cookies? Which don't correlate, but cryptographically prove previous visit? ... I don't think that's quite the right dichotomy. There are plenty of reasons to optimize for the first connection, time to first bytes, etc. Indeed, this WG did just that with False Start and TLS 1.3 itself. (Prior to those, TLS 1.2 was 2-RTT for the first connection and 1-RTT for resumption.) ... In my opinion time to first byte is a metric that's popular because it's easy to measure, not because it's representative. Feel free to point me to double blind studies with representative sample sizes showing otherwise. Yes, reducing round-trips is important as latency of connection is not correlated with bandwidth available. But when a simple page is a megabyte of data, and anything non trivial is multiple megabytes, looking at when the first byte arrives it is completely missing the bigger picture. Especially when users are trained to not interact with the page until it fully loads (2018 Hawaii missile alert joke explanati
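One plausible way to arrive at the roughly 22 kB figure quoted above, assuming NIST round-3 Dilithium3 sizes (1,952-byte public keys, 3,293-byte signatures), a three-certificate chain, and two SCTs on the leaf; the exact breakdown is an assumption, not something spelled out in the thread.

    DILITHIUM3_PK = 1952    # bytes, public key
    DILITHIUM3_SIG = 3293   # bytes, signature

    chain = 3 * (DILITHIUM3_PK + DILITHIUM3_SIG)   # keys + signatures in a 3-cert chain
    scts = 2 * DILITHIUM3_SIG                      # two SCT signatures on the leaf
    print(chain + scts)                            # 22321 bytes, i.e. about 22 kB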
Re: [TLS] Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design
On Saturday, 1 April 2023 03:50:04 CEST, Krzysztof Kwiatkowski wrote: I would pair secp384r1 with Kyber768 for completely different reasons: Kyber768 is what the Kyber team recommends. Agreed. I don't think there are very good reasons for NIST curves here outside wanting CNSA1 compliance, and for that you need a secp384r1 classical part. And for that, I would pick secp384r1_kyber768. From my perspective, the two reasons for including a NIST curve are: 1. To have an option for those who require FIPS compliance. In the short term at least one key agreement scheme should be FIPS-approved. In the long term both of them should be FIPS-approved. That way, in case the security of Kyber768 falls below 112 bits or its implementation is simply broken, one can still run key agreement in a FIPS-compliant manner. In the end, the ultimate goal of the hybrid TLS draft is to ensure that at least one of the schemes provides security if the other gets broken. It would be good to be able to use this in a FIPS context as well. 2. NIST curves are more often implemented in HW than Curve25519. When working with chips like the ATECC608B, one ideally only adds SW-based Kyber and can reuse existing HW-based ECDH. Such a migration is simpler, less risky, and less time-consuming than adding SW-based X25519. There's a third reason: the public CAs that support ECDSA almost exclusively support just P-256 and P-384, so if somebody implements ECDSA for the public internet, they have to support those two curves at the very least. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
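For concreteness, the hybrid construction under discussion (draft-ietf-tls-hybrid-design) is just concatenation at both the key-share and the shared-secret level. The byte strings below are placeholders with the expected lengths for a hypothetical secp384r1_kyber768 group, not real cryptographic output, and the ordering follows the draft's ECDH-first convention.

    # Server-side view: the key_share entry is the ECDH share followed by the
    # Kyber768 ciphertext, and the shared secret fed into the TLS 1.3 key
    # schedule is the ECDH shared secret followed by the Kyber768 shared secret.
    ecdh_share   = b"\x04" + b"\x00" * 96   # placeholder: uncompressed P-384 point, 97 bytes
    kyber_ct     = b"\x00" * 1088           # placeholder: Kyber768 ciphertext, 1088 bytes
    key_exchange = ecdh_share + kyber_ct

    ecdh_secret   = b"\x00" * 48            # placeholder: P-384 shared secret, 48 bytes
    kyber_secret  = b"\x00" * 32            # placeholder: Kyber768 shared secret, 32 bytes
    shared_secret = ecdh_secret + kyber_secret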
Re: [TLS] Tricking TLS library into crypto primitives library
On Sunday, 25 June 2023 19:45:53 CEST, Soni L. wrote: Pure-python forbids using the cryptography package. Only python code and the python stdlib are allowed. The fact that TLS uses AES at all means it might be possible to trick the python ssl module to do arbitrary AES, with some effort. At the end of the day, the TLS protocol is also part of the ssl module's API surface. It's not the API surface you'd usually interact with, but nothing really stops you from doing so. Even if you did force the python ssl socket to use the AES keys you want, you wouldn't be able to encrypt more than 2^14 bytes at a time. And those bytes would still be encrypted in a TLS-specific way, so, no, you can't use the ssl module for generic AES encryption. Please use pyca-cryptography for it. While it's not pure Python, it now also works with stuff like PyPy, so you're better off figuring out how to get it running in a semi-pure Python way than jumping through hoops just to get to code that is vulnerable to side-channel attacks (as any crypto written in pure Python will be). On 6/25/23 14:31, Eric Rescorla wrote: I believe https://cryptography.io/en/latest/ is what you want. TLS does not use AES in a way that is consistent with what you would get if you just used a typical AES library. -Ekr On Sun, Jun 25, 2023 at 10:21 AM Soni L. wrote: Python doesn't expose raw AES, etc. But it does expose a fairly rich TLS library. Wondering if it would be possible to just connect a TLS socket to a raw TCP socket and somehow write bytes into TLS and get ciphertext out or write bytes into the raw TCP socket and get plaintext out. The point is to use AES for non-TLS protocols. On 6/25/23 14:15, Eric Rescorla wrote: I'm not aware of any. Why would you want to do this? Most such libraries I am aware of expose low-level primitives or are built on libraries which do. -Ekr On Sun, Jun 25, 2023 at 6:28 AM Soni L. wrote: Has anyone done any work towards tricking a TLS library into providing cryptographic primitives? We know of similar work with regards to javacard https://arxiv.org/abs/1810.01662 but not sure if it can be applied to TLS. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
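A minimal sketch of the pyca-cryptography route recommended above, doing AES-GCM directly instead of trying to repurpose the ssl module:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)   # 96-bit nonce; must never repeat for the same key
    ciphertext = aesgcm.encrypt(nonce, b"message", b"associated data")
    assert aesgcm.decrypt(nonce, ciphertext, b"associated data") == b"message"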
Re: [TLS] Abridged Certificate Compression (server participation)
On Wednesday, 12 July 2023 06:02:09 CEST, Kampanakis, Panos wrote: Hi Dennis, One more topic for general discussion. The abridged certs draft requires a server that participates and fetches dictionaries in order to make client connections faster. As Bas has pointed out before, this paradigm did not work well with OCSP staples in the past. Servers did not choose to actively participate and go fetch them. The problem with OCSP stapling is that it has little immediate benefit for the server operator, so there was no strong push to: 1. get it implemented in the TLS libraries 2. have it implemented in the web servers 3. backport those changes to stable branches (of both libraries and web servers) 4. either rebase or backport the changes to long-term support Linux distributions It takes years for such changes to trickle down. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
On Wednesday, 12 July 2023 19:13:02 CEST, Viktor Dukhovni wrote: On Wed, Jul 12, 2023 at 12:40:13PM -0400, Sean Turner wrote: On Jul 11, 2023, at 13:52, Salz, Rich wrote: ... This appears in s2: Note that TLS 1.0 and 1.1 are deprecated by [RFC8996] and TLS 1.3 does not support FFDH [RFC8446]. And section 3: https://www.ietf.org/archive/id/draft-ietf-tls-deprecate-obsolete-kex-02.html#section-3 Clients MUST NOT offer and servers MUST NOT select FFDHE cipher suites in TLS 1.2 connections. This includes all cipher suites listed in the table in Appendix C. (Note that TLS 1.0 and 1.1 are deprecated by [RFC8996].) FFDHE cipher suites in TLS 1.3 do not suffer from the problems presented in Section 1; see [RFC8446]. Therefore, clients and servers MAY offer FFDHE cipher suites in TLS 1.3 connections. Note that at least in Postfix (opportunistic STARTTLS), this advice will be ignored. FFDHE will remain supported in TLS 1.2, with ECDHE preferred when offered by the client: https://tools.ietf.org/html/rfc7435 The default group used by the server is either a compiled-in 2048-bit group or one of the groups from appendix of RFC7919 built-in to OpenSSL. There are zero reports of clients that can't handle 2048-bit groups (as opposed to 1024). Point "3" in the introduction may be outdated w.r.t. to current practice. And in general, it's far better to use FFDHE kex with legacy client than RSA. Getting RSA right is very hard, using ephemeral secrets for FFDHE is trivial and recommended practice already. also Therefore, clients and servers MAY offer FFDHE cipher suites in TLS 1.3 connections. There are no ECDHE or FFDHE cipher suites in TLS 1.3. Cipher suites specify just the bulk encryption, PRF, and integrity protection mechanism. The key exchange is fully controlled by supported_groups and key_share extensions. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
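To illustrate the last point: in TLS 1.3 the (EC)DHE group is picked from the supported_groups and key_share extensions, not from the cipher suite. A small subset of the relevant codepoints, as registered by RFC 8446 and RFC 7919:

    # TLS 1.3 NamedGroup codepoints (subset)
    NAMED_GROUPS = {
        0x0017: "secp256r1",
        0x0018: "secp384r1",
        0x001D: "x25519",
        0x0100: "ffdhe2048",   # RFC 7919 finite-field groups
        0x0101: "ffdhe3072",
        0x0102: "ffdhe4096",
        0x0103: "ffdhe6144",
        0x0104: "ffdhe8192",
    }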
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
On Friday, 14 July 2023 09:01:30 CEST, Peter Gutmann wrote: Viktor Dukhovni writes: What benefit do we expect from forcing weaker security (RSA key exchange or cleartext in the case of SMTP) on the residual servers that don't do either TLS 1.3 or ECDHE? This already happens a lot in wholesale banking: the admins have dutifully disabled DH because someone said so, and so all keyex falls back to RSA circa 1995, the worst possible situation to be in. There needs to be clear text in there to say that if you can't do ECC then do DH but never RSA, or even just "keep using DH because it's still vastly better than the alternative of RSA". At the moment the blanket "don't do DH" is in effect saying "use RSA keyex" to a chunk of the market. Yes, what the text should say is "MUST NOT use RSA key exchange, SHOULD NOT support ephemeral FFDHE, and if it does support FFDHE the key shares MUST be ephemeral and never reused." There _needs_ to be a clear preference for FFDHE over RSA, as otherwise people will end up using RSA because "it's faster" or "it's more interoperable", completely missing the part that it's also vastly less secure. Frankly, I find the interoperability issues of TLS 1.2 FFDHE overblown: FIPS requires supporting only well-known groups (all of them 2048-bit or larger), and we've received hardly any customer issues after implementing that as a hard check (the connection will fail if the key exchange uses custom DH parameters) a good few years ago now. -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
On Friday, 14 July 2023 18:03:25 CEST, Peter Gutmann wrote: Hubert Kario writes: FIPS requires to support only well known groups (all of them 2048 bit or larger), and we've received hardly any customer issues after implementing that as hard check (connection will fail if the key exchange uses custom DH parameters) good few years ago now. Interesting, so you're saying that essentially no-one uses custom groups? My code currently fast-tracks the known groups (RFC 3526 and RFC 7919) but also allows custom groups (with additional checking) to be on the safe side because you never know what weirdness is out there, do you have an idea of what sort of magnitude "hardly any" represents? I wouldn't go as far as "nobody uses them", it's more like "people that use them, either have them configured unknowingly or can change configuration to use well known groups". So while it may cause interoperability issues, for people that really do care about interoperability with old systems, they are fine with tweaking DH configuration to make it happen (or simply end up using ECDHE and are completely unaware of the whole issue). One more side note: in FIPS mode we also disable RSA ciphersuites, so the FFDHE and ECDHE, both with only well known groups, are the only two key exchanges that do work. And can something similar be said about SSH implementations? There's fixed DH groups and then the Swiss-army-knife diffie-hellman-group-exchange-*, but AFAIK the only groups that ever get exchanged there are the RFC 3526/7919 ones. nope, for OpenSSH those will be the safe-primes from /etc/ssh/moduli, though in FIPS mode we do ignore that file and indeed use RFC 3526 or 7919 groups of at least 2048 bits (don't remember what we default to, but we will accept either) -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
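A sketch of the kind of hard check described above for TLS 1.2 FFDHE, rejecting anything that is not a well-known group of at least 2048 bits. KNOWN_SAFE_PRIMES is assumed to be populated from RFC 3526 / RFC 7919; the primes themselves are omitted here for brevity.

    # All RFC 3526 and RFC 7919 groups use generator 2 and safe primes.
    KNOWN_SAFE_PRIMES = set()   # to be filled with the RFC 3526 / RFC 7919 primes as integers

    def ffdhe_params_acceptable(p: int, g: int) -> bool:
        # Accept the server's ServerKeyExchange parameters only if they match
        # a well-known group of sufficient size.
        return g == 2 and p.bit_length() >= 2048 and p in KNOWN_SAFE_PRIMES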
Re: [TLS] [EXT] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
I'm not claiming that I know about all users; I can just say that of all our customers that do care about working in FIPS mode (which is not limited to people that fall under US Federal regulation) none have complained intensively about accepting only well-known groups in FIPS mode. SHA-1 deprecation was more impactful. And while the company you mention may not want to change to a widely-known group, they can still choose a group known to them as secure. I'm not saying that we should mandate use of well-known groups for FFDHE, I'm saying that not allowing use of RSA and allowing use of FFDHE under very specific conditions is workable for a large set of users. On Friday, 14 July 2023 18:48:27 CEST, Blumenthal, Uri - 0553 - MITLL wrote: Hubert, I’m aware of at least one company (using the term loosely) that uses a custom group, and probably understands FFDH(E) better than you or me. Since they had their reasons for choosing custom, “can change … to use well-known groups” (obviously) does not apply. Regards, Uri On Jul 14, 2023, at 12:33, Hubert Kario wrote: On Friday, 14 July 2023 18:03:25 CEST, Peter Gutmann wrote: Hubert Kario writes: ... I wouldn't go as far as "nobody uses them", it's more like "people that use them either have them configured unknowingly or can change configuration to use well-known groups". So while it may cause interoperability issues, for people that really do care about interoperability with old systems, they are fine with tweaking DH configuration to make it happen (or simply end up using ECDHE and are completely unaware of the whole issue). One more side note: in FIPS mode we also disable RSA ciphersuites, so FFDHE and ECDHE, both with only well-known groups, are the only two key exchanges that do work. And can something similar be said about SSH implementations? There's fixed DH groups and then the Swiss-army-knife diffie-hellman-group-exchange-*, but AFAIK the only groups that ever get exchanged there are the RFC 3526/7919 ones. Nope, for OpenSSH those will be the safe primes from /etc/ssh/moduli, though in FIPS mode we do ignore that file and indeed use RFC 3526 or 7919 groups of at least 2048 bits (I don't remember what we default to, but we will accept either). -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
[TLS] New approach to timing attacks against RSA key exchange - the Marvin Attack
Hello, Today we made public a new approach to attacking RSA key exchange in TLS, and RSA-based encryption in general (many of the bugs we discovered were caused by side channels in numerical libraries, which makes OAEP implementations vulnerable as well). As usual, the recommendation is not to use PKCS#1 v1.5 padding. All the details can be found on the vulnerability page: https://people.redhat.com/~hkario/marvin/ -- Regards, Hubert Kario Principal Quality Engineer, RHEL Crypto team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
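Following the recommendation above to move away from PKCS#1 v1.5 encryption, this is what RSA-OAEP looks like with pyca-cryptography. Note that, per the write-up, OAEP only helps if the underlying big-number code is itself free of timing side channels.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = private_key.public_key().encrypt(b"secret", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"secret"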