Re: [TLS] Merkle Tree Certificates
On Tuesday, 21 March 2023 17:06:54 CET, David Benjamin wrote:
> On Tue, Mar 21, 2023 at 8:01 AM Hubert Kario wrote:
>> On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote:
>>> I don't think flattening is the right way to look at it. See my other
>>> reply for a discussion about flattening, and how this does a bit more
>>> than that. (It also handles SCTs.)
>>>
>>> As for RFC 7924, in this context you should think of it as a funny kind
>>> of TLS resumption. In clients that talk to many ...
>>> https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md
>>> and https://github.com/privacycg/storage-partitioning.
>>
>> Sorry, but as long as the browsers are willing to perform session
>> resumption I'm not buying the "cached info is a privacy problem".
>
> I'm not seeing where this quote comes from. I said it had analogous
> properties to resumption, not that it was a privacy problem in the
> absolute.

I meant it as a summary, not as a quote.

> The privacy properties of resumption and cached info depend on the
> situation. If you were okay correlating the two connections, both are
> okay in this regard. If not, then no. rfc8446bis discusses this:
> https://tlswg.org/tls13-spec/draft-ietf-tls-rfc8446bis.html#appendix-C.4
>
> In browsers, the correlation boundaries (across *all* state, not just
> TLS) were once browsing-profile-wide, but they're shifting to this notion
> of "site". I won't bore the list with the web's security model, but
> roughly the domain part of the top-level (not the same as destination!)
> URL. See the links above for details.
>
> That equally impacts resumption and any hypothetical deployment of cached
> info. So, yes, within those same bounds, a browser could deploy cached
> info. Whether it's useful depends on whether there are many cases where
> resumption wouldn't work, but cached info would. (E.g. because resumption
> has different security properties than cached info.)
The big difference is that tickets generally should be valid only for a day
or two, while cached info, just like cookies, can be valid for many months
if not years. Now, a privacy-focused user may decide to clear the cookies
and cached info daily, while others may prefer the slightly improved
performance on first visit after a week or month break.

>> It also completely ignores the encrypted client hello
>
> ECH helps with outside observers correlating your connections, but it
> doesn't do anything about the server correlating connections. In the
> context of correlation boundaries within a web browser, we care about the
> latter too.

How's that different from cookies? Which don't correlate, but
cryptographically prove previous visit?

>> Browser doesn't have to cache the certs since the beginning of time to
>> be of benefit, a few hours or even just current boot would be enough:
>>
>> 1. if it's a page visited once then all the tracking cookies and
>>    javascript will be an order of magnitude larger download anyway
>> 2. if it's a page visited many times, then optimising for the subsequent
>>    connections is of higher benefit anyway
>
> I don't think that's quite the right dichotomy. There are plenty of
> reasons to optimize for the first connection, time to first bytes, etc.
> Indeed, this WG did just that with False Start and TLS 1.3 itself. (Prior
> to those, TLS 1.2 was 2-RTT for the first connection and 1-RTT for
> resumption.)

In my opinion time to first byte is a metric that's popular because it's
easy to measure, not because it's representative. Feel free to point me to
double-blind studies with representative sample sizes showing otherwise.

Yes, reducing round-trips is important, as latency of a connection is not
correlated with bandwidth available. But when a simple page is a megabyte
of data, and anything non-trivial is multiple megabytes, looking at when
the first byte arrives is completely missing the bigger picture.
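Hubert's bigger-picture point can be made concrete with some back-of-the-envelope arithmetic. This is my own illustration with assumed numbers (50 ms RTT, 20 Mbit/s, a 3 MB page), not figures from the thread:

```python
# Back-of-the-envelope: how much does one extra round-trip matter relative
# to total page load time? All numbers below are assumptions for illustration.

def load_time_s(page_bytes: int, rtt_s: float, bandwidth_bps: float,
                round_trips: int) -> float:
    """Crude model: a fixed number of round-trips plus serialization time."""
    return round_trips * rtt_s + (page_bytes * 8) / bandwidth_bps

rtt = 0.05              # 50 ms round-trip time
bw = 20e6               # 20 Mbit/s downlink
page = 3 * 1024 * 1024  # a 3 MB "non-trivial" page

base = load_time_s(page, rtt, bw, round_trips=3)
extra = load_time_s(page, rtt, bw, round_trips=4)

print(f"base: {base:.2f}s, +1 RTT: {extra:.2f}s, "
      f"overhead: {100 * (extra - base) / base:.1f}%")
```

Under these assumptions the extra round-trip adds about 4% to the total load time, which is the sense in which time-to-first-byte can miss the bigger picture for large pages (though on high-latency, high-bandwidth links the balance shifts back toward round-trips).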
Especially when users are trained to not interact with the page until it
fully loads (2018 Hawaii missile alert joke explanation):
https://gfycat.com/queasygrandiriomotecat

Neither cached data nor Merkle tree certificates reduce round-trips.

> I suspect a caching for a few hours would not justify cached info because
> you may as well use resumption at that point. In comparison, this design
> doesn't depend on this sort of per-destination state and can apply to the
> first time you talk to a server.

it does depend on complex code instead, that effectively duplicates the
functionality of existing code

> David
>
> [0] If you're a client that only talks to one or two servers, you could
> imagine getting this cached information pushed out-of-band, similar to
> how this document pushes some valid tree heads out-of-band. But that
> doesn't apply to most clients, ...

web browser could get a list of most commonly accessed pages/cert pairs,
randomised to some degree by addition of not commonly accessed pages to
hide if the connection is new or not, and make inference about previous
visits worthless

True, we could preload cached info for a global list of co
Re: [TLS] Merkle Tree Certificates
> > Unpopular pages are much more likely to deploy a solution that doesn't
> > require a parallel CA infrastructure and a cryptographer on staff.

CAs, TLS libraries, certbot, and browsers would need to make changes, but I
think we can deploy this without webservers or relying parties having to
make any changes if they're already using an ACME client, except upgrading
their dependencies, which they would need to do anyway to get plain X.509
PQ certs.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Merkle Tree Certificates
On Wed, Mar 22, 2023 at 01:54:22PM +0100, Bas Westerbaan wrote:
> > Unpopular pages are much more likely to deploy a solution that
> > doesn't require a parallel CA infrastructure and a cryptographer
> > on staff.

I don't think the server-side deployment difficulties with this have
anything to do with parallel CA infrastructure or admins having to
understand cryptography.

> CAs, TLS libraries, certbot, and browsers would need to make changes,
> but I think we can deploy this without webservers or relying parties
> having to make any changes if they're already using an ACME client
> except upgrading their dependencies, which they would need to do
> anyway to get plain X.509 PQ certs.

I don't agree. I think deploying this is much, much harder than deploying
X.509 PQ certificates. X.509 PQ certificates are mostly a dependency
update. This looks to require some nontrivial configuration work that
cannot be done completely automatically.

And then in its present form, this could be extremely painful for ACME
clients to implement (on the level of a complete rewrite for many).

-Ilari
Re: [TLS] Resurrect AuthKEM?
Hi Uri,

I'm afraid that, like you, I am not going to Yokohama, as I am attending
RWC and HACS in Tokyo that week instead.

While the AuthKEM draft has been sitting idle, I have been very busy,
pretty much writing the book on it: my PhD thesis. I am sitting on a large
pile of tables and benchmark results that show the impact of putting
Kyber, Dilithium, and Falcon at all NIST security levels into TLS (and
KEMTLS/AuthKEM). Once I manage to get everything written up and submitted
to the reading committee, I will see if I can extract some numbers for the
TLS working group that are hopefully interesting, also independent of
AuthKEM. Hopefully I have time to do so next month.

Otherwise, I think what I wrote in January [thread] still applies:

> To me, right now, most of the "homework" behind the AuthKEM/KEMTLS
> proposal feels pretty "finished"; I'd argue we have some form of running
> code (as in, the various KEMTLS experimental implementations we did for
> the academic work are pretty close to AuthKEM). We also have proofs,
> both pen-and-paper and two Tamarin models. If anyone has suggestions for
> concrete next steps, perhaps in which AuthKEM solves a problem that
> they're seeing, let us know.
>
> But in the end, AuthKEM, as any IETF WG proposal, can't get pushed over
> the line by some ivory tower academic like myself --- we will need
> people coming out and saying they want to have this.

Your email clearly indicates that you are in favor of AuthKEM. If others
want to voice their support and/or have suggestions for the draft, a plan
of attack, whatever, feel free to also let me know: maybe for IETF 117 we
can dust off the draft, relate it to the NIST selections for signature
schemes, and also see if the idea has support for adoption.

Cheers,
Thom

PS.
As mentioned, I will be in Tokyo for RWC and HACS, so I hope to be able to
meet some of the TLS folks face-to-face again there :-)

[thread]: https://mailarchive.ietf.org/arch/msg/pqc/AsLh6qEtJfn1EE1TTtSEXoZwZAE/

On Tue, 21 Mar 2023 at 21:55, Blumenthal, Uri - 0553 - MITLL
<u...@ll.mit.edu> wrote:
> Richard, yes, you got it right.
>
> From: Richard Barnes
> Date: Tuesday, March 21, 2023 at 4:32 PM
> To: Blumenthal, Uri - 0553 - MITLL
> Cc: tls@ietf.org
> Subject: Re: [TLS] Resurrect AuthKEM?
>
> Hi Uri,
>
> Just to be clear, the AuthKEM draft you mean is this one?
>
> https://datatracker.ietf.org/doc/draft-celi-wiggers-tls-authkem/
>
> Assuming that's the case, in case anyone else is confused (as I was),
> the "AuthKEM" here does not refer to a KEM implementing the
> AuthEncap/AuthDecap interface from RFC 9180. Instead it refers to the
> construction in that document, which uses a normal KEM.
>
> --Richard
>
> On Tue, Mar 21, 2023 at 2:34 PM Blumenthal, Uri - 0553 - MITLL
> <u...@ll.mit.edu> wrote:
>
> I'm surprised to see that there isn't much (isn't any?) discussion of
> the AuthKEM draft.
>
> It seems pretty obvious that with the advent of PQ algorithms, the sheer
> sizes of signatures and public keys would make {cDm}TLS existing
> authentication and key exchange impractical in bandwidth-constrained
> environments, especially when higher security-level algorithms (like
> what's demanded by CNSA-2.0) are required.
>
> Thus, implicit authentication (think MQV, Hugo Krawczyk's HMQV, etc.)
> seems to be a must for making the PQ impact on bandwidth somewhat
> manageable.
>
> I would like this WG to resurrect the AuthKEM draft.
>
> I can't be in Yokohama, and am not fanatical enough to spend nights on
> XMPP or such. But hopefully, we can discuss the AuthKEM approach here on
> the list.
>
> Thank you!
> --
> V/R,
> Uri Blumenthal                                    Voice: (781) 981-1638
> Secure Resilient Systems and Technologies         Cell:  (339) 223-5363
> MIT Lincoln Laboratory
> 244 Wood Street, Lexington, MA 02420-9108
>
> Web:     https://www.ll.mit.edu/biographies/uri-blumenthal
> Root CA: https://www.ll.mit.edu/llrca2.pem
>
> There are two ways to design a system. One is to make it so simple there
> are obviously no deficiencies. The other is to make it so complex there
> are no obvious deficiencies.
>                                                        - C. A. R. Hoare
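For readers unfamiliar with the implicit-authentication idea behind AuthKEM/KEMTLS that Uri is alluding to: instead of the server signing the handshake transcript, the client encapsulates to the server's certified KEM public key, and the server authenticates implicitly by being the only party able to decapsulate. A minimal sketch, using a deliberately insecure toy finite-field DH as the KEM (all names are my own invention, not the draft's wire format):

```python
import hashlib
import secrets

# Toy KEM built from finite-field Diffie-Hellman. For illustration only:
# this group is far too small and simple to be secure.
P = 2**127 - 1  # a Mersenne prime
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encap(pk):
    # Sender picks ephemeral r; ciphertext is g^r, shared secret is H(pk^r).
    r = secrets.randbelow(P - 2) + 1
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return pow(G, r, P), ss

def decap(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

# AuthKEM-style server authentication, very roughly:
server_sk, server_pk = keygen()  # server_pk is what the certificate vouches for

# Client verifies the certificate chain (omitted here), encapsulates to the
# certified key, and mixes the shared secret into the handshake schedule.
ct, client_ss = encap(server_pk)

# Server decapsulates; only the holder of server_sk derives the same secret,
# so key possession is proven implicitly, with no signature on the wire.
server_ss = decap(server_sk, ct)
assert client_ss == server_ss
```

The bandwidth argument is that the server's per-handshake contribution is one KEM ciphertext rather than a PQ signature, which for schemes like Kyber vs. Dilithium is a substantial saving.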
Re: [TLS] Merkle Tree Certificates
On Fri, Mar 10, 2023 at 05:09:10PM -0500, David Benjamin wrote:
> I've just uploaded a draft, below, describing several ideas we've
> been mulling over regarding certificates in TLS. This is a draft-00
> with a lot of moving parts, so think of it as the first pass at
> some of the ideas that we think fit well together, rather than a
> concrete, fully-baked system.
>
> Thoughts? We're very eager to get feedback on this.

Some quick comments / ideas:

- I think it would be easier for subscribers to get inclusion proofs from
  the transparency service than from the certificate authority. This is
  because issuance is heavily asynchronous, whereas most servers assume
  ACME is essentially synchronous. If certificates are canonicalized (this
  is mostly a matter of ensuring the names are always sorted), there could
  be an endpoint to download known inclusion proofs by certificate hash.
  Or maybe even have both, and subscribers can use whichever is more
  convenient.
- I don't think there are any sane uses for >64kB claims, so the
  claim_info length could be shortened to 16 bits. I don't see a rule for
  how claims are sorted within each type, only how different types are
  sorted.
- If each claim was in its own Claim, then one could maybe even shorten it
  to 8 bits. Similarly, one could merge ipv4/ipv6 and dns/dns_wildcard.
  This could also simplify sorting: sort by type, then lexicographically
  by claim contents.
- I don't think anybody is going to use signatures with >64kB keys, so the
  subject_info length could be shortened to 16 bits.
- What does it mean that in this document the hash is always SHA-256?
- Apparently the issuer id is limited to 32 octets. This could be noted in
  the definition.
- I think it would be easier if lifetime was expressed in batch durations.
  Then one would not need window size, and especially would not have to
  handle lifetime / batch_duration not being an integer!
- The root hash being dependent on issuer and batch number iff there are
  multiple assertions looks very odd. An empty assertion list might be
  special.
  But this also happens for one assertion.
- I think LabeledWindow should add 64 spaces in front, so it reuses the
  TLS 1.3 signature format. This reduces the risk of cross-protocol attack
  if the key gets reused anyway (despite there being a MUST NOT
  requirement).
- Is there a reason for the type of trust_anchor_data to vary by
  proof_type? Why not always have MerkleTreeTrustAnchor there?
- And the proof_data length field should probably be 16 bits. I don't
  think proof_data will ever exceed 8kB.
- For type-independent expiry info to be helpful, this must somehow be
  plumbed through to the TLS server.
- Even if ACME itself allows for long processing delays, many (most?) ACME
  clients do not.
- Multiple orders from a single newOrder, or multiple certificates in a
  single order, sounds like it would break assumptions made in ACME rather
  badly, and is thus a recipe for trouble.
- Isn't cert_type deprecated?
- "We may need to define a third one." You mean a fourth one?
- I think the uniform certificate format is already a requirement in TLS
  1.3. And the OpenPGP format is banned in TLS 1.3 anyway. So parsing
  extension blocks without knowing the certificate type is no problem.

> On Fri, Mar 10, 2023 at 4:38 PM wrote:
>
> A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt
> has been successfully submitted by David Benjamin and posted to the
> IETF repository.
>
> Name:          draft-davidben-tls-merkle-tree-certs
> Revision:      00
> Title:         Merkle Tree Certificates for TLS
> Document date: 2023-03-10
> Group:         Individual Submission
> Pages:         45

-Ilari
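Since several of the comments above turn on how the root hash and inclusion proofs relate, here is a generic sketch of Merkle inclusion-proof construction and verification. This is a plain domain-separated binary hash tree of my own construction, assuming a power-of-two leaf count, not the draft's exact assertion/LabeledWindow encoding:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(assertion: bytes) -> bytes:
    return h(b"\x00" + assertion)      # domain-separate leaves...

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)   # ...from internal nodes

def build_levels(leaves):
    """All tree levels, bottom-up. Assumes len(leaves) is a power of two."""
    levels = [[leaf_hash(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([node_hash(prev[i], prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """The sibling hashes on the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return proof

def verify(root, assertion, index, proof):
    """Recompute the path to the root from the assertion and siblings."""
    node = leaf_hash(assertion)
    for sibling in proof:
        node = node_hash(node, sibling) if index % 2 == 0 \
            else node_hash(sibling, node)
        index //= 2
    return node == root

assertions = [f"assertion {i}".encode() for i in range(8)]
levels = build_levels(assertions)
root = levels[-1][0]
proof = inclusion_proof(levels, 5)
assert verify(root, assertions[5], 5, proof)
assert not verify(root, b"forged", 5, proof)
```

The point relevant to Ilari's comments: the proof is just log2(n) sibling hashes, so whether the subscriber fetches it from the CA or from a transparency service, it is small and can be looked up by a hash of the canonicalized certificate.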
Re: [TLS] Merkle Tree Certificates
Hi Hubert,

I totally agree on your points about time-to-first-byte vs
time-to-last-byte. We (some of my previous work too) have been focusing on
time-to-first-byte, which makes some of these handshakes look bad for the
tails of the 80-95th percentiles. But in reality, time-to-last-byte or
time-to-some-byte-that-makes-the-user-think-there-is-progress would be the
more accurate measurement to assess these connections.

> Neither cached data nor Merkle tree certificates reduce round-trips

Why is that? Assuming Dilithium WebPKI and excluding CDNs, QUIC sees 2
extra round-trips (amplification, initcwnd) and TLS sees 1 (initcwnd).
Trimming down the "auth data" will at least get rid of the initcwnd extra
round-trip. I think the Merkle tree cert approach fits in the default QUIC
amplification window too, so it would get rid of that round-trip in QUIC
as well.

-----Original Message-----
From: Hubert Kario
Sent: Wednesday, March 22, 2023 8:46 AM
To: David Benjamin
Cc: Kampanakis, Panos ; ; Devon O'Brien
Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates

On Tuesday, 21 March 2023 17:06:54 CET, David Benjamin wrote:
> On Tue, Mar 21, 2023 at 8:01 AM Hubert Kario wrote:
>
>> On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote:
>>> I don't think flattening is the right way to look at it. See my
>>> other reply for a discussion about flattening, and how this does a
>>> bit more than that. (It also handles SCTs.)
>>>
>>> As for RFC 7924, in this context you should think of it as a funny
>>> kind of TLS resumption. In clients that talk to many ...
>>> https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md
>>> and https://github.com/privacycg/storage-partitioning.
>> Sorry, but as long as the browsers are willing to perform session
>> resumption I'm not buying the "cached info is a privacy problem".
>
> I'm not seeing where this quote comes from. I said it had analogous
> properties to resumption, not that it was a privacy problem in the
> absolute.

I meant it as a summary, not as a quote.

> The privacy properties of resumption and cached info depend on the
> situation. If you were okay correlating the two connections, both are
> okay in this regard. If not, then no. rfc8446bis discusses this:
> https://tlswg.org/tls13-spec/draft-ietf-tls-rfc8446bis.html#appendix-C.4
>
> In browsers, the correlation boundaries (across *all* state, not just
> TLS) were once browsing-profile-wide, but they're shifting to this
> notion of "site". I won't bore the list with the web's security model,
> but roughly the domain part of the top-level (not the same as
> destination!) URL. See the links above for details.
>
> That equally impacts resumption and any hypothetical deployment of
> cached info. So, yes, within those same bounds, a browser could deploy
> cached info. Whether it's useful depends on whether there are many cases
> where resumption wouldn't work, but cached info would. (E.g. because
> resumption has different security properties than cached info.)

The big difference is that tickets generally should be valid only for a
day or two, while cached info, just like cookies, can be valid for many
months if not years. Now, a privacy-focused user may decide to clear the
cookies and cached info daily, while others may prefer the slightly
improved performance on first visit after a week or month break.

>> It also completely ignores the encrypted client hello
>
> ECH helps with outside observers correlating your connections, but it
> doesn't do anything about the server correlating connections. In the
> context of correlation boundaries within a web browser, we care about
> the latter too.

How's that different from cookies?
Which don't correlate, but cryptographically prove previous visit?

>> Browser doesn't have to cache the certs since the beginning of time to
>> be of benefit, a few hours or even just current boot would be enough:
>>
>> 1. if it's a page visited once then all the tracking cookies and
>>    javascript will be an order of magnitude larger download anyway
>> 2. if it's a page visited many times, then optimising for the subsequent
>>    connections is of higher benefit anyway
>
> I don't think that's quite the right dichotomy. There are plenty of
> reasons to optimize for the first connection, time to first bytes, etc.
> Indeed, this WG did just that with False Start and TLS 1.3 itself.
> (Prior to those, TLS 1.2 was 2-RTT for the first connection and 1-RTT
> for resumption.)

In my opinion time to first byte is a metric that's popular because it's
easy to measure, not because it's representative. Feel free to point me to
double-blind studies with representative sample sizes showing otherwise.

Yes, reducing round-trips is important as latency of connection is not corr