Re: [TLS] Comments on draft-celi-wiggers-tls-authkem-00.txt

2021-07-12 Thread Kampanakis, Panos

> So, while I'm not that enthusiastic about paying a few K, I think on balance 
> it's better than doing this kind of major rearchitecture of TLS.

+1. KEMTLS is a great scheme but significantly changes the TLS state machine. 
It introduces implicit and explicit auth concepts which do not exist in TLS 1.3 
and would need further security proofs and study. Also, it may save ~1-2 KB of 
data (assuming Falcon or Dilithium), but it still uses PQ Sigs in the PKI, which 
means we go to 10+ KB for the cert chain. So we alleviate 1-2 KB, but still have 
to deal with 10+. Also note ia.cr/2019/1447, which makes the argument that the 
more data we send, the more the slowdown in lossy environments (which intuitively 
makes sense, as the loss probability 1-(1-p)^n increases with the number of 
packets n). Imo draft-celi-wiggers-tls-authkem should be considered for future 
versions of TLS, as the drastic changes do not justify the marginal benefit.
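
To illustrate that loss-probability point, here is a tiny sketch; the 1% loss 
rate and ~1200-byte packets are made-up assumptions, purely for illustration:

    # Sketch: probability that at least one packet of a flight is lost,
    # i.e. 1 - (1 - p)^n. Loss rate and packet size are assumed values.
    p = 0.01                                 # assumed per-packet loss rate
    for flight_bytes in (4000, 9000, 14000, 25000):
        n = -(-flight_bytes // 1200)         # packets of ~1200 B each
        loss = 1 - (1 - p) ** n
        print(f"{flight_bytes:6d} B -> {n:2d} pkts -> P(>=1 loss) = {loss:.1%}")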

And a couple of comments regarding Ekr’s points:

> - If you are doing TLS over TCP, then the server can use IW10 as specified in 
> RFC 6928. This will allow the server's first flight to be about 14 KB, which 
> should be large enough.

That is not completely accurate. The smallest Dilithium parameter set will fit in 
10 MSS only when doing plain TLS (no SCTs, no OCSP staples) with up to 2 ICAs. 
Anything else will go beyond 14 KB. Now if we talk web (including SCTs), then only 
1 ICA would fit with Dilithium2.

Having said that, if we talk about the other lattice-based NIST PQ Sig 
Finalist, Falcon-512, then there are more TLS cases (up to 3 ICAs) where the 
data would fit in a TCP initcwnd. For Falcon-1024 that is not the case.
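
For anyone who wants to redo the tally above, a minimal sketch; the per-cert 
X.509 overhead and the MSS are assumptions, so treat the output as a rough 
illustration rather than the exact figures quoted above:

    # Sketch: does a Dilithium2 server cert chain fit in IW10 (RFC 6928)?
    MSS = 1460
    IW10 = 10 * MSS                          # ~14.6 KB server budget
    PK, SIG = 1312, 2420                     # Dilithium2 public key / signature
    META = 300                               # assumed per-cert X.509 overhead

    def server_auth_bytes(icas):
        per_cert = PK + SIG + META           # leaf cert and each ICA cert
        return (1 + icas) * per_cert + SIG   # plus the CertificateVerify sig

    for icas in (1, 2, 3):
        total = server_auth_bytes(icas)
        print(f"{icas} ICA(s): ~{total} B -> "
              f"{'fits in' if total <= IW10 else 'exceeds'} IW10")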

> - If you are doing QUIC, then the server is restricted to 3x the client's 
> initial message, which is potentially a problem with very large server first 
> flights, but the client can add extra bytes to its Initial messages to 
> increase the limit [0]

Padding the client Initial message to 1200 B would mean the QUIC amplification 
attack protection would kick in for any PQ KEM Round 3 Finalist and any PQ Sig 
Finalist, even Falcon-512, which is the smallest one. The smallest PQ Sig will 
still be >3x when used with X25519 plus the biggest PQ KEM Finalists. I am not 
sure what dummy key exchange data, and how much of it, someone could put in the 
client message to reach 2.5 KB in the request so that the response fits in the 
necessary 3x2.5 KB window (assuming 2 ICAs).
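
The 3x arithmetic, as a sketch; the server flight size is an assumed 
illustrative value (~7.5 KB, consistent with the 3x2.5 KB window above):

    # Sketch: QUIC anti-amplification limit (3x received bytes, RFC 9000).
    client_initial = 1200                    # minimum padded client Initial
    budget = 3 * client_initial              # 3600 B before address validation
    server_flight = 7500                     # assumed PQ server first flight
    print("amplification limit hit:", server_flight > budget)
    print("client bytes needed: ~", -(-server_flight // 3), "B")   # ~2500 B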

I think the best answer to the extra round-trip problems, which are inevitable 
for PQ Sigs (as shown in ia.cr/2020/071 and 
dl.acm.org/doi/10.1145/3386367.3431305), is Martin's draft-thomson-tls-sic, 
which should be revived imo.

Cert compression will not help, as these big certs mostly consist of big keys 
and sigs, which are essentially random sequences and thus do not benefit from 
compression.
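
A quick way to convince yourself of that: compress something repetitive and 
something random; zlib here is just a stand-in for the TLS certificate 
compression algorithms:

    import os, zlib

    # Sketch: repetitive bytes compress well, random key/sig bytes do not.
    repetitive = b"-----BEGIN CERTIFICATE-----\n" * 100
    random_sig = os.urandom(2420)            # a Dilithium2-sized "signature"
    for label, data in (("repetitive", repetitive), ("random sig", random_sig)):
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{label}: {len(data)} B, compressed to {ratio:.0%} of original")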

Rgs,
Panos






From: TLS  On Behalf Of Eric Rescorla
Sent: Monday, July 12, 2021 9:10 PM
To: Douglas Stebila 
Cc:  
Subject: RE: [EXTERNAL] [TLS] Comments on draft-celi-wiggers-tls-authkem-00.txt






On Mon, Jul 12, 2021 at 5:58 PM Douglas Stebila <dsteb...@gmail.com> wrote:
Hi Eric,

The main motivation is that, in some cases, post-quantum signatures are larger 
in terms of communication size compared to a post-quantum KEM, under the same 
cryptographic assumption.

For example, the KEM Kyber (based on module LWE) at the 128-bit security level 
has 800-byte public keys and 768-byte ciphertexts.  The matching signature 
scheme Dilithium (also based on module LWE) has 1312-byte public keys and 
2420-byte signatures.  Doing KEM-based server authentication rather than 
signature-based server authentication would thus save 2164 bytes per handshake.
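
A quick sanity check of that saving, using the sizes quoted above:

    # Sketch: bytes saved by KEM-based vs signature-based server auth.
    print((1312 + 2420) - (800 + 768))   # Dilithium pk+sig minus Kyber pk+ct -> 2164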

Doug,

Thanks for the explanation.

I agree that all things being equal it's good to save bytes, but in this case, 
I think this is the wrong tradeoff.

In general, TLS handshake latency is dominated not by message size but by the 
number of round trips you have to use to perform the handshake, which is only 
loosely coupled to the number of bytes.

Specifically:
- If you are doing TLS over TCP, then the server can use IW10 as specified in 
RFC 6928. This will allow the server's first flight to be about 14 KB, which 
should be large enough.
- If you are doing QUIC, then the server is restricted to 3x 

Re: [TLS] Comments on draft-celi-wiggers-tls-authkem-00.txt

2021-07-12 Thread Kampanakis, Panos
Hi Uri,

If we are talking NIST Level 5 (and I am assuming you are discussing mTLS), 
have you calculated the total CertVerify+cert chain sizes there assuming 2 ICAs 
let's say? 

And would constrained devices or mediums that sweat about 5KB really be able to 
support PQ KEMs and Sigs at NIST Level 5?
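
For context, roughly what that calculation looks like; a sketch using the 
Dilithium5 sizes quoted later in this thread and an assumed ~300 B of X.509 
overhead per cert:

    # Sketch: per-direction auth bytes for mTLS at NIST Level 5 with 2 ICAs.
    PK, SIG, META = 2592, 4595, 300          # Dilithium5 pk/sig + assumed overhead
    chain = 3 * (PK + SIG + META)            # leaf cert + 2 ICA certs
    print(chain + SIG)                       # + CertificateVerify -> ~27 KB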




-Original Message-
From: TLS  On Behalf Of Blumenthal, Uri - 0553 - MITLL
Sent: Monday, July 12, 2021 11:39 PM
To: Douglas Stebila ; Eric Rescorla 
Cc:  
Subject: RE: [EXTERNAL] [TLS] Comments on draft-celi-wiggers-tls-authkem-00.txt




Let me emphasize the reasons Douglas brought up. Note that I need to use NIST 
Sec Level 5 algorithms. So, Kyber-1024 and Dilithium5 (other algorithms show 
even worse ratio between KEM and signature!).

Communications costs:
- Difference in public key sizes: 1568 bytes of Kyber vs. 2592 bytes of 
Dilithium => 1024 extra bytes to carry over channel each way;
- Signature: extra 4595 bytes each way, because in addition to exchanging certs 
(aka "signed public keys", which is inevitable) you need to sign the exchange 
and communicate that signature across;
- Total: 5619 extra bytes each way. For peer-to-peer broadband connections, you 
can say "so what?". But my links are *very* austere.

Computation costs (ballpark, on a powerful CPU):
- KEM: keygen 15us, encap 18us, decap 14us (say, double encap and decap for 
PFS-providing exchange);
- Signature: sign 113us, verify 55us;
- Comparison: 134us for signature-less KEM vs. 215us for TLS-like exchange => 
almost twice as long;
- Difference may be negligible for Intel Xeon, but for my much weaker hardware 
it matters.

So, for constrained environments with austere comm links, signature-less 
"authkem" is a godsend.
Big servers that need to support many clients (so they care how much CPU cycles 
and comm bytes they spend on every connection) would appreciate these savings 
too.

@ekr, I hope this provides a convincing explanation of why "authkem" is needed.

P.S. I know that Falcon has much more favorable sizes - but (a) it takes three 
times as long to sign, and (b) it uses FP calculations, which isn't great to 
implement in my environment.
--
Regards,
Uri

There are two ways to design a system. One is to make it so simple there are 
obviously no deficiencies.
The other is to make it so complex there are no obvious deficiencies.

 -  C. A. R. Hoare


On 7/12/21, 20:59, "TLS on behalf of Douglas Stebila"  wrote:

Hi Eric,

The main motivation is that, in some cases, post-quantum signatures are 
larger in terms of communication size compared to a post-quantum KEM, under the 
same cryptographic assumption.

For example, the KEM Kyber (based on module LWE) at the 128-bit security 
level has 800-byte public keys and 768-byte ciphertexts.  The matching 
signature scheme Dilithium (also based on module LWE) has 1312-byte public keys 
and 2420-byte signatures.  Doing KEM-based server authentication rather than 
signature-based server authentication would thus save 2164 bytes per handshake.

We would still need digital signatures for a PKI (i.e., the root and 
intermediate CAs would sign certificates using PQ digital signature schemes), 
but the public key of the endpoint server can be a KEM public key, not a 
digital signature public key.

Douglas


> On Jul 12, 2021, at 20:30, Eric Rescorla  wrote:
>
> Hi folks,
>
> I have just given draft-celi-wiggers-tls-authkem-00.txt a quick
> read. I'm struggling a bit with the rationale, which I take to be
> these paragraphs:
>
>In this proposal we use the DH-based KEMs from [I-D.irtf-cfrg-hpke].
>We believe KEMs are especially worth discussing in the context of the
>TLS protocol because NIST is in the process of standardizing post-
>quantum KEM algorithms to replace "classic" key exchange (based on
>elliptic curve or finite-field Diffie-Hellman [NISTPQC]).
>
>This proposal draws inspiration from [I-D.ietf-tls-semistatic-dh],
>which is in turn based on the OPTLS proposal for TLS 1.3 [KW16].
>However, these proposals require a non-interactive key exchange: they
>combine the client's public key with the server's long-term key.
>This imposes a requirement that the ephemeral and static keys use the
>same algorithm, which this proposal does not require.  Additionally,
>there are no post-quantum proposals for a non-interactive key
>exchange currently considered for standardization, while several KEMs
>are on the way.
>
> I see why this motivates using a KEM for key establishment, but I'm
> not sure it motivates this design, whic

Re: [TLS] Comments on draft-celi-wiggers-tls-authkem-00.txt

2021-07-22 Thread Kampanakis, Panos
Thx. Understood. 

>> - can cache or fetch the peer public keys in order to do KEMTLS
> I did not say that. As far as I can tell now, there's no way to fetch 
> (outside/OOB of this protocol) peer's pub keys or certs.

draft-ietf-tls-esni does it with DNS HTTPS RRs, but indeed it would require new 
effort to make that happen here, along with additional operational challenges.

If caching the peer public key is an option for your usecase, then cache 
management could be an operational challenge for clients that talk to many 
peers.

If caching peer public keys is not an option either, then you will have to pay 
the price of an extra round-trip to get the peer KEM public key. 



-Original Message-
From: Blumenthal, Uri - 0553 - MITLL  
Sent: Thursday, July 22, 2021 8:49 AM
To: Kampanakis, Panos 
Cc: tls@ietf.org; Douglas Stebila ; Eric Rescorla 

Subject: RE: [EXTERNAL] [TLS] Comments on draft-celi-wiggers-tls-authkem-00.txt

On Jul 22, 2021, at 00:46, Kampanakis, Panos  wrote:
> 
> Hi Uri,
> 
> Thank you for the clarifications. 
> 
> So you have a usecase that 
> - want to use PQ algorithms
> - is significantly affected by an extra 1-2 or 4-5KB on the link
> - does not send a cert chain, only leaf certs

Yes. 

> - can cache or fetch the peer public keys in order to do KEMTLS

I did not say that. As far as I can tell now, there's no way to fetch 
(outside/OOB of this protocol) peer's pub keys or certs. 

Caching received and validated keys to ease the reconnects is an interesting 
idea. I'll need to figure out whether the comm savings outweigh the extra 
complexity and branching of the protocol. 

> Although I don't consider it the general usecase, maybe KEMTLS is the way to 
> go there. 

I'm 99.9% sure it is.

> Other good options imo for it would be draft-ietf-tls-ctls and rfc7924 to 
> save even more on data put on the link.

Thank you! Seems applicable - let me check. 

Thanks

> -Original Message-
> From: Blumenthal, Uri - 0553 - MITLL  
> Sent: Tuesday, July 13, 2021 1:17 AM
> To: Kampanakis, Panos 
> Cc:  ; Douglas Stebila ; Eric 
> Rescorla 
> Subject: RE: [EXTERNAL] [TLS] Comments on 
> draft-celi-wiggers-tls-authkem-00.txt
> 
>> If we are talking NIST Level 5 (and I am assuming you are
>> discussing mTLS), 
> 
> Yes. ;-)
> 
>> ...have you calculated the total CertVerify+cert chain sizes
>> there assuming 2 ICAs let's say? 
> 
> More or less. ;-)
> 
> My use case has all the ICAs pre-loaded - the transmitted chain contains only 
> one entity cert. I'm sacrificing flexibility for performance under 
> constraints. Size is the real enemy here.
> 
> 
>> And would constrained devices or mediums that sweat about 5KB
>> really be able to support PQ KEMs and Sigs at NIST Level 5?
> 
> My tests showed that they *do* support PQ KEMs (NTRU and Kyber - haven't 
> tried McEliece ;) and Sigs (Falcon and Dilithium - haven't tried Rainbow ;) 
> at Level 5. Caveat - they do only Sig *verification* (which suits me fine).
> 
> (I posted benchmarks from Intel Core i9, but they work acceptably well on the 
> "smaller" chips.)
> 
> Also, sorry if I did not make it clear - it's not the *devices* themselves 
> that sweat 5KB, it's their austere links.
> 
> 
> 
>-Original Message-
>From: TLS  On Behalf Of Blumenthal, Uri - 0553 - 
> MITLL
>Sent: Monday, July 12, 2021 11:39 PM
>To: Douglas Stebila ; Eric Rescorla 
>Cc:  
>Subject: RE: [EXTERNAL] [TLS] Comments on 
> draft-celi-wiggers-tls-authkem-00.txt
> 
> 
> 
> 
>Let me emphasize the reasons Douglas brought up. Note that I need to use 
> NIST Sec Level 5 algorithms. So, Kyber-1024 and Dilithium5 (other algorithms 
> show even worse ratio between KEM and signature!).
> 
>Communications costs:
>- Difference in public key sizes: 1568 bytes of Kyber vs. 2592 bytes of 
> Dilithium => 1024 extra bytes to carry over channel each way;
>- Signature: extra 4595 bytes each way, because in addition to exchanging 
> certs (aka "signed public keys", which is inevitable) you need to sign the 
> exchange and communicate that signature across;
>- Total: 5619 extra bytes each way. For peer-to-peer broadband 
> connections, you can say "so what?". But my links are *very* austere.
> 
>Computation costs (ballpark, on a powerful CPU):
>- KEM: keygen 15us, encap 18us, decap 14us (say, double encap and decap 
> for PFS-providing exchange);
>- Signature: sign 113us, verify 55us;

Re: [TLS] New Version Notification for draft-kampanakis-tls-scas-latest-00.txt (ICA Supression)

2022-02-13 Thread Kampanakis, Panos
Hi TLS WG,

This draft draft-kampanakis-tls-scas-latest is attempting to resurrect Martin’s 
original draft-thomson-tls-sic. It proposes using two new TLS 1.3 flags 
(draft-ietf-tls-tlsflags ) to signal to the TLS server or client to not send 
its Intermediate CA (ICA) certificates. 

It assumes that we can pre-cache or load all the necessary intermediate CAs in 
order to build the cert chains to authenticate peers. As a data point, the size 
of a full ICA cache for the web would be 1-2MB (1-2 thousand ICAs) based on 
testing and 3rd party data [7][8]. 1-2MB is trivial for most usecases. When it 
is not, other caching mechanisms can be used. 
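
As a sanity check of that estimate, a sketch assuming roughly 1 KB per 
DER-encoded ICA certificate:

    # Sketch: size of a full WebPKI ICA cache.
    icas = 1800                              # ~number of disclosed WebPKI ICAs
    print(f"~{icas * 1024 / 1e6:.1f} MB")    # assumed ~1 KB per cert -> ~1.8 MB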

The main usecases that would benefit from this would be 
- post-quantum (D)TLS (PQ certs are going to be big and thus introduce issues 
for (D)TLS and QUIC [1][2][3][4]).
- EAP-TLS in cases with big cert chains [5][6]
- constrained environments where even a few KB in a (D)TLS handshake matter

We believe we have addressed the comments regarding the original draft 
https://mailarchive.ietf.org/arch/browse/tls/?q=draft-thomson-tls-sic  

Feedback and discussion are welcome. 

Rgs,
Panos

[1] https://blog.cloudflare.com/sizing-up-post-quantum-signatures/   
[2] 
https://www.ndss-symposium.org/ndss-paper/post-quantum-authentication-in-tls-1-3-a-performance-study/
  
[3] https://dl.acm.org/doi/10.1145/3386367.3431305 
[4] 
https://assets.amazon.science/00/f8/aa76ff93472d9b55b6a84716e34c/speeding-up-post-quantum-tls-handshakes-by-suppressing-intermediate-ca-certificates.pdf
 
[5] https://datatracker.ietf.org/doc/html/draft-ietf-emu-eaptlscert 
[6] https://datatracker.ietf.org/doc/html/draft-ietf-emu-eap-tls13 
[7] https://github.com/FiloSottile/intermediates  
[8] 
https://ccadb-public.secure.force.com/mozilla/MozillaIntermediateCertsCSVReport 
 

 

-Original Message-
From: internet-dra...@ietf.org  
Sent: Sunday, February 13, 2022 2:34 PM
To: Bas Westerbaan ; Bytheway, Cameron 
; Martin Thomson ; Kampanakis, Panos 

Subject: [EXTERNAL] New Version Notification for 
draft-kampanakis-tls-scas-latest-00.txt




A new version of I-D, draft-kampanakis-tls-scas-latest-00.txt
has been successfully submitted by Panos Kampanakis and posted to the IETF 
repository.

Name:   draft-kampanakis-tls-scas-latest
Revision:   00
Title:  Suppressing CA Certificates in TLS 1.3
Document date:  2022-02-13
Group:  Individual Submission
Pages:  10
URL:
https://www.ietf.org/archive/id/draft-kampanakis-tls-scas-latest-00.txt
Status: 
https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-latest/
Htmlized:   
https://datatracker.ietf.org/doc/html/draft-kampanakis-tls-scas-latest


Abstract:
   A TLS client or server that has access to the complete set of
   published intermediate certificates can inform its peer to avoid
   sending certificate authority certificates, thus reducing the size of
   the TLS handshake.




The IETF Secretariat


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New Version Notification for draft-kampanakis-tls-scas-latest-00.txt (ICA Supression)

2022-02-15 Thread Kampanakis, Panos
Good comments, thank you Ilari. 

To answer your comments  

> 1) There are a few "shall" in the text. Should those be "SHALL"?

The two "shall" refer to draft-ietf-tls-tlsflags. Based on experience from 
previous drafts, we do not want to repeat normative language from another 
draft, so we kept them lowercase. We can still consider making them normative 
though. 


> 2) Section 3.2: "To prevent a failed TLS connection, a client could chose to 
> not send its intermediates regardless of the flag from the server, if it has 
> a reason to believe the issuing CAs do not exist in the server ICA list."
> ... Shouldn't the client send its intermediates if it thinks the server does 
> not have them. 

You are right. Nit. We will fix it. I created issue 
https://github.com/csosto-pk/tls-suppress-intermediates/issues/5 for it. 


> 3) Why there are two flags? I do not see a case where both would be sent in 
> the same message.

In the original draft there was only one. But we want for both the client and 
server (CertReq) to be able to signal to their peer to suppress CAs. 
draft-ietf-tls-tlsflags defines that the peer needs to acknowledge the flag, 
thus we needed one per direction. 


> 4) In WebPKI, there are some cornercases (constrained ICAs) where the client 
> might be missing a certificate or certificates in the chain.
> Currently the WebPKI root program rules allow not disclosing "technically 
> constrained" certificates (but there are plans to change this).

Good point. That has come up in discussions with my co-authors. As Martin was 
pointing out, a lot hinges on the semantics of the tls_flags bit. We probably 
can say that it means "I have all the intermediates I am willing to accept". 
That's a little too absolute for the web PKI as it stands. We don't have stats 
on how often we'd fail as a result; we would have to check, but unconstrained 
intermediates probably aren't exceptional. The flag should probably say "I have 
all the *unconstrained* intermediates that I'm willing to accept" or maybe "I 
have all the intermediates that I'm willing to accept, unless it's the WebPKI, 
in which case I only have the unconstrained intermediates". 

But if MSRP 2.8 requires disclosing constrained intermediates, then "I have all 
the intermediates I am willing to accept" may just suffice. 

I created issue 
https://github.com/csosto-pk/tls-suppress-intermediates/issues/6  for this.


> 5) In the client auth scenario, the server might have exhaustive list of all 
> issuing ICAs it accepts, so including any ICAs is never necressary. However, 
> this might be handled even currently by not giving the client a chain. 
> However, doing this in other direction can be quite dangerous without prior 
> agreement.

I am not sure I am following that argument. If the client does not have a chain, 
what happens if the server does not have all the intermediates?

By "quite dangerous" do you mean that, if they have not pre-agreed on the ICA 
list, there could be an auth failure, and recovery will not be easy because the 
server can't track which clients it is expecting ICAs from? Am I getting you 
right? 




-Original Message-
From: TLS  On Behalf Of Ilari Liusvaara
Sent: Monday, February 14, 2022 2:43 AM
To: tls@ietf.org
Subject: RE: [EXTERNAL] [TLS] New Version Notification for 
draft-kampanakis-tls-scas-latest-00.txt (ICA Supression)




On Mon, Feb 14, 2022 at 03:33:05AM +, Kampanakis, Panos wrote:
> Hi TLS WG,
>
> This draft draft-kampanakis-tls-scas-latest is attempting to resurrect 
> Martin’s original draft-thomson-tls-sic. It proposes using two new TLS
> 1.3 flags (draft-ietf-tls-tlsflags ) to signal to the TLS server or 
> client to not send its Intermediate CA (ICA) certificates.
>
> Feedback and discussion are welcome.
>
> -Original Message-
> From: internet-dra...@ietf.org 
> Sent: Sunday, February 13, 2022 2:34 PM
> To: Bas Westerbaan ; Bytheway, Cameron 
> ; Martin Thomson ; Kampanakis, 
> Panos 
> Subject: [EXTERNAL] New Version Notification for 
> draft-kampanakis-tls-scas-latest-00.txt
>
>
>
>
> A new version of I-D, draft-kampanakis-tls-scas-latest-00.txt
> has been successfully submitted by Panos Kampanakis and posted to the IETF 
> repository.
>
> Name:   draft-kampanakis-tls-scas-latest
> Revision:   00
> Title:  Suppressing CA Certificates in TLS 1.3
> Document

Re: [TLS] New Version Notification for draft-kampanakis-tls-scas-latest-00.txt (ICA Supression)

2022-02-25 Thread Kampanakis, Panos
> I only have some isolated random datapoints on number of disclosed WebPKI 
> ICAs since 2021-02-08 (a bit over year ago), but during that time, that 
> number has grown from 1669 to 1820.

Thx Ilari. Understood. 

We are looking into how we could quantify how the complete ICA list changes 
over time in order to evaluate TBD3. Probably it would be on a days-to-weeks 
timeline rather than years, but that remains to be seen. Of course that would 
not cover usecases other than WebPKI, but that is probably the most dynamic one. 



-Original Message-
From: TLS  On Behalf Of Ilari Liusvaara
Sent: Saturday, February 19, 2022 6:15 AM
To: tls@ietf.org
Subject: RE: [EXTERNAL] [TLS] New Version Notification for 
draft-kampanakis-tls-scas-latest-00.txt (ICA Supression)




On Sat, Feb 19, 2022 at 03:59:23AM +, Kampanakis, Panos wrote:

> - If we can assume an OOB mechanism to load the ICAs then we can 
> simplify things. Practically we can assume there is no failure.
> Agreed, but I am not sure that we should not include any non-normative 
> language for the inadvertent corner case though.
> There should be a fallback, one that we are assuming will never 
> happen, but an implementer should account for it.

It seems to me that the dominant failure modes are:

- Using old ICA list that is missing some newly minted ICA.
- Using custom TA that is missing ICA data.


> - Connection re-establishment affects the security and privacy 
> assumptions and should be captured. I am not sure the concern is worse 
> than the regular fingerprinting text already in the draft, but point 
> taken. We can improve the text and I created an issue for it. 
> https://github.com/csosto-pk/tls-suppress-intermediates/issues/12

Regarding security and privacy, the most severe impact of any attack I can come 
up with is determining if some arbitrary ICA is on the ICA list or not (for 
passive attacks, that is restricted to the issuing ICA used by the server). 
Practical impact of attacker being able to do that depends on how many 
endpoints share that same ICA list.

Rough outline of the attack (active variant): Fabricate a certificate 
purporting to be from some ICA, send it to client and observe if the client 
retries (ICA not on the list) or just fails (ICA is on the list).


> I would be interested to track how that ICA list has been changing 
> over time. Let’s see if we can get data on that for FFs preload list, 
> Filippo’s or others.

I only have some isolated random datapoints on number of disclosed WebPKI ICAs 
since 2021-02-08 (a bit over year ago), but during that time, that number has 
grown from 1669 to 1820.



-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PQC key exchange sizes

2022-07-26 Thread Kampanakis, Panos
Hi Ilari,

> - DTLS-level fragmentation. There are buggy implementations that break if one 
> tries this.

DTLS servers have been fragmenting and sending cert chains that don’t fit in 
the MTU for a long time. Is this buggy on the TLS client side? Any public info 
you can share about these buggy implementations for my education?



-Original Message-
From: TLS  On Behalf Of Ilari Liusvaara
Sent: Tuesday, July 26, 2022 10:59 AM
To:  
Subject: RE: [EXTERNAL][TLS] PQC key exchange sizes




On Tue, Jul 26, 2022 at 02:15:34PM +0200, Thom Wiggers wrote:
>
> In yesterday’s working group meeting we had a bit of a discussion of 
> the impact of the sizes of post-quantum key exchange on TLS and 
> related protocols like QUIC. As we neglected to put Kyber’s key sizes 
> in our slide deck (unlike the signature schemes), I thought it would 
> be a good idea to get the actual numbers of Kyber onto the mailing list.
>
> Note that in the context of TLS’s key exchange, the public key would 
> be what goes into the ClientHello key_shares extension, and the 
> ciphertext would go into the Server’s ServerHello key_shares extension.
>
> Kyber512: NIST level I, "strength ~AES128"
>   public key: 800 bytes
>   ciphertext: 768 bytes
>   secret key: 1632 bytes
> Kyber768: NIST level III, "~AES192"
>   public key: 1184
>   ciphertext: 1088
>   secret key: 2400 bytes
> Kyber1024: NIST level V, "~AES256"
>   public key: 1568
>   ciphertext: 1568
>   secret key: 3168
>
> So for the key exchange at least, it seems to me Kyber512 should work 
> for TLS and QUIC just fine; Kyber768 might be a bit of a squeeze if 
> you want to stay in QUIC’s default 1300 byte initial packet? Also, I 
> don't really know how the D of DTLS might change the story.

The initial packet size is 1200, so a Kyber768 public key does not fit into a 
single packet. However, the initial packets can be split, so even a Kyber1024 
key does fit into two initial packets (this also doubles the server initial 
window from 3600 to 7200 due to the way the amplification limit works).
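
To make that concrete, a sketch of the budget; the ~150 B allowance for the 
rest of the ClientHello plus QUIC/crypto framing is an assumption:

    # Sketch: will a ClientHello with one Kyber key share fit in a single
    # 1200-byte QUIC Initial?
    LIMIT = 1200
    OVERHEAD = 150                           # assumed framing + other extensions
    for name, pk in (("Kyber512", 800), ("Kyber768", 1184), ("Kyber1024", 1568)):
        total = OVERHEAD + pk
        print(f"{name}: ~{total} B -> "
              f"{'1 Initial' if total <= LIMIT else '2 Initials'}")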


DTLS is a bit more problematic. There are two ways to deal with the key being 
too big to fit in a single IP packet.

- IP-level fragmentation. REALLY SHOULD NOT be used.
- DTLS-level fragmentation. There are buggy implementations that break
  if one tries this.

And in both case, the failure modes are not easy to recover from.




-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PQC key exchange sizes

2022-07-27 Thread Kampanakis, Panos
Gotcha. This is a reasonable explanation for a potential problem, but I would 
also like to see experimental proof that DTLS implementation X, Y, Z have the 
problem. TLS implementations don't deal with big ClientHellos today so we could 
assume they would have a problem, but when tested they do OK for the most part.
 

-Original Message-
From: TLS  On Behalf Of Ilari Liusvaara
Sent: Wednesday, July 27, 2022 10:42 AM
To:  
Subject: RE: [EXTERNAL][TLS] PQC key exchange sizes




On Wed, Jul 27, 2022 at 02:27:12AM +0000, Kampanakis, Panos wrote:
> Hi Ilari,
>
> > - DTLS-level fragmentation. There are buggy implementations that
> >   break if one tries this.
>
> DTLS servers have been fragmenting and sending cert chains that don’t 
> fit in the MTU for a long time. Is this buggy on the TLS client side?

These problems are specific to fragmenting Client Hello. Handling fragmented 
DTLS Client Hello is different from handling fragmented DTLS Certificate (and 
even more so in DTLS 1.3). I think DTLS specification just pretends both cases 
are the same. They are not.


QUIC implementations could have similar issues with multiple initial packets, 
but operating QUIC with fast failure-independent fallback would make failures 
soft.


There is the general principle that if some protocol feature is not used in the 
wild, it tends to break, even if it is a required part of the protocol: either 
the implementation is poorly tested and buggy, assumes the feature does not 
exist, or is missing it entirely.
Combine this with interop failures having outsize impact and old versions 
sticking around far longer than desirable. And I do not think fragmented Client 
Hellos in DTLS or multiple Initials in QUIC are seen much.


One trick with DTLS would be sending a Client Hello with no key shares. That 
causes an extra round trip, but any server that selects a PQC group that causes 
fragmentation would presumably be capable of handling that.



-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-17 Thread Kampanakis, Panos
+1 on the 3 proposed by Scott and P256+Kyber512 proposed by Kris.

From: TLS  On Behalf Of Kris Kwiatkowski
Sent: Wednesday, August 17, 2022 3:31 PM
To: tls@ietf.org
Subject: RE: [EXTERNAL][TLS] WGLC for draft-ietf-tls-hybrid-design





I support those choices, but would also add P256+Kyber512.
* (1) same reason as below (by Scott)
* (2) to be able to declare security of generated keys in FIPS-mode for
  _both_ - classical and post-quantum schemes (once Kyber is standardized).
After double-checking with NIST today, currently there is no clear plan for
updating SP 800-56A with X25519 (opposite to SP 800-186).

Kind regards,
Kris
On 8/17/22 20:06, Scott Fluhrer (sfluhrer) wrote:

So that we get an initial answer to this (so we can put it into the draft - of 
course, we can debate what's in the draft...)



Ilari suggested:



X25519+Kyber768

P384+Kyber768



Well, I would suggest adding in



X25519+Kyber512



For those situations where we need to limit the message size (perhaps DTLS and 
QUIC).



Is the working group happy with that?



-Original Message-

From: TLS  On Behalf Of 
Ilari Liusvaara

Sent: Saturday, August 13, 2022 11:12 AM

To: TLS@ietf.org

Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design



On Fri, Aug 12, 2022 at 06:13:38PM +, Scott Fluhrer (sfluhrer) wrote:

Again, this is late, however Stephen did ask this to be discussed in the

working group, so here we go:



-Original Message-

From: TLS  On Behalf Of 
Stephen Farrell

Sent: Saturday, April 30, 2022 11:49 AM

To: Ilari Liusvaara ; TLS@ietf.org

Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design





Hiya,



On 30/04/2022 10:05, Ilari Liusvaara wrote:

On Sat, Apr 30, 2022 at 01:24:58AM +0100, Stephen Farrell wrote:

- section 5: IMO all combined values here need to have

recommended == "N" in IANA registries for a while and that needs

to be in this draft before it even gets parked. Regardless of

whether or not the WG agree with me on that, I think the current

text is missing stuff in this section and don't recall the WG

discussing that



I think that having recommended = Y for any combined algorithm

requires NIST final spec PQ part and recommended = Y for the

classical part (which allows things like x25519 to be the classical part).



That is, using latest spec for NISTPQC winner is not enough. This

implies recommended = Y for combined algorithm is some years out

at the very least.



I agree, and something like the above points ought be stated in the

draft after discussion in the WG.



Section 5 is 'IANA considerations', and would be where we would list

the various supported hybrids, which we don’t at the moment.



Well, if we were to discuss some suggested hybrids (and we now know

the NIST selection), I would suggest these possibilities:



- X25519 + Kyber512

- P256 + Kyber512

- X448 + Kyber768

- P384 + Kyber768



I would take:



X25519+Kyber768

P384+Kyber768



The reason for taking Kyber768 is because the CRYSTALS team recommends

it. The reason for taking P384 is because it is CNSA-approved, so folks that

need CNSA can use that.



Of course, that is likely to bust packet size limits. I do not think that is an

issue in TLS, but DTLS and QUIC might be another matter entirely (in theory

DTLS and QUIC can handle it just fine, practice might be another matter

entirely. And if such problems are there, it is good to know about those...

This stuff is experimental).





Of course, it's possible that NIST will tweak the definition of Kyber;

that's just a possibility we'll need to live with (and wouldn't change

what hybrid combinations we would initially define)



I would think such changes would just mean the interim post-quantum kex is

not compatible with the final one. Not that big of a deal, there are tens of

thousands of free codepoints. If an implementation needs both, it can

probably share the vast majority of the code.







-Ilari



___

TLS mailing list

TLS@ietf.org

https://www.ietf.org/mailman/listinfo/tls



Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-17 Thread Kampanakis, Panos
I forgot to suggest adding a paragraph in the Sec Considerations section about 
Kyber-512.

Kyber-512 offers a CoreSVP hardness of ~120 bits of security, which is a little 
lower than it should be. The Kyber submission refines the CoreSVP cost by using 
sieving cost simulations and claims that the gate and memory costs are ~2^150 and 
~2^90 respectively, which they argue is better than AES. I think it would be worth 
calling out the CoreSVP hardness and the refined estimate for Kyber-512 in the Sec 
Considerations section.



From: Kampanakis, Panos
Sent: Wednesday, August 17, 2022 4:26 PM
To: 'Kris Kwiatkowski' ; tls@ietf.org
Subject: RE: [EXTERNAL][TLS] WGLC for draft-ietf-tls-hybrid-design

+1 on the 3 proposed by Scott and P256+Kyber512 proposed by Kris.

From: TLS <tls-boun...@ietf.org> On Behalf Of Kris Kwiatkowski
Sent: Wednesday, August 17, 2022 3:31 PM
To: tls@ietf.org
Subject: RE: [EXTERNAL][TLS] WGLC for draft-ietf-tls-hybrid-design





I support those choices, but would also add P256+Kyber512.
* (1) same reason as below (by Scott)
* (2) to be able to declare security of generated keys in FIPS-mode for
  _both_ - classical and post-quantum schemes (once Kyber is standardized).
After double-checking with NIST today, currently there is no clear plan for
updating SP 800-56A with X25519 (opposite to SP 800-186).

Kind regards,
Kris
On 8/17/22 20:06, Scott Fluhrer (sfluhrer) wrote:

So that we get an initial answer to this (so we can put it into the draft - of 
course, we can debate what's in the draft...)



Ilari suggested:



X25519+Kyber768

P384+Kyber768



Well, I would suggest adding in



X25519+Kyber512



For those situations where we need to limit the message size (perhaps DTLS and 
QUIC).



Is the working group happy with that?



-Original Message-

From: TLS <tls-boun...@ietf.org> On Behalf Of Ilari Liusvaara

Sent: Saturday, August 13, 2022 11:12 AM

To: TLS@ietf.org

Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design



On Fri, Aug 12, 2022 at 06:13:38PM +, Scott Fluhrer (sfluhrer) wrote:

Again, this is late, however Stephen did ask this to be discussed in the

working group, so here we go:



-Original Message-

From: TLS <tls-boun...@ietf.org> On Behalf Of Stephen Farrell

Sent: Saturday, April 30, 2022 11:49 AM

To: Ilari Liusvaara <ilariliusva...@welho.com>; TLS@ietf.org

Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design





Hiya,



On 30/04/2022 10:05, Ilari Liusvaara wrote:

On Sat, Apr 30, 2022 at 01:24:58AM +0100, Stephen Farrell wrote:

- section 5: IMO all combined values here need to have

recommended == "N" in IANA registries for a while and that needs

to be in this draft before it even gets parked. Regardless of

whether or not the WG agree with me on that, I think the current

text is missing stuff in this section and don't recall the WG

discussing that



I think that having recommended = Y for any combined algorithm

requires NIST final spec PQ part and recommended = Y for the

classical part (which allows things like x25519 to be the classical part).



That is, using latest spec for NISTPQC winner is not enough. This

implies recommended = Y for combined algorithm is some years out

at the very least.



I agree, and something like the above points ought be stated in the

draft after discussion in the WG.



Section 5 is 'IANA considerations', and would be where we would list

the various supported hybrids, which we don’t at the moment.



Well, if we were to discuss some suggested hybrids (and we now know

the NIST selection), I would suggest these possibilities:



- X25519 + Kyber512

- P256 + Kyber512

- X448 + Kyber768

- P384 + Kyber768



I would take:



X25519+Kyber768

P384+Kyber768



The reason for taking Kyber768 is because the CRYSTALS team recommends

it. The reason for taking P384 is because it is CNSA-approved, so folks that

need CNSA can use that.



Of course, that is likely to bust packet size limits. I do not think that is an

issue in TLS, but DTLS and QUIC might be another matter entirely (in theory

DTLS and QUIC can handle it just fine, practice might be another matter

entirely. And if such problems are there, it is good to know about those...

This stuff is experimental).





Of course, it's possible that NIST will tweak the definition of Kyber;

that's just a possibility we'll need to live with (and wouldn't change

what hybrid combinations we would initially define)



I would think such changes would just mean the interim post-quantum kex is

not compatible with the final one. Not that big of a deal, there are tens of

thousands 

Re: [TLS] [UNVERIFIED SENDER] Re: New Version Notification for draft-kampanakis-tls-scas-latest-01.txt

2022-11-28 Thread Kampanakis, Panos
Thanks John.

Good points about draft-ietf-tls-subcerts. I am tracking it in git and will 
update.

Before bringing the draft up for discussion again, we are trying to quantify 
the "stale ICA cache causing TLS connection failures for the web", as this was 
a concern the group brought up. Getting this data is not straightforward, I 
must say.


From: John Mattsson 
Sent: Thursday, November 24, 2022 6:04 AM
To: Kampanakis, Panos ; tls@ietf.org
Cc: Bytheway, Cameron 
Subject: [EXTERNAL] [UNVERIFIED SENDER] Re: New Version Notification for 
draft-kampanakis-tls-scas-latest-01.txt




Hi,

I think this is great work and something the TLS WG should adopt and work on. 
Reducing the total number of bytes is very important not only in constrained 
IoT, but also in TLS based EAP methods, and in applications where handshake 
time to completion is important.

I quickly read the -02 draft. It seems to be in good shape. Some comments:

- I think it would be good if the draft described how it works with 
draft-ietf-tls-subcerts. While the latest version of draft-ietf-tls-subcerts 
talks about "delegated credentials" and not certificates, they are commonly 
referred to as subcerts.
- I think draft-kampanakis-tls-scas-latest could consider allowing 
suppression of the end-entity certificate as well, for use cases where 
draft-ietf-tls-subcerts is used.

Cheers,
John

From: TLS <tls-boun...@ietf.org> on behalf of Kampanakis, Panos <kpanos=40amazon@dmarc.ietf.org>
Date: Friday, 4 March 2022 at 16:42
To: tls@ietf.org
Cc: Bytheway, Cameron <byth...@amazon.com>
Subject: Re: [TLS] New Version Notification for 
draft-kampanakis-tls-scas-latest-01.txt
Hi all,

The updated -01 version fixes a couple of nits identified by Ilari, removes the 
needs for two different tlsflags, one each direction, and does not require an 
acknowledgement of the ICA suppression tlsflag based on discussions about the 
tlsflags draft 
https://mailarchive.ietf.org/arch/msg/tls/SIvCO_ZFmNfTEeyiuZOcdBzTdAo/

There are more issues we are tracking based on discussions in this list: 
https://github.com/csosto-pk/tls-suppress-intermediates/issues

-Original Message-
From: internet-dra...@ietf.org <internet-dra...@ietf.org>
Sent: Friday, March 4, 2022 10:34 AM
To: Bas Westerbaan <b...@cloudflare.com>; Bytheway, Cameron <byth...@amazon.com>; 
Martin Thomson <m...@lowentropy.net>; Kampanakis, Panos <kpa...@amazon.com>
Subject: [EXTERNAL] New Version Notification for 
draft-kampanakis-tls-scas-latest-01.txt




A new version of I-D, draft-kampanakis-tls-scas-latest-01.txt
has been successfully submitted by Panos Kampanakis and posted to the IETF 
repository.

Name:   draft-kampanakis-tls-scas-latest
Revision:   01
Title:  Suppressing CA Certificates in TLS 1.3
Document date:  2022-03-04
Group:  Individual Submission
Pages:  10
URL:
https://www.ietf.org/archive/id/draft-kampanakis-tls-scas-latest-01.txt
Status: 
https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-latest/
Htmlized:   
https://datatracker.ietf.org/doc/html/draft-kampanakis-tls-scas-latest
Diff:   
https://www.ietf.org/rfcdiff?url2=draft-kampanakis-tls-scas-latest-01

Abstract:
   A TLS client or server that has access to the complete set of
   published intermediate certificates can inform its peer to avoid
   sending certificate authority certificates, thus reducing the size of
   the TLS handshake.




The IETF Secretariat


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] about hash and post-quantum ciphers

2023-01-27 Thread Kampanakis, Panos
+1 on starting to see a little SHA-3 trickle down to TLS, IPsec, SSH and more 
common protocols.


From: TLS  On Behalf Of John Mattsson
Sent: Friday, January 27, 2023 6:25 AM
To: tls@ietf.org
Cc: hojarasca2022 ; Salz, Rich 

Subject: RE: [EXTERNAL][TLS] about hash and post-quantum ciphers




Hi,

I don't think non-standardized algorithms should be adopted by the WG. Even for 
just assigning a number, a good first step would be CFRG.

But this mail got me thinking:

- I think the lack of hash algorithm crypto agility in TLS 1.3 is 
unsatisfactory. The _only_ option in TLS 1.3 is SHA2.

- NIST is expected to exclusively use SHA3 in the lattice-based PQC algorithms. 
I think it would make very much sense to include SHA3 (the SHAKE variants) at 
the same time as the standardized NIST PQC algorithms.

- TLS 1.3 hardcodes use of the quite outdated HMAC and HKDF constructions, which 
only exist because SHA2 is fixed-length and suffers badly from 
length-extension attacks. Modern hash algorithms like SHAKE/KMAC are 
variable-length and do not suffer from length-extension attacks. If SHA3 is 
added in the future, I think it would make sense to use KMAC instead of HMAC 
and HKDF. It might also be nice to use the duplex construction, whose security 
can be shown to be equivalent to the sponge construction.

Cheers,
John
From: TLS <tls-boun...@ietf.org> on behalf of Salz, Rich <rsalz=40akamai@dmarc.ietf.org>
Date: Thursday, 26 January 2023 at 20:42
To: hojarasca2022 <hojarasca2022=40proton...@dmarc.ietf.org>, tls@ietf.org
Subject: Re: [TLS] about hash and post-quantum ciphers
In TLS 1.3, AES256-SHA384 is not mandatory to implement.

If there is a freely available published specification of BLAKE3, you can 
request an assigned number for it in the TLS registry [1].


  *   Furthermore, NIST selected some post-quantum ciphers: 
https://nist.gov/pqcrypto

Hm, are you new here?  The archives have a couple hundred messages about 
post-quantum.

[1] 
https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Merkle Tree Certificates

2023-03-14 Thread Kampanakis, Panos
Hi David,

Interesting idea. Seems like a radical, hard change but I want to understand it 
better. Some clarifications:

- Previously, in the ICA suppression draft you had correctly brought up the 
challenge of keeping an up-to-date ICA cache while most browsers are not up to 
date. The Merkle tree mechanism requires constant updates. Would that be even 
more of a challenge with browsers that have not been updated?

- To make this work for WebPKI, would the Transparency Service need to fetch 
from all WebPKI issuing CAs and update them every hour?

- CAs would also need to publish their Merkle tree logs similarly to CT, right?

- Negotiating a new CertType would be a fingerprint as you say in Section 12. 
The size in the response is also a fingerprint for the Subscriber. It is not a 
huge concern for me personally especially if this got wide adoption, but it was 
brought up before in similar contexts.

- To me this draft eliminates the need for a PKI and basically makes the 
structure flat. Each CA issues certs in the form of a batched tree. Relying 
parties that “trust and are aware” of this issuing CA’s tree can verify the 
signed window structure and then trust it. So in a TLS handshake we would have 
(1 subscriber public key + 2 signatures + some relatively small tree structure) 
compared to (1 signature + (3 sigs + 1 public key) for the server cert + (1 sig 
+ 1 public key) per ICA cert in the chain). If we borrowed the same flat PKI 
logic, though, and started “trusting” on a per-issuing-CA basis, then the 
comparison becomes (1 public key + 2 signatures + some small tree structure) vs 
(1 public key + 4 sigs). So we are saving 2 PQ sigs minus the small tree 
structure size. Am I misunderstanding the premise here?



From: TLS  On Behalf Of David Benjamin
Sent: Friday, March 10, 2023 5:09 PM
To:  
Cc: Devon O'Brien 
Subject: [EXTERNAL] [TLS] Merkle Tree Certificates




Hi all,

I've just uploaded a draft, below, describing several ideas we've been mulling 
over regarding certificates in TLS. This is a draft-00 with a lot of moving 
parts, so think of it as the first pass at some of ideas that we think fit well 
together, rather than a concrete, fully-baked system.

The document describes a new certificate format based on Merkle Trees, which 
aims to mitigate the many signatures we send today, particularly in 
applications that use Certificate Transparency, and as post-quantum signature 
schemes get large. Four signatures (two SCTs, two X.509 signatures) and an 
intermediate CA's public key gets rather large, particularly with something 
like Dilithium3's 3,293-byte signatures. This format uses a single Merkle Tree 
inclusion proof, which we estimate at roughly 600 bytes. (Note that this 
proposal targets certificate-related signatures but not the TLS handshake 
signature.)

As part of this, it also includes an extensibility and certificate negotiation 
story that we hope will be useful beyond this particular scheme.

This isn't meant to replace existing PKI mechanisms. Rather, it's an optional 
optimization for connections that are able to use it. Where they aren't, you 
negotiate another certificate. I work on a web browser, so this has browsers 
and HTTPS over TLS in mind, but we hope it, or some ideas in it, will be more 
broadly useful.

That said, we don't expect it's for everyone, and that's fine! With a robust 
negotiation story, we don't have to limit ourselves to a single answer for all 
cases at once. Even within browsers and the web, it cannot handle all cases, so 
we're thinking of this as one of several sorts of PKI mechanisms that might be 
selected via negotiation.

Thoughts? We're very eager to get feedback on this.

David

On Fri, Mar 10, 2023 at 4:38 PM <internet-dra...@ietf.org> wrote:

A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt
has been successfully submitted by David Benjamin and posted to the
IETF repository.

Name:   draft-davidben-tls-merkle-tree-certs
Revision:   00
Title:  Merkle Tree Certificates for TLS
Document date:  2023-03-10
Group:  Individual Submission
Pages:  45
URL:
https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.txt
Status: 
https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/
Html:   
https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.html
Htmlized:   
https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-certs


Abstract:
   This document describes Merkle Tree certificates, a new certificate
   type for use with TLS.  A relying party that regularly fetches
   information from a transparency service can use this certificate type
   as a size optimization over more conventional mechanisms with post-
   quantum signatures.  Merkle 

Re: [TLS] Merkle Tree Certificates

2023-03-14 Thread Kampanakis, Panos
Hi Hubert, 

I am not an author of draft-davidben-tls-merkle-tree-certs, but I had some 
feedback on this question: 

RFC7924 was a good idea but I don’t think it got deployed. It has the 
disadvantage that it allows for connection correlation, and it is also 
challenging to demand that a client either know all its possible destination 
end-entity certs or have a caching mechanism that keeps getting updated. Given 
these challenges, and that CAs are more static and fewer (~1500 in number) than 
leaf certs, we have proposed suppressing the ICAs in the chain 
(draft-kampanakis-tls-scas-latest, which replaced draft-thomson-tls-sic), but 
not the server cert. 

I think draft-davidben-tls-merkle-tree-certs is trying to achieve something 
similar by introducing a Merkle tree structure for certs signed by a CA. To me 
it seems to leverage a Merkle tree structure which "batches the public key + 
identities" the CA issues. Verifiers can just verify the tree and thus assume 
that the public key of the peer it is talking to is "certified by the tree CA". 
The way I see it, this construction flattens the PKI structure, and issuing 
CAs are now trusted instead of a more limited set of roots. This change is not 
trivial in my eyes, but the end goal is similar: to shrink the amount of auth 
data. 
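
To put rough numbers on that, a sketch using the sizes David quoted (Dilithium3 
and a ~600 B inclusion proof); the tree depth and hash size are assumptions:

    # Sketch: certificate-related bytes today vs. with a Merkle Tree cert.
    SIG, PK, HASH = 3293, 1952, 32           # Dilithium3 sig/pk, SHA-256 hash
    today = 4 * SIG + PK                     # 2 SCTs + 2 X.509 sigs + ICA key
    proof = 20 * HASH                        # assumed depth-20 inclusion proof
    print(today, proof)                      # ~15 KB vs ~640 B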



-Original Message-
From: TLS  On Behalf Of Hubert Kario
Sent: Monday, March 13, 2023 11:08 AM
To: David Benjamin 
Cc:  ; Devon O'Brien 
Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates




Why not rfc7924?

On Friday, 10 March 2023 23:09:10 CET, David Benjamin wrote:
> Hi all,
>
> I've just uploaded a draft, below, describing several ideas we've been 
> mulling over regarding certificates in TLS. This is a
> draft-00 with a lot of moving parts, so think of it as the first pass 
> at some of ideas that we think fit well together, rather than a 
> concrete, fully-baked system.
>
> The document describes a new certificate format based on Merkle Trees, 
> which aims to mitigate the many signatures we send today, particularly 
> in applications that use Certificate Transparency, and as post-quantum 
> signature schemes get large. Four signatures (two SCTs, two X.509 
> signatures) and an intermediate CA's public key gets rather large, 
> particularly with something like Dilithium3's 3,293-byte signatures. 
> This format uses a single Merkle Tree inclusion proof, which we 
> estimate at roughly 600 bytes. (Note that this proposal targets 
> certificate-related signatures but not the TLS handshake signature.)
>
> As part of this, it also includes an extensibility and certificate 
> negotiation story that we hope will be useful beyond this particular 
> scheme.
>
> This isn't meant to replace existing PKI mechanisms. Rather, it's an 
> optional optimization for connections that are able to use it. Where 
> they aren't, you negotiate another certificate. I work on a web 
> browser, so this has browsers and HTTPS over TLS in mind, but we hope 
> it, or some ideas in it, will be more broadly useful.
>
> That said, we don't expect it's for everyone, and that's fine!
> With a robust negotiation story, we don't have to limit ourselves to a 
> single answer for all cases at once. Even within browsers and the web, 
> it cannot handle all cases, so we're thinking of this as one of 
> several sorts of PKI mechanisms that might be selected via 
> negotiation.
>
> Thoughts? We're very eager to get feedback on this.
>
> David
>
> On Fri, Mar 10, 2023 at 4:38 PM  wrote:
>
> A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt
> has been successfully submitted by David Benjamin and posted to the 
> IETF repository.
>
> Name:   draft-davidben-tls-merkle-tree-certs
> Revision:   00
> Title:  Merkle Tree Certificates for TLS
> Document date:  2023-03-10
> Group:  Individual Submission
> Pages:  45
> URL:
> https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-0
> 0.txt
> Status:
>  
> https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/
> Html:
>  
> https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-0
> 0.html
> Htmlized:
>  
> https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-c
> erts
>
>
> Abstract:
>This document describes Merkle Tree certificates, a new certificate
>type for use with TLS.  A relying party that regularly fetches
>information from a transparency service can use this certificate type
>as a size optimization over more conventional mechanisms with post-
>quantum signatures.  Merkle Tree certificates integrate the roles of
>X.509 and Certificate Transparency, achieving comparable security
>properties with a smaller message size, at the cost of more limited
>applicability.
>
>
>
>
> The IETF Secretar

Re: [TLS] Merkle Tree Certificates

2023-03-21 Thread Kampanakis, Panos
Thx David. The spirit of the draft is clearer now.

ACK about the miscalculated signature. I was counting the sig in the signed 
window, but indeed that does not take place with every connection.

I think I understand your point about delegating trust. You basically say that 
if we could have an accurate picture of all ICAs at the RP we would be doing 
the same thing already. And the way you address the problem in this draft is by 
introducing the window within which you can rest assured you have the accurate 
picture of all certs.

The non-updated RP is an issue that can’t be neglected (your draft addresses it 
with the time window). Let’s say we assume RPs are up to date, and the ones 
that are not can retry without assuming an accurate ICA picture. How bad is 
having a non-updated RP retry? Is the problem the need to keep connection state, 
as someone had mentioned before?

Another operational question: So this draft would work for ACME issued certs 
where the client can acquire the tree structure from the Merkle Tree CA which 
tracks ACME certs. But if you generalized this, wouldn’t that mean that either 
the RP needs to keep track of and negotiate tree roots from multiple Merkle Tree 
CAs, or a central Merkle Tree CA needs to aggregate all kinds of issued certs 
from CT or somewhere else?

Another random comment: The tree version approach resembles the CA dictionary 
approach of cTLS. You basically have snapshots of the delegated trust book and 
if the peer recognizes it then you no longer need to establish trust.
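
A side note on the proof sizes mentioned in the quoted announcement: a Merkle 
inclusion proof is essentially one sibling hash per tree level, so it grows with 
log2 of the batch size. A rough sketch in Python (a generic Merkle proof, not the 
draft's exact encoding; the batch size and hash length are assumptions):

    import math

    HASH_LEN = 32  # bytes, e.g. SHA-256

    def inclusion_proof_size(batch_size: int) -> int:
        """Bytes of sibling hashes needed to prove inclusion in a Merkle tree."""
        return math.ceil(math.log2(batch_size)) * HASH_LEN

    # ~1 million assertions per batch -> ~20 levels -> ~640 bytes, in the same
    # ballpark as the ~600 byte estimate in the announcement.
    print(inclusion_proof_size(1_000_000))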


From: David Benjamin 
Sent: Monday, March 20, 2023 2:43 PM
To: Kampanakis, Panos 
Cc:  ; Devon O'Brien 
Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


Hi Panos,

> - Previously, in the ICA suppression draft you had correctly brought up the 
> challenge of keeping an up-to-date ICA cache while most browsers are not up 
> to date. The Merkle tree mechanism requires constant updates. Would that be 
> even more of challenge with browsers that have not been updated?

I think you misunderstood my comments on the ICA suppression draft. :-) Most 
browsers can be kept mostly up-to-date, and I agree that is an opportunity to 
reduce sizes for most connections, including addressing ICAs.

The key word here is "most". The challenge isn't keeping most RPs up-to-date, 
but addressing the remaining out-of-date RPs. My feedback was on how the draft 
handled this. It uses something time-based and makes implicit assumptions about 
the structure of the PKI. This document instead tries to have a more robust 
negotiation scheme. I also think this is better thought of as a trust agility 
problem, and it's a missed opportunity to not address that. More on this below.

I also agree with [0]. X.509 has grown to be a pretty poor fit for TLS and 
HTTPS, and led to a slew of complexity, deployment problems, and security 
problems. The PQ transition, and things we do motivated by PQ's unique 
constraints, are a good place to rethink what parts of X.509 do and don't still 
make sense. This draft doesn't directly address this---it's not, on its own, a 
replacement and can coexist with either X.509 or a non-X.509 mechanism---but I think 
the certificate negotiation and deployment ideas are worth exploring for that 
space.

[0] https://mailarchive.ietf.org/arch/msg/pqc/Q8GDQPTsmhOIblYECcaEMIwRhP0/

> - To make this work for WebPKI, would the Transparency Service need to fetch 
> from all WebPKI issuing CAs and update them every hour?

Almost. This document describes a new kind of CA. X.509 CAs and Merkle Tree CAs 
are distinct roles, though having some entities operate both kinds of CAs would 
be a natural deployment strategy. So, to clarify, yes, the TS would need to 
fetch from all trusted Merkle Tree CAs every hour. It would not need to fetch 
from CAs for other mechanisms, such as existing X.509 CAs.

> - CAs would also need to publish their Merkle tree logs similarly to CT, 
> right?

Merkle Tree CAs would need to publish some state described in sections 5.2 and 
8. This document doesn't affect existing X.509 CAs. The new state is similar to 
CT in that it is a Merkle Tree and aims to provide a transparency property. It 
differs from CT in that:

- Rather than appending to a single tree, with consistency proofs, each batch 
forms an independent tree. The tradeoffs we make reduce the number of valid 
trees enough that there's no need to link them together into a larger structure 
to optimize consistency checks.
- CT logs attest that something was logged. By putting an assertion in a tree 
and signing the root, Merkle Tree CAs are certifying the assertions themselves.
- CT logs are operated separately from X.509 CAs. In this design, the Merkle 
Tree CA logs its own 

Re: [TLS] Merkle Tree Certificates

2023-03-22 Thread Kampanakis, Panos
Hi Hubert, 

I totally agree on your points about time-to-first-byte vs time-to-last-byte. 
We (some of my previous work too) have been focusing on time-to-first-byte, 
which makes some of these handshakes look bad for the tails of the 80-95th 
percentiles. But in reality, the time-to-last-byte or 
time-to-some-byte-that-makes-the-user-think-there-is-progress would be the more 
accurate measurement to assess these connections.

> Neither cached data nor Merkle tree certificates reduce round-trips

Why is that? Assuming Dilithium WebPKI and excluding CDNs, QUIC sees 2 extra 
round-trips (amplification, initcwnd) and TLS sees 1 (initcwnd). Trimming down 
the "auth data" will at least get rid of the initcwnd extra round-trip. I think 
the Merkle tree cert approach fits in the default QUIC amplification window too 
so it would get rid of that round-trip in QUIC as well.  
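
To make the budget arithmetic concrete, here is a rough sketch in Python. The 
MSS, the padded 1200B client Initial, and the example flight sizes are 
assumptions for illustration, not measurements:

    # Does the server's first flight fit within the TCP initcwnd (RFC 6928,
    # 10 segments) and the QUIC anti-amplification limit (3x bytes received)?
    # Each exceeded budget costs roughly one extra round-trip.
    MSS = 1460                          # assumed TCP segment payload
    TCP_INITCWND = 10 * MSS             # ~14.6 KB before the first client ACK
    QUIC_CLIENT_INITIAL = 1200          # padded QUIC client Initial, in bytes
    QUIC_AMPLIFICATION = 3 * QUIC_CLIENT_INITIAL

    def exceeded_budgets(flight_bytes):
        return {"tcp_initcwnd": flight_bytes > TCP_INITCWND,
                "quic_amplification": flight_bytes > QUIC_AMPLIFICATION}

    # assumed example flights: a Dilithium WebPKI chain vs a Merkle tree cert
    for label, size in [("Dilithium WebPKI auth data", 10_000),
                        ("Merkle tree cert auth data", 2_500)]:
        print(label, exceeded_budgets(size))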



-Original Message-
From: Hubert Kario  
Sent: Wednesday, March 22, 2023 8:46 AM
To: David Benjamin 
Cc: Kampanakis, Panos ;  ; Devon 
O'Brien 
Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



On Tuesday, 21 March 2023 17:06:54 CET, David Benjamin wrote:
> On Tue, Mar 21, 2023 at 8:01 AM Hubert Kario  wrote:
>
>> On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote:
>>> I don't think flattening is the right way to look at it. See my 
>>> other reply for a discussion about flattening, and how this does a 
>>> bit more than that. (It also handles SCTs.)
>>>
>>> As for RFC 7924, in this context you should think of it as a funny 
>>> kind of TLS resumption. In clients that talk to many ...
>> https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md
>>> and https://github.com/privacycg/storage-partitioning.
>>
>> Sorry, but as long as the browsers are willing to perform session 
>> resumption I'm not buying the "cached info is a privacy problem".
>>
>
> I'm not seeing where this quote comes from. I said it had analogous
> properties to resumption, not that it was a privacy problem in the absolute.

I meant it as a summary not as a quote.

> The privacy properties of resumption and cached info depend on the situation. If
> you were okay correlating the two connections, both are okay in this
> regard. If not, then no. rfc8446bis discusses this:
> https://tlswg.org/tls13-spec/draft-ietf-tls-rfc8446bis.html#appendix-C.4
>
> In browsers, the correlation boundaries (across *all* state, not just TLS)
> were once browsing-profile-wide, but they're shifting to this notion of
> "site". I won't bore the list with the web's security model, but roughly
> the domain part of the top-level (not the same as destination!) URL. See
> the links above for details.
>
> That equally impacts resumption and any hypothetical deployment of cached
> info. So, yes, within those same bounds, a browser could deploy cached
> info. Whether it's useful depends on whether there are many cases where
> resumption wouldn't work, but cached info would. (E.g. because resumption
> has different security properties than cached info.)

The big difference is that tickets generally should be valid only for
a day or two, while cached info, just like cookies, can be valid for many
months if not years.

Now, a privacy focused user may decide to clear the cookies and cached
info daily, while others may prefer the slightly improved performance
on first visit after a week or month break.

>
>> It also completely ignores the encrypted client hello
>>
>
> ECH helps with outside observers correlating your connections, but it
> doesn't do anything about the server correlating connections. In the
> context of correlation boundaries within a web browser, we care about the
> latter too.

How's that different from cookies, which don't just correlate but
cryptographically prove a previous visit?

>> Browser doesn't have to cache the certs since the beginning of time to be
>> of benefit, a few hours or even just current boot would be enough:
>>
>> 1. if it's a page visited once then all the tracking cookies
>> and javascript
>>will be an order of magnitude larger download anyway
>> 2. if it's a page visited many times, then optimising for the subsequent
>>connections is of higher benefit anyway
>>
>
> I don't think that's quite the right dichotomy. There are plenty of reasons
> to optimize for the first connection, time to first bytes, etc. Indeed,
> this WG did just that with False Start and TLS 1.3 itself. 

Re: [TLS] Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design

2023-03-28 Thread Kampanakis, Panos
+1 for NIST curve codepoints.


From: TLS  On Behalf Of Krzysztof Kwiatkowski
Sent: Tuesday, March 28, 2023 10:00 PM
To: Christopher Wood 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL][TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


Hello,

Can we add secp256r1_kyber768 option for those who prefer NIST curves?

Kris



On 29 Mar 2023, at 10:48, Christopher Wood 
mailto:c...@heapingbits.net>> wrote:

As discussed during yesterday's meeting, we would like to assess consensus for 
moving draft-ietf-tls-hybrid-design forward with the following strategy for 
allocating codepoints we can use in deployments.

1. Remove codepoints from draft-ietf-tls-hybrid-design and advance this 
document through the process towards publication.
2. Write a simple -00 draft that specifies the target variant of 
X25519+Kyber768 with a codepoint from the standard ranges. (Bas helpfully did 
this for us already [1].) Once this is complete, request a codepoint from IANA 
using the standard procedure.

The intent of this proposal is to get us a codepoint that we can deploy today 
without putting a "draft codepoint" in an eventual RFC.

Please let us know if you support this proposal by April 18, 2023. Assuming 
there is rough consensus, we will move forward with this proposal.

Best,
Chris, Joe, and Sean

[1] https://datatracker.ietf.org/doc/html/draft-tls-westerbaan-xyber768d00-00
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design

2023-03-28 Thread Kampanakis, Panos

> I would also like secp384r1_kyber1024 option, please.

Why pair secp384r1 with Kyber1024? It adds unnecessary size to the keyshare, 
whereas secp384r1_kyber768 combines two equivalent security levels.
Those that want to be extra conservative can go secp521r1_kyber1024, which won’t 
be much worse than secp384r1_kyber1024 in performance or size.
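
For a rough sense of the size argument, a quick sketch in Python (uncompressed 
EC point sizes plus the Kyber round-3 public key sizes; treat the numbers as 
approximate):

    # Approximate client keyshare sizes for hybrid groups: uncompressed EC
    # point (or X25519 key) plus the Kyber round-3 public key.
    EC = {"x25519": 32, "secp256r1": 65, "secp384r1": 97, "secp521r1": 133}
    KYBER_PK = {"kyber512": 800, "kyber768": 1184, "kyber1024": 1568}

    for curve, kem in [("x25519", "kyber768"), ("secp256r1", "kyber768"),
                       ("secp384r1", "kyber768"), ("secp384r1", "kyber1024"),
                       ("secp521r1", "kyber1024")]:
        print(f"{curve}_{kem}: {EC[curve] + KYBER_PK[kem]} bytes")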



From: TLS  On Behalf Of Blumenthal, Uri - 0553 - MITLL
Sent: Tuesday, March 28, 2023 10:40 PM
To: Krzysztof Kwiatkowski ; Christopher Wood 

Cc: TLS@ietf.org
Subject: RE: [EXTERNAL][TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


Can we add secp256r1_kyber768 option for those who prefer NIST curves?

I support this.

I would also like secp384r1_kyber1024 option, please.

Thanks
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design

2023-03-31 Thread Kampanakis, Panos
Hi Bas,

I prefer for the MTI to be P-256+Kyber768 for compliance reasons.

It would be trivial for servers to add support for both identifiers as they 
introduce Kyber768, but you are right, the new draft should include an MTI 
identifier.


From: TLS  On Behalf Of Bas Westerbaan
Sent: Friday, March 31, 2023 8:04 PM
To: Ilari Liusvaara 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL][TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


Regarding additional key agreements.

For the (public) web it would be best if we can agree on a default key 
agreement. If one half uses P-256+Kyber768 and the other X25519+Kyber768, then 
clients will either HRR half the time or need to send both. Neither is ideal.

Obviously this point is moot for internal networks. So I do not oppose 
specifying additional preliminary key agreements, but I do not like to actively 
support it. What about specifying further preliminary key agreements in yet 
again a separate draft?

Best,

 Bas

On Sat, Apr 1, 2023 at 1:56 AM Bas Westerbaan 
mailto:b...@cloudflare.com>> wrote:
The draft draft-tls-westerbaan-xyber768d00-00 references
draft-cfrg-schwabe-kyber-01, which has a number of annoying mistakes,
since fixed in editor's copy.

And then, the correct reference for X25519 is probably RFC7748 instead
of RFC8037...


Really quick and dirty way to fix this would be to publish editor's
copy as draft-cfrg-schwabe-kyber-02 (or if CFRG adapts quickly, the
RG-00), and then publish draft-tls-westerbaan-xyber768d00-01, fixing
the references.

Thanks, done. Posted -02 of both the Kyber and Xyber drafts.

Best,

 Bas

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design

2023-05-11 Thread Kampanakis, Panos
Great!

So to clarify, when Kyber gets ratified as MLWE_KEM or something like that, 
will we still be using 0x6399 (25497) in the keyshare when we are negotiating? Or 
is 0x6399 just a temporary codepoint for Kyber768 Round 3 combined with X25519?


From: TLS  On Behalf Of Bas Westerbaan
Sent: Wednesday, May 10, 2023 3:09 PM
To: Christopher Wood 
Cc: tls@ietf.org
Subject: RE: [EXTERNAL][TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


FYI IANA has added the following entry to the TLS Supported Groups registry:

Value: 25497
Description: X25519Kyber768Draft00
DTLS-OK: Y
Recommended: N
Reference: [draft-tls-westerbaan-xyber768d00-02]
Comment: Pre-standards version of Kyber768

Please see
https://www.iana.org/assignments/tls-parameters

On Mon, May 1, 2023 at 11:59 AM Christopher Wood 
mailto:c...@heapingbits.net>> wrote:
It looks like we have consensus for this strategy. We’ll work to remove 
codepoints from draft-ietf-tls-hybrid-design and then get experimental 
codepoints allocated based on draft-tls-westerbaan-xyber768d00.

Best,
Chris, for the chairs

> On Mar 28, 2023, at 9:49 PM, Christopher Wood 
> mailto:c...@heapingbits.net>> wrote:
>
> As discussed during yesterday's meeting, we would like to assess consensus 
> for moving draft-ietf-tls-hybrid-design forward with the following strategy 
> for allocating codepoints we can use in deployments.
>
> 1. Remove codepoints from draft-ietf-tls-hybrid-design and advance this 
> document through the process towards publication.
> 2. Write a simple -00 draft that specifies the target variant of 
> X25519+Kyber768 with a codepoint from the standard ranges. (Bas helpfully did 
> this for us already [1].) Once this is complete, request a codepoint from 
> IANA using the standard procedure.
>
> The intent of this proposal is to get us a codepoint that we can deploy today 
> without putting a "draft codepoint" in an eventual RFC.
>
> Please let us know if you support this proposal by April 18, 2023. Assuming 
> there is rough consensus, we will move forward with this proposal.
>
> Best,
> Chris, Joe, and Sean
>
> [1] https://datatracker.ietf.org/doc/html/draft-tls-westerbaan-xyber768d00-00

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [UNVERIFIED SENDER] Re: Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design

2023-05-11 Thread Kampanakis, Panos
ACK, thx all. So we should refrain from defining such “point-in-time” 
codepoints for other needed long-term algorithm combinations to not waste 
registry space. Only absolutely necessary codepoints should be registered.

From: Bas Westerbaan 
Sent: Thursday, May 11, 2023 10:39 AM
To: Kampanakis, Panos 
Cc: Christopher Wood ; tls@ietf.org
Subject: [EXTERNAL] [UNVERIFIED SENDER] Re: [TLS] Consensus call on codepoint 
strategy for draft-ietf-tls-hybrid-design


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


Hi Panos,

No, for the final version of Kyber we'd need a different code point. (And that 
one will presumably be defined in Douglas' hybrid I-D.)

The raison d'être of draft-schwabe-cfrg-kyber-02 and 
draft-westerbaan-tls-xyber768d00 is to have a stable reference for this 
preliminary version of Kyber.

Best,

 Bas

On Thu, May 11, 2023 at 4:17 PM Kampanakis, Panos 
mailto:40amazon@dmarc.ietf.org>> wrote:
Great!

So to clarify, when Kyber gets ratified as MLWE_KEM or something like that, 
will we still be using 0x6399 in the keyshare when we are negotiating? Or is  
0x6399 just a temporary codepoint for Kyber768 Round 3 combined with X25519?


From: TLS mailto:tls-boun...@ietf.org>> On Behalf Of Bas 
Westerbaan
Sent: Wednesday, May 10, 2023 3:09 PM
To: Christopher Wood mailto:c...@heapingbits.net>>
Cc: tls@ietf.org<mailto:tls@ietf.org>
Subject: RE: [EXTERNAL][TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


FYI IANA has added the following entry to the TLS Supported Groups registry:

Value: 25497
Description: X25519Kyber768Draft00
DTLS-OK: Y
Recommended: N
Reference: [draft-tls-westerbaan-xyber768d00-02]
Comment: Pre-standards version of Kyber768

Please see
https://www.iana.org/assignments/tls-parameters

On Mon, May 1, 2023 at 11:59 AM Christopher Wood 
mailto:c...@heapingbits.net>> wrote:
It looks like we have consensus for this strategy. We’ll work to remove 
codepoints from draft-ietf-tls-hybrid-design and then get experimental 
codepoints allocated based on draft-tls-westerbaan-xyber768d00.

Best,
Chris, for the chairs

> On Mar 28, 2023, at 9:49 PM, Christopher Wood 
> mailto:c...@heapingbits.net>> wrote:
>
> As discussed during yesterday's meeting, we would like to assess consensus 
> for moving draft-ietf-tls-hybrid-design forward with the following strategy 
> for allocating codepoints we can use in deployments.
>
> 1. Remove codepoints from draft-ietf-tls-hybrid-design and advance this 
> document through the process towards publication.
> 2. Write a simple -00 draft that specifies the target variant of 
> X25519+Kyber768 with a codepoint from the standard ranges. (Bas helpfully did 
> this for us already [1].) Once this is complete, request a codepoint from 
> IANA using the standard procedure.
>
> The intent of this proposal is to get us a codepoint that we can deploy today 
> without putting a "draft codepoint" in an eventual RFC.
>
> Please let us know if you support this proposal by April 18, 2023. Assuming 
> there is rough consensus, we will move forward with this proposal.
>
> Best,
> Chris, Joe, and Sean
>
> [1] https://datatracker.ietf.org/doc/html/draft-tls-westerbaan-xyber768d00-00

___
TLS mailing list
TLS@ietf.org<mailto:TLS@ietf.org>
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Pqc] Post-Quantum TLS instantiations and synthetic benchmarks

2023-06-27 Thread Kampanakis, Panos
Imo, we have been measuring handshake time as an indication of performance, but 
time-to-last-byte or time-to-x%-byte should be used instead. There is nothing 
wrong with your study, Thom. It is pretty detailed and useful. I just think that 
if these new algos get deployed, we would know if their impact would be 
noticeable by measuring different things than what we have been measuring so 
far. A 150KB (on average) web page over a lossy LTE connection will have a 
pretty bad user experience regardless of adding 10-15KB of Dilithium certs or 
1-2KB of Kyber keys/ciphertexts.

From: Pqc  On Behalf Of Thom Wiggers
Sent: Tuesday, June 27, 2023 4:04 PM
To: Bas Westerbaan 
Cc: Martin Thomson ; Sofía Celi ; 
tls@ietf.org; p...@ietf.org
Subject: RE: [EXTERNAL][Pqc] [TLS] Post-Quantum TLS instantiations and 
synthetic benchmarks


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


Hi Bas,

Op di 27 jun 2023 om 14:44 schreef Bas Westerbaan 
mailto:b...@cloudflare.com>>:
Thanks for preparing the excerpt; this will be helpful for many use cases. (For 
the WebPKI, as you already mention, we also need to consider SCTs and 
realistically crappy networks.)

 "this is LTE in a city", and "this is what a poor-quality rural 3G link looks 
like". But alas, these don't seem to exist either.

Unfortunately, it will not be as simple as plugging in a single packet loss 
number and then dropping that fraction of packets. Because TCP interpets packet 
loss as congestion, it grinds down to a halt much earlier than at a loss of 2%. 
Instead, lossy links such as WiFi and cellular have their own retransmission 
protocols hidden from TCP.

Yeah, I'm all too familiar with wireless retransmission (a previous laptop had 
a bad wifi chip that would drop up to 1/3rd of the packets leading to massive 
latency spikes). Still, I hope that someone has a good idea on how to best 
represent these facets of real-world networking in some way that is useful for 
experiments :)

Cheers,

Thom
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-07 Thread Kampanakis, Panos
Hi Dennis, 

This is an interesting draft. The versioned dictionary idea for ICA and Root 
CAs especially was something I was considering for the ICA Suppression draft 
[1] given the challenges brought up before about outages with stale dictionary 
caches. As you point out in the draft, cTLS uses something similar as well. 
Btw, if we isolated the ICA and Root CA dictionary, I don't think you need pass 
1, assuming the parties can agree on a dictionary version. They could just 
agree on the dictionary and be able to build the cert chain, but providing the 
identifiers probably simplifies the process. This could be simplified further I 
think. 

I also think one thing missing from the draft is how the client negotiates this 
compression with the server as the CertificateCompressionAlgorithms from 
RFC8879 will not be the same. 

About the end-entity compression, I wonder if the compression/decompression 
overhead is significant and unbalanced. RFC8879 did not want to introduce a DoS 
threat by offering a cumbersome compression/decompression. Any data on that?

About your data in section 4, I think these are classical cert chains, and it 
looks like they improve by 0.5-1KB over RFC8879 compression. In a WebPKI 
Dilithium2 cert with 2 SCTs the end-entity cert size will amount to ~7-8KB. 85% 
of that will be the "random" Dilithium public key and signatures which will not 
get much compression. So, do we get any benefit from compressing 7-8KB certs to 
6-7KB? Is it worth the compression/decompression effort?
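
For intuition on why those bytes do not compress, a quick sketch in Python that 
treats the Dilithium2 public key and signatures as random data, which is what a 
generic compressor effectively sees (component sizes are the round-3 numbers; 
the SCT count is the assumption above):

    import os, zlib

    # Dilithium2 (round 3): public key 1312 B, signature 2420 B. A leaf with
    # 2 SCTs carries the subject key, the CA's signature, and two SCT
    # signatures, all of which look like uniformly random bytes to a compressor.
    random_part = os.urandom(1312 + 3 * 2420)
    compressed = zlib.compress(random_part, 9)
    print(len(random_part), len(compressed))  # compressed output is not smaller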

Rgs,
Panos

[1] 
https://github.com/csosto-pk/tls-suppress-intermediates/issues/17#issue-1671378265
 



-Original Message-
From: TLS  On Behalf Of Dennis Jackson
Sent: Thursday, July 6, 2023 6:18 PM
To: TLS List 
Subject: [EXTERNAL] [TLS] Abridged Certificate Compression

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



Hi all,

I've submitted the draft below that describes a new TLS certificate compression 
scheme that I'm calling 'Abridged Certs' for now. The aim is to deliver 
excellent compression for existing classical certificate chains and smooth the 
transition to PQ certificate chains by eliminating the root and intermediate 
certificates from the bytes on the wire. It uses a shared dictionary 
constructed from the CA certificates listed in the CCADB [1] and the associated 
extensions used in end entity certificates.

Abridged Certs compresses the median certificate chain from ~4000 to
~1000 bytes based on a sample from the Tranco Top 100k. This beats traditional 
TLS certificate compression which produces a median of ~3200 bytes when used 
alone and ~1400 bytes when combined with the outright removal of CA 
certificates from the certificate chain. The draft includes a more detailed 
evaluation.

There were a few other key considerations. This draft doesn't impact trust 
decisions, require trust in the certificates in the shared dictionary or 
involve extra error handling. Nor does the draft favor popular CAs or websites 
due to the construction of the shared dictionary. Finally, most browsers 
already ship with a complete list of trusted intermediate and root certificates 
that this draft reuses to reduce the client storage footprint to a few 
kilobytes.

I would love to get feedback from the working group on whether the draft is 
worth developing further.

For those interested, a few issues are tagged DISCUSS in the body of the draft, 
including arrangements for deploying new versions with updated dictionaries and 
the tradeoff between equitable CA treatment and the disk space required on 
servers (currently 3MB).

Best,
Dennis

[1] Mozilla operates the Common CA Database on behalf of Apple, Microsoft, 
Google and other members.

On 06/07/2023 23:11, internet-dra...@ietf.org wrote:
> A new version of I-D, draft-jackson-tls-cert-abridge-00.txt
> has been successfully submitted by Dennis Jackson and posted to the 
> IETF repository.
>
> Name: draft-jackson-tls-cert-abridge
> Revision: 00
> Title:Abridged Compression for WebPKI Certificates
> Document date:2023-07-06
> Group:Individual Submission
> Pages:19
> URL:
> https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.txt
> Status: 
> https://datatracker.ietf.org/doc/draft-jackson-tls-cert-abridge/
> Html:   
> https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.html
> Htmlized:   
> https://datatracker.ietf.org/doc/html/draft-jackson-tls-cert-abridge
>
>
> Abstract:
> This draft defines a new TLS Certificate Compression scheme which
> uses a shared dictionary of root and intermediate WebPKI
> certificates.  The scheme smooths the transition to post-quantum
> certificates by eliminating the root and intermediate certificates
> from the TLS certificate chain without impacting trus

Re: [TLS] Abridged Certificate Compression

2023-07-11 Thread Kampanakis, Panos
Thanks Dennis. Your answers make sense.

Digging a little deeper on the benefit of compressing (a la Abridged Certs 
draft) the leaf cert or not. Definitely this draft improves vs plain 
certificate compression, but I am trying to see if it is worth the complexity 
of pass 2. So, section 4 shows a 2.5KB improvement over plain compression which 
would be even more significant for Dilithium certs, but I am trying to find if 
the diff between ICA suppression/Compression vs ICA 
suppression/Compression+leaf compression is significant. I am arguing that the 
table 4 numbers would be much different when talking about Dilithium certs 
because all of these numbers would be inflated and any compression would have a 
small impact. Replacing a CA cert (no SCTs) with a dictionary index would save 
us ~4KB (Dilithium2) or 5.5KB (Dilithium3). That is significant. Compressing 
the leaf (of size 8-9KB (Dilithium2) or 11-12 KB (Dilithium 3)) using any 
mechanism would trim down ~0.5-1KB compared to not compressing. That is because 
the PK and Sig can't 
 be compressed and these account for most of the PQ leaf cert size. So, I am 
trying to see if pass 2 and compression of the leaf cert benefit us much. 

Where I am going with this is that if the benefit of leaf compression from the 
Abridged Mechanism is not significant, I would like to use your abridged 
dictionary for ICA suppression only because it does not suffer from outages. 
So, am I wrong in claiming that compressing a Dilithium leaf cert compared to 
sending it as-is saves us a lot of data?  

If MTCs came to fruition I think they would do pretty well without the abridged 
certs, but the abridged certs have the advantage that they can be implemented 
for WebPKI or elsewhere which is important.  


-Original Message-
From: Dennis Jackson  
Sent: Monday, July 10, 2023 7:23 AM
To: Kampanakis, Panos ; Dennis Jackson 
; TLS List 
Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



Hi Panos,

On 08/07/2023 02:49, Kampanakis, Panos wrote:
> Hi Dennis,
>
> This is an interesting draft.
Thanks!

> The versioned dictionary idea for ICA and Root CAs especially was something I 
> was considering for the ICA Suppression draft [1] given the challenges 
> brought up before about outages with stale dictionary caches.
>
> Btw, if we isolated the ICA and Root CA dictionary, I don't think you need 
> pass 1, assuming the parties can agree on a dictionary version. They could 
> just agree on the dictionary and be able to build the cert chain, but 
> providing the identifiers probably simplifies the process. This could be 
> simplified further I think.

Ah I hadn't seen, thank you for the link to [1].

I thought a bit about suppressing pass 1 as well but I don't think its 
desirable.

A key selling point of the current Abridged Certs draft is that it can be 
enabled by default without the risk of connection failures or requiring 
retries, even if the server / client fall out of date. This keeps the 
deployment story very simple as you can just turn it on knowing it can only 
make things better and never make things worse.

Suppressing Pass 1 could be used to reduce the storage requirements on the 
server but then the server wouldn't know whether a particular ICA was in the 
dictionary and so the operator would have to configure that, leading to the 
same kind of error handling flows as in the CA Cert Suppression draft. 
Similarly, the bytes on the wire saving isn't significant and it would make it 
harder to use Abridged Certs in other contexts as it would no longer be a 
lossless compression scheme.

> I also think one thing missing from the draft is how the client negotiates 
> this compression with the server as the CertificateCompressionAlgorithms from 
> RFC8879 will not be the same.

Sorry I'm afraid I don't follow.

Abridged Certs would negotiated just the same as any other certificate 
compression algorithm. The client indicates support by including the Abridged 
Certs identifier in its Certificate Compression extension in the ClientHello 
(along with the existing algorithms like plain Zstd).
The server has the choice of whether to use it in its CompressedCertificate 
message. If a new version of Abridged Certs were minted in a few years with 
newer dictionaries then it would have its own algorithm identifier and would 
coexist with or replace the existing one.

> About the end-entity compression, I wonder if compression, decompression 
> overhead is significant and unbalanced. RFC8879 did not want to introduce a 
> DoS threat by offering a cumbersome compression/decompression. Any data on 
> that?

Abridged Certs is just a thin wrapper around Zstd which is already deployed as 
TLS Certificat

[TLS] Abridged Certificate Compression (dictionary versioning)

2023-07-11 Thread Kampanakis, Panos
Hi Dennis, 

Spinning up a new thread since this is a different topic. 

Section 5.1 talks about the dictionary versioning approach and suggests an 
annual cadence is enough. The issue of an up-to-date cache was a big concern 
for the ICA Suppression draft, and rightfully so. A stale dictionary does not 
cause an outage in the abridged case, but it does eliminate the benefit. 

Appendix B.1 talks about 100-200 new ICA and 10 Root certs per year. In the 
past I had looked at fluctuations of CCADB and there are daily changes. I did 
not generate the ordered list as per pass 1 on a daily basis to confirm it, but 
I did confirm the fluctuations. The commits in 
https://github.com/FiloSottile/intermediates/commits/main show it too. Given 
that, I am wondering if CCADB is not that stable. Are you confident that ICA 
dictionaries (based on CCADB) won't materially change often? 


-Original Message-
From: TLS  On Behalf Of Kampanakis, Panos
Sent: Tuesday, July 11, 2023 11:34 PM
To: Dennis Jackson ; Dennis Jackson 
; TLS List 
Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



Thanks Dennis. Your answers make sense.

Digging a little deeper on the benefit of compressing (a la Abridged Certs 
draft) the leaf cert or not. Definitely this draft improves vs plain 
certificate compression, but I am trying to see if it is worth the complexity 
of pass 2. So, section 4 shows a 2.5KB improvement over plain compression which 
would be even more significant for Dilithium certs, but I am trying to find if 
the diff between ICA suppression/Compression vs ICA 
suppression/Compression+leaf compression is significant. I am arguing that the 
table 4 numbers would be much different when talking about Dilithium certs 
because all of these numbers would be inflated and any compression would have a 
small impact. Replacing a CA cert (no SCTs) with a dictionary index would save 
us ~4KB (Dilithium2) or 5.5KB (Dilithium3). That is significant. Compressing 
the leaf (of size 8-9KB (Dilithium2) or 11-12 KB (Dilithium 3)) using any 
mechanism would trim down ~0.5-1KB compared to not compressing. That is because 
the PK and Sig can't 
  be compressed and these account for most of the PQ leaf cert size. So, I am 
trying to see if pass 2 and compression of the leaf cert benefit us much.

Where I am going with this is that if the benefit of leaf compression from the 
Abridged Mechanism is not significant, I would like to use your abridged 
dictionary for ICA suppression only because it does not suffer from outages. 
So, am I wrong in claiming that compressing a Dilithium leaf cert compared to 
sending it as-is saves us a lot of data?

If MTCs came to fruition I think they would do pretty well without the abridged 
certs, but the abridged certs have the advantage that they can be implemented 
for WebPKI or elsewhere which is important.


-Original Message-
From: Dennis Jackson 
Sent: Monday, July 10, 2023 7:23 AM
To: Kampanakis, Panos ; Dennis Jackson 
; TLS List 
Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



Hi Panos,

On 08/07/2023 02:49, Kampanakis, Panos wrote:
> Hi Dennis,
>
> This is an interesting draft.
Thanks!

> The versioned dictionary idea for ICA and Root CAs especially was something I 
> was considering for the ICA Suppression draft [1] given the challenges 
> brought up before about outages with stale dictionary caches.
>
> Btw, if we isolated the ICA and Root CA dictionary, I don't think you need 
> pass 1, assuming the parties can agree on a dictionary version. They could 
> just agree on the dictionary and be able to build the cert chain, but 
> providing the identifiers probably simplifies the process. This could be 
> simplified further I think.

Ah I hadn't seen, thank you for the link to [1].

I thought a bit about suppressing pass 1 as well but I don't think its 
desirable.

A key selling point of the current Abridged Certs draft is that it can be 
enabled by default without the risk of connection failures or requiring 
retries, even if the server / client fall out of date. This keeps the 
deployment story very simple as you can just turn it on knowing it can only 
make things better and never make things worse.

Suppressing Pass 1 could be used to reduce the storage requirements on the 
server but then the server wouldn't know whether a particular ICA was in the 
dictionary and so the operator would have to configure that, leading to the 
same kind of error handling flows as in the CA Cert Suppression draft. 
Similarly, the bytes on the wire sav

[TLS] Abridged Certificate Compression (server participation)

2023-07-11 Thread Kampanakis, Panos
Hi Dennis,

One more topic for general discussion.

The abridged certs draft requires a server who participates and fetches 
dictionaries in order to make client connections faster. As Bas has pointed out 
before, this paradigm did not work well with OSCP staples in the past. Servers 
did not chose to actively participate and go fetch them. 

Are we confident that servers would deploy the dictionary fetching mechanism to 
benefit their connecting clients?



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression (server participation)

2023-07-12 Thread Kampanakis, Panos
Imo, the dictionary approach is a simple way of trimming down the PQ auth data. 
And your argument for the frequency of synching OCSP staples vs these certs is 
a good one. I hope TLS termination points will agree if this moves forward, but 
personally I don't find the approach too bad. 

-Original Message-
From: TLS  On Behalf Of Dennis Jackson
Sent: Wednesday, July 12, 2023 1:16 PM
To: Kampanakis, Panos ; TLS List 

Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression (server 
participation)

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



On 12/07/2023 05:02, Kampanakis, Panos wrote:

> The abridged certs draft requires a server who participates and fetches 
> dictionaries in order to make client connections faster. As Bas has pointed 
> out before, this paradigm did not work well with OSCP staples in the past. 
> Servers did not chose to actively participate and go fetch them.
>
> Are we confident that servers would deploy the dictionary fetching mechanism 
> to benefit their connecting clients?

I think OCSP staples is quite a bit different from this draft. OCSP Staples 
requires the server to fetch new data from the CA every day or week. It's 
inherently hard to do this reliably, especially with the large number of poor 
quality or poorly maintained OCSP servers and the large fraction of operators 
who do not want their servers making outbound connections. Besides these 
barriers I don't think the benefit was huge as clients already cached OCSP 
responses for up to a week so at most it was speeding up one connection per 
client per week (this was before network partitioning in browsers) and at worst 
it was breaking your website entirely.

In contrast, this draft aims to speed up every connection that isn't using 
session tickets, cause no harm if its misconfigured or out of date and be slow 
moving enough that the dictionaries can be shipped as part of a regular 
software release and so suitable for anyone willing to update their server 
software once a year (or less). Similarly, these updates aren't going to 
involve code changes, just changes to the static dictionaries, so they are 
suitable for backporting or ESR releases.

It would definitely be good to hear from maintainers or smaller operators if 
they have concerns though!

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-12 Thread Kampanakis, Panos
> The performance benefit isn't purely in the ~1KB saved, its whether it brings 
> the chain under the QUIC amplification limit or shaves off an additional 
> packet and so avoids a loss+retry. There's essentially no difference in 
> implementation complexity, literally just a line of code, so the main 
> tradeoff is the required disk space on the client & server.

Fair. I would add one more tradeoff which is pulling the end-entity certs in 
the CT window for pass 2. This is a one-time cost for each dictionary version, 
so maybe not that bad. 

Regardless, would compressing the leaf bring us below the QUIC 3.6KB threshold 
for Dilithium 2 or 3 certs, whereas not compressing would keep us above? I think 
it is not even close if we are talking WebPKI. Without SCTs, maybe compressing 
could keep us below 4KB for Dilithium 2 leaf certs. But even then, if we add 
the CertVerify signature size we will be well over 4KB. 

Additionally, would compressing the leaf bring us below the 9-10KB threshold 
that Bas had tested to be an important inflection point? For WebPKI, it may bring 
the 8-9KB cert below 9KB, but once we add the CertVerify signature size, maybe 
not. It would need to be tested. For Dilithium 3, maybe compression could render 
the 11-12KB cert below 9KB if we got lucky, maybe not, but if we add the 
CertVerify signature we won’t make it. For non-WebPKI, they will already be below 
9-10KB.

So, I am arguing that we can't remain below the QUIC threshold by compressing 
the leaf Dilithium cert, and we could remain below the 9-10KB threshold only for 
Dilithium2 leaves. I could be proven wrong if you have implemented it. 

One more argument for making pass 2 optional or allowing for just pass 1 
dictionaries is that if we are not talking about WebPKI we don't have the 
luxury of CT logs. But we would still want the option of compressing / omitting 
the ICAs by using CCADB. 




-Original Message-
From: Dennis Jackson  
Sent: Wednesday, July 12, 2023 12:39 PM
To: Kampanakis, Panos ; TLS List 
Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



On 12/07/2023 04:34, Kampanakis, Panos wrote:

> Thanks Dennis. Your answers make sense.
>
> Digging a little deeper on the benefit of compressing (a la Abridged 
> Certs draft) the leaf cert or not. Definitely this draft improves vs 
> plain certificate compression, but I am trying to see if it is worth 
> the complexity of pass 2. So, section 4 shows a 2.5KB improvement over 
> plain compression which would be even more significant for Dilithium 
> certs, but I am trying to find if the diff between ICA 
> suppression/Compression vs ICA suppression/Compression+leaf 
> compression is significant.
>
> I am arguing that the table 4 numbers would be much different when 
> talking about Dilithium certs because all of these numbers would be 
> inflated and any compression would have a small impact. Replacing a CA 
> cert (no SCTs) with a dictionary index would save us ~4KB (Dilithium2) 
> or 5.5KB (Dilithium3). That is significant.
>
> Compressing the leaf (of size 8-9KB (Dilithium2) or 11-12 KB (Dilithium 3)) 
> using any mechanism would trim down ~0.5-1KB compared to not compressing. 
> That is because the PK and Sig can't be compressed and these account for most 
> of the PQ leaf cert size. So, I am trying to see if pass 2 and compression of 
> the leaf cert benefit us much.

I think there's a fairly big difference between suppressing CA certs in SCA and 
compressing CA certs with pass 1 of this draft. But I do agree it's fair to ask 
if pass 2 is worth the extra effort.

The performance benefit isn't purely in the ~1KB saved, its whether it brings 
the chain under the QUIC amplification limit or shaves off an additional packet 
and so avoids a loss+retry. There's essentially no difference in implementation 
complexity, literally just a line of code, so the main tradeoff is the required 
disk space on the client & server.

Best,
Dennis

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression (dictionary versioning)

2023-07-12 Thread Kampanakis, Panos
I wish there was a study of the certs issued by newly introduced CAs in CCADB 
and how quickly they ramp up. I am concerned that a 1 year old dictionary could 
end up slowing down a good amount of destinations. But again, that slowdown 
does not mean an outage. And servers could ensure they get their certs issued 
or cross-issued by relatively mature CAs if they do not want PQ Sig related 
slowdowns. 

Btw, in 3.1.1 I noticed 
- "Remove all intermediate certificates which are not signed by root 
certificates still in the listing."

That could eliminate some 2+ ICA cert chains. Any reason why?



-Original Message-
From: Dennis Jackson  
Sent: Wednesday, July 12, 2023 1:01 PM
To: Kampanakis, Panos ; TLS List 
Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression (dictionary 
versioning)

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



On 12/07/2023 04:54, Kampanakis, Panos wrote:

> Hi Dennis,
>
> Appendix B.1 talks about 100-200 new ICA and 10 Root certs per year. In the 
> past I had looked at fluctuations of CCADB and there are daily changes. When 
> checking in the past, I did not generate the ordered list as per pass 1 on a 
> daily basis to confirm it, but I confirmed the fluctuations. The commits in 
> https://github.com/FiloSottile/intermediates/commits/main  show it too. Given 
> that, I am wondering if CCADB is not that stable. Are you confident that ICA 
> dictionaries (based on CCADB) won't materially change often?

I checked the historical data for the last few years to ballpark a rate of 
100-200 new intermediates per year. A uniform distribution of arrivals would 
mean 2 to 4 changes a week, which matches Filippo's commit frequency [1]. In 
practice Filippo's commits include removals (which we don't care about) and 
batched additions (which we do), but the numbers seem about right.

In terms of impact, the question is how much usage do those new ICAs see in 
their first year. If we expect websites to adopt them equally likely as 
existing ICAs then they should make up <5% of the population. I think in 
practice they see much slower adoption and so the impact is even lower, for 
example a reasonable proportion are vanity certificates with limited 
applicability or intended to replace an existing cert in the future. If we 
wanted to confirm this we could build the abridged cert dictionaries for '22 
and then use CT to sample the cert chains used by websites that year. I'll see 
if I can find the time to put that together.

If there was an appetite for a faster moving dictionary, we could use the 
scheme I sketched in the appendix to the draft. But I think we should try to 
avoid that complexity if we can.

Best,
Dennis

[1] https://github.com/FiloSottile/intermediates/graphs/commit-activity

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression (discussion of leaf cert compression discussion)

2023-07-19 Thread Kampanakis, Panos
> I agree we should measure carefully before deciding whether it be mandatory 
> for PQ certs.

I tested this further for Dilithium and wanted to share the results. The TL;DR 
is that compressing the leaf cert on top of compressing/omitting the CAs, vs 
just compressing/omitting the CAs, may only drop us below 9KB for Dilithium3 
(non WebPKI). And that may not always be the case. All other non WebPKI or 
WebPKI cases will not see any significant benefit. Also, there is no case where 
compressing the leaf cert will drop us below the QUIC amplification limit. That 
is one reason why I am suggesting to differentiate between the leaf cert and 
the CA certs compression. 

Another reason is that we should be able to use just compression of CA certs 
(pass 1) for non WebPKI cases where the CT leaf cert dictionary (pass 2) cannot 
be built.

More details on the experiments follow. (Sorry for the length. )

I tested with P256+Kyber512 with Dilithium2 certs and P384+Kyber768 with 
Dilithium3 certs in TLS 1.3. My Dilithium certs did not include any SCTs (no 
WebPKI). Also, the certs were minimalistic, without basic fields, EKUs, Cert 
Policies, CRLs, SKI, AIA, complicated SANs etc. So my leaf cert was pretty 
slim other than the signature and public key, and it was not very 
"compressible". 

* P256_Kyber512 + Dilithium2:
  - ClientHello = 1137B 
  - ServerHello + Server ChangeCipherSpec = 923+1=924B
  - Server Certificates, Certificate Verify + Server Finished = 7868+2450=10318 
B
DER encoded CA and Server certs are 3.9KB each. That basically adds up to 1.3KB 
(Dilithium2 public key)+2.4(Dilithium2 signatures) + a little more for the rest 
of the cert fields which are small anyway. So a total chain is 7.8KB. To 
confirm intuitively, the Server Certificates, Certificate Verify + Server 
Finished roughly adds up to 10.318=7.8 (cert chain DER 
formatted)+2.4(Dilithium2 signature)+minuscule size of Finished message and 
other fields.  So, if we omitted the CA cert we would get 10.3-3.9=6.4KB. If we 
compressed the leaf cert fields further, we could save at most another 0.5-1KB 
which is not even possible for these certs because they were really 
minimalistic. So we would definitely end up over 5KB which is way over 
3xClientHello size. QUIC amplification would still kick in. 

Regarding the 9-10KB TLS 1.3 limit from Bas' blog post, at 6.4+1KB if we 
account for heavier certs, we would be way below 9KB by just omitting the CA 
certs even with heavier leaf certs than my minimalistic ones.

So, leaf compression on top of CA omission would not make a difference for the 
QUIC limit or the 9-10KB TLS 1.3 limit. 

Now, for WebPKI, if we add 2 more Dilithium2 signatures 2*2.4=4.8KB, it would 
take us to 6.4+4.8=11.2KB by just omitting the CA certs. If we compress the 
leaf fields on top of that and we save another 0.5-1KB, we still stay over both 
the QUIC and the TLS limit. So for WebPKI, compressing the leaf fields does not 
buy us much.

* P384_Kyber768 + Dilithium3: 
  - ClientHello = 1554B
  - ServerHello + Server ChangeCipherSpec = 1271+1=1272B
  - Server Certificates, Certificate Verify + Server Finished = 
10894+3323=14217B
DER encoded CA and Server certs are 5.4KB each. That basically adds up to 1.9KB 
(Dilithium3 public key)+3.3(Dilithium3 signatures) + a little more for the rest 
of the cert fields which are small anyway. So a total chain is 10.8KB. To 
confirm intuitively, the Server Certificates, Certificate Verify + Server 
Finished roughly adds up to 14.217=10.8 (cert chain DER 
formatted)+3.3(Dilithium3 signature)+minuscule size of Finished message and 
other fields. So, if we omitted the CA cert we would get 8.8KB. If we 
compressed the leaf cert fields further, we could gain at most another 0.5-1KB 
which is not even possible for these certs because they were really minimal. So 
we would end up around 8KB which is way over 3xClientHello size. QUIC 
amplification would kick in. 

Regarding the 9-10KB TLS 1.3 limit from Bas' blog post, compression could get 
us around 8KB although I think this would be a stretch. It is probably more 
realistic to say it would be around 9KB with real leaf certs heavier than my 
minimalistic ones. So, TLS 1.3 could see a benefit in some cases, not in others 
depending on the leaf cert bloat.

So leaf compression on top of CA omission would not put us below the QUIC 
amplification limit. It could make a difference for the 9-10KB TLS 1.3 limit 
depending on the leaf cert bloat. 

Now, for WebPKI, if we add 2 more Dilithium3 signatures 2*3.3=6.6KB, it would 
take us to 8.8+6.6=16.4KB by just omitting the CA certs. If we compress the 
leaf fields on top of that and we save another 0.5-1KB, we still stay over 
both the QUIC and the TLS limit. So for WebPKI, compressing the leaf fields 
does not buy us anything.
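
The arithmetic above can be summarized in a few lines of Python (cert and 
signature sizes are the Dilithium round-3 numbers; the ClientHello sizes are 
from the test handshakes above):

    # Auth data after omitting the CA cert = leaf cert + CertVerify signature,
    # compared against 3x ClientHello (QUIC amplification) and ~9 KB (TLS 1.3
    # inflection point).
    params = {
        # name: (leaf cert DER size, signature size, ClientHello size)
        "Dilithium2 / P256+Kyber512": (3900, 2420, 1137),
        "Dilithium3 / P384+Kyber768": (5400, 3293, 1554),
    }
    for name, (cert, sig, ch) in params.items():
        auth = cert + sig
        print(name, f"auth ~{auth} B,", f"QUIC budget {3 * ch} B,",
              "over QUIC budget" if auth > 3 * ch else "fits QUIC budget",
              "| over ~9KB" if auth > 9000 else "| under ~9KB")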




-Original Message-
From: Dennis Jackson  
Sent: Friday, July 14, 2023 6:28 AM
To: Kampanakis, Panos ; TLS List 
Subject: RE: [EXTERNAL]

Re: [TLS] The TLS WG has placed draft-jackson-tls-cert-abridge in state "Call For Adoption By WG Issued"

2023-08-01 Thread Kampanakis, Panos
I support adoption as well. 
There are some technical objections/suggestions to address which I have shared 
earlier, but the details can be figured out later.

-Original Message-
From: TLS  On Behalf Of IETF Secretariat
Sent: Tuesday, August 1, 2023 3:38 PM
To: draft-jackson-tls-cert-abri...@ietf.org; tls-cha...@ietf.org; tls@ietf.org
Subject: [EXTERNAL] [TLS] The TLS WG has placed draft-jackson-tls-cert-abridge 
in state "Call For Adoption By WG Issued"

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.



The TLS WG has placed draft-jackson-tls-cert-abridge in state Call For Adoption 
By WG Issued (entered by Christopher Wood)

The document is available at
https://datatracker.ietf.org/doc/draft-jackson-tls-cert-abridge/


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New Version Notification for draft-ounsworth-lamps-pq-external-pubkeys-00.txt

2023-10-10 Thread Kampanakis, Panos
Personally, I am against any practical use of McEliece given all the other 
available options. 1MB public keys are unnecessary, impact performance, and are 
wasteful.

Regardless of the public key in the cert though, RFC7924 allows (with other 
caveats) for not sending the server cert (and public key) if the client has 
prior knowledge of it. So, it solves the issue for TLS at least in one 
direction.

Are there any other uses for this draft? For example, what use-cases would see 
a material difference by omitting 1-2KB of the Dilithium or Falcon public key 
when the rest of the cert will still amount to 2-3KB (4-7KB if we add in SCTs)?
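
To quantify that, a small sketch in Python (public key sizes are Dilithium2 and 
Falcon-512; the remaining cert size and the size of the external reference are 
assumptions):

    # Cert size if the subject public key is replaced by an out-of-band
    # reference (roughly a hash plus a location).
    EXTERNAL_REF = 100      # assumed: ~32 B hash + short location + DER overhead
    REST_OF_CERT = 2500     # assumed: everything except the subject public key

    for alg, pk in [("Dilithium2", 1312), ("Falcon-512", 897)]:
        before = REST_OF_CERT + pk
        after = REST_OF_CERT + EXTERNAL_REF
        print(f"{alg}: {before} B -> {after} B")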





From: TLS  On Behalf Of Mike Ounsworth
Sent: Saturday, September 30, 2023 6:19 PM
To: tls@ietf.org
Subject: [TLS] FW: [EXTERNAL] New Version Notification for 
draft-ounsworth-lamps-pq-external-pubkeys-00.txt

Hi TLS WG!

This is both a new draft announcement, and a request for a short (5 min?) 
speaking slot at 118.

We want to socialize the idea of X.509 certificates with external public keys 
(i.e. the cert contains a link and hash of the public key that can be fetched or 
cached out-of-band).

The primary motivator of this LAMPS draft is Classic McEliece encryption certs, 
but we think this could also be valuable for TLS authentication certs.

Consider the following two potential use-cases:

1. Browsers
Browsers already have mechanisms to cache intermediate CA certificates. It does 
not seem like a big leap to also cache external public keys for the server 
certs of frequently-visited websites. (yes, yes, I know that the idea of 
caching server public keys runs counter to the desire for the Internet to move 
to 14-day certs. Shrug)

2. Mutual-auth TLS within a cluster
Consider a collection of docker containers within a kubernetes cluster. 
Consider that each container has a local volume mount of a read-only database 
of the public keys of all containers in the cluster. Then 
container-to-container mutual-auth TLS sessions could use much smaller 
certificates that contain references to public key objects in the shared 
database, instead of the large PQ public keys themselves.

---
Mike Ounsworth

From: Spasm mailto:spasm-boun...@ietf.org>> On Behalf 
Of Mike Ounsworth
Sent: Saturday, September 30, 2023 5:16 PM
To: 'LAMPS' mailto:sp...@ietf.org>>
Cc: John Gray mailto:john.g...@entrust.com>>; 
Markku-Juhani O. Saarinen mailto:m...@pqshield.com>>; David 
Hook mailto:david.h...@keyfactor.com>>
Subject: [lamps] FW: [EXTERNAL] New Version Notification for 
draft-ounsworth-lamps-pq-external-pubkeys-00.txt


Hi LAMPS!

This is both a new draft announcement, and a request for a short (5 min?) 
speaking slot at 118.

Actually, this is not a new draft. Back in 2021 Markku and I put forward a 
draft for External Public Key -- draft-ounsworth-pq-external-pubkeys-00 (the 
only reason this is an -00 is because I included "lamps" in the draft name). 
The idea is that instead of putting the full public key in a cert, you just 
put a hash and pointer to it:

   ExternalValue ::= SEQUENCE {
       location  GeneralName,
       hashAlg   AlgorithmIdentifier,
       hashVal   BIT STRING
   }

That allows super small PQ certs in cases where you can pass the public key 
blob through some out-of-band mechanism.
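
To make the mechanism concrete, here is a rough Python sketch (purely 
illustrative, not taken from the draft) of what a relying party would do with 
such a cert: dereference location, hash the retrieved blob, and compare it 
against hashVal before using the key. The HTTP fetch and the SHA-256 choice 
for hashAlg are assumptions made for the example.

    import hashlib
    import urllib.request

    def resolve_external_public_key(location: str, hash_val: bytes) -> bytes:
        # Fetch the key blob named by `location` (a pre-provisioned cache
        # lookup works just as well) and check it against the cert's hashVal.
        blob = urllib.request.urlopen(location).read()
        if hashlib.sha256(blob).digest() != hash_val:  # hashAlg assumed: SHA-256
            raise ValueError("external public key does not match hashVal")
        return blob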

Here's the mail list discussion from 2021:
https://mailarchive.ietf.org/arch/msg/spasm/yv7mbMMtpSlJlir8H8_D2Hjr99g/


It turns out that BouncyCastle has implemented this at the request of one of 
their customers as a way around megabyte-sized Classic McEliece certs; it is 
especially useful for use cases where clients have a way to fetch-and-cache or 
be pre-provisioned with their peers' public keys out-of-band. As such, Entrust 
and KeyFactor are reviving this draft.

We suspect this might also be of interest to the TLS WG, but I will start a 
separate thread on the TLS list.


---
Mike Ounsworth

From: internet-dra...@ietf.org 
mailto:internet-dra...@ietf.org>>
Sent: Saturday, September 30, 2023 5:12 PM
To: D. Hook mailto:david.h...@keyfactor.com>>; John 
Gray mailto:john.g...@entrust.com>>; Markku-Juhani O. 
Saarinen mailto:m...@pqshield.com>>; John Gray 
mailto:john.g...@entrust.com>>; Markku-Juhani Saarinen 
mailto:m...@pqshield.com>>; Mike Ounsworth 
mailto:mike.ounswo...@entrust.com>>
Subject: [EXTERNAL] New Version Notification for 
draft-ounsworth-lamps-pq-external-pubkeys-00.txt

A new version of Internet-Draft draft-ounsworth-lamps-pq-external-pubkeys-00. 
txt has bee

Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

2023-11-06 Thread Kampanakis, Panos
> Concretely, after ML-KEM is finished, I was planning to update 
> draft-schwabe-cfrg-kyber to match it, and proposing to register a codepoint 
> for a single ML-KEM-768 hybrid in draft-ietf-tls-hybrid-design.

Agreed, but I would suggest three (x25519-mlkem768, p256-mlkem768, 
p384-mlkem1024) to cover FIPS and CNSA 2.0 compliance. Anything beyond three 
combinations is unnecessary imo.


From: TLS  On Behalf Of Bas Westerbaan
Sent: Monday, November 6, 2023 6:37 AM
To: John Mattsson 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] What is the TLS WG plan for quantum-resistant 
algorithms?




Thanks for bringing this up. There are a bunch of (implicit) questions in your 
e-mail.

1. Do we want RFCs describing the final NIST standards? And for which? I'm ok 
with that — in this order of priority: ml-kem, ml-dsa, slh-dsa.

2. For which algorithms do we want to assign codepoints once the NIST standards 
are out? Codepoints are cheap and use cases/rules are different, but especially 
with the hybrids, I'd encourage us to try to be disciplined and keep the list 
as short as we can for now, so that early adopters for which it doesn't matter, 
all choose the same thing. The DNS mechanism of 
draft-davidben-tls-key-share-prediction helps on the performance side, but it 
doesn't solve the duplicate engineering/validation if there are a dozen 
essentially equivalent KEMs.

3. Do we want to standardise non-hybrid KEMs for TLS? I don't care for them 
yet, but others might.

4. Do we need hybrid signatures for the TLS handshake? I don't see the use, but 
could be convinced otherwise.

5. What is the future of AuthKEM? That's definitely a different e-mail thread.

Concretely, after ML-KEM is finished, I was planning to update 
draft-schwabe-cfrg-kyber to match it, and proposing to register a codepoint for 
a single ML-KEM-768 hybrid in draft-ietf-tls-hybrid-design.

Best,

 Bas


On Mon, Nov 6, 2023 at 10:10 AM John Mattsson 
mailto:40ericsson@dmarc.ietf.org>>
 wrote:
Hi,

NIST has released draft standards for ML-KEM, ML-DSA, and SLH-DSA. Final 
standards are expected in Q1 2024.
https://csrc.nist.gov/news/2023/three-draft-fips-for-post-quantum-cryptography

I would like to have standard track TLS (and DTLS, QUIC) RFCs for ML-KEM and 
ML-DSA (all security levels standardized by NIST) as soon as possible after the 
final NIST standards are ready. 3GPP is relying almost exclusively on IETF RFCs 
for uses of public key cryptography (the exception is ECIES for IMSI encryption 
but that will likely use HPKE with ML-KEM in the future).

Looking at the TLS document list, it seems severely lacking when it comes to 
ML-KEM, ML-DSA…

The adopted draft-ietf-tls-hybrid-design is an informational draft dealing with the 
pre-standard Kyber.
https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/
AuthKEM is a quite big change to TLS
https://datatracker.ietf.org/doc/draft-wiggers-tls-authkem-psk/

This one is not adopted, is informational, and deals with the pre-standard Kyber.
https://datatracker.ietf.org/doc/draft-kwiatkowski-tls-ecdhe-kyber/

What is the TLS WG plan for quantum-resistant algorithms? My current view is 
that I would like ML-KEM-512, ML-KEM-768, ML-KEM-1024, ML-DSA-44, ML-DSA-65, 
and ML-DSA-87 registered asap. For hybrid key exchange I think X25519 and X448 
are the only options that make sense. For hybrid signing, ECDSA, EdDSA, and RSA 
could all make sense.

Cheers,
John

From: TLS mailto:tls-boun...@ietf.org>> on behalf of 
internet-dra...@ietf.org 
mailto:internet-dra...@ietf.org>>
Date: Friday, 8 September 2023 at 02:48
To: i-d-annou...@ietf.org 
mailto:i-d-annou...@ietf.org>>
Cc: tls@ietf.org mailto:tls@ietf.org>>
Subject: [TLS] I-D Action: draft-ietf-tls-hybrid-design-09.txt
Internet-Draft draft-ietf-tls-hybrid-design-09.txt is now available. It is a
work item of the Transport Layer Security (TLS) WG of the IETF.

   Title:   Hybrid key exchange in TLS 1.3
   Authors: Douglas Stebila
Scott Fluhrer
Shay Gueron
   Name:draft-ietf-tls-hybrid-design-09.txt
   Pages:   23
   Dates:   2023-09-07

Abstract:

   Hybrid key exchange refers to using multiple key exchange algorithms
   simultaneously and combining the result with the goal of providing
   security even if all but one of the component algorithms is broken.
   It is motivated by transition to post-quantum cryptography.  This
   document provides a construction for hybrid key exchange in the
   Transport Layer Security (TLS) protocol version 1.3.

   Discussion of this work is encouraged to happen on the TLS IETF
   mailing list tls@ietf.org or on the GitHub repository 
which contains
   the draft: 
https://protect2.fireeye.com/v1/url?k=31323334-501d5122-313273af-45444

Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-11 Thread Kampanakis, Panos
+1 on making X-Wing a generic construction and stirring in the KEM ciphertext.

In the ML-KEM case, the SHAKE256 cost of an additional 1-1.5KB ciphertext c2 
will be minuscule compared to the other operations. And this will be similar 
for other KEMs as well.

For example, from https://bench.cr.yp.to/results-sha3.html it seems the total 
additional cost would be ~15 Kcycles for ML-KEM-1024 on most platforms, which is 
pretty small compared to the sk2<-random(32)+ske<-random(32)+X25519.DH(ske, 
gX25519)+X25519.DH(sk2, gX25519) costs which amount to 400-1200 Kcycles (using 
https://bench.cr.yp.to/results-dh.html). Is a 5% savings worth an ML-KEM-specific 
combiner?
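
For anyone who wants to sanity-check that ratio, here is a back-of-envelope 
version of the numbers above (treated as rough orders of magnitude from the 
bench.cr.yp.to pages, not measurements of any one platform):

    shake_extra_kcycles = 15      # SHAKE256 absorbing an extra ~1-1.5KB ciphertext
    x25519_kcycles_low = 400      # low end of the keygen + two-DH estimate
    x25519_kcycles_high = 1200    # high end across platforms
    print(shake_extra_kcycles / x25519_kcycles_low)   # ~0.04, i.e. about 4%
    print(shake_extra_kcycles / x25519_kcycles_high)  # ~0.01, i.e. about 1%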



From: TLS  On Behalf Of Peter C
Sent: Thursday, January 11, 2024 10:38 AM
To: Mike Ounsworth ; Bas 
Westerbaan 
Cc: IRTF CFRG ;  ; k...@cupdev.net
Subject: RE: [EXTERNAL] [TLS] [CFRG] [EXTERNAL] X-Wing: the go-to PQ/T hybrid 
KEM?




Mike,

X-Wing is not a profile of the generic construction.  Dropping the ML-KEM 
ciphertext changes the security assumptions you need to make.  If X25519 is 
secure then, in the generic construction, ML-KEM doesn’t need to satisfy any 
security properties at all for the hybrid to be secure.  In X-Wing, it still 
needs to be ciphertext collision resistant.  The X-Wing paper 
(https://ia.cr/2024/039) argues this holds for ML-KEM – or any similar KEM – 
but that depends on decapsulation functioning correctly.

Peter

From: CFRG mailto:cfrg-boun...@irtf.org>> On Behalf Of 
Mike Ounsworth
Sent: Thursday, January 11, 2024 2:57 PM
To: Bas Westerbaan 
mailto:bas=40cloudflare@dmarc.ietf.org>>
Cc: IRTF CFRG mailto:c...@irtf.org>>; 
mailto:tls@ietf.org>> mailto:tls@ietf.org>>; 
k...@cupdev.net
Subject: Re: [CFRG] [EXTERNAL] X-Wing: the go-to PQ/T hybrid KEM?

Right. I’m just thinking out loud here.

If the Generic is

KDF(counter || KEM1_ct || KEM1_ss || KEM2_ct || KEM2_ss || fixedInfo)

And X-Wing is:

SHA3-256( “\.//^\” || ML-KEM_ss || X25519_ss || X25519_ct || X25519_pk )

It looks pretty close to me; you’ve dropped the ML-KEM CT, added the X25519 
recipient public key, and moved the fixedInfo from the end to the beginning.
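
For readers following along, here is a toy Python rendering of the two formulas 
exactly as written in this thread (so it mirrors the emails, not necessarily the 
final drafts); SHA3-256 stands in for the generic KDF, and the counter/fixedInfo 
encodings are placeholders:

    import hashlib

    def xwing_shared_secret(mlkem_ss, x25519_ss, x25519_ct, x25519_pk):
        label = b"\\.//^\\"  # the 6-byte X-Wing label quoted above
        return hashlib.sha3_256(label + mlkem_ss + x25519_ss +
                                x25519_ct + x25519_pk).digest()

    def generic_shared_secret(kem1_ct, kem1_ss, kem2_ct, kem2_ss,
                              fixed_info=b"", counter=1):
        # Generic combiner as sketched above; the 4-byte counter encoding
        # is an assumption for illustration only.
        data = (counter.to_bytes(4, "big") + kem1_ct + kem1_ss +
                kem2_ct + kem2_ss + fixed_info)
        return hashlib.sha3_256(data).digest()

Laid out this way, the three differences listed above (dropped ML-KEM ciphertext, 
added X25519 recipient public key, label moved to the front) are the only places 
the two functions diverge.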

The question is: is that close enough to be considered a profile? Do we want to 
adapt the Generic so that X-Wing is properly a profile? Binding to the ECC 
public keys is probably not a bad idea in general. Certainly it would make no 
sense for some IETF protocols to use X-Wing while others use the ML-KEM + 
X25519 instantiation of the generic. I think I’m convincing myself that the 
Generic should be adjusted so that X-Wing is the obvious instantiation for 
ML-KEM + X25519.

Aside: do you have an opinion about fixedInfo as a prefix vs a suffix? We chose 
suffix simply because it more obviously aligns with SP 800-56Cr2, and we’ve all 
had the experience of FIPS labs being picky about things like that.

---
Mike Ounsworth

From: Bas Westerbaan 
mailto:bas=40cloudflare@dmarc.ietf.org>>
Sent: Thursday, January 11, 2024 7:07 AM
To: Mike Ounsworth 
mailto:mike.ounswo...@entrust.com>>
Cc: IRTF CFRG mailto:c...@irtf.org>>; 
mailto:tls@ietf.org>> mailto:tls@ietf.org>>; 
Deirdre Connolly mailto:durumcrustu...@gmail.com>>; 
k...@cupdev.net
Subject: Re: [EXTERNAL] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?



Speaking for myself (not for my co-authors), this feels like friendly, 
complementary work to draft-ounsworth-cfrg-kem-combiners;

I agree.

We could consider adding a section with concrete instantiations, and the first 
one would be X-Wing 😊 (followed by ML-KEM + P-256, Brainpool, and RSA variants).

I guess that leads to the following question: @Bas 
Westerbaan, @Deirdre 
Connolly, Peter, would you be open to merging 
X-Wing into the generic combiner draft, or is there value in it being 
standalone?

X-Wing explicitly trades genericity for simplicity. We will not get such a 
simple and efficient construction if it is the instantiation of an easy-to-use 
generic construction.

Best,

 Bas


---
Mike Ounsworth

From: CFRG mailto:cfrg-boun...@irtf.org>> On Behalf Of 
Bas Westerbaan
Sent: Wednesday, January 10, 2024 2:14 PM
To: IRTF CFRG mailto:c...@irtf.org>>; 
mailto:tls@ietf.org>> mailto:tls@ietf.org>>
Cc: k...@cupdev.net
Subject: [EXTERNAL] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

Dear tls and cfrg working groups, With ML-KEM (née Kyber) expected to be 
finalized this year, it’s time to revisit the question of which PQ/T hybrid 
KEMs to

Re: [TLS] draft-ietf-tls-cert-abridge Update

2024-03-01 Thread Kampanakis, Panos
Hi Dennis,

I created a git issue 
https://github.com/tlswg/draft-ietf-tls-cert-abridge/issues/23 but I am pasting 
it here for the sake of the discussion:

What does the client do if the server only does Pass 1 and compresses / omits 
the chain certs but does not compress the end-entity certs (Pass 2)?

The client should be fine with that. It should be able to reconstruct the chain 
and use the uncompressed end-entity cert. It should not fail the handshake. I 
suggest the Implementation Complexity section say something like

> Servers MAY choose to compress just the cert chain or the end-certificate 
> depending on their ability to perform Pass 1 or 2 respectively. Clients MUST 
> be able to process a compressed chain or an end-entity certificate 
> independently.

Thanks,
Panos


From: TLS  On Behalf Of Dennis Jackson
Sent: Friday, March 1, 2024 8:03 AM
To: TLS List 
Subject: [EXTERNAL] [TLS] draft-ietf-tls-cert-abridge Update





Hi all,

I wanted to give a quick update on the draft.

On the implementation side, we have now landed support for TLS Certificate 
Compression in Firefox Nightly which was a prerequisite for experimenting with 
this scheme (thank you to Anna Weine). We're working on a Rust crate 
implementing the current draft and expect to start experimenting with abridged 
certs in Firefox (with a server-side partner) ahead of IETF 120.

On the editorial side, I've addressed the comments on presentation and 
clarification made since IETF 117, which are now in the editor's copy - there's 
an overall diff here [1] and atomic changes here [2]. There are two small PRs 
I've opened addressing minor comments by Ben Schwarz on fingerprinting 
considerations [3] and Jared Crawford on the ordering of certificates [4]. 
Feedback is welcome via mail or on the PRs directly.

Best,
Dennis

[1] 
https://author-tools.ietf.org/api/iddiff?doc_1=draft-ietf-tls-cert-abridge&url_2=https://tlswg.github.io/draft-ietf-tls-cert-abridge/draft-ietf-tls-cert-abridge.txt

[2] https://github.com/tlswg/draft-ietf-tls-cert-abridge/commits/main/

[3] https://github.com/tlswg/draft-ietf-tls-cert-abridge/pull/21/files

[4] https://github.com/tlswg/draft-ietf-tls-cert-abridge/pull/19/files
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] draft-ietf-tls-cert-abridge Update

2024-03-04 Thread Kampanakis, Panos
Hi Dennis,

> I can see two different ways to handle it. Either as you suggest, we have it 
> be a runtime decision and we just prefix the compressed form with a byte to 
> indicate whether pass 2 has been used. Alternatively, we can define two 
> codepoints, (pass 1 + pass 2, pass 1).
> I'd like to experiment with both operations and measure what the real world 
> difference is first, then we can make a decision on how to proceed. There's 
> also been more interest in the non-webpki use case than I expected, so that 
> needs to factor in to whichever option we pick.

Maybe these will not matter for the scenario I am considering. Let’s say the 
client advertised support for draft-ietf-tls-cert-abridge. And the server sent 
back
- CompressedCertificate which includes the 2 identifiers for the ICA and RootCA 
from Pass 1.
- uncompressed, traditional CertificateEntry of the end-entity certificate

Or it sent back

- uncompressed, traditional CertificateEntries for the ICA and RootCA certs
- CompressedCertificate which includes the ZStandard compressed (based on the 
Pass2 dictionary) end-entity cert

My point is that nothing should prevent the client from being able to handle 
these two scenarios and normative language should point that out. Any software 
that can parse certs in compressed form ought to be able to parse them in 
regular form if the server did not support Pass 1 (CA certs were not available 
for some reason) or Pass 2 (e.g., if CT logs were not available for some reason).

Am I overlooking something?


From: Dennis Jackson 
Sent: Monday, March 4, 2024 10:47 AM
To: Kampanakis, Panos ; TLS List 
Subject: RE: [EXTERNAL] [TLS] draft-ietf-tls-cert-abridge Update





Hi Panos,

On 02/03/2024 04:09, Kampanakis, Panos wrote:
Hi Dennis,

I created a git issue 
https://github.com/tlswg/draft-ietf-tls-cert-abridge/issues/23 but I am pasting 
it here for the sake of the discussion:

What does the client do if the server only does Pass 1 and compresses / omits 
the chain certs but does not compress the end-entity certs (Pass 2)?

The client should be fine with that. It should be able to reconstruct the chain 
and use the uncompressed end-entity cert. It should not fail the handshake. I 
suggest the Implementation Complexity section say something like

I can see two different ways to handle it. Either as you suggest, we have it be 
a runtime decision and we just prefix the compressed form with a byte to 
indicate whether pass 2 has been used. Alternatively, we can define two 
codepoints, (pass 1 + pass 2, pass 1).

I'd like to experiment with both operations and measure what the real world 
difference is first, then we can make a decision on how to proceed. There's 
also been more interest in the non-webpki use case than I expected, so that 
needs to factor in to whichever option we pick.

Best,
Dennis

> Servers MAY choose to compress just the cert chain or the end-certificate 
> depending on their ability to perform Pass 1 or 2 respectively. Clients MUST 
> be able to process a compressed chain or an end-entity certificate 
> independently.

Thanks,
Panos



Re: [TLS] draft-ietf-tls-cert-abridge Update

2024-03-06 Thread Kampanakis, Panos
Good point, understood, thanks.

> I was suggesting either have it be a single label for the entire message or 
> putting the label into the TLS1.3 Cert Compression codepoint.

I think the former sounds more reasonable. Two codepoints, one for (only CA Pass 1 
compression) and one for (Pass 1 + Pass 2), seem like a waste of codepoints.

The problem I am trying to address is cases where 1/ SCTs are not available 
(like Private PKIs), or 2/ the server is lazy and does not want to create that 
dictionary, or 3/ the benefit of Pass 2 is not important enough. I understand 
that you will collect data for 3/ to hopefully prove it, so I will wait for 
those. But I think 1/ and 2/ are still worth addressing.


From: TLS  On Behalf Of Dennis Jackson
Sent: Wednesday, March 6, 2024 7:39 AM
To: tls@ietf.org
Subject: RE: [EXTERNAL] [TLS] draft-ietf-tls-cert-abridge Update





Hi Panos,
On 05/03/2024 04:14, Kampanakis, Panos wrote:
Hi Dennis,

> I can see two different ways to handle it. Either as you suggest, we have it 
> be a runtime decision and we just prefix the compressed form with a byte to 
> indicate whether pass 2 has been used. Alternatively, we can define two 
> codepoints, (pass 1 + pass 2, pass 1).
> I'd like to experiment with both operations and measure what the real world 
> difference is first, then we can make a decision on how to proceed. There's 
> also been more interest in the non-webpki use case than I expected, so that 
> needs to factor in to whichever option we pick.

Maybe these will not matter for the scenario I am considering. Let’s say the 
client advertised support for draft-ietf-tls-cert-abridge. And the server sent 
back
- CompressedCertificate which includes the 2 identifiers for the ICA and RootCA 
from Pass 1.
- uncompressed, traditional CertificateEntry of the end-entity certificate

Or it sent back

- uncompressed, traditional CertificateEntries for the ICA and RootCA certs
- CompressedCertificate which includes the ZStandard compressed (based on the 
Pass2 dictionary) end-entity cert

My point is that nothing should prevent the client from being able to handle 
these two scenarios and normative language should point that out. Any software 
that can parse certs in compressed form ought to be able to parse them in 
regular form if the server did not support Pass 1 (CA certs were not available 
for some reason) or Pass 2 (e.g., if CT logs were not available for some reason).

Am I overlooking something?


Yes I think so. TLS1.3 Certificate Compression applies to the entire 
Certificate Message, not individual CertificateEntries in that message. Those 
individual entries don't currently carry identifiers about what type they are, 
their type is negotiated earlier in the EncryptedExtensions extension.
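
A small sketch of what that means in practice (RFC 8879 semantics, with zlib 
standing in for whichever algorithm was negotiated and the parsing of the 
decompressed message elided):

    import zlib

    def decompress_certificate(compressed_certificate_message: bytes,
                               uncompressed_length: int) -> bytes:
        # Per RFC 8879 the CompressedCertificate message wraps the *entire*
        # Certificate message; there is no per-CertificateEntry marker.
        msg = zlib.decompress(compressed_certificate_message)
        if len(msg) != uncompressed_length:
            raise ValueError("uncompressed_length mismatch")
        return msg  # CertificateEntries are parsed out of this afterwards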

So to handle this as you propose, we'd need to define a type field for each 
entry to specify whether that particular entry had undergone a particular pass 
(or both). In my message, I was suggesting either have it be a single label for 
the entire message or putting the label into the TLS1.3 Cert Compression 
codepoint.

Best,
Dennis
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Time to first byte vs time to last byte

2024-03-07 Thread Kampanakis, Panos
Thx Deirdre for bringing it up.

David,

ACK. I think the overall point of our paper is that application performance is 
more closely related to PQ TTLB than PQ TTFB/handshake.

Snippet from the paper

> Google’s PageSpeed Insights [12] uses a set of metrics to measure the user 
> experience and webpage performance. The First Contentful Paint (FCP), Largest 
> Contentful Paint (LCP), First Input Delay (FID), Interaction to Next Paint 
> (INP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS) metrics 
> include this work’s TTLB along with other client-side, browser 
> application-specific execution delays. The PageSpeed Insights TTFB metric 
> measures the total time up to the point the first byte of data makes it to 
> the client. So, PageSpeed Insights TTFB is like this work’s TTFB/TLS 
> handshake time with additional network delays like DNS lookup, redirect, 
> service worker startup, and request time.

Specifically about the Web, TTLB (as defined in the paper) is directly related 
to FCP, LCP, FID, INP, TBT, CLS, which are 6 of the 7 metrics in Google’s 
PageSpeed Insights. We don’t want to declare that TTLB is the ultimate metric, 
but intuitively, I think it is a better indicator when it comes to application 
performance than TTFB.

That does not intend to underestimate the importance of the studies on 
handshake performance which was crucial to identify the best performing new 
KEMs and signatures. It also does not intend to underestimate the importance of 
slimming down PQ TLS 1.3 handshakes as much as possible.

Side note about Rob’s point:
We have not collected QUIC TTLB data yet, but I want to say that the paper’s 
TTLB experimental results could more or less be extended to QUIC by subtracting 
one RTT. OK, I don’t have experimental measurements to prove it yet. So I will 
only make this claim and stop until I have more data.



From: TLS  On Behalf Of David Benjamin
Sent: Thursday, March 7, 2024 3:41 PM
To: Deirdre Connolly 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] Time to first byte vs time to last byte




This is good work, but we need to be wary of getting too excited about TTLB, 
and then declaring performance solved. Ultimately, TTLB simply dampens the 
impact of postquantum by mixing in the (handshake-independent) time to do the 
bulk transfer. The question is whether that reflects our goals.

Ultimately, the thing that matters is overall application performance, which 
can be complex to measure because you actually have to try that application. 
Metrics like TTLB, TTFB, etc., are isolated to one connection and thus easier 
to measure, and without checking each application one by one. But they're only 
as valuable as they are predictors of overall application performance. For 
TTLB, both the magnitude and desirability of dampening effect are 
application-specific:

If your goal is transferring a large file on the backend, such that you really 
only care when the operation is complete, then yes, TTLB is a good proxy for 
application system performance. You just care about throughput in that case. 
Moreover, in such applications, if you are transferring a lot of data, the 
dampening effect not only reflects reality but is larger.

However, interactive, user-facing applications are different. There, TTLB is a 
poor proxy for application performance. For example, on the web, performance is 
determined more by how long it takes to display a meaningful webpage to the 
user. (We often call this the time to "first contentful paint".) Now, that is a 
very high-level metric that is impacted by all sorts of things, such as whether 
this is a repeat visit, page structure, etc. So it is hard to immediately 
translate that back down to TLS. But it is frequently much closer to the TTFB 
side of the spectrum than the TTLB side. And indeed, we have been seeing 
impacts from PQ to our high-level metrics on mobile.

There's also a pretty natural intuition for this: since there is much more 
focus on latency than throughput, optimizing an interactive application often 
involves trying to reduce the amount of traffic on the critical path. The more 
the application does so, the less accurate TTLB's dampening effect is, and the 
closer we trend towards TTFB. (Of course, some optimizations in this space 
involve making fewer connections, etc. But the point here was to give a rough 
intuition.)

On Thu, Mar 7, 2024 at 2:58 PM Deirdre Connolly 
mailto:durumcrustu...@gmail.com>> wrote:
"At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web 
(MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a metric 
for assessing the total impact of data-heavy, quantum-resistant algorithms such 
as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our paper shows that 
the new algorithms will have a much 

Re: [TLS] Time to first byte vs time to last byte

2024-03-08 Thread Kampanakis, Panos
Hi Martin,

I think we are generally in agreement, but I want to push back on the argument 
that the PQ slowdown for a page transferring 72KB is going to be the problem. I 
will try to quantify this below (look for [72KBExample]). 

Btw, if you have any stats on Web content size distribution, I am interested. 
Other than averages, I could not find any data on how Web content size looks 
today.

Note that our paper is not bashing TTFB as a metric; we are just saying TTFB is 
more relevant for use-cases that send little data, which is not the case for 
most applications today. Snippet from the Conclusion of the paper 
> Connections that transfer <10-20KB of data will probably be more impacted by 
> the new data-heavy handshakes  
This study picked data sizes based on public data on Web sizes (HTTP Archive) 
and other data for other cloud uses. Of course, if we reached a world where 
most use-cases (Web connections, IoT sensor measurement conns, cloud conns) 
were typically sending <50KB, then the TTFB would become more relevant. I am 
not sure we are there or we will ever be. Even the page you referenced (thx, I 
did not know of it) argues " ~100KiB of HTML/CSS/fonts and ~300-350KiB of JS." 
from 2021. 

[72KBExample] 
I think your 20-25% for a 72KB example page probably came from reading Fig 4b 
which includes an extra RTT due to initcwnd=10. Given that especially for the 
web, CDNs use much higher initcwnds, let's focus on Figure 10. Based on Fig 
10, for 50-100KB of data over a PQ connection, the TTLB would be 10-15% slower for 
1Mbps and 200ms RTT. At higher speeds, this percentage is much less (1-1.5% 
based on Fig 9b), but let's focus on the slow link. 

If we consider the same case for handshake, then the PQ handshake slowdown is 
30-35%, which definitely looks like a very impactful slowdown. A 10-15% slowdown for the 
TTLB is much less, but someone could argue that even that is a significant 
slowdown. Note we are still in a slow link, so even the classical conn 
transferring 72KB is probably suffering. To quantify that I looked at my data 
from these experiments. A classical connection TTLB for 50-100KB of data at 
1Mbps and 200ms RTT and 0% loss was ~1.25s. This is not shown in the paper 
because I only included text about the 10% loss case. 1.25s for a 72KB page to 
start getting rendered on a browser over a classical conn vs 1.25*1.15=1.44s 
for a PQ one. I am not sure any user waiting for 1.25s will close the browser 
at 1.44s. 

Btw, the Google PageSpeed Insights TTFB metric (which includes DNS lookup, 
redirects, and more) considers 0.8s - 1.8s as "Needs improvement". In our 
experiments, the handshake time for 1Mbps and 200ms RTT amounted to 436ms and 
576ms for the classical and PQ handshakes respectively. I am not sure the extra 
140ms (30-35% slowdown) for the PQ handshake would even throw the Google 
PageSpeed Insights TTFB metric to the "Needs improvement" category. 
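
For readers who want to redo the arithmetic, here is a quick check using the 
numbers above (classical TTLB of ~1.25s and handshake times of 436ms/576ms at 
1Mbps and 200ms RTT); it simply adds the extra handshake time onto the classical 
TTLB and ignores congestion-window interactions:

    hs_classical, hs_pq = 0.436, 0.576   # seconds, from the experiments above
    ttlb_classical = 1.25                # seconds, ~72KB page at 1Mbps / 200ms RTT
    ttlb_pq = ttlb_classical + (hs_pq - hs_classical)
    print(f"handshake slowdown: {hs_pq / hs_classical - 1:.0%}")       # ~32%
    print(f"TTLB slowdown:      {ttlb_pq / ttlb_classical - 1:.0%}")   # ~11%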



-Original Message-
From: Martin Thomson  
Sent: Thursday, March 7, 2024 10:26 PM
To: Kampanakis, Panos ; David Benjamin 
; Deirdre Connolly ; Rob Sayre 

Cc: TLS@ietf.org; Childs-Klein, Will 
Subject: RE: [EXTERNAL] [TLS] Time to first byte vs time to last byte




Hi Panos,

I realize that TTLB might correlate well for some types of web content, but 
it's important to recognize that lots of web content is badly bloated (if you 
can tolerate the invective, this is a pretty good look at the situation, with 
numbers: https://infrequently.org/series/performance-inequality/).

I don't want to call out your employer's properties in particular, but at over 
3 MB and with relatively few connections, handshakes really don't play much into 
page load performance.  That might be typical, but just being typical doesn't 
mean that it's a case we should be optimizing for.

The 72K page I linked above looks very different.  There, your paper shows a 
20-25% hit on TTLB.  TTFB is likely more affected due to the way congestion 
controllers work and the fact that you never leave slow start.

Cheers,
Martin


Re: [TLS] [EXT] Re: Time to first byte vs time to last byte

2024-03-13 Thread Kampanakis, Panos
I think we are getting distracted from the point which is to consider the whole 
connection time when assessing handshake impact. Even an extra RTT due to 
initcwnd=10 becomes less and less significant when we are talking about 5+ RTTs 
to establish the conn and transfer >50KB of data.

Interestingly enough, for the example page size in question (72KB), the total 
connection time includes the same number of RTTs (assuming initcwnd=10~=15KB):
- Classical case: 1 for the TCP handshake + 1 for the TLS handshake + 3 for the 
data (15+30+27)
- PQ case: 1 for the TCP handshake + 2 for the TLS handshake + 2 for the data 
(30+42)
OK, this is just because of how 72KB aligns with the TCP congestion window 
increasing.
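
A rough slow-start counter behind that arithmetic, assuming the congestion 
window doubles every round, no loss, and initcwnd=10 ~= 15KB; it is only meant 
to reproduce the 72KB example above:

    def data_rtts(total_kb: float, cwnd_kb: float = 15.0) -> int:
        # Count the rounds needed to deliver total_kb, doubling cwnd each round.
        rounds, delivered = 0, 0.0
        while delivered < total_kb:
            delivered += cwnd_kb
            cwnd_kb *= 2
            rounds += 1
        return rounds

    print(data_rtts(72))      # 3 rounds: classical case (15 + 30 + 27)
    print(data_rtts(72, 30))  # 2 rounds: PQ case, cwnd already grew during the
                              #           extra handshake round trip (30 + 42)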


From: Blumenthal, Uri - 0553 - MITLL 
Sent: Wednesday, March 13, 2024 7:16 PM
To: resea...@bensmyth.com
Cc: Bas Westerbaan ; Kampanakis, Panos 
; TLS@ietf.org; Childs-Klein, Will 
Subject: RE: [EXTERNAL] [EXT] Re: [TLS] Time to first byte vs time to last byte

Please, let us not assume every website is behind a CDN.
Isn't that assumption reasonable? At least for global websites --- without CDN 
performance sucks.
Of course it isn’t.

As a reference point:

Consider reading the New York Times in Canberra,

Well, if you have nothing better to do there… ;-)

doesn't happen without CDN

Of course. The whole point is not to assume every website is behind CDN. Which 
part of “every” is unclear?
Of course there are sites behind a CDN of some kind. And there are sites that 
are not. It is unwise to ignore that.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread Kampanakis, Panos
Hi Scott, David,

I think it would make more sense for the normative language about Client and 
Server behavior (section 3.2, 3.4) in 
draft-davidben-tls-key-share-prediction-00 
(https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
 ) to go in draft-ietf-tls-hybrid-design. These are now discussed in the Sec 
Considerations of draft-davidben-tls-key-share-prediction-01, but the “SHOULD” 
and “SHOULD NOT” language from -00 section 3.2 and 3.4 ought to be in 
draft-ietf-tls-hybrid-design.

I definitely want to see draft-davidben-tls-key-share-prediction move forward 
too.

Rgs,
Panos

From: TLS  On Behalf Of David Benjamin
Sent: Tuesday, March 19, 2024 1:26 AM
To: Scott Fluhrer (sfluhrer) 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] A suggestion for handling large key shares




> If the server supports P256+ML-KEM, what Matt suggested is that, instead of 
> accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then 
> continue as expected and end up negotiating things in 2 round trips.

I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah, a 
server which aims to prefer P256+ML-KEM over P256 should, well, prefer 
P256+ML-KEM over P256. :-) See the discussions around 
draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear on 
the semantics of such a ClientHello:

   This vector MAY be empty if the client is requesting a
   HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
   group offered in the "supported_groups" extension and MUST appear in
   the same order.  However, the values MAY be a non-contiguous subset
   of the "supported_groups" extension and MAY omit the most preferred
   groups.  Such a situation could arise if the most preferred groups
   are new and unlikely to be supported in enough places to make
   pregenerating key shares for them efficient.

rfc8446bis contains further clarifications: 
https://github.com/tlswg/tls13-spec/pull/1331

Now, some servers (namely OpenSSL) will instead unconditionally select from 
key_share first. This isn't wrong, per se. It is how you implement a server 
which believes all of its supported groups are of comparable security level and 
therefore prioritizes round trips. Such a policy is plausible when you only 
support, say, ECDH curves. It's not so reasonable if you support both ECDH and 
a PQ KEM. But all the spec text for that is in place, so all that is left is 
that folks keep this in mind when adding PQ KEMs to a TLS implementation. A TLS 
stack that always looks at key_share first is not PQ-ready and will need some 
changes before adopting PQ KEMs.
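
A minimal sketch of the selection policy being described (the group labels are 
illustrative and this is not any particular stack's API): the server walks its 
own preference list over the client's supported_groups and only takes the 
HelloRetryRequest path when its preferred group has no key share.

    SERVER_PREFERENCE = ["X25519MLKEM768", "SecP256r1MLKEM768", "x25519", "secp256r1"]

    def select_group(client_supported_groups, client_key_share_groups):
        # Prefer the strongest mutually supported group, even at the cost of
        # an HRR round trip; never pick from key_share unconditionally.
        for group in SERVER_PREFERENCE:
            if group in client_supported_groups:
                return group, group not in client_key_share_groups  # (group, needs_hrr)
        raise ValueError("no mutually supported group")

    # Client predicts only P-256 but also advertises the hybrid:
    print(select_group({"secp256r1", "SecP256r1MLKEM768"}, {"secp256r1"}))
    # -> ('SecP256r1MLKEM768', True): take the hybrid and pay one HRR round trip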

Regarding the other half of this:

> Suppose we have a client that supports both P-256 and P256+ML-KEM.  What the 
> client does is send a key share for P-256, and also indicate support for 
> P256+ML-KEM.  Because we’re including only the P256 key share, the client 
> hello is short

I don't think this is a good tradeoff and would oppose such a SHOULD here. PQ 
KEMs are expensive as they are. Adding a round-trip to it will only make it 
worse. Given the aim is to migrate the TLS ecosystem to PQ, penalizing the 
desired state doesn't make sense. Accordingly, Chrome's Kyber deployment 
includes X25519Kyber768 in the initial ClientHello. While this does mean paying 
an unfortunate upfront cost, this alternative would instead disincentivize 
servers from deploying post-quantum protections.

If you're interested in avoiding the upfront cost, see 
draft-davidben-tls-key-share-prediction-01. That provides a mechanism for 
clients to predict more accurately, though it's yet to even be adopted, so it's 
a bit early to rely on that one. Note also the Security Considerations section, 
which further depends on the server expectations above.

David

On Tue, Mar 19, 2024 at 2:47 PM Scott Fluhrer (sfluhrer) 
mailto:40cisco@dmarc.ietf.org>> wrote:
Recently, Matt Campagna emailed the hybrid KEM group (Douglas, Shay and me) 
about a suggestion about one way to potentially improve the performance (in the 
‘the server hasn’t upgraded yet’ case), and asked if we should add that 
suggestion to our draft.  It occurs to me that this suggestion is equally 
applicable to the pure ML-KEM draft (and future PQ drafts as well); hence 
putting it in our draft might not be the right spot.

Here’s the core idea (Matt’s original scenario was more complicated):


  *   Suppose we have a client that supports both P-256 and P256+ML-KEM.  What 
the client does is send a key share for P-256, and also indicate support for 
P256+ML-KEM.  Because we’re including only the P256 key share, the client hello 
is short
  *   If the server supports only P256, it accepts it, and life goes on as 
normal.
  *   If the server supports P256+ML-KEM, what Matt suggested is that, instead 
of acce

Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread Kampanakis, Panos
ACK, thx, I had missed the discussions. It looks like what I was referring to 
is addressed more prescriptively in rfc8446bis. That is great.

From: David Benjamin 
Sent: Tuesday, March 19, 2024 4:42 PM
To: Kampanakis, Panos 
Cc: Scott Fluhrer (sfluhrer) ; TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] A suggestion for handling large key shares




I think you're several discussions behind here. :-P I don't think 
draft-ietf-tls-hybrid-design makes sense here. This has nothing to do with 
hybrids, but anything with large key shares. If one were to do Kyber on its 
own, this would apply too. Rather, per the discussion at IETF 118, the WG opted 
to add some clarifications to rfc8446bis in light of draft-00.

It has also turned out that:
a) RFC 8446 actually already defined the semantics (when I wrote draft-00, I'd 
thought it was ambiguous), though the clarification definitely helped
b) The implementation that motivated the downgrade concern says this was not a 
bug from misunderstanding the protocol, but an intentional design decision

Given that, the feedback on the list and 
https://github.com/davidben/tls-key-share-prediction/issues/5, I concluded 
past-me was overthinking this and we can simply define the DNS mechanism and 
say it is the server's responsibility to interpret the preexisting TLS spec 
text correctly and pick what it believes is a coherent selection policy. So 
draft-01 now simply defines the DNS mechanism without any complex codepoint 
classification and includes some discussion of the situation in Security 
Considerations, as you noted.

Of what remains in Security Considerations, the random client MAY is specific 
to this draft and does not make sense to move. The server NOT RECOMMENDED is 
simply restating the preexisting implications of RFC 8446 and the obvious 
implications of believing some options are more secure than others. If someone 
wishes to replicate it into another document, they're welcome to, but I 
disagree with moving it. In the context of the discussion in that section, it 
makes sense to restate this implication because this is very relevant to why 
it's okay for the client to use DNS to influence key shares.

David

On Wed, Mar 20, 2024 at 6:08 AM Kampanakis, Panos 
mailto:kpa...@amazon.com>> wrote:
Hi Scott, David,

I think it would make more sense for the normative language about Client and 
Server behavior (section 3.2, 3.4) in 
draft-davidben-tls-key-share-prediction-00 
(https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
 ) to go in draft-ietf-tls-hybrid-design. These are now discussed in the Sec 
Considerations of draft-davidben-tls-key-share-prediction-01, but the “SHOULD” 
and “SHOULD NOT” language from -00 section 3.2 and 3.4 ought to be in 
draft-ietf-tls-hybrid-design.

I definitely want to see draft-davidben-tls-key-share-prediction move forward 
too.

Rgs,
Panos


[TLS]Re: Curve-popularity data?

2024-06-04 Thread Kampanakis, Panos
+1 .

I was of the impression that 
https://datatracker.ietf.org/doc/html/draft-ietf-tls-hybrid-design-10#name-iana-considerations
 was going to get final codepoints for both combinations.

Also, “PQ hybrid automatically FIPSed with P256” is an important factor. Using 
a FIPS certified ML-KEM implementation in X25519+MLKEM would address this too, 
but certified implementations of ML-KEM are 2.5+ years out due to NIST’s FIPS 
queue.


From: Eric Rescorla 
Sent: Monday, June 3, 2024 12:31 PM
To: David Adrian 
Cc: Salz, Rich ; tls@ietf.org
Subject: [EXTERNAL] [TLS]Re: Curve-popularity data?




Indeed. I'd like to pull this back a bit to the question of what we 
specify/mandate.

As I understand the situation, there are a number of environments that require 
P-256, so it seems like it would not be practical to just standardize/mandate 
X25519 + MLKEM if we want to get to 100% PQ algorithms.

-Ekr



On Mon, Jun 3, 2024 at 7:20 AM David Adrian 
mailto:davad...@umich.edu>> wrote:
I don't really see why popularity of previous methods is relevant to picking 
what the necessarily new method will be is, but from the perspective of Chrome 
on Windows, across all ephemeral TCP TLS (1.2 and 1.3, excluding 1.2 RSA), the 
breakdown is roughly:

15% P256
3% P384
56% X25519
26% X25519+Kyber

On Mon, Jun 3, 2024 at 10:05 AM Filippo Valsorda 
mailto:fili...@ml.filippo.io>> wrote:
2024-06-03 15:34 GMT+02:00 Bas Westerbaan 
mailto:b...@cloudflare.com>>:
More importantly, there are servers that will HRR to X25519 if presented a 
P-256 keyshare. (Eg. BoringSSL's default behaviour.) Unfortunately I don't have 
data at hand how often that happens.

Are you saying that some of the 97.6% of servers that support P-256 still HRR 
to X25519 if presented a P-256 keyshare and a {P-256, X25519} supported groups 
list, and that's BoringSSL's default behavior? I find that very surprising and 
would be curious about the rationale.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: Curve-popularity data?

2024-06-04 Thread Kampanakis, Panos
> and (crucially) for the verified modules with ML-KEM.

True, but the NIST queue is over 2+ years right now. Check out the Modules In 
Process which go back to 2022 
https://csrc.nist.gov/Projects/cryptographic-module-validation-program/modules-in-process/modules-in-process-list
 So, if we only got X25519+ML-KEM we would not be able to use PQ-hybrid in 
endpoints that require compliance for >=2.5 years.



From: Bas Westerbaan 
Sent: Monday, June 3, 2024 4:31 PM
To: Stephen Farrell 
Cc: Andrei Popov ; Salz, Rich 
; tls@ietf.org
Subject: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?

X25519+ML-KEM will be acceptable for FIPS, just like P-256+Kyber is today. We 
just need to wait for the final standard, and (crucially) for the verified 
modules with ML-KEM.

On Mon, Jun 3, 2024 at 8:56 PM Stephen Farrell 
mailto:stephen.farr...@cs.tcd.ie>> wrote:

I'm afraid I have no measurements to offer, but...

On 03/06/2024 19:05, Eric Rescorla wrote:
> The question is rather what the minimum set of algorithms we need is. My
>   point is that that has to include P-256. It may well be the case that
> it needs to also include X25519.

Yep, the entirely obvious answer here is we'll end up defining at least
x25519+PQ and p256+PQ. Arguing for one but not the other (in the TLS
WG) seems pretty pointless to me. (That said, the measurements offered
are as always interesting, so the discussion is less pointless than
the argument:-)

Cheers,
S.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: draft-ietf-tls-key-share-prediction next steps

2024-09-12 Thread Kampanakis, Panos
Hi David,

Note I am not against draft-ietf-tls-key-share-prediction. It is definitely 
better to not send unnecessary bytes on the wire.

> Yup. Even adding one PQ key was a noticeable size cost (we still haven't 
> shipped Kyber/ML-KEM to mobile Chrome because the performance regression was 
> more prominent) so, yeah, we definitely do not want to send two PQ keys in 
> the initial ClientHello.

I have seen this claim before and, respectfully, I don’t fully buy it. A mobile 
client that suffers with two packet CHs is probably already crawling for 
hundreds of KBs of web content per conn. Any numbers you have to showcase the 
regression and the relevant affected web metrics?


From: David Benjamin 
Sent: Wednesday, September 11, 2024 8:02 PM
To: Ilari Liusvaara 
Cc:  
Subject: [EXTERNAL] [TLS] Re: draft-ietf-tls-key-share-prediction next steps




On Wed, Sep 11, 2024 at 3:58 AM Ilari Liusvaara 
mailto:ilariliusva...@welho.com>> wrote:
On Wed, Sep 11, 2024 at 10:13:55AM +0400, Loganaden Velvindron wrote:
> On Wed, 11 Sept 2024 at 01:40, David Benjamin 
> mailto:david...@chromium.org>> wrote:
> >
> > Hi all,
> >
> > Now that we're working through the Kyber to ML-KEM transition, TLS
> > 1.3's awkwardness around key share prediction is becoming starkly
> > visible. (It is difficult for clients to efficiently offer both
> > Kyber and ML-KEM, but a hard transition loses PQ coverage for some
> > clients. Kyber was a draft standard, just deployed by early
> > adopters, so while not ideal, I think the hard transition is not
> > the end of the world. ML-KEM is expected to be durable, so a
> > coverage-interrupting transition to FancyNewKEM would be a problem.)
> >
>
> Can you detail a little bit more in terms of numbers ?
> -Did you discover that handshakes are failing because of the larger
> ClientHello ?
> -Some web clients aren't auto-updating ?

The outright failures because of larger ClientHello are actually web
server issues. However, even ignoring any hard failures, larger
ClientHello can cause performance issues.

The most relevant of the issues is tldr.fail (https://tldr.fail/),
where web server ends up unable to deal with TCP-level fragmentation
of ClientHello. Even one PQ key (1216 bytes) fills the vast majority of a
TCP fragment (and other stuff in ClientHello can easily push it over,
as the upper limit is around 1430-1460 bytes). There is no way to fit two
PQ keys.

Then some web servers have ClientHello buffer limits. However, these
limits are almost invariably high enough that one could fit two PQ
keys. IIRC, some research years back came to conclusion that the
maximum tolerable key size is about 3.3kB, which is almost enough for
three PQ keys.

Then there are a lot of web servers that are unable to deal with TLS-
level fragmentation of ClientHello. However, this is not really
relevant, given that the limit is 16kB, which is easily enough for
10 PQ keys and more than enough to definitely cause performance issues
with TCP.

Yup. Even adding one PQ key was a noticeable size cost (we still haven't 
shipped Kyber/ML-KEM to mobile Chrome because the performance regression was 
more prominent) so, yeah, we definitely do not want to send two PQ keys in the 
initial ClientHello. Sending them in supported_groups is cheap, but as those 
options take an RTT hit, they're not really practical. Hence all the 
key-share-prediction work. (For some more background, see the earlier WG 
discussions around this draft, before it was adopted.)

And it is possible for a web server to offer both, so even with a hard
client transition both old and new clients get PQ coverage.

Yup, although that transition strategy requires that every PQ server move 
before any client moves, if your goal is to never interrupt coverage. That's 
not really a viable transition strategy in the long run, once PQ becomes widely 
deployed.

David
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: draft-ietf-tls-key-share-prediction next steps

2024-09-15 Thread Kampanakis, Panos

Thx Adrian for the reaction.

> There is a considerable difference between loading large amounts of data for 
> a single site, which is a decision that is controllable by a site, and adding 
> a fixed amount of latency to _all_ connections to all sites to defend against 
> a computer that does not exist [3].

Fair. And draft-ietf-tls-key-share-prediction tries to address that. I like the 
draft. Btw, I have some disagreements to your “PQC Signatures damn too big” 
blog referenced in [3], but these are more or less similar to the ones I am 
sharing below.

> Adding Kyber to the TLS handshake increased TLS handshake latency by 4% on 
> desktop [1] and 9% on Android at P50, and considerably higher at P95. In 
> general, Cloudflare found that every 1K of additional data added to the 
> server response caused median HTTPS handshake latency increase by around 1.5% 
> [2].

I have seen these arguments, but I am still skeptical. Your points focus on the 
TLS handshake which is not necessarily directly tied to Web experience. 
According to 
https://firefox-source-docs.mozilla.org/testing/perfdocs/perf-sheriffing.html , 
even the 4% (>2%) regression for Desktops would be unacceptable. So, why is 4% 
in the handshake acceptable, but 9% is not?

If I am sending 100KB of data over the conn, 1 extra packet in the CH will not 
matter even for these mobile clients. We tried to make the point in 
https://www.ndss-symposium.org/ndss-paper/auto-draft-484/ . Ideally we should 
have proven it by measuring web metrics too (other than just the TTLB) but that 
requires more work.

I am arguing that 5% or 10% or even 20% of TLS handshake slowdown does not 
equate to the same slowdown in the CrUX / web metrics. For example, the TLS 
handshake should not affect the INP or CLS metrics at all. The LCP or the FCP 
will not be affected be an extra packet if the server sends 50+ packets per 
connection. https://httparchive.org/reports/state-of-the-web says that each 
mobile connection transfers about 200KB. This means 150+ packets. Will an extra 
CH packet really show up in this connection’s performance impact? I doubt it. 
Another data point,  https://httparchive.org/reports/loading-speed#fcp says 
that the median FCP and TTI for mobile is 3 and 16 seconds respectively. Will 
an extra packet in the CH really affect the multisecond FCP or TTI even in a 
slow connection at 1Kbps? That is questionable as well.

So, respectfully, is your assertion that ML-KEM768 will have a noticeable impact 
for mobile based on measurable web metric data, or is it just based on an 
intuition which focuses on the TLS handshake and could be overestimating 
the impact on real web metrics?


From: David Adrian 
Sent: Thursday, September 12, 2024 11:26 PM
To: Kampanakis, Panos 
Cc: David Benjamin ;  
Subject: RE: [EXTERNAL] [TLS] Re: draft-ietf-tls-key-share-prediction next steps




> Any numbers you have to showcase the regression and the relevant affected web 
> metrics?

Adding Kyber to the TLS handshake increased TLS handshake latency by 4% on 
desktop [1] and 9% on Android at P50, and considerably higher at P95. In 
general, Cloudflare found that every 1K of additional data added to the server 
response caused median HTTPS handshake latency increase by around 1.5% [2].
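
For a rough sense of scale, here is a naive extrapolation of that 1.5%-per-KB
figure (it assumes the relationship stays linear, which is only an
approximation):

    # Extrapolate Cloudflare's ~1.5% median handshake slowdown per extra KB.
    SLOWDOWN_PER_KB = 0.015

    for extra_kb in (1, 4, 9, 15):
        print(f"+{extra_kb:>2} KB -> ~{extra_kb * SLOWDOWN_PER_KB:.1%} "
              f"slower median handshake")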

> I have seen this claim before and, respectfully, I don’t fully buy it. A 
> mobile client that suffers with two packet CHs is probably already crawling 
> for hundreds of KBs of web content per conn.

There is a considerable difference between loading large amounts of data for a 
single site, which is a decision that is controllable by a site, and adding a 
fixed amount of latency to _all_ connections to all sites to defend against a 
computer that does not exist [3].

[1]: 
https://blog.chromium.org/2024/05/advancing-our-amazing-bet-on-asymmetric.html
[2]: https://blog.cloudflare.com/pq-2024/
[3]: https://dadrian.io/blog/posts/pqc-not-plaintext/


___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: New Version Notification for draft-tls-reddy-slhdsa-00.txt

2024-11-04 Thread Kampanakis, Panos
From draft-tls-reddy-slhdsa-00

>  SLH-DSA can be preferred for CA certificates, making it ideal for long-term 
> security as a trust anchor.

I think the standardized SLH-DSA parameters (designed for 2^64 signatures) 
still make the ICA cert unnecessarily large.

If there is an SLH-DSA argument to be made for Root Certs in TLS (I am not 
convinced), then I suggest making it with just the slimmer parameter sets for 
2^10 sigs in https://eprint.iacr.org/2024/018.pdf . Note that NIST has committed 
to standardizing slimmer SLH-DSA params sometime in the future.


From: tirumal reddy 
Sent: Monday, November 4, 2024 2:16 AM
To: Peter C 
Cc: IETF TLS 
Subject: [EXTERNAL] [TLS] Re: New Version Notification for 
draft-tls-reddy-slhdsa-00.txt


Hi Peter,

Please see inline

On Sun, 3 Nov 2024 at 22:17, Peter C 
mailto:pete...@ncsc.gov.uk>> wrote:
Tiru,

Is SLH-DSA considered a practical option for TLS end-entity certificates?

Under realistic network conditions, TLS handshakes with full SLH-DSA 
certificate chains seem to be about 5-10 times slower than traditional 
certificate chains and, in some cases, can take on the order of seconds.  See, 
for example, the results in https://eprint.iacr.org/2020/071, 
https://eprint.iacr.org/2021/1447, https://mediatum.ub.tum.de/1728103 and 
https://thomwiggers.nl/post/tls-measurements/.

I agree that there’s an argument for using SLH-DSA in root certificates, but 
I’m surprised it’s being proposed for the full chain.

SLH-DSA is not proposed for the end-entity certificates, it is preferred for CA 
certificates (please see the 3rd paragraph in 
https://www.ietf.org/archive/id/draft-tls-reddy-slhdsa-00.html#section-2)

-Tiru


Peter

From: Russ Housley mailto:hous...@vigilsec.com>>
Sent: 03 November 2024 11:13
To: tirumal reddy mailto:kond...@gmail.com>>
Cc: IETF TLS mailto:tls@ietf.org>>
Subject: [TLS] Re: New Version Notification for draft-tls-reddy-slhdsa-00.txt

Thanks for doing this work.  I hope the TLS WG will promptly adopt it.

Russ

On Nov 2, 2024, at 8:15 PM, tirumal reddy 
mailto:kond...@gmail.com>> wrote:

Hi all,

This draft https://datatracker.ietf.org/doc/draft-tls-reddy-slhdsa/ specifies 
how the PQC signature scheme SLH-DSA can be used for authentication in TLS 1.3.
Comments and suggestions are welcome.

Regards,
-Tiru
-- Forwarded message -
From: mailto:internet-dra...@ietf.org>>
Date: Sun, 3 Nov 2024 at 05:39
Subject: New Version Notification for draft-tls-reddy-slhdsa-00.txt
To: Tirumaleswar Reddy.K mailto:kond...@gmail.com>>, John 
Gray mailto:john.g...@entrust.com>>, Scott Fluhrer 
mailto:sfluh...@cisco.com>>, Timothy Hollebeek 
mailto:tim.holleb...@digicert.com>>


A new version of Internet-Draft draft-tls-reddy-slhdsa-00.txt has been
successfully submitted by Tirumaleswar Reddy and posted to the
IETF repository.

Name: draft-tls-reddy-slhdsa
Revision: 00
Title:Use of SLH-DSA in TLS 1.3
Date: 2024-11-02
Group:Individual Submission
Pages:8
URL:  https://www.ietf.org/archive/id/draft-tls-reddy-slhdsa-00.txt
Status:   https://datatracker.ietf.org/doc/draft-tls-reddy-slhdsa/
HTML: https://www.ietf.org/archive/id/draft-tls-reddy-slhdsa-00.html
HTMLized: https://datatracker.ietf.org/doc/html/draft-tls-reddy-slhdsa

Abstract:

   This memo specifies how the post-quantum signature scheme SLH-DSA
   [FIPS205] is used for authentication in TLS 1.3.

___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: [Pqc] QUIC, amplification and PQC message sizes (was: Bytes server -> client)

2024-11-10 Thread Kampanakis, Panos
+1 Regarding the TCP initcwnd and QUIC Amplification topics. I would add 
kInitialRtt which we found ( 
https://www.nccoe.nist.gov/sites/default/files/2023-12/pqc-migration-nist-sp-1800-38c-preliminary-draft.pdf,
 section 7.3, Fig. 5) to introduce 60ms slowdowns due to QUIC's packet pacing. 
Note that these are discussed in the TCP and QUIC sections of 
https://pqcc.org/standards-with-open-questions-regarding-pqc-adoption as well. 
There are other publications talking about them too. 

One thing I learned about the TCP initcwnd from the TCPM WG 
(https://mailarchive.ietf.org/arch/msg/tcpm/tmY-s-PAO9ubcb0PF1EFyxXeCWE ) is 
that senders can choose and update the TCP initcwnd for their connections 
based on network conditions. Thus, TCP has become more flexible since RFC6928. 
I am not sure which TCP stacks support tracking and adjusting initcwnds, but 
that came out of the WG discussion. Additionally, as discussed before, for the 
web, CDNs often use their own large initcwnds. 
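
A quick sanity check of whether a given server first flight fits in a single
TCP initcwnd could look like the sketch below (the MSS and window sizes are
illustrative assumptions; actual values vary per stack and per sender):

    # Does a server first flight fit inside the TCP initial congestion window?
    def fits_in_initcwnd(flight_bytes: int, initcwnd_segments: int = 10,
                         mss: int = 1460) -> bool:
        return flight_bytes <= initcwnd_segments * mss

    print(fits_in_initcwnd(12_000))                        # True with IW10 (~14.6KB)
    print(fits_in_initcwnd(20_000))                        # False -> extra RTT
    print(fits_in_initcwnd(20_000, initcwnd_segments=30))  # True with a larger CDN initcwnd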


-Original Message-
From: Christian Huitema  
Sent: Sunday, November 10, 2024 3:23 PM
To: tls@ietf.org; p...@ietf.org
Subject: [EXTERNAL] [Pqc] QUIC, amplification and PQC message sizes (was: Bytes 
server -> client)

I am reading the "bytes server -> client" thread, and I think that the 
evaluation misses a point regarding QUIC, and probably other UDP based 
protocols as well.

The QUIC handshake embeds a TLS 1.3 handshake. The client sends the Client 
Hello in a series of QUIC Initial packets. The server replies with the Server 
Hello and the server first flight in a series of QUIC Initial and Handshake 
packets. One of the security concerns is "amplification". An attacker could 
send the client's Initial packets from a spoofed address; the server would 
respond with a first flight of packets to that address. If the volume of 
packets sent by the server is larger than the volume of packets received from 
the client, the attacker will have "amplified" the attack.

The main defense against that amplification attack is to perform a three ways 
handshake and verify that the client address is not spoofed.
However, that defense adds 1 RTT to the handshake duration. The next defense is 
to limit the volume of data sent by the server before the client address is 
verified. In the current version, the limit is a factor 3. In typical 
scenarios, the Client Hello fits in a single packet, which must be padded to 
at least 1200 bytes. The server will send at most 3600 bytes in return, then 
wait for an acknowledgement from the client before sending the remainder of the 
first server flight, which of course will add 1 RTT to the handshake.

Clients could try to mitigate the amplification limit by repeating the Client 
Hello several times, but they typically don't do that today because they are 
reluctant to waste CPU and bandwidth with unnecessary data.

Apart from amplification considerations, we also have congestion control 
considerations. Both server and clients are limited by the "initial congestion 
window", whose current value is 10 packets. During the handshake the packet 
size is 1200 bytes, which implies that client and servers can send at most 
12,000 bytes each before having to wait for an acknowledgement from the peer. 
If either the Client Hello or the server flight is larger than 12,000, the 
handshake will require an extra RTT.

To summarize, the QUIC handshake will require an extra RTT:

* if the server flight is larger than 3 times the Client Hello,

* if the Client Hello is larger than 12,000 bytes,

* if the Server Hello is larger than 12,000 bytes.

It would be very nice to have PQC variants that fit inside that budget.
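
A small sketch of those three conditions, using the numbers from the summary
above (the example sizes at the end are hypothetical):

    # Will a QUIC handshake need an extra RTT, per the three conditions above?
    AMPLIFICATION_FACTOR = 3
    MIN_CLIENT_INITIAL = 1200          # Client Hello padded to at least 1200 bytes
    HANDSHAKE_CWND_BYTES = 10 * 1200   # 10 packets of 1200 bytes

    def needs_extra_rtt(client_hello_bytes: int, server_flight_bytes: int) -> bool:
        ch = max(client_hello_bytes, MIN_CLIENT_INITIAL)
        return (server_flight_bytes > AMPLIFICATION_FACTOR * ch
                or ch > HANDSHAKE_CWND_BYTES
                or server_flight_bytes > HANDSHAKE_CWND_BYTES)

    # Hypothetical: ~1.3KB Client Hello (hybrid key exchange), ~9KB server
    # flight (PQ certificate chain).
    print(needs_extra_rtt(1_300, 9_000))   # True: 9000 > 3 * 1300
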
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: Bytes server -> client

2024-11-07 Thread Kampanakis, Panos
Hi Bas,

That is interesting and surprising, thank you.

I am mostly interested in the ~63% of non-resumed sessions that would be 
affected by 10-15KB of auth data. It looks like your data showed that each QUIC 
conn transfers about 4.7KB which is very surprising to me. It seems very low.

In experiments I am running here against top web servers, I see lots of conns 
which transfer hundreds of KB even over QUIC in cached browser sessions. This 
aligns with the average from your blog, 551*0.6 = ~330KB, but not with the 
median of 4.7KB. Hundreds of KB also aligns with the p50 per page / conns per 
page in 
https://httparchive.org/reports/page-weight?lens=top1k&start=2024_05_01&end=latest&view=list
 . Of course browsers cache a lot of things like javascript, images etc, so 
they don’t transfer all resources, which could explain the median. But still, 
based on anecdotal experience looking at top visited servers, I am noticing 
many small transfers and just a few that transfer larger HTML, css etc on every 
page, even in cached browser sessions.

I am curious about the 4.7KB and the 15.8% of conns transferring <100KB in your 
blog. Like you say in your blog, if the 95th percentile includes very large 
transfers that would skew the diff between the median and the average. But I am 
wondering if there is another explanation. In my experiments I see a lot of 302 
and 301 redirects which transfer minimal data. Some pages have a lot of those. 
If you have many of them, then your median will get skewed as it fills up with 
very small data transfers that basically don’t do anything. In essence, we 
could have 10 pages which each transfer 100KB on one of their conns and 
have another 9 conns per page that are HTTP redirects or transfer 0.1KB. That 
would make us think that 90% of the conns will be blazing fast, but the 100KB 
resource on each page will still take a good amount of time on a slow network.
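
To make the median-skew point concrete, a tiny illustration with made-up
numbers along the lines of the example above:

    import statistics

    # 10 "content" conns of 100KB each (one per page), plus 9 redirect/tiny
    # conns per page across the same 10 pages.
    conns_kb = [100] * 10 + [0.1] * 90

    print(statistics.median(conns_kb))           # 0.1  -> the median looks tiny
    print(round(statistics.mean(conns_kb), 1))   # ~10.1 -> the mean is much larger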

To validate this theory, what would your data show if you queried for the % of 
conns that transfer <.5 or <1KB? If that is a lot, then there are many small 
conns that skew the median downwards. Or what if you ran the query to exclude 
the very heavy conns and the very light ones (HTTP 301, 302 etc)? For example, 
what would a report restricted to the conns transferring more than 1KB (and 
excluding the heaviest ones) show?

> Chrome is more cautious and set 10% as their target for maximum TLS handshake 
> time regression.

Is this public somewhere? There is no immediate link between the TLS handshake 
and any of the Core Web Vitals metrics or the CrUX metrics other than the TTFB. 
Even for the TTFB, 10% in the handshake does not mean 10% TTFB; the TTFB is 
affected much less. I am wondering if we should start expecting the TLS 
handshake to slowly become a tracked web performance metric.


From: Bas Westerbaan 
Sent: Thursday, November 7, 2024 9:07 AM
To:  ; p...@ietf.org
Subject: [EXTERNAL] [TLS] Bytes server -> client


Hi all,

Just wanted to highlight a blog post we just published. 
https://blog.cloudflare.com/another-look-at-pq-signatures/  At the end we share 
some statistics that may be of interest:

On average, around 15 million TLS connections are established with Cloudflare 
per second. Upgrading each to ML-DSA, would take 1.8Tbps, which is 0.6% of our 
current total network capacity. No problem so far. The question is how these 
extra bytes affect performance.
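
(Working backwards from those two figures, a rough check of the implied
per-connection overhead; this arithmetic is mine, not the blog's:)

    # Implied extra data per handshake from the aggregate numbers above.
    CONNS_PER_SEC = 15e6          # ~15 million TLS connections per second
    EXTRA_BITS_PER_SEC = 1.8e12   # 1.8 Tbps quoted for upgrading to ML-DSA

    extra_bytes_per_conn = EXTRA_BITS_PER_SEC / 8 / CONNS_PER_SEC
    print(f"~{extra_bytes_per_conn / 1000:.0f} KB extra per handshake")   # ~15 KB
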
Back in 2021, we ran a large-scale experiment to measure the impact of big 
post-quantum certificate chains on connections to Cloudflare’s network over the 
open Internet. There were two important results. First, we saw a steep increase 
in the rate of client and middlebox failures when we added more than 10kB to 
existing certificate chains. Secondly, when adding less than 9kB, the slowdown 
in TLS handshake time would be approximately 15%. We felt the latter is 
workable, but far from ideal: such a slowdown is noticeable and people might 
hold off deploying post-quantum certificates before it’s too late.

Chrome is more cautious and set 10% as their target for maximum TLS handshake 
time regression. They report that deploying post-quantum key agreement has 
already incurred a 4% slowdown in TLS handshake time, for the extra 1.1kB from 
server-to-client and 1.2kB from client-to-server. That slowdown is 
proportionally larger than the 15% we found for 9kB, but that could be 
explained by slower upload speeds than download speeds.

There has been pushback against the focus on TLS handshake times. One argument 
is that session resumption alleviates the need for sending the certificates 
again. A second argument is that the data required to visit a typical website 
dwarfs the additional bytes for post-quantum certificates. One example is this 
2024 publication, where Amazon researchers have simulated the impact of large 
post-quantum certificates on data-heavy TLS connections. They

[TLS] Re: Adoption Call for Trust Anchor IDs

2025-02-04 Thread Kampanakis, Panos
I find Dennis’ writeup and most of his arguments convincing.
I don’t think the WG should adopt the draft.


From: Dennis Jackson 
Sent: Tuesday, February 4, 2025 8:28 PM
To: TLS List 
Subject: [EXTERNAL] [TLS] Re: Adoption Call for Trust Anchor IDs


It will not come as a surprise that I oppose adoption for the reasons laid out 
in 'Trust is non-negotiable' [1].

The claims that Trust Negotiation can improve security or compatibility just do 
not stand up to scrutiny. Especially as in over a year since first 
introduction, there has been no credible proposal for how TN could be deployed 
outside of browsers and major CDNs or how it could bring any benefit at all 
with such a limited scope for deployment. It's not like major CDNs struggle to 
offer certificates suitable for browsers.

Even if the deployability concerns could be solved and so Trust Negotiation 
enabled at scale, then it would cause much more harm than good. Managing one 
certificate chain and CA relationship is already painful for many website 
operators, but TN would compound that pain by allowing root programs to diverge 
and placing the onus on website operators to obtain and manage multiple 
certificate chains to ensure compatibility with each root program's clients.

It would also be a major abuse vector for users, who are much more likely to 
suffer than benefit from the resulting fragmentation of the WebPKI, as well as 
being put at risk by use of TN to establish new root programs with malicious or 
negligent stewardship (domestic PKIs, enshittification, ossification).

In both cases, the result is a claimed reduction in operational burden for root 
programs and major CDNs (who have the most capacity and expertise to handle it) 
and the very material transfer of risk and complexity to users and website 
operators (who are least well equipped).

As technologists evaluating a proposal that would alter the architecture of one 
of the Internet's most critical ecosystems, we owe users and website operators 
better.

Best,
Dennis

[1] 
https://datatracker.ietf.org/doc/html/draft-jackson-tls-trust-is-nonnegotiable
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: PQ Cipher Suite I-Ds: adopt or not?

2024-12-17 Thread Kampanakis, Panos
> Is the WG consensus to run four separate adoption calls for the individual 
> I-Ds in question?

I suggest calling for adoption of:
- draft-kwiatkowski-tls-ecdhe-mlkem  
- draft-tls-westerbaan-mldsa
- draft-reddy-tls-composite-mldsa 
- draft-reddy-tls-slhdsa 

Personally, I don't think all of those should be adopted, but I will share that 
when there is an adoption call.

I also think that 
- draft-connolly-tls-mlkem-key-agreement
can wait for a couple of years. It already has codepoints, so it can be used by 
early adopters that need it.



-Original Message-
From: Sean Turner  
Sent: Monday, December 16, 2024 5:00 PM
To: TLS List 
Subject: [EXTERNAL] [TLS] PQ Cipher Suite I-Ds: adopt or not?

Note that there are three parts to this email; the “ask” is at the end.

Requests:

Ciphersuite discussions in this WG often turn nasty, so we would like to remind 
everyone to keep it civil while we explain our thinking WRT recent requests for 
WG adoptions of some PQ-related I-Ds.

Also, the chairs are trying to gather information here, not actually do the 
calls. If we decide to do them we will do them in the new year.

Background:

Currently, the TLS WG has adopted one I-D related to PQ:
Hybrid key exchange in TLS 1.3;
  see https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/
This I-D provides a construction for hybrid key exchange in TLS 1.3. The 
I-D has completed WG last call and is about to progress to IETF LC.

There are a number of individual I-Ds currently being developed that specify 
PQ cipher suites for TLS, either “pure” PQ or composite/hybrid:

ML-KEM Post-Quantum Key Agreement for TLS 1.3;
  see https://datatracker.ietf.org/doc/draft-connolly-tls-mlkem-key-agreement/
PQ hybrid ECDHE-MLKEM Key Agreement for TLSv1.3,
  see https://datatracker.ietf.org/doc/draft-kwiatkowski-tls-ecdhe-mlkem/
Use of Composite ML-DSA in TLS 1.3;
  see https://datatracker.ietf.org/doc/draft-reddy-tls-composite-mldsa/
Use of SLH-DSA in TLS 1.3;
  see https://datatracker.ietf.org/doc/draft-reddy-tls-slhdsa/

The IANA requests for code points in the I-Ds (now) all have the same setting 
for the “Recommended” column; namely, they request that the Recommended column 
be set to “N”. As a reminder (from RFC 8447bis), “N”:

  Indicates that the item has not been evaluated by the IETF and
  that the IETF has made no statement about the suitability of the
  associated mechanism.  This does not necessarily mean that the
  mechanism is flawed, only that no consensus exists.  The IETF
  might have consensus to leave  items marked as "N" on the basis
  of it having limited applicability or usage constraints.

With an “N”, the authors are free to request code points from IANA without 
working group adoption. Currently, five code points have been assigned; 3 for 
ML-KEM and 2 for ECDHE-MLKEM.

While there have been calls to run WG adoption calls for these I-Ds, the WG 
chairs have purposely NOT done so. The WG consensus, as we understand it, is 
that because the IANA rules permit registrations in the Specification Required 
with an I-D that there has been no need to burden the WG; there is, obviously, 
still some burden because the I-Ds are discussed on-list (and yes there have 
been some complaints about the volume of messages about these cipher suites).

There are a couple of other reasons:

* The ADs are formulating a plan for cipher suites; see 
https://datatracker.ietf.org/doc/draft-pwouters-crypto-current-practices/.

* There are a lot of different opinions and that likely leads to a lack of 
consensus. Based on discussions at and since Brisbane, we do not think there 
will be consensus to mark these ciphersuites as "Y" at this point, however the 
working group can take action to do so in the future.

* There have been a few calls to change the MTI (Mandatory to Implement) 
algorithms in TLS, but in July 2024 at IETF 120 the WG consensus was that 
draft-ietf-tls-rfc8446bis would not be modified to add an additional 
ciphersuite because the update was for clarifications.

* Adopting these, or some subset of these I-Ds, will inevitably result in others 
requesting code points too. The WG has historically not been good about 
progressing cipher suite related I-Ds: either the discussion rapidly turns 
unproductive or interest wanes during the final stages of the publication 
process. So while there is great interest (based on the number of messages to 
the list) in these I-Ds, we are unsure how to avoid the inevitable complaints 
that would follow adopting or not adopting a specific I-D based on the 
different requirements of different individuals. We know some of you are 
thinking that that’s “tough”, but if we do not need to have this fight (see the 
previous paragraph), we do not see the harm in avoiding it.

[TLS] Re: [Pqc] Re: Re: Bytes server -> client

2025-01-23 Thread Kampanakis, Panos
Thx Luke, Bas.

Resurrecting this old thread regarding web connection data sizes to share some 
more data I presented at a conference last week. You two know about this, but I 
thought it could benefit future group discussions.

Slides 14-19 in 
https://pkic.org/events/2025/pqc-conference-austin-us/THU_BREAKOUT_1130_Panos-Kampanakis_How-much-will-ML-DSA-affect-Webpage-Metrics.pdf#page=14
 investigate some popular web page connection data sizes. The investigation 
showed that the pages I focused on pull down large amounts of data, but they 
include a bunch of slim connections delivering other content like tracking, 
ads, HTTP 304s (browser caching) or small elements. I believe this generally 
matches what you shared in your blog. There is a caveat that this investigation 
was on a small set of popular pages, so we can’t extrapolate that they represent 
the whole web. But if they do, then the performance of the conns transferring 
the “web content” won’t suffer as much. The small conns doing the other things 
will suffer. Will these small conns affect web metrics? Intuitively, probably 
not so much, but OK, without testing no one should be sure.

The earlier slides of the preso include some results from popular pages and 
estimate the impact of ML-DSA on web user metrics like TTFB, FCP, LCP and 
Document Complete times. They show that the web metrics suffer much less than 
the handshake, mainly because web pages usually spend more time doing other 
things, like downloading and rendering large amounts of html, css, javascript, 
images, json etc, than on TLS handshakes.
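
As a hedged illustration of that dilution effect (the millisecond figures below
are made up for the example, not taken from the slides):

    # How much a TLS handshake regression shows up in a page-level metric,
    # when the handshake is only a small slice of the time to that metric.
    def metric_regression(handshake_ms: float, metric_ms: float,
                          handshake_slowdown: float) -> float:
        return handshake_ms * handshake_slowdown / metric_ms

    # e.g. 100ms handshake, 3s FCP, handshake made 20% slower by PQ auth data
    print(f"~{metric_regression(100, 3000, 0.20):.1%} FCP regression")   # ~0.7%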



From: Luke Valenta 
Sent: Tuesday, November 19, 2024 3:19 PM
To: Kampanakis, Panos 
Cc: Bas Westerbaan ;  
; p...@ietf.org
Subject: [EXTERNAL] [Pqc] Re: [TLS] Re: Bytes server -> client


Hi Panos,

Here are some more details on what we see in connections to Cloudflare.

> To validate this theory, what would your data show if you queried for the % of
> conns that transfer <.5 or <1KB? If that is a lot, then there are many small
> conns that skew the median downwards. Or what if you run the query to exclude
> the very heavy conns and the very light (HTTP 301, 302 etc)?

[TLS] Re: WG Adoption Call for Use of ML-DSA in TLS 1.3

2025-04-15 Thread Kampanakis, Panos
I support adoption and will review.  

-Original Message-
From: Sean Turner  
Sent: Tuesday, April 15, 2025 1:32 PM
To: TLS List 
Subject: [EXTERNAL] [TLS] WG Adoption Call for Use of ML-DSA in TLS 1.3

We are continuing with our WG adoption calls for the following I-D:
Use of ML-DSA in TLS 1.3 [1]; see [2] for more information about this tranche 
of adoption calls. If you support adoption and are willing to review and 
contribute text, please send a message to the list. If you do not support 
adoption of this draft, please send a message to the list and indicate why. 
This call will close at 2359 UTC on 29 April 2025.

Reminder:  This call for adoption has nothing to do with picking the 
mandatory-to-implement cipher suites in TLS.

Cheers,
Joe and Sean

[1] https://datatracker.ietf.org/doc/draft-tls-westerbaan-mldsa/
[2] https://mailarchive.ietf.org/arch/msg/tls/KMOTm_lE5OIAKG8_chDlRKuav7c/

___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: WG Adoption Call for Use of SLH-DSA in TLS 1.3

2025-05-16 Thread Kampanakis, Panos
I am against adoption. 

SLH-DSA sigs are too large and slow for general use in TLS 1.3 applications, 
especially since there are other options. I would support SLH-DSA in 
self-signed certs with CA:true, with one of the upcoming smaller footprint 
SLH-DSA parameter sets (for 2^10 signatures), which could also be used in TLS 
cert chains.


-Original Message-
From: Sean Turner  
Sent: Friday, May 16, 2025 9:27 AM
To: TLS List 
Subject: [EXTERNAL] [TLS] WG Adoption Call for Use of SLH-DSA in TLS 1.3

We are continuing with our WG adoption calls for the following I-D: Use of 
SLH-DSA  in TLS 1.3 [1]; see [2] for more information about this tranche of 
adoption calls. If you support adoption and are willing to review and 
contribute text, please send a message to the list. If you do not support 
adoption of this draft, please send a message to the list and indicate why. 
This call will close at 2359 UTC on 30 May 2025.

Reminder:  This call for adoption has nothing to do with picking the 
mandatory-to-implement cipher suites in TLS.

Cheers,
Joe and Sean

[1] https://datatracker.ietf.org/doc/draft-reddy-tls-slhdsa/
[2] https://mailarchive.ietf.org/arch/msg/tls/KMOTm_lE5OIAKG8_chDlRKuav7c/
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: Second WG Adoption Call for Use of SLH-DSA in TLS 1.3

2025-07-14 Thread Kampanakis, Panos
-1 on adoption still, even with the extra disclaimer.

From: Sean Turner 
Sent: Monday, July 14, 2025 6:05 PM
To: TLS List 
Subject: [EXTERNAL] [TLS] Second WG Adoption Call for Use of SLH-DSA in TLS 1.3

We kicked off an adoption call for Use of SLH-DSA in TLS 1.3; see [0]. We 
called consensus [1], and that decision was appealed. We have reviewed the 
messages and agree that we need to redo the adoption call to get more input.

What appears to be the most common concern, which we will take from Panos' 
email, is that "SLH-DSA sigs are too large and slow for general use in TLS 1.3 
applications". One way to address this concern is to add an applicability 
statement to address this point. We would like to propose that this (or 
something close to this) be added to the I-D:

Applications that use SLH-DSA need to be aware that the signature sizes are 
large; the signature sizes for the cipher suites specified herein range from 
7,856 to 49,856 bytes. Likewise, the cipher suites are considered slow. While 
these costs might be amortized over the lifetime of a long-lived connection, the 
cipher suites specified herein are not considered suitable for general use in 
TLS 1.3.

With this addition in mind, we would like to start another WG adoption call for 
draft-reddy-tls-slhdsa. If you support adoption with the above text (or 
something similar) and are willing to review and contribute text, please send a 
message to the list. If you do not support adoption of this draft with the 
above text (or something similar), please send a message to the list and 
indicate why. This call will close at 2359 UTC on 28 July 2025.

Cheers,
Deirdre, Joe, and Sean

[0] https://mailarchive.ietf.org/arch/msg/tls/o4KnXjI-OpuHPcB33e8e78rACb0/
[1] https://mailarchive.ietf.org/arch/msg/tls/hhLtBBctK5em6l82m7rgM6_hefo/
[2] https://datatracker.ietf.org/doc/draft-reddy-tls-slhdsa/
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org