[TLS] Re: Adoption call for TLS 1.2 Update for Long-term Support

2024-11-07 Thread Peter Gutmann
David A. Cooper  writes:

>It would also be inappropriate to adopt it as a WG document, especially as a
>standards track document,

I was thinking more informational.  Actually I'm not too fussed over what
category it's in, as long as it gets out of its current limbo.

>It would be contrary to the goal of this draft to suggest that those who have
>been using it since 2016 should not modify their implementations to align
>with changes made by the WG.

It was put on hold so as not to interfere with the TLS 1.3 process (and then
admittedly I forgot to un-hold it afterwards).  It seems like you're saying
that doing what the WG requested now makes it ineligible for consideration by
the WG.

Another point is that it's been around for eight years, it's not like people
haven't had more than enough opportunity to comment on it and suggest changes
in that time.  The current late-to-the-party response seems to be mostly a
chorus of "I haven't read it but I know I don't like it".

Peter.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: FATT process update

2024-11-07 Thread Muhammad Usama Sardar
Thank you for clearly writing down the process and continuing to improve 
it. I particularly like the "understood" part, which is IMO a key 
benefit of formal methods.


Also, thanks for clearly listing the current members of the FATT. I do 
notice that the current FATT is slightly different from the initial 
announcement, and the change was never announced to the WG. Anyway, it's 
good to have it in a repo now.


I would recommend also putting the liaison/focal person for each document 
in the repo. For example, I don't know who the focal person for 
8773bis is.


I would also suggest adding more onsite participants to the FATT. This 
would help a lot and reduce communication latency.


Thanks,

Usama

On 05.11.24 15:34, Joseph Salowey wrote:

There is an updated FATT available here:
https://github.com/tlswg/tls-fatt

This takes into account a lot of the feedback we have received so far, but 
it is still a work in progress. There will be a little time to discuss it 
in the meeting on Friday.


Joe



[TLS] Re: Consensus call for RFC8773bis Formal Analysis Requirement

2024-11-07 Thread Muhammad Usama Sardar

Dear chairs,

I had a short meeting with Russ today and we don't understand 
/precisely/ what the FATT is worried about and therefore why a formal 
analysis is required at all.


Extending CH and SH to negotiate an external PSK follows the best current 
practice for extending TLS. Moreover, an external PSK (shared out of band) 
instead of zero in the initial handshake can't make security worse. Even 
if the external PSK is leaked, it's not worse than a known constant (zero). 
What could possibly go wrong?
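The "known constant (zero)" point can be made concrete with the top of the 
TLS 1.3 key schedule (RFC 8446, Section 7.1), where the first HKDF-Extract 
takes either the PSK or a string of zeros as input keying material. A minimal 
sketch with SHA-256; the PSK value here is made up:

```python
# Sketch of the top of the TLS 1.3 key schedule (RFC 8446, Section 7.1).
# Early Secret = HKDF-Extract(salt=0, IKM=PSK); with no PSK, the IKM is
# a string of Hash.length zero bytes.
import hashlib
import hmac

HASH = hashlib.sha256
HASH_LEN = HASH().digest_size  # 32 for SHA-256

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) is just HMAC-Hash keyed with the salt.
    return hmac.new(salt, ikm, HASH).digest()

zeros = b"\x00" * HASH_LEN

# No PSK: the input keying material is all zeros (a known constant).
early_secret_no_psk = hkdf_extract(zeros, zeros)

# External PSK: the secret shared out of band replaces the zeros.
external_psk = b"\x42" * 32  # hypothetical out-of-band key
early_secret_psk = hkdf_extract(zeros, external_psk)

assert early_secret_no_psk != early_secret_psk
```

Either way the handshake proceeds identically on the wire, which is why a 
leaked PSK leaves you no worse off than the all-zeros default.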


I don't see any liaison announced for this draft, so I don't know whom to 
ask. Could you please clarify with the FATT exactly which property from 
Appendix E.1 of RFC 8446 it thinks might break with this 
draft?


Thanks,

Usama

On 23.08.24 18:30, Joseph Salowey wrote:
In regard to moving RFC 8773 to standards track the formal analysis 
triage group has provided input on the need for formal analysis which 
was posted to the list [1].  The authors have published a revision of 
the draft [2] to address some of this feedback, however the general 
sentiment of the triage panel was that there should be some additional 
symbolic analysis done to verify the security properties of the draft 
and to verify there is no negative impact on TLS 1.3 security properties.


The formal analysis is to verify the following properties of the 
proposal in the draft:


- The properties of the handshake defined in Appendix E.1 of RFC8446 
[3] remain intact if either the external PSK is not compromised or 
(EC)DH is unbroken.
- The public key certificate authentication works as in TLS 1.3, and 
this extension adds the condition that the party has possession of the 
external PSK. The details of external PSK distribution are outside the 
scope of this extension.


This is a consensus call for the working group to determine how to 
proceed between these two options:


1. Require formal analysis in the symbolic model to verify that the 
proposal in the document does not negatively impact the security 
properties of base TLS 1.3 and that the additional security properties 
cited above are met
2. Proceed with publishing the document without additional formal 
verification


Please respond to the list with a brief reason why you think the 
document requires formal analysis or not. This call will end on 
September 16, 2024.


Thanks

Joe, Deirdre, and Sean

[1] https://mailarchive.ietf.org/arch/msg/tls/vK2N0vr83W6YlBQMIaVr9TeHzu4/
[2] 
https://author-tools.ietf.org/iddiff?url1=draft-ietf-tls-8773bis-00&url2=draft-ietf-tls-8773bis-02&difftype=--html 


[3] https://www.rfc-editor.org/rfc/rfc8446.html#appendix-E.1



[TLS] Bytes server -> client

2024-11-07 Thread Bas Westerbaan
Hi all,

Just wanted to highlight a blog post we just published.
https://blog.cloudflare.com/another-look-at-pq-signatures/  At the end we
share some statistics that may be of interest:

> On average, around 15 million TLS connections are established with
> Cloudflare per second. Upgrading each to ML-DSA would take 1.8Tbps, which
> is 0.6% of our current total network capacity. No problem so far. The
> question is how these extra bytes affect performance.
> Back in 2021, we ran a large-scale experiment to measure the impact of big
> post-quantum certificate chains on connections to Cloudflare’s network over
> the open Internet. There were two important results. First, we saw a steep
> increase in the rate of client and middlebox failures when we added more
> than 10kB to existing certificate chains. Secondly, when adding less than
> 9kB, the slowdown in TLS handshake time would be approximately 15%. We felt
> the latter is workable, but far from ideal: such a slowdown is noticeable
> and people might hold off deploying post-quantum certificates before it’s
> too late.



Chrome is more cautious and set 10% as their target for maximum TLS
> handshake time regression. They report that deploying post-quantum key
> agreement has already incurred a 4% slowdown in TLS handshake time, for the
> extra 1.1kB from server-to-client and 1.2kB from client-to-server. That
> slowdown is proportionally larger than the 15% we found for 9kB, but that
> could be explained by slower upload speeds than download speeds.


> There has been pushback against the focus on TLS handshake times. One
> argument is that session resumption alleviates the need for sending the
> certificates again. A second argument is that the data required to visit a
> typical website dwarfs the additional bytes for post-quantum certificates.
> One example is this 2024 publication, where Amazon researchers have
> simulated the impact of large post-quantum certificates on data-heavy TLS
> connections. They argue that typical connections transfer multiple requests
> and hundreds of kilobytes, and for those the TLS handshake slowdown
> disappears in the margin.


> Are session resumption and hundreds of kilobytes over a connection typical
> though? We’d like to share what we see. We focus on QUIC connections, which
> are likely initiated by browsers or browser-like clients. Of all QUIC
> connections with Cloudflare that carry at least one HTTP request, 37% are
> resumptions, meaning that key material from a previous TLS connection is
> reused, avoiding the need to transmit certificates. The median number of
> bytes transferred from server-to-client over a resumed QUIC connection is
> 4.4kB, while the average is 395kB. For non-resumptions the median is 7.8kB
> and average is 551kB. This vast difference between median and average
> indicates that a small fraction of data-heavy connections skew the average.
> In fact, only 15.8% of all QUIC connections transfer more than 100kB.


> The median certificate chain today (with compression) is 3.2kB. That means
> that almost 40% of all data transferred from server to client on more than
> half of the non-resumed QUIC connections are just for the certificates, and
> this only gets worse with post-quantum algorithms. For the majority of QUIC
> connections, using ML-DSA as a drop-in replacement for classical signatures
> would more than double the number of transmitted bytes over the lifetime of
> the connection.
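The "almost 40%" figure follows directly from the quoted medians: 3.2kB of 
compressed certificates against a 7.8kB median non-resumed server-to-client 
transfer. A quick check:

```python
# Check of the "almost 40%" claim quoted above, using the quoted medians.
median_chain_kb = 3.2         # median compressed certificate chain
median_nonresumed_kb = 7.8    # median server->client bytes, non-resumed QUIC

fraction = median_chain_kb / median_nonresumed_kb
print(round(fraction * 100, 1))  # 41.0
```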


> It sounds quite bad if the vast majority of data transferred for a typical
> connection is just for the post-quantum certificates. It’s still only a
> proxy for what is actually important: the effect on metrics relevant to the
> end-user, such as the browsing experience (e.g. largest contentful paint)
> and the amount of data those certificates take from a user’s monthly data
> cap. We will continue to investigate and get a better understanding of the
> impact.
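As a sanity check on the first quoted figure: 15 million connections per 
second at roughly 15kB of extra certificate bytes each gives about 1.8Tbps. 
The per-connection overhead is inferred from the quoted numbers, not stated 
in the post:

```python
# Back-of-the-envelope check of the 1.8Tbps figure quoted above. The
# ~15 kB per-connection ML-DSA overhead is inferred, not from the post.
conns_per_sec = 15e6           # new TLS connections per second
extra_bytes_per_conn = 15_000  # assumed extra certificate bytes per chain

extra_tbps = conns_per_sec * extra_bytes_per_conn * 8 / 1e12
print(extra_tbps)  # 1.8
```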


Best,

 Bas


[TLS] Re: Bytes server -> client

2024-11-07 Thread Kampanakis, Panos
Hi Bas,

That is interesting and surprising, thank you.

I am mostly interested in the ~63% of non-resumed sessions that would be 
affected by 10-15KB of auth data. It looks like your data showed that each QUIC 
conn transfers about 4.7KB, which is very surprising to me. It seems very low.

In the experiments I am running here against top web servers, I see lots of 
conns which transfer hundreds of KB even over QUIC in cached browser sessions. 
This aligns with the average from your blog (551 × 0.6 ≈ 330KB), but not with 
the median of 4.7KB. Hundreds of KB also aligns with the p50 per page / conns 
per page in 
https://httparchive.org/reports/page-weight?lens=top1k&start=2024_05_01&end=latest&view=list
 . Of course browsers cache a lot of things like javascript, images etc, so 
they don't transfer all resources, which could explain the median. But still, 
based on anecdotal experience looking at top visited servers, I am noticing 
many small transfers and just a few that transfer larger HTML, CSS etc on 
every page, even in cached browser sessions.

I am curious about the 4.7KB and the 15.8% of conns transferring >100KB in 
your blog. Like you say in your blog, if the 95th percentile includes very 
large transfers, that would skew the difference between the median and the 
average. But I am wondering if there is another explanation. In my experiments 
I see a lot of 301 and 302 redirects, which transfer minimal data; some pages 
have a lot of those. If you have many of them, then your median will get 
skewed as it fills up with very small data transfers that basically don't do 
anything. In essence, we could have 10 pages where one conn transfers 100KB 
for one of the page's resources and another 9 conns are HTTP redirects or 
transfer 0.1KB. That would make us think that 90% of the conns will be blazing 
fast, but the 100KB resource on each page will still take a good amount of 
time on a slow network.
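The redirect theory is easy to illustrate with made-up numbers: a handful of 
tiny transfers pulls the median far below the mean:

```python
# Toy illustration (hypothetical numbers) of how many tiny transfers
# (e.g. HTTP 301/302 redirects) pull the median down while a few heavy
# conns keep the mean high.
from statistics import mean, median

# Per-conn bytes transferred, in kB: 9 redirect-sized conns + 1 real resource
transfers = [0.1] * 9 + [100.0]

print(median(transfers))  # 0.1
print(mean(transfers))    # ~10.1
```

Here 90% of conns look negligible by the median even though nearly all the 
bytes (and the user-visible latency) sit in the one heavy conn.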

To validate this theory, what would your data show if you queried for the % of 
conns that transfer <0.5KB or <1KB? If that is a lot, then there are many small 
conns that skew the median downwards. Or what if you ran the query to exclude 
the very heavy conns and the very light ones (HTTP 301, 302 etc)? For example, 
you could run a report on the conns transferring more than 1KB but excluding 
the very heavy ones.

> Chrome is more cautious and set 10% as their target for maximum TLS handshake 
> time regression.

Is this public somewhere? There is no immediate link between the TLS handshake 
and any of the Core Web Vitals metrics or the CrUX metrics other than TTFB. 
Even for TTFB, 10% in the handshake does not mean 10% in TTFB; the TTFB is 
affected much less. I am wondering if we should start expecting the TLS 
handshake to slowly become a tracked web performance metric.



[TLS] Re: Adoption call for TLS 1.2 Update for Long-term Support

2024-11-07 Thread Alicja Kario

On Thursday, 7 November 2024 14:58:02 CET, Peter Gutmann wrote:

> The current late-to-the-party response seems to be mostly a
> chorus of "I haven't read it but I know I don't like it".


There is no need for personal attacks.
--
Regards,
Alicja (nee Hubert) Kario
Principal Quality Engineer, RHEL Crypto team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00, Brno, Czech Republic



[TLS] Re: DTLS 1.3 replay protection of post-handshake messages?

2024-11-07 Thread John Mattsson

Hi Eric, Martin,

You suggested writing an RFC requiring replay protection in DTLS 1.3. I was 
just planning to start writing such a -00 draft, but now I see there is “DTLS 
Clarifications - David Benjamin (15 min)” on the agenda. If that means 
RFC9147bis, it might be better to have it there. But based on Martin's 
comments, a separate draft might have value anyway.

> I agree that it would be good to require replay protection at this
> time. Perhaps we should just publish a short RFC requiring it.

Cheers,
John

From: John Mattsson 
Date: Thursday, 28 December 2023 at 13:47
To: Martin Thomson , Eric Rescorla 
Cc: tls@ietf.org 
Subject: Re: [TLS] DTLS 1.3 replay protection of post-handshake messages?
On Wed, Nov 29, 2023, at 11:21, Eric Rescorla wrote:
> I agree that it would be good to require replay protection at this
> time. Perhaps we should just publish a short RFC requiring it.
I think that is a good idea and I would be happy to help with such an RFC. 
Either one mandating DTLS replay protection or one mandating replay protection 
at some layer.

I think it would maybe make most sense to mandate DTLS replay protection. 
Applications wanting to turn off DTLS replay protection should be required to 
register an extension.

I made a PR to RFC8446bis that briefly describes how 0-RTT replay can be used 
for server tracking.
https://github.com/tlswg/tls13-spec/pull/1334/files

Cheers,
John Preuß Mattsson

From: Martin Thomson 
Date: Wednesday, 29 November 2023 at 06:02
To: Eric Rescorla , John Mattsson 
Cc: tls@ietf.org 
Subject: Re: [TLS] DTLS 1.3 replay protection of post-handshake messages?
One thing that I observed when we were doing QUIC was that the canonical 
reference for how to do this sort of anti-replay is a section that is buried 
deep in an IPsec RFC.  Perhaps that short RFC is an opportunity to also 
document the process in a way that can be a reference to other documents.  
(That's a slightly longer RFC, but still fairly short; Section 3.4.3 of RFC 
4303 is just two pages in the old measure.)
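For reference, the sliding-window check buried in RFC 4303, Section 3.4.3 can 
be sketched in a few lines. This is illustrative only: the class and method 
names are mine, and the 64-entry window size is just the RFC's suggested 
minimum default:

```python
# Sketch of an RFC 4303 (Section 3.4.3)-style sliding-window anti-replay
# check: accept a sequence number only if it is new and not too old.
class ReplayWindow:
    def __init__(self, size: int = 64):
        self.size = size
        self.highest = -1  # highest sequence number accepted so far
        self.bitmap = 0    # bit i set => (highest - i) was already seen

    def check_and_update(self, seq: int) -> bool:
        """Return True and record seq if it is new; False if replayed or too old."""
        if seq > self.highest:
            # Advance the window; new highest occupies bit 0.
            shift = seq - self.highest
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.size:
            return False   # too old: fell off the back of the window
        if self.bitmap & (1 << offset):
            return False   # already seen: replay
        self.bitmap |= 1 << offset
        return True
```

Usage: `w = ReplayWindow(); w.check_and_update(1)` accepts, a second 
`w.check_and_update(1)` is rejected as a replay, and out-of-order but 
in-window numbers are still accepted.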

On Wed, Nov 29, 2023, at 11:21, Eric Rescorla wrote:
> I agree that it would be good to require replay protection at this
> time. Perhaps we should just publish a short RFC requiring it.
>
> -Ekr
>
>
> On Tue, Nov 28, 2023 at 3:00 PM John Mattsson
>  wrote:
>> Hi,
>>
>> Lack of replay protection also enables tracking of the client and server. If 
>> the client or server is a device moving together with a person, this enables 
>> tracking of the person.
>>
>> An attacker can store a message from one location and then replay it to the 
>> client or server in another location. If the client or server accepts the 
>> replayed message, the attacker knows that the devices in the two locations 
>> are one and the same. It is best practice to assume that an attacker can 
>> always detect whether a message was accepted. If the client or server sends 
>> a response to the replayed message (like a replayed client authentication 
>> request), this is trivial.
>>
>> This is different from the attack described in Section C.4, “Client and 
>> Server Tracking Prevention”, of RFC8446bis, which describes the client 
>> reusing a ticket. A network attacker mounting a replay attack is described 
>> in Section 8 of RFC8446bis. I think a sentence or two should be added to 
>> Section C.4 to describe that a network attacker mounting a replay attack can 
>> be used for server tracking and that the mitigations in Section 8 help.
>> https://datatracker.ietf.org/doc/draft-ietf-tls-rfc8446bis/
>>
>> Cheers,
>> John Preuß Mattsson
>>
>> *From: *TLS  on behalf of John Mattsson 
>> 
>> *Date: *Tuesday, 28 November 2023 at 09:30
>> *To: *TLS@ietf.org 
>> *Subject: *Re: [TLS] DTLS 1.3 replay protection of post-handshake 
>> messages?
>> Hi,
>>  
>> Reading RFC 9147 (DTLS 1.3), I cannot find any other interpretation than that 
>> replay protection may be disabled for all records. This is not a problem for 
>> the initial lock-step handshake, alerts, KeyUpdate, and ACKs. It seems to be 
>> a major problem for NewSessionTicket, NewConnectionId, RequestConnectionId, 
>> and post-handshake client authentication, as the lack of replay protection 
>> might significantly affect availability. It seems to me that DTLS 1.3 forgot 
>> to update replay protection for the new post-handshake messages. Let me 
>> know if I am missing something.
>>  
>> It is a bit surprising that DTLS 1.3, published in 2022, allows the 
>> application to turn off replay protection at all. This is very far from 
>> current best practice for security protocols. There are very good reasons 
>> why Datagram QUIC mandates replay protection and why TLS 1.3 has several 
>> pages discussing security considerations for 0-RTT data, which lacks replay 
>> protection. In general, turning off replay protection (even just for 
>> application data) might lead to loss of confidentiality, integrity, and 
>> availability, i.e., the whole CIA triad.

[TLS] Re: Bytes server -> client

2024-11-07 Thread Raghu Saxena

Dear Bas,

Thanks for sharing. I'm quite curious about this bit in particular:

On 11/7/24 10:06 PM, Bas Westerbaan wrote:


On average, around 15 million TLS connections are established with
Cloudflare per second. Upgrading each to ML-DSA, would take
1.8Tbps, which is 0.6% of our current total network capacity. No
problem so far. The question is how these extra bytes affect
performance.
Back in 2021, we ran a large-scale experiment to measure the
impact of big post-quantum certificate chains on connections to
Cloudflare’s network over the open Internet. There were two
important results. First, we saw a steep increase in the rate of
client and middlebox failures when we added more than 10kB to
existing certificate chains.

Would you be willing to share some numbers around the increase in 
failures? What do you think might have been the cause of the increased 
failures at clients and middleboxes? One hypothesis I have is that 
TLS-related DPI might allocate a fixed-size buffer to capture the 
handshake, which was now being exceeded.


Regards,

Raghu Saxena


