> Any numbers you have to showcase the regression and the relevant affected
> web metrics?

Adding Kyber to the TLS handshake increased TLS handshake latency by 4% on
desktop [1] and 9% on Android at P50, and considerably more at P95. In
general, Cloudflare found that every 1 kB of additional data added to the
server response caused median HTTPS handshake latency to increase by around
1.5% [2].
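As a back-of-the-envelope check, Cloudflare's 1.5%-per-kB figure can be
applied to an arbitrary payload size. This is a sketch only; the ~1.1 kB
example size is illustrative, not a measured figure:

```python
# Rough estimate from Cloudflare's observation in [2]: each additional
# 1 kB of data in the server's handshake response adds roughly 1.5% to
# median HTTPS handshake latency.

COST_PER_KB = 0.015  # +1.5% median latency per extra kB (from [2])

def est_latency_increase(extra_bytes: int) -> float:
    """Estimated fractional increase in median handshake latency."""
    return (extra_bytes / 1000) * COST_PER_KB

# Illustrative: a ~1.1 kB post-quantum addition to the server's flight.
print(f"{est_latency_increase(1100):.2%}")  # ~1.65%
```

Linear extrapolation like this is obviously crude, but it matches the scale
of the P50 regressions reported in [1].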

> I have seen this claim before and, respectfully, I don’t fully buy it. A
> mobile client that suffers with two packet CHs is probably already crawling
> for hundreds of KBs of web content per conn.

There is a considerable difference between loading large amounts of data
for a single site, which is a decision that is controllable by a site, and
adding a fixed amount of latency to _all_ connections to all sites to
defend against a computer that does not exist [3].

[1]:
https://blog.chromium.org/2024/05/advancing-our-amazing-bet-on-asymmetric.html
[2]: https://blog.cloudflare.com/pq-2024/
[3]: https://dadrian.io/blog/posts/pqc-not-plaintext/
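For scale, the ClientHello size thresholds discussed in the quoted thread
below (the ~1460-byte first TCP segment, the ~3.3 kB server buffer limit,
and the 16 kB record limit) reduce to simple integer arithmetic. This is a
sketch: the 1216-byte figure is the X25519Kyber768 hybrid key-share size
mentioned downthread, and the rest of the ClientHello is ignored, so these
counts are upper bounds:

```python
# How many PQ key shares fit under each ClientHello size limit from the
# thread, ignoring all other ClientHello contents (so: upper bounds).

PQ_KEY_SHARE = 1216  # one hybrid X25519Kyber768 key share, in bytes
LIMITS = {
    "first TCP segment (~MTU)": 1460,  # tldr.fail threshold
    "server buffer (~3.3 kB)": 3300,   # cited research limit
    "TLS record (16 kB)": 16384,       # hard ClientHello ceiling
}

for name, limit in LIMITS.items():
    print(f"{name}: {limit // PQ_KEY_SHARE} key share(s)")
# first TCP segment: 1 -> two PQ keys cannot fit in one segment
# server buffer: 2 -> "almost enough for three" (3 * 1216 = 3648 > 3300)
# TLS record: 13 -> well over ten keys fit
```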




On Thu, Sep 12, 2024 at 4:11 PM Kampanakis, Panos <kpanos=
40amazon....@dmarc.ietf.org> wrote:

> Hi David,
>
>
>
> Note I am not against draft-ietf-tls-key-share-prediction. It is
> definitely better to not send unnecessary bytes on the wire.
>
>
>
> > Yup. Even adding one PQ key was a noticeable size cost (we still haven't
> shipped Kyber/ML-KEM to mobile Chrome because the performance regression
> was more prominent) so, yeah, we definitely do not want to send two PQ keys
> in the initial ClientHello.
>
>
>
> I have seen this claim before and, respectfully, I don’t fully buy it. A
> mobile client that suffers with two packet CHs is probably already crawling
> for hundreds of KBs of web content per conn. Any numbers you have to
> showcase the regression and the relevant affected web metrics?
>
>
>
>
>
> *From:* David Benjamin <david...@chromium.org>
> *Sent:* Wednesday, September 11, 2024 8:02 PM
> *To:* Ilari Liusvaara <ilariliusva...@welho.com>
> *Cc:* <tls@ietf.org> <tls@ietf.org>
> *Subject:* [EXTERNAL] [TLS] Re: draft-ietf-tls-key-share-prediction next
> steps
>
>
>
> On Wed, Sep 11, 2024 at 3:58 AM Ilari Liusvaara <ilariliusva...@welho.com>
> wrote:
>
> On Wed, Sep 11, 2024 at 10:13:55AM +0400, Loganaden Velvindron wrote:
> > On Wed, 11 Sept 2024 at 01:40, David Benjamin <david...@chromium.org>
> wrote:
> > >
> > > Hi all,
> > >
> > > Now that we're working through the Kyber to ML-KEM transition, TLS
> > > 1.3's awkwardness around key share prediction is becoming starkly
> > > visible. (It is difficult for clients to efficiently offer both
> > > Kyber and ML-KEM, but a hard transition loses PQ coverage for some
> > > clients. Kyber was a draft standard, just deployed by early
> > > adopters, so while not ideal, I think the hard transition is not
> > > the end of the world. ML-KEM is expected to be durable, so a
> > > coverage-interrupting transition to FancyNewKEM would be a problem.)
> > >
> >
> > Can you detail a little bit more in terms of numbers ?
> > -Did you discover that handshakes are failing because of the larger
> > ClientHello ?
> > -Some web clients aren't auto-updating ?
>
> The outright failures caused by a larger ClientHello are actually web
> server bugs. However, even ignoring any hard failures, a larger
> ClientHello can cause performance issues.
>
> The most relevant of these issues is tldr.fail (https://tldr.fail/),
> where a web server ends up unable to deal with TCP-level fragmentation
> of the ClientHello. Even one PQ key (1216 bytes) fills the vast majority
> of a TCP segment (and the other stuff in the ClientHello can easily push
> it over, as the upper limit is around 1430-1460 bytes). There is no way
> to fit two PQ keys.
>
> Then some web servers have ClientHello buffer limits. However, these
> limits are almost invariably high enough that one could fit two PQ
> keys. IIRC, some research years back came to the conclusion that the
> maximum tolerable ClientHello size is about 3.3 kB, which is almost
> enough for three PQ keys.
>
> Then there are a lot of web servers that are unable to deal with
> TLS-level fragmentation of the ClientHello. However, this is not really
> relevant, given that that limit is 16 kB, which is easily enough for
> ten PQ keys, and a ClientHello that large would itself cause performance
> issues with TCP.
>
>
>
> Yup. Even adding one PQ key was a noticeable size cost (we still haven't
> shipped Kyber/ML-KEM to mobile Chrome because the performance regression
> was more prominent) so, yeah, we definitely do not want to send two PQ keys
> in the initial ClientHello. Sending them in supported_groups is cheap, but
> as those options take an RTT hit, they're not really practical. Hence all
> the key-share-prediction work. (For some more background, see the earlier
> WG discussions around this draft, before it was adopted.)
>
>
>
> And it is possible for a web server to offer both, so even with a hard
> client transition both old and new clients get PQ coverage.
>
>
>
> Yup, although that transition strategy requires that *every* PQ server
> move before *any* client moves, if your goal is to never interrupt
> coverage. That's not really a viable transition strategy in the long run,
> once PQ becomes widely deployed.
>
>
>
> David
> _______________________________________________
> TLS mailing list -- tls@ietf.org
> To unsubscribe send an email to tls-le...@ietf.org
>
