Dear all,

        In yesterday’s working group meeting we had a bit of a
        discussion of the impact of post-quantum key-exchange sizes
        on TLS and related protocols like QUIC. As we neglected to
        put Kyber’s key sizes in our slide deck (unlike those of the
        signature schemes), I thought it would be a good idea to get
        the actual numbers for Kyber onto the mailing list.
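
        For reference, the sizes in bytes (these are from the round-3
        Kyber submission, and worth re-checking against the final FIPS
        document once it appears):

                        public key   ciphertext   secret key
            Kyber512        800          768         1632
            Kyber768       1184         1088         2400
            Kyber1024      1568         1568         3168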

    Before we dive into applying the NIST algorithms to TLS 1.3, let
    us first consider our security goals and recalibrate accordingly.

    That’s always a good idea.

    I have the luxury of having 0 users. So I can completely redesign
    my architecture if needs be. But the deeper I have got into Quantum
    Computer Cryptanalysis (QCC) and PQC, the less that appears to be
    necessary.

    Respectfully disagree – though if all you have to protect is your
    TLS-secured purchases from Amazon.com, I’d concede the point.

    First off, let us level-set. Nobody has built a QC capable of QCC,
    and it is highly unlikely anyone will in the next decade.

    You cannot know that, and “professional opinions” vary widely.


The relevant qualification in this case is experience of experimental physics. I spent eight years working in high-energy physics; I can spot a science project when I see one. The folk who wrote the MIT Quantum Computing course are as skeptical about the scalability of the super-cold quantum machines as I am. The trapped-ion approach is certainly a plausible threat, but nobody has managed to make it work yet, and the current progress is in the optical systems, not the hyperfine-transition systems.

That is correct. At the moment, there does not seem to exist such a machine (a powerful, "noise-free" quantum computer). There is a great 2018 paper by John Preskill, https://arxiv.org/abs/1801.00862, highlighting some of the challenges quantum computers face nowadays. Yesterday, I indeed sat with a bunch of quantum-computing researchers, and the best timeline we could give was 25-35 years. Let's see.

    So, what we are concerned about today is data we exchange today
    being cryptanalyzed in the future.

    We can argue about that if people want, but can we at least agree
    that our #1 priority should be confidentiality?

    Yes and yes.

    So the first proposal I have is to separate our concerns into two
    separate parts with different timelines:

    #1 Confidentiality: we should aim to deliver a standards-based
    proposal by 2025.

    If we (well, some of us) have data _today_ that mustn’t be
    disclosed a decade from now, then we do not have the luxury of
    waiting till 2025. Otherwise, see above.


I know people would like to have a solution much sooner, but what people want and what they can reasonably expect are frequently very different things.

Let us not repeat the failure of the DPRIVE working group, which decided that the problem was so very, very urgent that it had to have a solution in 12 months' time, then used that arbitrary and idiotic constraint to exclude any UDP transport scheme from consideration. As I predicted seven years ago, the TLS-based solution they adopted, which depended on TLS quickstart, was never used because it was undeployable. We only got a deployable version of DPRIVE a few months ago, when the DNS-over-QUIC scheme went to last call.

My point here is that it is a really bad idea to set schedules for delivering standards according to what people assert is 'necessary'. In the DPRIVE case, trying to make the process work faster actually made it much slower.

Given the amount of work required and the time taken to get people up to speed with the new approach, 2025 seems like a fairly optimistic date to deliver a spec, even with it being a priority.

The other point to make in this respect is that, yes, a heck of a lot of data generated today will have serious consequences if disclosed as a result of QCC. But that data is a really small fraction of today's TLS traffic, which is vastly more than any adversary could store. Also, people who genuinely have such concerns need to be looking at Data at Rest security as their primary security mechanism. So, given the almost complete lack of concern for Data at Rest security in the industry right now, I am tending to see the PQC concerns as being less about security and more about something else.

I agree on these points. It is vital that we are careful with the migration and don't repeat the errors we made in the past. The point you raise about UDP is indeed something to take extra consideration of. So far we have had a few experiments on how PQC actually works over real data, but those experiments are not a full view of the Internet as we use it in many applications nowadays. Perhaps this time of waiting for the NIST standards should be devoted to testing those cases: connections with unreliable bandwidth, protocols over UDP, stateless servers, devices with different configurations, and more. I highlighted some of these cases in a presentation I gave yesterday: https://claucece.github.io/slides/Summer_School_PQC_TLS.pdf


    #2 A fully QCC-hardened spec before 2030.

    If by “fully…” you mean “including PQ digital signatures”, I’d
    probably agree.


Not just that. We have to go through and audit every part of the TLS/WebPKI system, and it is a very large and very complex system. It is bad enough trying to get my head around all the possible issues with the Mesh, which is a system with one specification and no legacy. TLS/PKIX/ACME is going to be a real bear.

Completely agree. This is also the case for DNSSEC, for example. We don't have a proper mapping of those operational issues. We have started research on the matter, so if you are interested in contributing, let us know.

I am now thinking in terms of 'Post Quantum Hardened' and 'Post Quantum Qualified'. Hardening a system so it doesn't completely break under QCC is a practical near-term goal. Getting to a fully qualified system is going to be a root-canal job.

There is a notion of being 'quantum-annoying' to a quantum computer: perhaps that might be a starting point for schemes that do not have a post-quantum counterpart as of right now. For the others, a hybrid approach should definitely be taken, such that classical cryptography still protects data.

    Second observation is that all we have at this point is the output
    of the NIST competition, and that is not a KEM. No, sorry: NIST has
    not approved a primitive that we can pass a private key to and
    receive that key back wrapped under the specified public key. What
    NIST actually tested was a function to which we pass a public key
    and get back a shared secret generated by the function and a blob
    that decrypts to the key.

    The output of the NIST PQC process is _exactly_ a KEM. And it’s
    fully specified.

    NIST did not approve 'KYBER'; at least, it has not done so yet.

    NIST did – what it did _not_ do is finalize the specs, which
    requires public review. Some people conjecture that Kyber will not
    need many changes to become a “full” standard.


And other people are claiming that Kyber has limitations, that it does not support non-interactive protocols, and so on. While I would be happy to see some qualified cryptographers come out and say that those people are wrong and misinformed, I am not seeing that pushback at this point.

My issue here is that opening the box voids the manufacturer's warranty, and at this point we do not have a description of what the inner box is or what caveats might apply to using the inner mechanism.

Inherently, KEMs do not support non-interactive protocols. A KEM is not a (EC)DH key exchange: we don't have a scheme that targets all the properties that the (EC)DH key exchange gives us (there is CSIDH, whose security is heavily debated in the community, though it is not broken by the recent torsion "glue-and-split" attack). Something we have also been working on is a proper explanation of what a KEM gives you and where its limits are when compared to (EC)DH; a first sketch of the interface difference is below. I'll try to share the fuller write-up over the next few weeks.
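
As a minimal sketch of the interface difference (the "KEM" below is a
toy, insecure placeholder purely to show the API shape; a real
deployment would use Kyber through a library such as liboqs):

    import os, hashlib

    def kem_keygen():
        sk = os.urandom(32)
        pk = hashlib.sha256(b"pk" + sk).digest()
        return pk, sk

    def kem_encaps(pk):
        # The encapsulator needs only the recipient's public key: the
        # exchange is unilateral; the sender contributes no key pair.
        r = os.urandom(32)
        ct = r                                # "ciphertext" (toy!)
        ss = hashlib.sha256(pk + r).digest()  # shared secret
        return ct, ss

    def kem_decaps(sk, ct):
        pk = hashlib.sha256(b"pk" + sk).digest()
        return hashlib.sha256(pk + ct).digest()

    # (EC)DH, by contrast, is contributory and symmetric:
    #     DH(sk_A, pk_B) == DH(sk_B, pk_A)
    # which is what non-interactive patterns (e.g. Signal's X3DH)
    # rely on, and what a KEM does not give you directly.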

    The TLS protocol includes derivation of “session” keys. Currently
    it employs asymmetric “pre-quantum” crypto. That has to be replaced
    by PQ asymmetric crypto. That’s the most appropriate (and the only)
    point to deploy PQC in. I’ve no clue about Mesh, so exclude Mesh
    from my comment.

    The solution I am currently working with is to regard QCC at the
    same level as a single defection. So if Alice has established a
    separation of the decryption role between Bob and the Key Service,
    both have to defect (or be breached) for disclosure to occur. Until
    I get threshold PQC, I am going to have to accept a situation in
    which the system remains secure against QCC, but only if the key
    service does not defect.

    Skipping the above.

    Applying the same principles to TLS, we actually have two key
    agreements in which we might employ PQC:

    1) The primary key exchange

    2) The forward secrecy / ephemeral / rekey

    Looking at most of the proposals, they seem to be looking to drop
    the PQC scheme into the ephemeral rekeying. That is one way to do
    it, but does the threat of QCC really justify the performance
    impact of doing that?

    First, I don’t see a performance impact from that. PQC KEMs are
    pretty fast. The main cost is in exchanging much larger bit blobs.
    Second – if today’s data will maintain its value into 2030+, then
    definitely yes; otherwise – who cares.
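
To put numbers on those "larger bit blobs": an X25519 key share is 32
bytes each way, while Kyber768 would add a 1184-byte public key in one
direction and a 1088-byte ciphertext in the other, roughly 2.2 KB
extra per handshake. That is where the cost shows up: in QUIC's
amplification limits and initial congestion window rather than in CPU
time.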

    PQC-hardening the initial key exchange should suffice, provided
    that we fix the forward secrecy exchange so that it includes the
    entropy from the primary. This would bring TLS in line with Noise
    and with best practice. It would be a change to the TLS key
    exchange, but one that corrects an oversight in the original
    forward secrecy mechanism.

    If your rekey depends on the initial key values, and/or uses only
    Classic crypto – how can you provide Forward Secrecy?


The TLS nomenclature is confused here. To me a session key is what I apply to data, i.e.

session = KDF (ephemeral-agreement)

My rekey uses the initial values plus the ephemeral exchange, i.e.

session = KDF (initial-exchange + ephemeral-agreement)

So the key I use to encrypt the data is secure if either the initial-exchange is secure or the ephemeral-agreement is secure. I have not proved that, but any inability to produce such a proof should probably be taken as indicating a limitation in the current state of the art in formal proofs of security rather than in the protocol design.

What I propose using in a minimally PQC hardened exchange is:

session = KDF (initial-exchange + initial-PQC + ephemeral-agreement)
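
As a concrete (illustrative, not normative) sketch of that combiner,
using HKDF as the KDF:

    import hmac, hashlib

    def hkdf_extract(salt, ikm):
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand(prk, info, length=32):
        out, t, counter = b"", b"", 1
        while len(out) < length:
            t = hmac.new(prk, t + info + bytes([counter]),
                         hashlib.sha256).digest()
            out += t
            counter += 1
        return out[:length]

    def session_key(initial_exchange, initial_pqc, ephemeral_agreement):
        # The session key stays secret as long as any one input does,
        # on the (unproven, as noted above) assumption that the KDF is
        # a sound combiner of its concatenated inputs.
        ikm = initial_exchange + initial_pqc + ephemeral_agreement
        prk = hkdf_extract(b"\x00" * 32, ikm)
        return hkdf_expand(prk, b"session-key")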

That is one option, but not the only one. There are several:

0) Classic initial, no forward secrecy
1) Classic initial + PQC initial, no forward secrecy
2) Classic initial + PQC initial, classical forward secrecy
3) Classic initial, classical forward secrecy + PQC forward secrecy
4) Classic initial, PQC forward secrecy
5) Classic initial + PQC initial, PQC forward secrecy
6) Classic initial + PQC initial, classical forward secrecy + PQC forward secrecy
etc.

Given that Google has spent the past five years telling people that security signals absolutely don't work, they are going to face a billion dollar anti-trust suit from certain CAs if they then try to provide a new security signal to show off support for PQC crypto. So persuading sites to deploy PQC support might be challenging.


The big difference between PQC initial and PQC forward secrecy is that if the PQC agreement is going to take place as an initial key agreement, the public key has to be attested by the TLS server certificate. It is this move that makes '0RTT' possible. As I keep saying, 0RTT is not really a thing; we just have clever ways to conceal parts of the protocol by moving them into a different protocol. If we want Kyber to work as 0RTT, we have to use the same techniques.

Not sure I follow, so apologies if I misread. We already have a hybrid mechanism to add to the key-exchange phase of TLS: https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/ The KDFs used in the key schedule are not targeted by a quantum computer, so if the initial master secret is quantum-safe, so are the keys subsequently derived by the KDFs.
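
Schematically (as I read the draft; the function name here is mine,
not the draft's):

    def hybrid_shared_secret(ecdhe_ss: bytes, kem_ss: bytes) -> bytes:
        # draft-ietf-tls-hybrid-design concatenates the classical and
        # post-quantum shared secrets; the result feeds the unchanged
        # TLS 1.3 key schedule, so an attacker has to recover BOTH
        # components to learn the traffic keys.
        return ecdhe_ss + kem_ss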

The term 'interactive' is used very differently in protocol design and in cryptography. DH and ECDH are mutual key exchanges: Alice and Bob both receive a shared secret that both contribute to equally. Kyber is a unilateral key exchange: Alice encrypts to Bob's public key without using her own key. If we want to have a mutually authenticated key exchange, Bob is going to have to encrypt something to Alice's public key.

That is correct. Such is the way KEMs work. We don't have a post-quantum counterpart with those properties that we can attest has a high level of security. This is not so evidently needed in TLS, but it is in Signal, OTR, and other protocols. I recently sent an email to the PQC mailing list about the matter: https://mailarchive.ietf.org/arch/msg/pqc/mW1r-57_OX7kAMGPef3noC4ZF_E/
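
For intuition, a mutually authenticated exchange built from KEMs alone
(KEMTLS-style; the message flow is schematic and the names are
illustrative) would look roughly like:

    1. Alice -> Bob:   ct_B = Encaps(pk_Bob)    ; Alice derives ss_1
    2. Bob -> Alice:   ct_A = Encaps(pk_Alice)  ; Bob derives ss_2
    3. Both compute:   K = KDF(ss_1 || ss_2)

Each side authenticates implicitly, by being able to decapsulate with
its secret key, at the cost of an extra flight relative to signed
(EC)DH.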

Thank you,


--
Sofía Celi
@claucece
Cryptographic research and implementation at many places, especially Brave.
Chair of hprc at IRTF and anti-fraud at W3C.
Reach out to me at: cheren...@riseup.net
Website: https://sofiaceli.com/
3D0B D6E9 4D51 FBC2 CEF7  F004 C835 5EB9 42BF A1D6

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
