Re: [TLS] crypto computations & lifetimes clarifications (was: TLS 1.3 - method to request uncached shared secrets)

2015-07-20 Thread Hugo Krawczyk


On Mon, Jul 20, 2015 at 12:10 AM, Eric Rescorla  wrote:

>
>
> On Mon, Jul 20, 2015 at 9:04 AM, Dave Garrett 
> wrote:
>
>> On Monday, July 20, 2015 12:27:42 am Eric Rescorla wrote:
>> > I think perhaps you have misunderstood the forward secrecy properties
>> of the
>> > current draft. Unlike TLS 1.2 and previous, the current draft has a
>> separate
>> > resumption master secret which is independently derived from the master
>> > secret used for the connection keys in the original connection. This
>> means
>> > that if you don't resume the connection, you have forward secrecy for
>> the
>> > original connection regardless of whether the server stores the session
>> in
>> > the session cache or sends the client a ticket.
>>
>> We've got lots of keys and secrets now. Could you please clarify the
>> exact points where these are each to be discarded? If I am understanding it
>> correctly, the master_secret, prior intermediate secrets, and
>> finished_secret are to be discarded as soon as the keys, resumption_secret,
>> and possibly exporter_secret (which currently has no explanation in the
>> doc) are derived, the handshake is finished, and we're ready for
>> application traffic? It would help if you provided a table/chart laying out
>> the timeline of secret & key lifetimes, from derivation to discard. It
>> should state in the spec explicitly what needs to be kept around for how
>> long and require things be discarded as soon as viable.
>>
>
> Yes, I can do something along these lines.
>
>
>
>> I think these various values need to be named more consistently in the
>> doc to make searching for them easier. For example, "resumption_secret" is
>> used in the computation part but the words "resumption master secret" are
>> used when actually using this value. (also noted in issue #191 by Martin
>> Thomson) I've pushed a small PR to correct this case along with a few
>> tweaks that I think makes it a bit clearer.
>> https://github.com/tlswg/tls13-spec/pull/205
>>
>> Also, some other questions about various computations:
>>
>> https://tools.ietf.org/html/draft-ietf-tls-tls13-07#section-7.1
>> https://tlswg.github.io/tls13-spec/#key-schedule
>>
>> HKDF(,,,) doesn't seem to be fully defined here, just
>> HKDF-Expand-Label(,,,) which is based on HKDF-Expand(,,) from RFC 5869.
>> Could you please clarify this?
>>
>
> Yes.
>
>
>
>> Why is finished_secret derived from extracted static secret instead of
>> master_secret?
>
>
> The rationale here is that the Finished message also serves to authenticate
> the server's ephemeral DH share (when in known_configuration mode) and
> because the master secret depends on the ephemeral DH keys, this creates
> an odd authentication logic. Hugo can expand on this some more, perhaps.
>

Eric's explanation is correct.
Your question boils down to: why is finished_secret derived from SS only
and not from ES?

First note that the issue only arises in the known_configuration case, since
in other cases ES and SS are the same.
For the known_configuration case there are two important reasons
to build on SS and not on ES:
1. Only SS can authenticate the handshake as it is the only element to
involve the server's (semi) static private key.
2. One of the main elements to be authenticated by the server (via the
Finished message) is the ServerKeyShare, thus deriving the key for the
Finished message (i.e. finished_secret) from ES (calculated using
ServerKeyShare) would create a circularity issue in the logic of the
derivation.

Note that the derivation of application keys (and other key material
remaining after the end of the handshake) does involve both SS and ES, but in
that case involving ES is crucial to achieve forward secrecy.
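
For concreteness, here is a rough Python sketch of the dependency structure
being described (the xSS/xES/mSS/mES names follow the draft's key schedule;
the labels, lengths and the use of plain HMAC in place of HKDF-Expand-Label
are illustrative placeholders, not the exact construction in the spec):

import hmac, hashlib

def prf(key, label, data=b""):
    # illustrative stand-in for the draft's HKDF-Expand-Label step
    return hmac.new(key, label + data, hashlib.sha256).digest()

# SS involves the server's (semi-)static private key (g^xs in
# known_configuration mode); ES involves only the ephemeral shares (g^xy).
SS, ES = b"g^xs placeholder", b"g^xy placeholder"
handshake_hash = b"hash of the handshake so far"

xSS = hmac.new(b"\x00" * 32, SS, hashlib.sha256).digest()  # HKDF-Extract(0, SS)
xES = hmac.new(b"\x00" * 32, ES, hashlib.sha256).digest()  # HKDF-Extract(0, ES)

# finished_secret depends on SS only, so it can authenticate ServerKeyShare
# without the circularity of being derived from ServerKeyShare itself.
finished_secret = prf(xSS, b"finished secret", handshake_hash)

# master_secret and the application keys mix in ES as well, which is what
# provides forward secrecy for the keys that outlive the handshake.
mSS = prf(xSS, b"expanded static secret", handshake_hash)
mES = prf(xES, b"expanded ephemeral secret", handshake_hash)
master_secret = hmac.new(mSS, mES, hashlib.sha256).digest()  # HKDF-Extract(mSS, mES)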


Hugo


>
>
>
>> Are there two finished_secret in the event that the client sends a
>> certificate?
>>
>
> No, this shouldn't be necessary. You just use the first one. I'll try to
> clarify.
>
>
>
>> The computation of verify_data could probably be moved up to the same
>> section so this is all in the same place. Am I correct in reading that it
>> could be simplified a bit? (e.g. HKDF-Expand-Label(master_secret,
>> finished_label, handshake_hash, L) without the extra HMAC currently defined
>> for verify_data)
>
>
> See above.
>
> -ekr
>
>
>


Re: [TLS] Fwd: Summary of today's discussion on server-side signing

2015-07-31 Thread Hugo Krawczyk
I am ok with whatever the WG decides, particularly when the reasons are
non-cryptographic but rather based on implementation considerations.

Still, for the record, I'd like to correct the statement
"KnownConfiguration is only useful with 0-RTT".

KnownConfiguration could be used with 1-RTT even if the client does not
send early application data in the first flight.
This would have allowed saving a signature in the 1-RTT case as well, whenever
the client has cached a KnownConfiguration.
Saving a signature is a major performance benefit with RSA signatures (are
they really going away soon?), but it is also a benefit with ECDSA, as it avoids
the need to send a certificate chain (shortening the handshake) and the need
to verify these certificates; ECDSA verification also has a cost for the client.

Lastly, the protocol would be secure without the signature (in the case the
client uses a known configuration), a property that enables the use of the
protocol with offline signatures (to-be-specified).

Hugo

On Mon, Jul 27, 2015 at 7:26 AM, Sean Turner  wrote:

> All,
>
> I asked ekr to write a brief summary of the server-side signing issue.
> The summary provided matches the WG consensus as judged by the chairs.
> Please let us know if you object to the way forward by August 3rd.
>
> J&S
>
> Begin forwarded message:
>
> > From: Eric Rescorla 
> > Subject: Summary of today's discussion on server-side signing
> > Date: July 22, 2015 at 08:52:31 EDT
> > To: Sean Turner 
> >
> > Sean,
> >
> > Here's a summary of today's discussion on signing and KnownConfiguration.
> >
> > SUMMARY
> > The WG agreed that the server must sign whenever certificate
> authentication
> > is used (even if the KnownConfiguration is used).
> >
> >
> > BACKGROUND
> > The current draft requires the server to send a
> Certificate/CertificateVerify
> > whenever either:
> >
> > (a) The KnownConfiguration option is not in use.
> > (b) The server sends a ServerConfiguration
> >
> > but it does not need to sign if the KnownConfiguration option is in
> > use but no new ServerConfiguration is provided.  Several people (most
> > recently Martin Thomson) have suggested that it would be simpler to
> > just require the server to sign any time certificate-based
> > authentication is in use. The penalty for this is an extra sign/verify,
> > as shown in the following table:
> >
> > Scenario           Client              Server
> > ------------------------------------------------------
> > 1-RTT              1 (EC)DHE + Verify  1 (EC)DHE + Sign
> >
> > 0-RTT (current,    2 (EC)DHE           2 (EC)DHE
> >   no new config)
> >
> > 0-RTT (current,    2 (EC)DHE + Verify  2 (EC)DHE + Sign
> >   new config)
> >
> > 0-RTT (proposed)   2 (EC)DHE + Verify  2 (EC)DHE + Sign
> >
> >
> > So, the performance difference here is between line 2 and line 4,
> > since whenever you provide a new config (line 3) you have to sign
> > anyway. The benefit is that it makes the server side of the handshake
> > essentially identical in both 0-RTT and 1-RTT, which is nice from an
> > implementation and analysis perspective.
> >
> >
> > SUMMARY OF WG DISCUSSION
> > During the WG discussion today, there was rough consensus to adopt
> > this change (i.e., always sign). A number of arguments were advanced
> > in favor of this change.
> >
> > (1) It's significantly simpler for implementors and (at least informal)
> > analysis. A side benefit is being able to merge the extension
> > logic for 0-RTT and KnownConfiguration, since KnownConfiguration
> > is only useful with 0-RTT.
> >
> > (2) It extends the properties we were shooting for with online-only
> > signatures and requiring that the server always sign
> ServerConfiguration,
> > namely continuous proof of access to the signing key.
> >
> > (3) The performance cost of an extra ECDSA signature is small and
> > shrinking fast (per Ian Swett channelling Adam Langley), and
> > people who care about speed will cut over to ECDSA (certs are
> > readily available).
> >
> > (4) You can still do 0-RTT with PSK resumption, which is computationally
> > much faster.
> >
> > On balance the WG seemed to feel that these were more compelling than
> > the performance value of the optimization.
> >
> > There was also a recognition that signature amortization was valuable,
> > but the consensus was that instead of doing this here, it would be
> > better to adopt Hugo's suggestion from a while back to have a
> > certificate extension that allowed offline signatures. This allows
> > both amortization *and* delegation, while not constituting a threat
> > to existing TLS 1.2 implementations. We agreed that this could be
> > worked on in parallel but shouldn't hold up TLS 1.3.
> >
> > Per WG guidance, I'll be preparing a draft PR for this.
> >
> > -Ekr

Re: [TLS] Key Hierarchy

2015-09-22 Thread Hugo Krawczyk
On Sun, Sep 20, 2015 at 9:56 PM, Brian Smith  wrote:

> On Sun, Sep 20, 2015 at 4:58 PM, Eric Rescorla  wrote:
>
>> https://github.com/tlswg/tls13-spec/pull/248
>>
>> Aside from some analytic advantages
>>
>
> What are the analytic advantages?
>

The advantages are: a cleaner separation of keys derived from ES and SS, a
simpler proof argument (via the explicit functional separation of extract
and expand steps), and the ability to represent the whole key derivation
scheme via the extract/expand steps or via full HKDF calls, whichever is
more convenient (the latter gives significant flexibility to an
implementation depending on its API to the full HKDF function or to its
extract and expand components).
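
As a small illustration of that flexibility (my own sketch, not text from the
draft): the RFC 5869 functions compose, so an implementation can call the full
HKDF or its Extract and Expand components and obtain the same bytes. Note also
that the Extract output is always one hash length (32 bytes with SHA-256),
regardless of the size of the input key material:

import hmac, hashlib

HASH = hashlib.sha256
HASH_LEN = HASH().digest_size  # 32 bytes for SHA-256

def hkdf_extract(salt, ikm):
    # PRK = HMAC-Hash(salt, IKM); the output is always HASH_LEN bytes
    return hmac.new(salt or b"\x00" * HASH_LEN, ikm, HASH).digest()

def hkdf_expand(prk, info, length):
    # T(1) || T(2) || ... truncated to 'length' octets (RFC 5869, Section 2.3)
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
        out += block
        counter += 1
    return out[:length]

def hkdf(salt, ikm, info, length):
    # the full HKDF is exactly Extract followed by Expand
    return hkdf_expand(hkdf_extract(salt, ikm), info, length)

# Both call styles yield the same output, which is the implementation
# flexibility mentioned above:
assert hkdf(b"salt", b"ikm", b"info", 42) == \
       hkdf_expand(hkdf_extract(b"salt", b"ikm"), b"info", 42)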


> Also, a question that applied even to the older design: I remember the
> HKDF paper stating that before it is safe to use a value
> as an HKDF salt, it must be authenticated. But, in both the old and new
> designs it seems like an authenticated value is being used as the salt in
> the HKDF-Extract(mSS, mES) operation. What does this mean for the security
> analysis?
>

It seems that when you say "an authenticated value" you actually mean "an
unauthenticated value". If I got it wrong let me know.
Assuming this interpretation of your question, let me point out that the
value mSS is server-authenticated by virtue of g^s being authenticated
(via a server's signature or a server configuration), hence it complies
with the RFC (and paper).


> One of the notes in the new design draws some attention to the strange
> fact that we compress the output of the ECDHE operation to the length of a
> digest function that is independent of the length of the ECDH keys used.
> For example, if we used P-256 in the ECDHE operation for a AES-128-GCM
> cipher suite, we'd compress the output to 256 bits using HKDF-Extract with
> SHA-256. But, if we used P-521 in the ECDHE operation for the same cipher
> suite,  we'd still compress the output to 256 bits using HKDF-Extract with
> SHA-256. That seems wrong. I would guess it makes more sense to choose the
> HKDF digest algorithm based on the size of the ECDHE key. Note that in the
> NSA Suite B Profile for TLS, they fixed this by requiring a more rigid
> relationship between the ECDHE key size and the cipher suite than what TLS
> requires. See [1]. I think it's worth considering whether the current
> (older and newer) design is better or worse than a design like the
> NSA Suite B Profile in this respect.
>

Ekr answered this. If you still feel something is wrong let us know.

Hugo


>
> [1] https://tools.ietf.org/html/rfc6460#section-3.1.
>
> Cheers,
> Brian
> --
> https://briansmith.org/
>
>


Re: [TLS] I-D: CipherSuites for Kerberos + DH

2015-10-11 Thread Hugo Krawczyk
On Sun, Oct 11, 2015 at 9:46 AM, Watson Ladd  wrote:

> On Sun, Oct 11, 2015 at 8:17 AM, Ilari Liusvaara
>  wrote:
> > On Sun, Oct 11, 2015 at 09:25:10AM +0200, Rick van Rein wrote:
> >> > *From:* internet-dra...@ietf.org
> >> >
> >> > Name:   draft-vanrein-tls-kdh
> >> > Revision:   00
> >>
> >> Hello TLS WG,
> >>
> >> I would like to propose new CipherSuites for TLS.  The cryptography is
> >> founded on Kerberos authentication and DH encryption, cryptographically
> >> bound together.  The mechanism uses mutual authentication, although
> >> clients may use anonymous tickets.
> >>
> >> Any feedback that you may have (technical, or WG-procedural) is kindly
> >> welcomed.  I will also send this to the Kitten WG.
> >
> > Some quick comments:
> > - The signed DH share does not look to be bound to anything (crypto
> >   parameters negotiation, randoms, server key exchange, etc..). I can't
> >   offhand say what that would lead to, but it looks even worse than
> >   TLS ServerKeyExchange, which has known vulnerabilities due to
> >   lack of binding to things like ciphersuite.
> > - The ciphersuite list looks bad: 1) IDEA (bad idea), CBC
> >   (don't use), apparent SHA-1 prf-hash (REALLY bad idea)[1][2].
> > - Even use of DH is questionable.
>
> I would suggest piggybacking on the PSK mode, using the key Kerberos
> provides at both ends as the PSK key. This would address all of these
> issues in TLS 1.3
>

That's the right solution and this is why we need modular designs with
generic security (generic = not tailored to a specific use case). It allows
you to accommodate cases you did not necessarily think about when designing
the protocol.

Hugo



> Sincerely,
> Watson
>


[TLS] OPTLS paper posted

2015-10-15 Thread Hugo Krawczyk
The OPTLS paper (preprint) explaining the rationale of the protocol and its
analysis is posted here: http://eprint.iacr.org/2015/978.

The OPTLS design provides the basis for the handshake modes specified in the
current TLS 1.3 draft including 0-RTT, 1-RTT variants, and PSK modes (client
authentication is not covered). OPTLS dispenses with elements that are not
essential to achieve the basic cryptographic security of the protocol.
By following such a "minimalistic" approach, the OPTLS design provides the
flexibility of building different protocol variants that provide varied
performance trade-offs and security features. Some of these variants give rise
to the current TLS 1.3 modes while others may be useful in the future. In the
latter class it is worth noting the ability to obtain a protocol that completely
eliminates online signatures while keeping most of TLS 1.3 unchanged.

The analysis part of the paper covers the basics of key exchange security.
More comprehensive analyses, including validation of TLS 1.3 specifications
and implementations, are expected to be covered by future work.

We would like to take this opportunity to thank the TLS Working Group for
insightful discussions and invaluable feedback that led to this work.

Hoeteck and Hugo


Re: [TLS] Should we use proof-of-possession rather than signatures?

2015-11-24 Thread Hugo Krawczyk
On Tue, Nov 24, 2015 at 12:53 PM, Mike Hamburg  wrote:

>
>
> Sent from my phone.  Please excuse brevity and typos.
>
> On Nov 24, 2015, at 09:01, Eric Rescorla  wrote:
>
>
> On Tue, Nov 24, 2015 at 8:25 AM, Bill Cox  wrote:
>
>> Much of the world seems to have switched to Schnorr-signature inspired
>> ECC signature schemes such as ECDSA-P256 and Ed25519.  These schemes are
>> very fast, but require two point multiplications to do a Schnorr-style
>> verification.  A simpler proof-of-possession can be verified with only one
>> point multiplication.
>>
>> The server authentication scheme used in QUIC is for the server to prove
>> possession of the static key when it encrypts the new ephemeral key share.
>> The trick is to take advantage of the key shares that have already been
>> computed.  The client has already computed its ephemeral keyshare, and the
>> server just uses its static keyshare from the server config.  The
>> CertificateVerify message could be generated by the server computing the
>> ECDHE shared secret between its static secret and the client's ephemeral
>> keyshare, and then encrypting the client random as its proof.
>>
>
> This is insecure. You need to MAC the whole handshake, especially the
> server ephemeral.
>
> The client verifies the proof by decrypting the nonce.  As with Schnorr
>> signatures, creating the proof takes only one multiply: in this case the
>> server multiplies the client's keyshare by its static keyshare secret.
>> Instead of having to do two scalar point multiplications, the client only
>> has to multiply the server's static keyshare by its ephemeral keyshare
>> secret.  The proof is also smaller: 32 bytes vs 72 for ECDSA-P256.
>>
>
> The size advantage of proof of possession is significant if the server
> actually has a DH cert, one for the same group as the client's ephemeral.
>
> This is a sort of complicated question.
>
> In general, servers have signature keys, not static DH keys. QUIC bridges
> this by
> having the server generate an offline signature over a static DH key, but
> TLS explicitly
> rejected this as a generic approach because of concerns about the impact
> of producing
> a long-term delegated credential, especially if generic TLS credentials
> could be used to
> do so (see the extensive discussion on the list a while back on the
> mailing list as well
> as [0]). So, the current design requires the server to prove present
> possession of the
> signing key, not just that it possessed it at some point.
>
> It's correct that demonstrating proof of possession of a long-term DH
> share is
> somewhat faster than signatures. There are two potential ways to do this
> with
> TLS while retaining the guarantees above:
>
>
> Correct-ish. For example, the current implementation of ed448 takes 463k
> skylake cycles (new cpu, top of the chart, I'm on a phone, sorry) to
> compute ecdh, which would need to happen twice. But it takes 162kcy to sign
> and 509k to verify, for a total of 671k vs 926k.  Signing favors the server
> while double DH favors the client; there are good reasons to go in either
> direction in this.
>
> Presumably the two server scalar multiplications could be combined with
> dual Shamir's trick, at which point double DH would be slightly faster than
> sigs, but I don't have an implementation of that lying around.  There is
> also a different calculation if the client has precomputed a table from the
> server's static key, but nobody does that and I'd guess the results are
> similar anyway.
>
> Proof of possession is a bigger win if you go with MQV.
>
> 1. Issue DH (or probably ECDH) certificates.
> 2. Have a certificate extension that indicates that the certificate can be
> used
> for offline signatures (following a suggestion by Hugo Krawczyk)
>
> The general sense of the WG is that while these are both good ideas, ECC
> is now
> so fast that they can be pursued separately rather than in the critical
> path. If you're
> interested in working on that, it would be great to get someone on it, so
> please
> contact me or the chairs offline :)
>
>
> I agree that the speed and size savings are not necessarily worth the
> complexity. If we were rolling a new protocol from scratch they probably
> would be though.
>

The all-DH-based solution, with DH certificates, does not add complexity
but rather simplifies the protocol and analysis, and opens the option of
more efficient protocols (e.g. MQV-like ones). But the world does not seem
ready to depart from the beloved signature certificates.

Hugo




>
>
>
>> This proof-

Re: [TLS] bikeshed: Forward Security or Secrecy?

2015-11-30 Thread Hugo Krawczyk
The more common term is "forward secrecy" - indeed, the normal definition
[1] refers specifically to the secrecy of session keys or ephemeral key
material after being deleted. Other elements of security such as
authentication and integrity are irrelevant so "secrecy" seems to be the
more appropriate term. There are other notions in cryptography that use the
term "forward secure", see
http://www.cs.bu.edu/~itkis/pap/forward-secure-survey.pdf.

[1] "the compromise of long-term keys does not compromise past session
keys"

Hugo


On Mon, Nov 30, 2015 at 4:27 PM, Dave Garrett 
wrote:

> Which do we like better: "Forward Security" or "Forward Secrecy"? The TLS
> 1.3 draft uses both interchangeably. The term is clearly in a state of
> flux, seeing as we've seemingly collectively agreed to drop the word
> "perfect" from the term, already. Personally, I prefer "security" because
> "secrecy" is a less used word, and to "forward secure" something is
> grammatically OK but to "forward secret" something is not. (e.g. the doc
> says 0RTT data is not "forward secure" but "forward secret" isn't really
> the right phrase here) Everything could be rephrased to use either, but I'd
> like to change all our use to just "forward secure" and stick a note
> somewhere on the terminology.
>
>
> Dave
>


Re: [TLS] Explicit use of client and server random values

2015-12-17 Thread Hugo Krawczyk
I have mentioned this in private conversations but let me say this here: I
would prefer that the nonces be explicitly concatenated to the handshake
hash.  That is,

handshake_hash = Hash(client random ||
                      server random ||
                      Hash(handshake_messages) ||
                      Hash(configuration))


The reason is that nonces are essential for freshness and session
uniqueness and I want to see them explicitly included in the
signed/mac-ed/KDF-ed information. I can envision a future variant/mode of
the protocol where parties do not transmit nonces but have a synchronized
state that they advance separately and use as nonces (e.g., for key
refreshing) - in such case the nonces would not be included in the
handshake-hash computation.

So while the redundancy of having them twice in the handshake_hash
calculation may be annoying, this adds robustness to the security (and
analysis) of the protocol.

Another reason for including them (in particular as the leading values) in
the computation of handshake_hash is to have them always located at the
same position in the hashed stream. It is needed to make sure that these
streams are unique per session (in theory, and maybe in practice, an
attacker may play games changing the boundary of nonces by changing
surrounding bytes in the stream).

If this augmenting of handshake_hash is not adopted then there should be a
note cautioning against excluding the nonces from the transmitted messages.
If possible, it would be good to move them to a fixed position (from the
start of the input to the handshake_hash).
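
A minimal sketch of the proposed computation (my illustration, assuming
SHA-256 and 32-byte nonces; the fixed-length, fixed-position nonces are what
make the hashed stream uniquely parseable):

import hashlib

def handshake_hash(client_random, server_random, handshake_messages, configuration):
    # nonces first, at fixed positions and fixed length, so every session's
    # hashed stream is unique and the boundaries cannot be shifted around
    h = hashlib.sha256()
    h.update(client_random)                                # 32 bytes
    h.update(server_random)                                # 32 bytes
    h.update(hashlib.sha256(handshake_messages).digest())  # Hash(handshake_messages)
    h.update(hashlib.sha256(configuration).digest())       # Hash(configuration)
    return h.digest()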

Hugo

On Thu, Dec 17, 2015 at 10:13 AM, John Foley  wrote:

> On 12/16/2015 04:28 PM, Dave Garrett wrote:
>
>> On Wednesday, December 16, 2015 04:15:00 pm John Foley wrote:
>>
>>> Thanks for answering my questions.  Have you considered adding KAT
>>> values for the key derivation steps?  This would be helpful to
>>> implementors.  RFC5869 already has KAT values for HKDF-Extract and
>>> HKDF-Expand.  But the TLS 1.3 spec has added HKDF-Expand-Label.
>>> Additionally, it would be useful to show intermediate KAT values for
>>> xSS, xES, mSS, and mES.
>>>
>> I suggest filing an issue or submitting a PR with a starting point set of
>> changes and discussing it with ekr.
>>
>>
> I've submitted https://github.com/tlswg/tls13-spec/issues/378.  If you
> give me a few days, I'll update this issue with KAT values per revision
> 10.  Since it sounds like there are changes forthcoming in this section of
> the draft, I'll hold off on the PR until later. Hopefully someone else will
> volunteer to verify my KAT values.
>
>


Re: [TLS] Explicit use of client and server random values

2015-12-17 Thread Hugo Krawczyk
On Thu, Dec 17, 2015 at 5:33 PM, Mike Hamburg  wrote:

>
>
> On Dec 17, 2015, at 12:11 PM, Eric Rescorla  wrote:
>
>
>
> On Thu, Dec 17, 2015 at 3:02 PM, Hugo Krawczyk 
> wrote:
>
>> I have mentioned this in private conversations but let me say this here:
>> I would prefer that the nonces be explicitly concatenated to the handshake
>> hash.  That is,
>>
>> handshake_hash = Hash(client random ||
>>                       server random ||
>>                       Hash(handshake_messages) ||
>>                       Hash(configuration))
>>
>>
>> The reason is that nonces are essential for freshness and session
>> uniqueness and I want to see them explicitly included in the
>> signed/mac-ed/KDF-ed information. I can envision a future variant/mode of
>> the protocol where parties do not transmit nonces but have a synchronized
>> state that they advance separately and use as nonces (e.g., for key
>> refreshing) - in such case the nonces would not be included in the
>> handshake-hash computation.
>>
>> So while the redundancy of having them twice in the handshake_hash
>> calculation may be annoying, this adds robustness to the security (and
>> analysis) of the protocol.
>>
>
> This change doesn't make implementation or specification significantly
> more difficult.
> Does anyone  else object or feel it makes analysis harder? :)
>
> -Ekr
>
>
> While I haven’t been following TLS 1.3 development all that closely, I
> will question this request.
>
> TLS is annoying to implement and analyze in part because it hashes
> more-or-less arbitrary parts of the handshake messages together, in
> arbitrary order, at arbitrary times.  Removal of all the explicit hashing
> of client/server random in TLS 1.3 makes it clearer what’s going on, and
> makes implementations simpler.
>

How does removal of explicit hashing of the nonces make things clearer?
What are the things that are made clearer?


>  Some of the crypto operations still feel pretty arbitrary (particularly
> Finished), but things seem to be improving overall.  In this context, it
> feels like adding client random and server random back to the hash is a
> regression.
>

What do you mean by "adding client random and server random back to the hash
is a regression"?
Why "back"? Were they removed? What's the regression?
You are probably not suggesting to omit them, right? Are you worried about
the redundancy of being hashed twice? Is it a security issue or an
implementation issue?

>
> From an analysis point of view, the client and server random are parseable
> from Hash(handshake messages) because they are concatenated with framing
> information.
>

They are parseable, but I am not sure they are *uniquely* parseable - a
fixed location in the stream does make them uniquely parseable.


>  But here, they are concatenated without framing information.
>

The nonces are the main framing information - they are the (honest
parties') unique identifier of the handshake.


> So I don’t understand Hugo’s contention that the old scheme leads to
> trouble if the nonce changes sizes in a later version, and that the new
> scheme does not.  It seems to me that the reverse is more likely to be true.
>

I'm clearly not following your argument.

Hugo

>
> Cheers,
> — Mike
>


Re: [TLS] Explicit use of client and server random values

2015-12-18 Thread Hugo Krawczyk
On Fri, Dec 18, 2015 at 3:55 PM, Mike Hamburg  wrote:

> Whoops, big-R to reply all...
>
> On Dec 17, 2015, at 9:39 PM, Hugo Krawczyk  wrote:
>
>
> On Thu, Dec 17, 2015 at 5:33 PM, Mike Hamburg  wrote:
>
>>
>>
>> On Dec 17, 2015, at 12:11 PM, Eric Rescorla  wrote:
>>
>>
>>
>> On Thu, Dec 17, 2015 at 3:02 PM, Hugo Krawczyk 
>> wrote:
>>
>>> I have mentioned this in private conversations but let me say this here:
>>> I would prefer that the nonces be explicitly concatenated to the handshake
>>> hash.  That is,
>>>
>>> handshake_hash = Hash(client random ||
>>>                       server random ||
>>>                       Hash(handshake_messages) ||
>>>                       Hash(configuration))
>>>
>>>
>>> The reason is that nonces are essential for freshness and session
>>> uniqueness and I want to see them explicitly included in the
>>> signed/mac-ed/KDF-ed information. I can envision a future variant/mode of
>>> the protocol where parties do not transmit nonces but have a synchronized
>>> state that they advance separately and use as nonces (e.g., for key
>>> refreshing) - in such case the nonces would not be included in the
>>> handshake-hash computation.
>>>
>>> So while the redundancy of having them twice in the handshake_hash
>>> calculation may be annoying, this adds robustness to the security (and
>>> analysis) of the protocol.
>>>
>>
>> This change doesn't make implementation or specification significantly
>> more difficult.
>> Does anyone  else object or feel it makes analysis harder? :)
>>
>> -Ekr
>>
>>
>> While I haven’t been following TLS 1.3 development all that closely, I
>> will question this request.
>>
>> TLS is annoying to implement and analyze in part because it hashes
>> more-or-less arbitrary parts of the handshake messages together, in
>> arbitrary order, at arbitrary times.  Removal of all the explicit hashing
>> of client/server random in TLS 1.3 makes it clearer what’s going on, and
>> makes implementations simpler.
>>
>
> How does removal of explicit hashing of the nonces make things clearer?
> What are the things that are made clearer?
>
>
>
>>  Some of the crypto operations still feel pretty arbitrary (particularly
>> Finished), but things seem to be improving overall.  In this context, it
>> feels like adding client random and server random back to the hash is a
>> regression.
>>
>
> What do you mean by "adding client random and server random back to the
> hash is a regression"?
> Why "back"? Were they removed? What's the regression?
> You are probably not suggesting to omit them, right? Are you worried about
> the redundancy of being hashed twice? Is it a security issue or an
> implementation issue?
>
>>
>> From an analysis point of view, the client and server random are
>> parseable from Hash(handshake messages) because they are concatenated with
>> framing information.
>>
>
> They are parseable, but I am not sure they are *uniquely* parseable - a
> fixed location in the stream does make them uniquely parseable.
>
>
>
>>  But here, they are concatenated without framing information.
>>
>
> The nonces are the main framing information - they are the (honest
> parties') unique identifier of the handshake.
>
>
>
>> So I don’t understand Hugo’s contention that the old scheme leads to
>> trouble if the nonce changes sizes in a later version, and that the new
>> scheme does not.  It seems to me that the reverse is more likely to be true.
>>
>
> I'm clearly not following your argument.
>
> Hugo
>
>>
>> Cheers,
>> — Mike
>>
>
> Sorry Hugo, I misread the proposal as using the nonces and handshake hash
> when signing, but not elsewhere.  On rereading, it is still entirely
> consistent, and isn’t being added “back” anywhere, except perhaps to some
> memory structure in the handshake code.  It also definitely isn’t a
> security problem, and it isn’t as annoying for implementations as I had
> figured (but it does require extra memory, which I had hoped to see
> removed).
>
> However, I’m somewhat confused about your statement that the framing
> information is the honest parties’ unique identifier of the handshake.  I
> had thought that TL

Re: [TLS] Proposal: don't change keys between handshake and application layer

2016-02-18 Thread Hugo Krawczyk
I want to point out that the benefits of using the application key output by
the handshake protocol also for handshake traffic protection are not clear cut.
I cannot comment on the level of implementation simplification that motivates
this change, but I can comment on the cryptographic implications of this change.

Yes, TLS 1.3 can probably be proved secure even with this key merge, but it is
a proof of a *weaker* guarantee. When the application key is not used during
the exchange you can claim that the handshake protocol provides *generic* key
exchange security. Here "generic" means that you can use the key with *any*
application that requires a secret shared key. In contrast, using the
application key during the key exchange process itself can still guarantee
security for a *specific* application but not in general. That is, we can now
prove security for the specific application of this key to protecting TLS
record layer traffic (achieved via TLS message type separation). But if you
want to take that same key and use it for a variant of the protocol or in a
different application [1], then you need to go and re-analyze [2] the protocol
with that specific application in mind.

Given the history of weaknesses in TLS I would prefer a more conservative
design. TLS evolves and is used in places and ways not contemplated by the
designers. For example, no one contemplated originally that people would use a
MAC value or a session key as collision-resistant identifiers for sessions, but
people did. Models and requirements need to be strengthened over time, not
weakened. The merge of handshake and application keys is a weakening.

Finally, a side effect of this change is that it limits the generality of the
protocol. Until now the protocol was ready for an immediate variant where the
server uses a Diffie-Hellman certificate. It would require a minimal adaptation
of the current specification, where the CertificateVerify message is omitted
and an entry in the key derivation table is added. But to encrypt the DH
certificate you'd need to derive a key from g^xy only (which is not the case
for the application key). The DH variant is not currently contemplated, but it
is one that can be handy in the future (it is a simpler protocol than what we
have now and can have benefits in a "post-quantum transition").

Hugo

PS: I wrote more about this topic here:
https://www.ietf.org/mail-archive/web/tls/current/msg13625.html
(it was in the context of the need to encrypt the Finished message, but the
issue I was commenting on was the same as here, namely, the re-use of the
application key during the handshake).

[1] Given the wide availability of TLS implementations, it makes sense for
different applications to use the TLS handshake as their key-exchange protocol.
Fortunately, there is the exporter key that is still secure in the stronger
generic sense and can be used for that purpose. Even then, I would expect
people to use the application key as the natural session key, or be tied to TLS
in a way that requires using the same application key. And, in any case,
changes to the record layer protocol would require a re-assessment of security
in the combined handshake-record protocol.

[2] The analysis of a protocol that breaks the generic and modular security
principles is doable but more complex, and needs to be re-assessed with changes
to the application protocol.
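
To illustrate the separation mentioned in [1] (a sketch under assumed names
and labels, not the draft's exact strings): the exporter secret and the
record-layer traffic secret are siblings derived from the same master secret
under distinct labels, so a key handed to another application via the exporter
neither exposes nor depends on the record-layer keys.

import hmac, hashlib

def expand_label(secret, label, context, length=32):
    # illustrative single-block HKDF-Expand with label and context as the info
    return hmac.new(secret, label + context + b"\x01", hashlib.sha256).digest()[:length]

master_secret = b"\x01" * 32   # placeholder for the negotiated master secret
handshake_hash = b"\x02" * 32  # placeholder transcript hash

traffic_secret  = expand_label(master_secret, b"application traffic secret", handshake_hash)
exporter_secret = expand_label(master_secret, b"exporter master secret", handshake_hash)
# Distinct labels make the two outputs computationally independent under the
# PRF assumption on HMAC, which is what keeps the exporter "generic".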


On Thu, Feb 18, 2016 at 9:54 AM, Eric Rescorla  wrote:

>
> On Thu, Feb 18, 2016 at 7:32 AM, Wan-Teh Chang  wrote:
>
>> On Wed, Feb 17, 2016 at 7:49 PM, Eric Rescorla  wrote:
>> >
>> > TL;DR.
>> > I propose that we should not change keys between the handshake
>> > and application traffic keys.
>>
>> Hi Eric,
>>
>> I'm not sure if I understand your one-sentence summary, because
>> "change keys between the foo and bar keys" is hard to understand. Are
>> you proposing that we should use the same keys to encrypt handshake
>> messages and application data?
>
>
> Yes, precisely.
>
> -Ekr
>


Re: [TLS] Proposal: Simplified Key Schedule

2016-02-18 Thread Hugo Krawczyk
I agree that once you remove the requirement to derive a key from g^xy (=ES)
for protecting a static DH key then the KDF scheme can be simplified as shown
(or even further - see below).

Note that this is (almost) exactly the original KDF scheme of OPTLS as I
presented in Dallas
https://www.ietf.org/proceedings/92/slides/slides-92-tls-4.pdf
See slide 4 for the KDF scheme and slide 3 for the key names; it can be
confusing as what I call ES (for Ephemeral-Static, or g^xs) is now called
SS and what I call EE (for Ephemeral-Ephemeral, or g^xy) is now called ES.
Also note that I simplified the presentation in these slides by referring to
both HKDF-extract and HKDF-expand by PRF.

Anyway, from here you can see that the last HKDF in your scheme (with 0 salt)
is not needed. You can derive the RMS, EMS keys directly from the second HKDF
(as siblings of 1-RTT Traffic Keys). Am I missing something?
If the third HKDF is only to accommodate a DH-cert variant then I don't think
it's worth it (if in that variant we will need a special key derivation step
anyway then no need to plan for it now).
If this third HKDF is needed for other reasons let me know.

What I do miss in this scheme is the derivation of the Finished keys. I hope
you do not intend to use the application key for this!

I also want to stress, for the record, that this simplification has nothing to
do with using the application keys for handshake protection. That
"optimization" is orthogonal to this KDF simplification and hopefully will be
reverted (*).

(*) You may say I'm a dreamer, But I may not be the only one :-)

Hugo

PS: I have a disagreement with you in terms of the protocol now being
"signature based". Yes there are signatures in the protocol but not all modes
use them and they are not always needed. In my eyes the logic of the protocol
is best seen as DH-based with authentication occurring through the server
Finished MAC (with a key derived from SS which can take the values of g^xs,
g^xy or PSK). That is common to *all* modes, including 0-RTT and PSK which do
not build on signatures. This is how the protocol was built originally and
that structure remains in spite of the added signatures.


On Thu, Feb 18, 2016 at 4:05 PM, Eric Rescorla  wrote:

> Hi folks,
>
> TL;DR.
> Let's simplify the key schedule.
>
>
> DETAILS
> This is the second in a series of proposed simplifications to TLS 1.3
> based on implementation experience and analysis once the protocol
> starts to harden.  The following suggestion comes out of conversations
> with Richard Barnes, Karthik Bhargavan, Antoine Delignat-Lavaud,
> Cedric Fournet, Markus Kohlweiss, Martin Thomson, Santiago Zanella and
> others.
>
> The current key schedule is elegant but it is actually more than we
> need in that it allows SS to be known either before or after ES. If we
> assume (as is always true in the current TLS 1.3 modes) that SS is
> known before or at the same time as ES, then we can design a simpler
> scheme which looks more like a ladder. Something like:
>
> 0
> |
>  SS -> HKDF [ClientHello + Context]
> |  \
> |   \
> vv
> X1   0-RTT Traffic Keys *
> |
> |
> v
>  ES -> HKDF [ClientHello, ServerHello]
> |  \
> |   \
> vv
> X21-RTT Traffic Keys *
> |
> |
> v
>   0 -> HKDF [ClientHello...ClientFinished]
> |
> |
> v
> RMS, EMS
>
> As should be apparent, this key schedule is well-suited to the
> simplified key change schedule in my previous message.
>
> Note 1: It might be attractive to not even bother with the first stage
> if you aren't doing 0-RTT. It's not necessary then.  However, this is
> just an optimization. Also, if you don't want to extract an RMS or an
> EMS you can skip the last stage (this is compatible).
>
> Note 2: The IKM for the final HKDF is 0, but in principle we could use
> it to add some sort of new keying material, for instance g^xs if we
> were using static DH certificates (see below).
>
>
> In line with Hugo's message earlier today, the major argument against
> this design is that it is more oriented towards a signature-based
> system, which TLS 1.3 is today, than towards a DH certificate-based
> system. To elaborate on this a bit, in a DH certificate-based system,
> the server authenticates by proving knowledge of g^xs (and hence
> s). In the current TLS 1.3 design you can do this trivially by
> replacing the signature in the server CertificateVerify with a MAC
> over the transcript (using g^xs as the key) [again, analysis needed.]
> This is s

Re: [TLS] Proposal: Simplified Key Schedule

2016-02-19 Thread Hugo Krawczyk
Couple of comments below.

On Fri, Feb 19, 2016 at 9:14 AM, Eric Rescorla  wrote:

>
>
> On Fri, Feb 19, 2016 at 2:12 AM, Karthikeyan Bhargavan <
> karthik.bharga...@gmail.com> wrote:
>
>>
>> Note that this is (almost) exactly the original KDF scheme of OPTLS as I
>> presented in Dallas
>>
>>
>> Indeed, Ekr’s proposed scheme looks much like you original diagram.
>>
>
> I would like to clarify that this isn't *my* scheme, though I think it's a
> good
> one. It came out of a long discussion with the people listed in my original
> message and probably is most closely derived from something Karthik
> drew. Sorry if I gave an impression to the contrary!
>
> In any case, I'm glad to see that it's close to Hugo's original diagram,
> since that's a good indication that we're on the right track!
>
>
> Anyway, from here you can see that the last HKDF in your scheme (with 0
>> salt)
>> is not needed. You can derive the RMS, EMS keys directly from the second
>> HKDF
>> (as siblings of 1-RTT Traffic Keys). Am I missing something?
>>
>>
>> The purpose of the third HKDF is to bind the handshake context (the full
>> transcript) into
>> the resumption and exporter keys. It adds no new key material.
>>
>
I am still confused about this. Call K the key output by the Extract part
of the second HKDF.
You derive 1-RTT Traffic Keys as an output from Expand(K, whatever).
Why can't you use the same K to derive RMS and EMS?
Namely, compute RMS = Expand(K, label="RMS Derivation",
ClientHello...ClientFinished).
Similarly for EMS.
This would include all the information you wanted in the derivation of each
key,
and it is still a valid HKDF computation (namely, it goes through extract
and expand).
Am I missing something?


> Yes, that was my understanding as well. I see that Ilari suggested that
> this might
> not be necessary (presumably as a consequence of having the full shares
> included in the stage 2 transcript rather than just the nonces as in TLS
> 1.2.
> The imperative here is to avoid creating an attack like triple handshake or
> the Cremers et al. attack on the PSK-resumption. In any case, having the
> EMS and RMS keys derived at the end also helps enforce the logic that you
> should never be exporting/resuming until you have actually completed the
> handshake.
>
> What I do miss in this scheme is the derivation of the Finished keys. I
>> hope
>> you do not intend to use the application key for this!
>>
>>
>> Indeed those are missing in the picture, but I believe Ekr means to
>> derive separate
>> finished keys alongside the 1-RTT Traffic keys (right after the
>> ServerHello)
>>
>
> Affirmative. Error in my diagram.
>
> I also want to stress, for the record, that this simplification has
>> nothing to
>> do with using the application keys for handshake protection.
>>
>>
>> I agree, this key schedule simplification seems orthogonal.
>>
>
> Yes, but... In the current 1-RTT mode, there is a derivation stage
> that does not appear on this diagram, one that includes the transcript up
> to the
> server CertificateVerify and is used to derive the application traffic
> keys. So,
> if part of the intent of this diagram is to reduce the number of points at
> which
> we derive, we will need to derive the application traffic keys at stage 2
> even if they are separate from the handshake traffic keys. I believe this
> something you proposed, but I just want to clarify that this diagram would
> entail
> a change in the timing of key derivation, even if not the number of keys.
>
>
>
>> For what it’s worth, I am building a symbolic model of this new key
>> schedule and will
>> report my analysis results at TRON. It’s not a cryptographic proof, but
>> it should shake
>> out early logical bugs in the design, if any.
>>
>> Best,
>> Karthik
>>
>>
>> That "optimization"
>> is orthogonal to this KDF simplification and hopefully will be reverted
>> (*).
>>
>> (*) You may say I'm a dreamer, But I may not be the only one :-)
>>
>> Nothing to be reverted yet, it's just a proposal. If it's a bad idea (and
> being prohibitively
> hard to analyze is one reason it might be) then we shouldn't adopt it.
>

It complicates analysis, breaks generality and modularity, weakens the
security guarantee, but definitely not "prohibitively hard to analyze".

Hugo


>
> Hugo
>>
>> PS: I have a disagreement with you in terms of the protocol now being
>> "signature based". Yes there are signatures in the protocol but not all
>> modes
>> use them and they are not always needed. In my eyes the logic of the
>> protocol
>> is best seen as DH-based with authentication occurring through the server
>> Finished MAC (with a key derived from SS which can take the values of
>> g^xs,
>> g^xy or PSK). That is common to *all* modes, including 0-RTT and PSK
>> which do
>> not build on signatures. This is how the protocol was built originally and
>> that structure remains in spite of the added signatures.
>>
>>
> Sorry about the misleading terminology. I just meant tha

Re: [TLS] Proposal: don't change keys between handshake and application layer

2016-02-19 Thread Hugo Krawczyk
On Fri, Feb 19, 2016 at 12:58 PM, Cedric Fournet 
wrote:

> As pointed out by Karthik, we are not strongly advocating this
> simplification, but we do not think it would weaken the security of TLS.
> Details below.
>

I am glad you are not strongly advocating this.
I strongly advocate not using the application keys to protect handshake
messages.
I want a well-delimited point where the output session key is as good as a
fresh, never-used secret key, and that session key be used to protect
subsequent record layer traffic.

As I said, I cannot weigh this against implementation advantages of the
freshness-violation approach; but these have to be *very* significant to be
worth the security weakening.

And yes, I think this is a security weakening.
We are talking about two levels of guarantee.
One, where the TLS key exchange protocol outputs a secret key that is good
for *any* application that requires two parties to share a secret key.
The other, where the use of the key produced by this protocol is secure
enough for a *specific* application, namely, TLS record layer (at least in
its current definition).

The second is clearly weaker. It has a more limited (secure) use scope.
For example, it means that changes to the record layer protocol could
require a re-assessment of the security of the handshake, and certainly
this is the case for a different application that somehow modifies the
record layer or makes assumptions different than the intended use of the
record layer.
With a "generically secure" handshake this reassessment of the handshake
would not be needed; you would only need to verify that your application is
secure under an (ideally) shared secret key.

I am sure we all agree that considering future changes to the record layer,
changes to applications using the handshake, etc. is not a theoretical
concern.
The last thing you want to assume is that the TLS environment and the
protocol's evolution are static.

I think that a conservative take on this important point is the right way
to go.

Hugo


>
> -Cédric, with the miTLS team
>
>
>
>
>
> In the following, I only consider the record layer keys, which are used
> for authenticated encryption; I ignore all other derived key materials. The
> TLS layered design goes as follows:
>
> - The handshake provides fresh keys to the record layer (depending on its
> internal state machine) and keeps running;
>
> - The record layer uses the current keys for encrypting *all* traffic,
> mixing handshake messages, alerts, and application data (once enabled).
>
>
>
> These keys are used sequentially, and only by the record-layer. They are
> not directly used within the handshake protocol itself, or by the TLS
> application.
>
> - Since the sequence of keys is meant to protect the whole stream of TLS
> fragments, one needs to authenticate each point in the stream where there
> is a key change, to prevent any traffic truncation. As discussed in another
> thread, we believe this is correctly handled in draft#11, but each key
> change remains a source of complication [1].
>
> - There are excellent reasons to change keys between 0-RTT and 1-RTT
> (stronger key materials, forward secrecy).
>
> - Otherwise, what matters is the provenance of the key (how it was
> derived, what identities are associated with it) and there is no point
> changing keys with the same provenance.
>
>
>
> The situation is particularly clear in 0-RTT: in draft#11, the client
> always derives two record-layer keys from the same materials, and uses each
> of these keys in turn to encrypt half of its 0-RTT flight, with a Finished
> message in-between that signals the key change. This key change does not
> degrade security, but seems unnecessary.
>
>
>
> In 1-RTT, this is less obvious because, in draft#11, the handshake keys
> and application-data keys are not exactly derived from the same materials.
> We believe this should be fixed (see the “simplified key schedule” thread).
> Then the same argument applies: the key change is not terrible, but it is
> an unnecessary complication.
>
>
>
> I disagree with Hugo that using the same record-layer key for handshake
> and application data yields a proof of a weaker guarantee. As far as I can
> tell, this is a technicality in the cryptographic proof of the handshake,
> and we still get the same security guarantees for TLS users [2]. I also
> like simple proofs of generic security for the handshake core, but I do not
> see how they can cover the other features of TLS [e.g. 3, and late
> handshake messages] with or without the simplification. Given the choice
> between a simpler protocol that it is easier to implement correctly, and
> simpler proofs for a core, I would rather simplify the overall protocol and
> do the extra proof work.
>
>
>
> One may argue about what would happen if some of the record-layer keys
> were mis-used, against the TLS specification (despite its export mechanism)
> and conclude that we need more, not less key changes [3]. But I would not
> c

Re: [TLS] 0.5 RTT

2016-02-23 Thread Hugo Krawczyk
On Tue, Feb 23, 2016 at 3:49 PM, Karthikeyan Bhargavan <
karthik.bharga...@gmail.com> wrote:

> There are some fears about 0.5-RTT data that do not necessarily apply to
> post-client authentication, at which point at least both parties have sent
> their Finished messages.
>
> When the server is sending 0.5-RTT data, this is effectively false-start;
> the client hasn’t confirmed its choice of ciphersuites yet, and downgrade
> attacks may become possible.
> To be principled, we should look at the current browser best practices for
> false start and  make sure that 0.5-RTT data abides by them.
> For example, one may argue that 0.5-RTT is actually a bit worse than
> false-start in TLS 1.2  where at least the peer’s presence and DH key has
> been authenticated before false start data is sent.
> There is no such guarantee in 0.5-RTT.
>
> The question is whether this is just a server-side concern, or does the
> client need to be aware of 0.5-RTT.
> I don’t know the answer to that, but if we wanted to setup a 0.5-RTT rule,
> I would say that it should *only*
> be sent during PSK-resumption handshakes, because the PSK authenticates
> the peer, and because
> the server is likely responding to some 0-RTT data sent by the client.
>
> Again maybe this breaks some server push scenarios that I am not aware of.
>
> Best,
> Karthik
>
> PS: The OPTLS proof does not require ClientFinished, but they do not
> consider downgrades or client auth.
>

That's right, we do not consider downgrades or client authentication but
Martin's suggestion explicitly only applies to the case where the server
does not require client authentication so the analysis holds in that case.
As for downgrades, this will be discovered by the server when receiving the
client's Finished message. So the only problem I see is that the server
might have been "tricked" to send the 0.5-RTT data with less protection
than intended by the (honest) client. But for that there is no need for
downgrade. The attacker could have generated the exchange with the weaker
ciphersuite by himself (acting as a client). If the server accepts that
ciphersuite it means he is willing to send that particular data with that
level of security to *anyone*. That is the meaning of not requiring client
authentication.

One useful feature of the client's Finished is to catch 0-RTT replays. But
even then I am not sure what damage can be done to the 0.5 data. Either the
attacker knows the client's keying material (say PSK) and can generate the
client Finished by himself, or he doesn't know that keying material but then
he cannot decrypt the 0.5 data.

Am I missing something on these particular points?



> On the whole, cryptographers including the authors of OPTLS would be
> happier with 0.5-RTT keys not being the same as 1-RTT keys. Again, so far,
> this is a matter of taste and proof modularity.
>


Agreed.

Hugo



> > On 23 Feb 2016, at 11:27, Martin Thomson 
> wrote:
> >
> > Karthik raised some concerns here, and I think that we have some
> > thinking to do.  But I don't think that it is intractable, nor even
> > hard, to reason about this problem.
> >
> > The only thing that the client's second flight provides is
> > authentication.  The Finished isn't needed if there is no client auth
> > [P].  Hugo's presentation at TRON did not include a client Finished in
> > the earlier, simpler examples.
> >
> > Thus, based on Watson's observation that the client authentication is
> > removable, we might conclude that the handshake is complete from the
> > perspective of a server that does not require client authentication.
> > There are still reasons we might like to keep the client
> > authentication in the handshake, but those are decisions we can make
> > on engineering grounds.
> >
> > If post-handshake client authentication is OK, then 0.5 RTT is equally
> > OK [X].  I would assert that any decision about changing keys after
> > the client Finished applies to post-handshake client auth (or vice
> > versa).
> >
> > If that logic is sound, then I see no reason we can't have some very
> > simple advice:
> >
> >  1. if the server does not request client authentication, it can send
> > application data immediately following its Finished
> >
> >  2. if the server requests client authentication, it MUST NOT send
> > application data until it receives and validates the client's first
> > flight.  UNLESS the server is certain that the data it sends does not
> > depend on the client's identity (that is, it would send this
> > application data to anyone).
> >
> > From an API perspective, I believe that we should recommend that there
> > be a separate function for sending in condition 2, just as we are
> > going to recommend that there is a separate function for sending 0-RTT
> > data (as well as there being one to receive on the server end).
> >
> > Based on this, we should recommend different points in time for the
> > server API to report that the handshake is "complete" at a server.  In

Re: [TLS] 0.5 RTT

2016-02-23 Thread Hugo Krawczyk
Karthik, I think that what you are pointing to are cases where the client
*is* authenticated via its PSK.

There is an important distinction between PSKs that have been authenticated
by the client (in a previous exchange) and those that are not.
Any PSK-based handshake that uses a (previously) client-authenticated PSK
needs to be treated as client-authenticated, and replay needs to be handled
with utmost care, including the need to validate via the client Finished
that the current exchange is not a replay. In the case where the PSK is
client-unauthenticated (e.g. a resumption from a server-only authenticated
handshake) and the server does not request client authentication, the need
for the client Finished is less crucial.
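
For concreteness, here is a toy paraphrase of that distinction in Python
(the predicate names are hypothetical, illustrative only, and not from any
spec or implementation):

    # Toy sketch of the rule of thumb above; not real server logic.
    def may_send_0_5_rtt_data(server_requests_client_auth: bool,
                              psk_is_client_authenticated: bool,
                              client_finished_verified: bool) -> bool:
        if server_requests_client_auth:
            # Wait for (and validate) the client's first flight.
            return client_finished_verified
        if psk_is_client_authenticated:
            # Treat as client-authenticated: guard against replay by
            # validating the client Finished first.
            return client_finished_verified
        # Client-unauthenticated PSK and no client auth requested:
        # the need for the client Finished is less crucial.
        return True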

Let me be clear, I prefer a conservative design to a more liberal one so if
we can do without 0.5 data then much better.

Hugo




On Tue, Feb 23, 2016 at 4:58 PM, Karthikeyan Bhargavan <
karthik.bharga...@gmail.com> wrote:

> ​That's right, we do not consider downgrades or client authentication but
> Martin's suggestion explicitly only applies to the case​ where the server
> does not require client authentication so the analysis holds in that case.
> As for downgrades, this will be discovered by the server when receiving the
> client's Finished message. So the only problem I see is that the server
> might have been "tricked" to send the 0.5-RTT data with less protection
> than intended by the (honest) client. But for that there is no need for
> downgrade. The attacker could have generated the exchange with the weaker
> ciphersuite by himself (acting as a client). If the server accepts that
> ciphersuite it means he is willing to send that particular data with that
> level of security to *anyone*. That is the meaning of not requiring client
> authentication.
>
>
> Yes Hugo, you’re right that when there is no client auth, the situation is
> less problematic.
>
> However, let’s note there may still be an implicit kind of authentication;
> for example, what if the client hello requests PSK-based 1-RTT with an
> old, broken cipher that the real client would never use?
> The server should not, in this case, send user-specific data under the
> old-broken cipher until it receives the client finished.
> Of course, this could be worked around by having a nice whitelist of
> ciphers, or possibly other designs.
> I am mainly pointing out that we need to be careful that the guarantees
> for 0.5-RTT seem to be strictly weaker than that for 1-RTT.
>
>
> One useful feature of the client's Finished is to catch 0-RTT replays. But
> even then I am not sure what damage can be done to the 0.5-RTT data. Either
> the attacker knows the client's keying material (say a PSK) and can generate
> the client Finished himself, or he doesn't know that keying material, in
> which case he cannot decrypt the 0.5-RTT data.
>
>
> Right, this is the other concern. Suppose a passive adversary records a
> client's 0-RTT data (under a PSK that is bound to an authenticated client).
> He can then go home and replay this 0-RTT request as many times as he
> wants and record the server’s 0.5-RTT responses.
> They will be encrypted, sure, but maybe even the length of those responses
> may give the attacker useful dynamic information (e.g. he can tell whether
> the user’s bank balance went up or down by a digit).
>
> Yes, this attack is always possible for a persistent passive adversary,
> and we can mitigate it with length-hiding techniques, but it gives us an
> example of how 0.5-RTT may provide new avenues for attacking encrypted
> connections.
>
> Best,
> Karthik
>
>
>
> Am I missing something on these particular points?
>
>
>
>> On the whole, cryptographers including the authors of OPTLS would
>> be happier with 0.5-RTT keys
>> not being the same as 1-RTT keys. Again, so far, this is a matter
>> of taste and proof modularity.
>
>
> ​Agreed.
>
> Hugo
>
> ​
>
>
>> > On 23 Feb 2016, at 11:27, Martin Thomson 
>> wrote:
>> >
>> > Karthik raised some concerns here, and I think that we have some
>> > thinking to do.  But I don't think that it is intractable, nor even
>> > hard, to reason about this problem.
>> >
>> > The only thing that the client's second flight provides is
>> > authentication.  The Finished isn't needed if there is no client auth
>> > [P].  Hugo's presentation at TRON did not include a client Finished in
>> > the earlier, simpler examples.
>> >
>> > Thus, based on Watson's observation that the client authentication is
>> > removable, we might conclude that the handshake is complete from the
>> > perspective of a server that does not require client authentication.
>> > There are still reasons we might like to keep the client
>> > authentication in the handshake, but those are decisions we can make
>> > on engineering grounds.
>> >
>> > If post-handshake client authentication is OK, then 0.5 RTT is equally
>> > OK [X].  I would assert that any decision about changing keys after
>> > the client Finished applies to post-handshake clie

Re: [TLS] 0.5 RTT

2016-02-23 Thread Hugo Krawczyk
On Tue, Feb 23, 2016 at 5:08 PM, Martin Thomson 
wrote:

> On 23 February 2016 at 14:01, Karthikeyan Bhargavan
>  wrote:
> > The main downgrade concern, I think, is for the 0.5-RTT data’s
> confidentiality; i.e. it may have been sent encrypted under a broken cipher.
>
> Hmm, that's a good point.  So Antoine's analogy is closer to correct
> than I had thought, and the need for Finished remains.
>
> There's an argument that says that 0.5RTT data isn't confidential
> because the server would send it to anyone, but I don't agree with
> that viewpoint.


I would be interested to hear why you don't agree with that viewpoint.
It seems to imply that you are attaching some "client-specific semantics"
even to keys that were not authenticated by the client.
Understanding such semantics would be a good guide for when 0.5-RTT data may
be safe.

But the truth is that whatever these semantics are, under your above
viewpoint you should never send 0.5-RTT data.
Matching vague semantics to cryptographic security is too hard a problem
to get right.

(In particular, if these semantics may be based on things that happen
outside TLS, as Karthik and Watson were pointing out, then maybe we should
really put a "Surgeon General" warning on 0.5-RTT data of equal size to the
one on 0-RTT.)

Hugo



And we're potentially also handling 0-RTT data before
> sending 0.5 data.
>
> Like I said on the weekend, we don't have to solve every problem.
> None of the cipher suites in TLS 1.3 would fail to qualify as broken
> currently, but if they did, then logic similar to what we recommend
> for false start seems reasonable to me.  Other than that, we can
> simply document the shortcoming.  I don't think that any of this
> justifies a stronger response than that, and that includes extra key
> updates.
>


Re: [TLS] Remove DH-based 0-RTT

2016-02-23 Thread Hugo Krawczyk
On Tue, Feb 23, 2016 at 8:57 PM, Dave Garrett 
wrote:

> On Tuesday, February 23, 2016 02:03:53 pm Martin Thomson wrote:
> > I propose that we remove DH-based 0-RTT from TLS 1.3.
> >
> > As ekr's previous mail noted, the security properties of PSK-based
> > 0-RTT and DH-based 0-RTT are almost identical.  And DH-based 0-RTT is
> > much more complex.
> >
> > For those who love DH-based 0-RTT, and I know that some people are
> > fans, here's something that might make you less sad about removing it
> > from the core spec.  You can use DH out of band to negotiate a PSK.
> > You might even do this as an extension to TLS, but that's of less
> > value.
>
> I think there is a good argument for moving DH 0RTT into a TLS extension.
> Implementations that are explicitly not going to use it should not be
> expected to implement it and risk screwing it up. If we accept that premise
> that online DH 0RTT will be unlikely in practice, then we would be
> specifying it at least primarily for out-of-band use, and doing it via an
> extension will probably be cleaner and safer.
>

Combining this comment (which I agree with) with the following comment
from Watson in another thread:

> If they rely on extensions then either those
> extensions need to be included in the security proofs, or we need to
> make clear that they are not as secure as TLS 1.3, and that
> implementations which enable both of them can get completely wrecked
> in new and exciting ways.

We get that even if it is decided not to include DH-based 0-RTT as part of
mandatory-to-implement TLS 1.3, it should be defined as an extension and as
part of the TLS 1.3 document. Leaving the reduction from this case to PSK
(as Martin suggested) to popular imagination is too dangerous. By including
it as part of the official TLS 1.3 text we also encourage the groups that
are currently analyzing the protocol to include this specific mechanism in
their analysis. If they find something wrong with it, that is even more
reason to do the analysis.



> I would still prefer it be defined in the TLS 1.3 specification document,
> though optional.
>

I suggest also defining TLS 1.3-EZ:
a subset of core safe functionality that should address the majority of the
usage cases.

Hugo


>
>
> Dave
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] 0.5 RTT

2016-02-23 Thread Hugo Krawczyk
I was trying to articulate what the analysis in OPTLS, which does not
include the client's Finished message (or client authentication), means in
practical terms for 0.5-RTT data. I think that one way to put it is that
for the server it guarantees confidentiality against passive (only)
attackers, and for the client it provides data authentication (proof of
origin and integrity).

Note that confidentiality against passive attackers is the same type of
assurance we provide to the encrypted server's identity. The same way a
server needs to "understand" that any active attacker can learn its
identity from a TLS handshake, it also needs to understand that 0.5-RTT data
is open to any active attacker. Any expectation of 0.5-RTT data being
directed to a specific client needs to be eliminated.

Hugo


On Tue, Feb 23, 2016 at 5:52 PM, Martin Thomson 
wrote:

> On 23 February 2016 at 14:37, Hugo Krawczyk 
> wrote:
> > It seems to imply that you are attaching some "client-specific semantics"
> > even to keys that were not authenticated by the client.
>
> It's primarily a privacy concern, though it's a pretty weak concern.
>


Re: [TLS] 0.5 RTT

2016-02-25 Thread Hugo Krawczyk
As I said in another email, without client authentication (which is the
scenario in the Karthik quote), data sent by the server should be
considered secure only against passive adversaries. Any additional
assumption on confidentiality (i.e., restricting the power of an active
attacker) must consider some form of client authentication, either implicit
or explicit. Both cases must be dealt with with care, especially the
implicit ones (e.g. authentication implied by application mechanisms and
semantics).


On Thu, Feb 25, 2016 at 7:29 AM, Martin Rex  wrote:

> Karthikeyan Bhargavan wrote:
> >
> > Yes Hugo, you're right that when there is no client auth,
> > the situation is less problematic.
>
> I'm not so sure.
>
> There might be the desire of the server to keep some data confidential,
> and your argument is that if the data wasn't confidential to begin with,
> the server is not "breaking" confidentiality--although the server is
> clearly doing this.
>
> But what about the client and the client's desire to keep confidential,
> which particular "public data" it is just requesting and receiving
> from the server.
>
>
> -Martin
>


Re: [TLS] 0.5 RTT

2016-02-26 Thread Hugo Krawczyk
On Thu, Feb 25, 2016 at 10:20 PM, Watson Ladd  wrote:

> On Thu, Feb 25, 2016 at 11:54 AM, Hugo Krawczyk 
> wrote:
> > As I said in another email, without client authentication (which is the
> > scenario in the Karthik quote), data sent by the server should be
> considered
> > secure only against passive adversaries. Any additional assumption on
> > confidentiality (i.e., restricting the power of an active attacker) must
> > consider some form of client authentication, either implicit or explicit.
> > Both cases must be dealt with with care, especially the implicit ones
> (e.g.
> > authentication implied by application mechanisms and semantics).
>
> I think this is unnecessarily pessimistic/ignores the ways in which
> higher-level applications may have authentication and what we need
> from them.


It is not pessimistic - it says that you either assume some form of client
authentication verified by the server, implicit or explicit, or otherwise
the server can only treat its 0.5-RTT data as open to active attacks.


> The below should be taken as a clever guess, rather than
> representative of the truth.
>

You know that these intuitive arguments *almost* always work.
The problem is that all the attacks are hiding behind the "almost" part ;-)

I am not saying that something like this cannot be formalized, but it will
not be easy, particularly since the ways of authentication are varied and
depend on applications and semantics that are hard to reason about, let
alone formalize mathematically.

People can and will build on such assumptions but they should do that with
a lot of care and with as much mathematical analysis behind it as possible.

Hugo


>
> First, we know that negotiation doesn't work in 0-RTT. But that's ok:
> we need to limit to only strong ciphers anyway, because negotiation
> doesn't work/general depreciation. This means that clients can be
> fooled by attackers into using the weakest notion they support for
> continued authentication of PSKs: this applies to resumption with
> tickets as well.
>
> We also know that each PSK is only shared between one server and one
> other client from the unpredictability of the negotiated keys. And so
> when when a server decrypts a ticket, it knows that whatever data it
> saved in the ticket negotiated in the original negotiation applies to
> whoever sent it this 0-RTT data, and it knows its response will go
> only to someone who *at one point* sent this 0-RTT data. And so if
> there is a cookie or url parameter authenticating the client that is
> not tied to the PSK, so long as the PSK is secure there is only one
> party that could have sent that cookie, and the response will go to
> it. The same is true for prior client authentication, saved in the
> ticket.
>
> Formalizing the above is likely to be a bit tricky, but I don't see
> why it wouldn't be possible.
>
> >
> >
> > On Thu, Feb 25, 2016 at 7:29 AM, Martin Rex  wrote:
> >>
> >> Karthikeyan Bhargavan wrote:
> >> >
>> > Yes Hugo, you're right that when there is no client auth,
> >> > the situation is less problematic.
> >>
> >> I'm not so sure.
> >>
> >> There might be the desire of the server to keep some data confidential,
> >> and your argument is that if the data wasn't confidential to begin with,
> >> the server is not "breaking" confidentiality--although the server is
> >> clearly doing this.
> >>
> >> But what about the client and the client's desire to keep confidential,
> >> which particular "public data" it is just requesting and receiving
> >> from the server.
> >>
> >>
> >> -Martin
> >
> >
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
> >
>
>
>
> --
> "Man is born free, but everywhere he is in chains".
> --Rousseau.
>


Re: [TLS] Static DH timing attack

2020-09-10 Thread Hugo Krawczyk
Dan,

What you suggest, namely DH for both static and ephemeral keys, is what
OPTLS was about, and this approach is now specified in
https://tools.ietf.org/html/draft-ietf-tls-semistatic-dh-01.

I was never too happy with the name semi-static for such a protocol, and
people may think that if static is bad then semi-static is semi-bad :-)
So maybe it should be replaced with something else.

It is essentially a DH-KEM, namely a KEM implemented via DH. It has the
advantage over generic KEMs that the g^x sent by the client acts both as an
ephemeral KEM public key (producing g^xy) and as an encapsulation
under the server's public key (producing g^xs). This shaves one full round
trip relative to a generic KEM-based protocol (this applies to the
protocols with and without client authentication).
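
For concreteness, a minimal sketch of that double duty of g^x, assuming
X25519 via the Python 'cryptography' package (the variable names and the
library choice are mine, purely illustrative, and not part of the draft):

    # Sketch only: the client's single share g^x is combined with both the
    # server's ephemeral share g^y and its semi-static share g^s.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    server_static = X25519PrivateKey.generate()     # s; g^s is published/authenticated
    server_ephemeral = X25519PrivateKey.generate()  # y; g^y sent in the server's flight
    client_ephemeral = X25519PrivateKey.generate()  # x; g^x sent in the client's flight

    g_x = client_ephemeral.public_key()
    g_y = server_ephemeral.public_key()
    g_s = server_static.public_key()

    # Client side: ephemeral-ephemeral g^xy and the "encapsulation" g^xs.
    dh_xy_c = client_ephemeral.exchange(g_y)
    dh_xs_c = client_ephemeral.exchange(g_s)

    # Server side recomputes both values from the single g^x, which is what
    # saves the extra round trip a generic KEM-based protocol would need.
    dh_xy_s = server_ephemeral.exchange(g_x)
    dh_xs_s = server_static.exchange(g_x)

    assert dh_xy_c == dh_xy_s and dh_xs_c == dh_xs_s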

Hugo


On Thu, Sep 10, 2020 at 11:19 AM Dan Brown  wrote:

> *From:* TLS  *On Behalf Of *Salz, Rich
> > Do we need a short RFC saying “do not use static DH” ?
>
>
>
> Don’t TLS 0-RTT and ESNI/ECH via HPKE use a type of (semi)static ECDH? If
> so, then an RFC to ban static (EC)DH in TLS would need to be very clear
> about not referring to these use cases of static ECDH.
>
>
>
> My 2c. What about combining static ECDH (instead of signatures) with
> ephemeral ECDH, e.g. for more fully deniable authentication?  (ECMQV does
> this.)  (Perhaps this is also similar to the KEMTLS proposal for PQC,
> https://ia.cr/2020/534 - still need to study that.)
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] TLS Opaque

2021-04-01 Thread Hugo Krawczyk
Thanks Bob for pointing to the "real" ongoing specification of OPAQUE in
https://tools.ietf.org/html/draft-irtf-cfrg-opaque-03
and its careful specification of OPAQUE-3DH, including test vectors (and
sorry Scott for the typos in the other draft).
draft-irtf-cfrg-opaque is still work in progress and comments on it are
welcome. It is intended as a standalone specification of OPAQUE.

In contrast, draft-sullivan-tls-opaque-01 is a very preliminary document to
show ways in which OPAQUE can be combined with and transported by TLS
1.3, e.g., using the exported authenticator mechanism from
draft-ietf-tls-exported-authenticator. It will be developed into a
document compatible with the definition of OPAQUE in
draft-irtf-cfrg-opaque.

Hugo

On Thu, Apr 1, 2021 at 10:51 AM Rob Sayre  wrote:

> Sorry, I was thinking of the wrong draft. See:
>
> https://tools.ietf.org/html/draft-irtf-cfrg-opaque-03#section-4.2.2
>
> and
>
> https://tools.ietf.org/html/draft-irtf-cfrg-opaque-03#appendix-C
>
> thanks,
> Rob
>
>
> On Thu, Apr 1, 2021 at 6:08 AM Scott Fluhrer (sfluhrer) <
> sfluh...@cisco.com> wrote:
>
>>
>>
>> On Tue, Mar 30, 2021 at 9:39 PM Joseph Salowey  wrote:
>>
>>
>>
>> There is at least one question on the list that has gone unanswered for
>> some time [1].
>>
>>
>>
>> [1]
>> https://mailarchive.ietf.org/arch/msg/tls/yCBYp10QuYPSu5zOoM3v84SAIZE/
>>
>>
>>
>> I've found most of the OPAQUE drafts are pretty confusing / incorrect /
>> or typo'd when it comes to lines like these. Describing these calculations
>> seems difficult in ASCII, so I don't fault anyone for making mistakes here.
>> The authors have also been pretty responsive in adding test vectors and
>> such.
>>
>>
>>
>> If the answer is “it’s a typo”, that’s fine – I agree that RFCs are a
>> horrid format for expressing equations.  However, it would be good if there
>> were to state what is the correct relationship here (and possibly update
>> the draft with the corrected versions)
>>
>>
>>
>>
>>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] Key Hierarchy TLS 1.3 RFC8446(bis)

2023-12-17 Thread Hugo Krawczyk
See the full thread here:
https://mailarchive.ietf.org/arch/msg/tls/cS4vdMvENOGdpall7uos9iwZ5OA/

See also how this helped the analysis here (search for reference [73]):
https://inria.hal.science/hal-01528752v3/file/RR-9040.pdf
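
For reference when reading the question quoted below, here is a minimal
sketch of the relevant RFC 8446 key-schedule steps (SHA-256 ciphersuite
assumed; illustrative Python, not a complete HKDF implementation, and the
(EC)DHE value is just a placeholder):

    import hashlib, hmac

    HASH_LEN = 32

    def hkdf_extract(salt, ikm):
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand_label(secret, label, context, length):
        full_label = b"tls13 " + label
        hkdf_label = (length.to_bytes(2, "big")
                      + bytes([len(full_label)]) + full_label
                      + bytes([len(context)]) + context)
        # One HKDF-Expand block suffices since length <= HASH_LEN here.
        block = hmac.new(secret, hkdf_label + b"\x01", hashlib.sha256).digest()
        return block[:length]

    def derive_secret(secret, label, messages):
        transcript_hash = hashlib.sha256(messages).digest()
        return hkdf_expand_label(secret, label, transcript_hash, HASH_LEN)

    psk = bytes(HASH_LEN)          # all-zero if no external PSK
    ecdhe = b"..."                 # (EC)DHE shared secret; placeholder only

    early_secret = hkdf_extract(bytes(HASH_LEN), psk)
    # The "derived" steps asked about below: each secret is re-labeled
    # before it becomes the salt of the next Extract.
    derived1 = derive_secret(early_secret, b"derived", b"")
    handshake_secret = hkdf_extract(derived1, ecdhe)
    derived2 = derive_secret(handshake_secret, b"derived", b"")
    master_secret = hkdf_extract(derived2, bytes(HASH_LEN))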

On Sat, Dec 16, 2023 at 1:16 PM Muhammad Usama Sardar <
muhammad_usama.sar...@tu-dresden.de> wrote:

> Hi all,
> In the key schedule (section 7.1) of RFC8446(bis), what is the rationale
> for using *Derive-Secret(., "derived", "")* in the derivations of
> Handshake and Master Secrets? Since this change was made in draft 19, I
> expect there should be some reasoning of why this was added. Specifically,
> what are the security implications if this step is missed, i.e.,
>
>- if Early Secret is directly used as the Salt argument for
>HKDF-Extract of Handshake Secret;
>- and similarly if Handshake Secret is directly used as the Salt
>argument for HKDF-Extract of Master Secret.
>
> Regards,
>
> Usama
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] Fwd: New Version Notification for draft-barnes-tls-pake-04.txt

2018-07-18 Thread Hugo Krawczyk
+1 for this work.

If you are one of those that think, as I did 20 years ago, that password
authentication is dying and practical replacements are just around the
corner, do not support this document. Otherwise, please do.

Asymmetric or augmented PAKE (aPAKE) protocols provide secure password
authentication in the common client-server case (where the server stores a
one-way mapping of the password) without relying on PKI - except during
user/password registration. Passwords remain secure regardless of which
middleboxes or endpoints spy into your decrypted TLS streams.  The server
never sees the password, not even during password registration.

To see real deployment of such protocols, they need to be integrated with
TLS, which is what Barnes's draft facilitates. Not only does this
significantly improve the protection of passwords and password
authentication, but aPAKE protocols also provide a hedge against PKI
failures by enabling mutual client-server authentication without relying on
regular server certificates.

Hugo


On Wed, Jul 18, 2018 at 1:18 PM, Richard Barnes  wrote:

> Hey TLS WG,
>
> In response to some of the list discussion since the last IETF, Owen and I
> revised our TLS PAKE draft.  In the current version, instead of binding to
> a single PAKE (SPAKE2+), it defines a general container that can carry
> messages for any PAKE that has the right shape.  And we think that "right
> shape" covers several current PAKEs: SPAKE2+, Dragonfly, SRP, OPAQUE, ...
>
> The chairs have graciously allotted us 5min on the agenda for Thursday,
> where I'd like to ask for the WG to adopt the document.  So please speak up
> if you think this is an interesting problem for the TLS WG to work on, and
> if you think the approach in this document is a good starting point.  Happy
> for comments here or at the microphone on Thursday!
>
> Thanks,
> --Richard
>
>
> -- Forwarded message -
> From: 
> Date: Mon, Jul 16, 2018 at 3:25 PM
> Subject: New Version Notification for draft-barnes-tls-pake-04.txt
> To: Richard Barnes , Owen Friel 
>
>
>
> A new version of I-D, draft-barnes-tls-pake-04.txt
> has been successfully submitted by Richard Barnes and posted to the
> IETF repository.
>
> Name:   draft-barnes-tls-pake
> Revision:   04
> Title:  Usage of PAKE with TLS 1.3
> Document date:  2018-07-16
> Group:  Individual Submission
> Pages:  11
> URL:https://www.ietf.org/internet-
> drafts/draft-barnes-tls-pake-04.txt
> Status: https://datatracker.ietf.org/doc/draft-barnes-tls-pake/
> Htmlized:   https://tools.ietf.org/html/draft-barnes-tls-pake-04
> Htmlized:   https://datatracker.ietf.org/
> doc/html/draft-barnes-tls-pake
> Diff:   https://www.ietf.org/rfcdiff?url2=draft-barnes-tls-pake-04
>
> Abstract:
>The pre-shared key mechanism available in TLS 1.3 is not suitable for
>usage with low-entropy keys, such as passwords entered by users.
>This document describes an extension that enables the use of
>password-authenticated key exchange protocols with TLS 1.3.
>
>
>
>
> Please note that it may take a couple of minutes from the time of
> submission
> until the htmlized version and diff are available at tools.ietf.org.
>
> The IETF Secretariat
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>


Re: [TLS] Elliptic Curve J-PAKE

2019-03-26 Thread Hugo Krawczyk
Hi Hannes,

J-PAKE is a symmetric PAKE. Both parties store the same password. It is not
suitable for most client-server scenarios where using J-PAKE would mean
that an attacker that breaks into the server simply steals all plaintext
passwords. OPAQUE is an asymmetric (or augmented) PAKE where the user
remembers a password (and nothing else, not even a public key of the server)
while the server stores a one-way image of the password. Security requires that
if the server is compromised, the attacker needs to run an offline
dictionary attack for each user in the database to find the password.

If what you need is a symmetric PAKE then there are better candidates than
J-PAKE such as SPAKE2 described in draft-irtf-cfrg-spake2-08. SPAKE2 is
*much* more efficient than J-PAKE and while both J-PAKE and SPAKE2 have
proofs of security, SPAKE2 is proven in a stronger security model relative
to J-PAKE.

I am not aware of any advantage of J-PAKE over SPAKE2 - but I may be
missing something. Maybe the PAKE presentation in cfrg will clarify these
issues further.

Hugo




On Tue, Mar 26, 2019 at 1:03 PM Hannes Tschofenig 
wrote:

> Hi all,
>
> in context of the OPAQUE talk by Nick today at the TLS WG meeting I
> mentioned that the Thread Group has used the Elliptic Curve J-PAKE for IoT
> device onboarding.
> Here is the draft written for TLS 1.2:
> https://tools.ietf.org/html/draft-cragie-tls-ecjpake-01
>
> The mechanism is described in https://tools.ietf.org/html/rfc8236
>
> @Nick & Richard: Have a look at it and see whether it fits your needs.
>
> Ciao
> Hannes
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


[TLS] Security of OPAQUE in TLS

2019-03-26 Thread Hugo Krawczyk
In the TLS meeting on Tuesday, Kenny asked about the analysis of OPAQUE in
the context of TLS. One important property of OPAQUE is that its design and
analysis is modular. It applies to the composition of *any* OPRF with *any*
(KCI-secure) key exchange. This is why we can integrate OPAQUE with
different KE protocols including TLS 1.3 and get a combined proof of
security. Of course, these high level analyses do not take into account all
the details in a complex protocol like TLS 1.3 so any more specific
analysis, including those using automated tools (Tamarin, Everest, etc)
would be more than welcome.  However, if there is interest in defining an
asymmetric PAKE for TLS to replace old designs such as SRP then we can
start moving towards that goal with the draft Nick presented.  This will
also motivate more analysts (including those using tools like the above)
to look into this question more seriously.

Hugo


Re: [TLS] TLS1.3 HkdfLabel - value of label<0..6> ?

2019-03-31 Thread Hugo Krawczyk
What Ilari describes is in accordance with TLS 1.3, which uses HKDF-Expand
correctly (as defined in RFC 5869 and the related extract-then-expand
scheme from
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Cr1.pdf,
Fig. 1 on p. 18).
That is, it uses the "secret" as the key to HMAC (where the secret is
derived, in most cases, from the preceding HKDF-Extract step, not shown in
Ilari's description) and the label in place of the message input to HMAC.

The salt, to which you allude, is used in HKDF-Extract, which applies HMAC
with the HMAC key replaced by a random salt if available and a zero salt
otherwise, and with the keying material as the input, i.e., replacing the
message in HMAC. No label or other information is concatenated to the
keying material.
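
In code, the argument placement described above looks roughly as follows (a
bare-bones RFC 5869 sketch over HMAC-SHA-256, for illustration only;
production code should use a vetted library):

    import hashlib, hmac

    def hkdf_extract(salt, ikm):
        # The salt is the HMAC key; the input keying material is the message.
        if not salt:
            salt = bytes(hashlib.sha256().digest_size)  # zero salt if none given
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand(prk, info, length):
        # The extracted secret (PRK) is the HMAC key; the label/info feeds
        # the message input, block by block.
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]),
                             hashlib.sha256).digest()
            out += block
            counter += 1
        return out[:length]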

Hugo

On Sun, Mar 31, 2019 at 1:49 PM Blumenthal, Uri - 0553 - MITLL <
u...@ll.mit.edu> wrote:

> A naive question: is HKDF implemented placing secret in the "salt"
> argument, and label in the "key" argument, as NIST 800-56B says? Or putting
> label into "salt" and secret into "key"?
>
> Regards,
> Uri
>
> Sent from my iPhone
>
> > On Mar 31, 2019, at 09:46, Ilari Liusvaara 
> wrote:
> >
> >> On Sun, Mar 31, 2019 at 08:38:47PM +0800, M K Saravanan wrote:
> >> Hi,
> >>
> >> https://tools.ietf.org/html/rfc8446
> >> 
> >> 7.1.  Key Schedule
> >>
> >>   The key derivation process makes use of the HKDF-Extract and
> >>   HKDF-Expand functions as defined for HKDF [RFC5869], as well as the
> >>   functions defined below:
> >>
> >>   HKDF-Expand-Label(Secret, Label, Context, Length) =
> >>HKDF-Expand(Secret, HkdfLabel, Length)
> >>
> >>   Where HkdfLabel is specified as:
> >>
> >>   struct {
> >>   uint16 length = Length;
> >>   opaque label<7..255> = "tls13 " + Label;
> >>   opaque context<0..255> = Context;
> >>   } HkdfLabel;
> >> 
> >>
> >> In this struct, what is the value of label<0..6>?
> >
> > The syntax "opaque label<7..255>" means that label is octet string of
> > at least 7 and at most 255 octets, and that its length is encoded using
> > 1 octet. The string "tls13 " is 6 octets long, so that impiles that
> > Label is at least 1 octet and at most 249 octets long.
> >
> > So for example if Label is "s hs traffic", then the label (including
> > the length field) is:
> >
> > \x12 "tls13 s hs traffic"
> >
> > The \x12 is the octet with value 18 (which happens to be ASCII DC2)
> > because the remainder is 18 octets.
> >
> > And as another example if Label is "c e traffic", then the label
> > (again including length field is:
> >
> > \x11 "tls13 c e traffic"
> >
> > The \x11 is the octet with value 17 (which happens to be ASCII DC1)
> > because the remainder is 17 octets.
> >
> >
> > In the first case, if the ciphersuite has SHA-256 hash, then the
> > whole HkdfLabel looks like the following (in hex):
> >
> > 00 20#32 octets of output (2 octets)
> > 12#18 octets label length (1 octet)
> > "tls13 s hs traffic"#The actual label (18 octets)
> > 20#32 octet transcript input (1 octet).
> > hash(client_hello+server_hello)#Transcript (32 octets).
> >
> > In total, this is 54 bytes (64 bytes after adding HKDF and SHA-256
> > internal overhead). There is only one output block, and the input
> > fits into one block so evaluating the HKDF-Expand takes 4 SHA-256
> > block operations.
> >
> >
> > The one ciphersuite using SHA-384 would instead give (in hex):
> >
> > 00 30#48 octets of output (2 octets)
> > 12#18 octets label length (1 octet)
> > "tls13 s hs traffic"#The actual label (18 octets)
> > 30#48 octet transcript input (1 octet).
> > hash(client_hello+server_hello)#Transcript (48 octets).
> >
> > In total, 70 bytes. Again, only one output block and only one input
> > block, so evaluation takes 4 SHA-384 block operations.
> >
> >
> >
> > -Ilari
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] WGLC for draft-ietf-tls-tls13-cert-with-extern-psk

2019-05-21 Thread Hugo Krawczyk
A clarification on the text suggested below by Russ.

The way I see it, the external PSK as used in
draft-ietf-tls-tls13-cert-with-extern-psk is not intended as a means of
authentication but as a way of regaining forward secrecy in case the
(EC)DHE mechanism is ever broken (e.g., by cryptanalysis or by a quantum
computer). Indeed, as long as the future attacker does not learn the PSK
and the key derivation remains unbroken (e.g., HMAC is still a secure PRF
(*)), that attacker cannot derive the session secrets even if it can
compute the (EC)DHE value. When looking at the mechanism in this way,
questions like whether this is a PSK-with-ECDHE mechanism with added
signatures or a regular 1.5 RTT with added PSK authentication are easily
answered. It is a regular 1.5 RTT where a PSK has been added to the key
derivation for the sake of quantum-resistant forward secrecy. In this case,
one should see authentication as fully relying on certificate-based
signatures. So, even if parties other than the communicating client and
server know the PSK, this endangers the secrecy of the PSK and its forward
secrecy effect, but not the authenticity of the handshake.

The above is my interpretation of the draft based on a superficial reading.
I have not studied it carefully enough to validate the design or understand
the full implications for TLS 1.3. Yet, it seems that the above conceptual
approach may help in understanding the goal of the draft and analyzing its
security.

(*) The design of HKDF has as an explicit goal support for the case of a
secret salt in HKDF-Extract, in which case the security of HKDF relies
solely on HMAC being a secure PRF (which is the most studied aspect of
HMAC). Using the PSK as input to the KDF achieves this property.
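
A toy sketch of that last point (not the actual TLS 1.3 schedule; the
values are placeholders): the (EC)DHE secret is extracted under a salt that
is secret because it is derived from the PSK, so the result is a PRF output
under a key the attacker lacks even if the (EC)DHE value is later recovered.

    import hashlib, hmac

    def hkdf_extract(salt, ikm):
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    external_psk = b"high-entropy external PSK"   # known only to client and server
    ecdhe_secret = b"(EC)DHE shared secret"       # assumed recoverable by a future attacker

    psk_derived_salt = hkdf_extract(bytes(32), external_psk)   # secret salt
    session_basis = hkdf_extract(psk_derived_salt, ecdhe_secret)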

Hugo

On Tue, May 21, 2019 at 3:46 PM Russ Housley  wrote:

>
>
> > On May 20, 2019, at 8:25 PM, Geoffrey Keating  wrote:
> >
> > Joseph Salowey  writes:
> >
> >> The last call has come and gone without any comment.  Please indicate if
> >> you have reviewed the draft even if you do not have issues to raise so
> the
> >> chairs can see who has reviewed it.  Also indicate if you have any
> plans to
> >> implement the draft.
> >
> > I looked at the draft.
> >
> > My understanding of the draft (and I think it would have helped if it
> > contained a diagram showing the resulting TLS handshake) is that it's
> > specifying the existing psk_dhe_ke flow, to which it adds a
> > certificate-based signature over the handshake, which it doesn't
> > specify but works the same way as in RFC 8446 when there is no PSK.
> >
> > This is somewhat confusing because the draft is written as if it
> > starts with a certificate-based TLS flow and somehow adds a PSK; it
> > repeats all the RFC 8446 PSK machinery, but doesn't explain how the
> > certificate interacts with it, and raises questions like "are there
> > two DH operations or just one?".  I think the draft could have been a
> > lot shorter.
> >
> > Conversely, one area where the draft could have been longer would be to
> > explain how exactly this produces quantum-resistance in the presence
> > of a secret shared key.  It appears that it relies on the HKDF-Expand
> > function being quantum-resistant.  That seems like an important thing
> > to document, given that we don't have fully functional quantum
> > cryptanalysis yet and so don't know exactly what might be
> > quantum-resistant or not.
> >
> > However, once you're past that, the resulting protocol seems quite
> > simple (as an addition to psk_dhe_ke) and I have no objections to it.
>
> I think that the necessary property is that HKDF reman a PRF.
>
> I suggest the following additions to the Security Considerations:
>
>   If the external PSK is known to any party other than the client and
>   the server, then the external PSK MUST NOT be the sole basis for
>   authentication.  The reasoning is explained in [K2016] (see
>   Section 4.2).  When this extension is used, authentication is based
>   on certificates, not the external PSK.
>
>   In this extension, the external PSK regains the forward secrecy if the
>   (EC)DH key agreement is ever broken by cryptanalysis or the future
>   invention of a large-scale quantum computer.  As long as the attacker
>   does not know the PSK and the key derivation algorithm remains
>   unbroken, the attacker cannot derive the session secrets even if they
>   are able to compute the (EC)DH shared secret.
>
>   TLS 1.3 key derivation makes use of the HKDF algorithm, which depends
>   upon the HMAC construction and a hash function.  This extension
>   provides the desired protection for the session secrets as long as
>   HMAC with the selected hash function is a pseudorandom function (PRF)
>   [GGM1986].
>
>
>   [GGM1986]  Goldreich, O., Goldwasser, S., and S. Micali, "How to
>  construct random functions", J. ACM 1986 (33), pp.
>  792-807, 1986.
>
>   [K2016]Krawczyk, H., "A Unilateral-to-Mutual Authentication
>  Compiler for Key Exch

Re: [TLS] [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

2020-05-11 Thread Hugo Krawczyk
There is no flaw if you use HMAC and HKDF as intended. See details below.

The bottom-line advice is: if you are using related (not random) salt
values in HKDF, you are probably using it for domain separation. In HKDF,
domain separation is enforced via the info field, not the salt. Read the
HKDF RFC and paper for background and rationale.
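
A minimal sketch of that advice, assuming the Python 'cryptography' package
(the labels and the input value are illustrative): independent keys are
derived from one input secret by putting the per-use context in the info
field.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    ikm = b"shared secret from the key exchange"   # illustrative value

    def derive(info):
        # A fresh HKDF instance per derivation; salt left empty (all zeros).
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=info).derive(ikm)

    enc_key = derive(b"example-protocol encryption")
    auth_key = derive(b"example-protocol authentication")
    assert enc_key != auth_key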

More below.


On Mon, May 11, 2020 at 1:19 PM Phillip Hallam-Baker 
wrote:

> I will forward this to the official comment address as well.
>
> I don't support making HKDF a NIST standard in its current form because
> there is a flaw.
>
> Consider the case in which the initial keying material is formed by
> concatenating two items, the second of which is a variable length string.
>
> As currently specified, the values for o_key and i_key are the same for a
> key k and that same key with a zero byte concatenated. This might not seem
> like a big deal but the whole point of defining common building blocks for
> crypto is to eliminate all the bear traps we can.
>

I assume that by o_key and i_key you mean the key XOR-ed with opad and
ipad, respectively. Is that right?
It is true that in HMAC, and hence in HKDF, you do not distinguish between a
key K shorter than a block and the same key with appended 0s.
This is OK in HMAC because, as a MAC or PRF, you must use HMAC with random
keys, for which the probability of the above property occurring (choosing
two keys of different lengths, one being a prefix of the other) is virtually
0.
For HKDF, this issue applies to the salt value. Salt values (read the RFC)
should either be random or set to the all-zero string. Hence no issue there.

You can read the HKDF paper for extensive rationale for these choices.
https://eprint.iacr.org/2010/264
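
The property under discussion can be seen in a few lines (sketch only,
assuming HMAC-SHA-256 with its 64-byte block): a short key and the same key
with a zero byte appended yield identical padded keys and hence identical
HMAC outputs, which is harmless when keys are chosen uniformly at random.

    import hashlib, hmac

    BLOCK = 64

    def padded_keys(key):
        k = key.ljust(BLOCK, b"\x00")   # HMAC zero-pads keys shorter than a block
        ipad = bytes(b ^ 0x36 for b in k)
        opad = bytes(b ^ 0x5c for b in k)
        return ipad, opad

    k1 = b"a random-looking key"
    k2 = k1 + b"\x00"

    assert padded_keys(k1) == padded_keys(k2)
    assert (hmac.new(k1, b"msg", hashlib.sha256).digest()
            == hmac.new(k2, b"msg", hashlib.sha256).digest())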


> The problem can be eliminated as follows:
>
> 1) Generate i_key by padding the key value with byte x before XORing with
> the i_mask
>
> 2) Generate o_key by padding the key value with byte y (y <> x) before
> XORing with the o_mask
>
> This ensures that the values of o_key and i_key will change if zeros are
> appended.
>

It will not solve the issue if one of the keys is of block length.
Historical anecdote: the original design of HMAC had a concatenation
scheme like the one you suggest; Adi Shamir suggested changing it to the
XOR-ing method that was eventually adopted (see the Acknowledgment section
in the HMAC 1996 paper),
https://link.springer.com/content/pdf/10.1007/3-540-68697-5_1.pdf
So you can see that thought was put into these issues.


>
> This would require us to issue a revised version of HKDF. But I think we
> should do just that. Crypto utility functions should be robust under all
> known forms of mistreatment. I am probably not the only person who has
> multiple salt inputs to a KDF. Fortunately my unit tests caught the issue.
>

You should not have multiple salt inputs if they are not independently
random.
If you want to use them for "domain separation", use the info field.

___
> Cfrg mailing list
> c...@irtf.org
> https://www.irtf.org/mailman/listinfo/cfrg
>


Re: [TLS] [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

2020-05-11 Thread Hugo Krawczyk
Hi Quynh, see a couple of remarks below,

On Mon, May 11, 2020 at 8:10 AM Dang, Quynh H. (Fed)  wrote:

> Hi Rich, Sean and all,
>
> 1) Traditionally, a HKDF-Extract is used to extract entropy from a DH type
> shared secret. However, the first HKDF-Extract in the key schedule takes a
> PSK instead of a DH shared secret.
>

The whole point of HKDF is to define a KDF that can be applied to different
uses and settings for key derivation: a single KDF mechanism that can
address a DH setting as well as a PSK, or anything else, with
functionalities of PRF, random oracle, randomness extraction, etc., and
with some theoretical basis. Btw, TLS 1.3 is an application where
essentially all these aspects of HKDF show up.

NIST has adopted this view of HKDF in a very general sense via
the standardization of the extract-then-expand approach at the basis of
HKDF. This applies equally to extracting entropy from a DH value, the
output of an RNG, a PSK, etc.

>
> We don't see security problems with this instance in TLS 1.3. NIST
> requires the PSK to have efficient amount of entropy (to achieve a security
> strength required by NIST) when it is externally generated. When it is
> externally generated, one of NIST's approved random bit generation methods
> in SP 800-90 series must be used.
>
> When the PSK is a resumption key, then its original key exchange and its
> key derivation function(s) must meet the security strength desired/required
> for the PSK.
>
> NIST plans to allow/approve the function in SP 800-133r2, Section 6.3,
> item # 3 on pages 22 and 23:
> https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-133r2-draft.pdf
>
> 2) Traditionally, HKDF is extract-then-expand. However, in TLS 1.3, we
> have extract-then-multiple expands.
>

The HKDF RFC 5869 says that in the case where the amount of required key
bits, L, is no more than HashLen, one could use PRK directly as the OKM.

In the usage by TLS, you are using the output of Extract as a key to a PRF
(implemented as HKDF-Expand, essentially HMAC) with multiple, different
inputs (different thanks to the use of the info field), which is perfectly
secure. Also, note that the composition of the Extract step in TLS with
each of the subsequent Expands can be represented as a full HKDF
application. This was part of the design for cases (maybe hardware) where
one may have an interface to HKDF but not separate interfaces to
the Extract and Expand steps.
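
The composition point can be checked mechanically; here is a small sketch
assuming the Python 'cryptography' package (values are illustrative):
Extract followed by Expand with a given info equals a single full HKDF call
with the same salt and info.

    import hashlib, hmac
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF, HKDFExpand

    ikm, salt, info = b"input keying material", b"public salt", b"tls13-style label"

    prk = hmac.new(salt, ikm, hashlib.sha256).digest()           # HKDF-Extract
    okm_two_steps = HKDFExpand(algorithm=hashes.SHA256(), length=32,
                               info=info).derive(prk)            # HKDF-Expand
    okm_one_call = HKDF(algorithm=hashes.SHA256(), length=32,
                        salt=salt, info=info).derive(ikm)        # full HKDF

    assert okm_two_steps == okm_one_call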


> We don't see security problems for this new version of HKDF as specified
> in TLS 1.3.  NIST plans to approve a general method for this approach in SP
> 800-56C revision 2, section 5.3:
> https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Cr2-draft.pdf
>

Agreed. The above comment summarizes why there should be no security
problems with these mechanisms.

I haven't looked at the revisions. But in previous versions you needed
lawyer skills to go through the language to see that RFC 5869 was indeed
compliant with the NIST recommendation. It would be nice if this time the
document made it very explicit that RFC 5869 is compliant with this
Recommendation.

Hugo

>
> NIST plans to handle the issues above that way to avoid repeating the work
> when one or both of the same HKDF instances or new variant(s) for one or
> both of them is/are used in different application(s).
>
> The other KDFs are already compliant with NIST's existing KDFs.
>
> Regards,
> Quynh.
> Recommendation for Key-Derivation Methods in Key-Establishment Schemes
> 
> 23 . Draft NIST Special Publication 800-56C 24 . Revision 2 25
> Recommendation for Key-Derivation 26 . Methods in Key-Establishment Schemes
> 27 . 28 . 29 Elaine Barker 30 Lily Chen 31 Computer Security Division 32 ..
> Information Technology Laboratory
> nvlpubs.nist.gov
>
> --
> *From:* Cfrg  on behalf of Salz, Rich  40akamai@dmarc.ietf.org>
> *Sent:* Friday, May 8, 2020 4:21 PM
> *To:* tls@ietf.org ; c...@ietf.org 
> *Subject:* [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)
>
> If you don’t care about FIPS-140, just delete this message, and avoid the
> temptation to argue how bad it is.
>
> NIST SP 800-56C (Recommendation for Key-Derivation Methods in
> Key-Establishment Schemes) is currently a draft in review. The document is
> at https://csrc.nist.gov/publications/detail/sp/800-56c/rev-2/draft
> Email comments can be sent to 800-56c_comme...@nist.gov with a deadline
> of May 15.  That is not a lot of time.  The NIST crypto group is currently
> unlikely to include HKDF, which means that TLS 1.3 would not be part of
> FIPS. The CMVP folks at NIST understand this, and agree that this would be
> bad; they are looking at adding it, perhaps via an Implementation Guidance
> update.
>
> If you have a view of HKDF (and perhaps TLS 1.3), I strongly encourage you
> to comment at the above address.  Please do not comment here. I know that
> many members o

Re: [TLS] [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

2020-05-12 Thread Hugo Krawczyk
t regards,
>
> Dan
>
>
>
> PS
>
>
>
> The HKDF RFC clearly excludes an adversary causing related salts, so
> that’s good.
>
>
>
> I really like both defense in depth and provable security, but I would
> also like it to be clear that is the main motivation for HKDF in key
> derivation.  To wit, HMAC itself internally derives two closely-related
> keys using XOR ipad and XOR opad.   You have proved this turns out fine,
> despite the relatedness of the two keys, because the robust property of
> hash function.  My point here, is if we assumed the derived keys are used
> in robust algorithms, e.g. AES-GCM, could they tolerate simpler ways of
> deriving keys, i.e. XORing a key with a non-random separation string?  To
> repeat, I am totally fine to use robust key derivation, like HKDF, but I
> would want the reason to be clear.  E.g. TLS 1.3 handshake uses HKDF as
> hedge against possible related-key attacks on the symmetric-key crypto in
> the record layer, or for better or simpler security proofs (e.g. compared
> to past key derivation methods).
>
>
>
> As many know, hashes were, once upon a time, only used for message digests
> in signatures. Message-extension in hashes did not result in signature
> forgery.   But then people naturally wanted to use hashes (as a “utility
> function” in Phillip’s terms) for a MAC.  But hashes with message extension
> suffer from MAC forgery. Along came HMAC to the rescue, saving the day, and
> the rest is history.  (To exaggerate: it is, on a minute scale, repeating;)
>
>
>
>
>
> *From:* TLS  *On Behalf Of *Phillip Hallam-Baker
> *Sent:* Tuesday, May 12, 2020 1:49 AM
> *To:* Hugo Krawczyk 
> *Cc:* Dang, Quynh H. (Fed) ;
> c...@ietf.org; tls@ietf.org; Salz, Rich  >
> *Subject:* Re: [TLS] [Cfrg] NIST crypto group and HKDF (and therefore TLS
> 1.3)
>
>
>
>
>
>
>
> On Mon, May 11, 2020 at 4:36 PM Hugo Krawczyk 
> wrote:
>
> There is no flaw if you use HMAC and HKDF as intended. See details below.
>
>
>
> One time pads aren't flawed if you use them right. When they become a two
> time pad, there is a problem.
>
>
>
> My point is that if we are developing schemes that are supposed to be used
> as utility building blocks, we need to consider all the ways they might be
> used and not just limit ourselves to the ones we expect. That was the
> argument made for defining authenticated encryption modes, it holds here as
> well.
>
>
>
> The bottom line advise is: If you are using related (not random) salt
> values in HKDF, you are probably using it with  domain separation
> functionality. In HKDF, domain separation is enforced via the info field
> not the salt. Read the HKDF RFC and paper for background and rationale.
>
>
>
> I am already using the info field for domain separation. I use info to
> generate separate keys for encryption, authentication, any IVs etc. by
> concatenating the IANA protocol name and the encryption function. So I
> don't want to put any more in there.
>
>
>
> It is easy enough to fix if you are aware of it. I noticed the issue while
> I was implementing HMAC by hand. But the person who is using the function
> using a library call (i.e. myself five years from now) might not be aware..
>
>
>
> Saying 'read my paper' really isn't an argument. I know the design
> rationale. I am saying it is the wrong one for the future. And regardless,
> I don't see mention of the issue in section 4.2 of the paper you cite nor
> is there mention of the issue in RFC 2104.
>
>
>
> If I have to go hunting to find security issues with a standard, that is a
> problem in itself.
>
>
>
>
>
> BTW, the reason it came up with DARE was an attempt to address the problem
> of 'encrypting the subject header' and other metadata separately from the
> content data. But under the same key. Bearing in mind that we want to be
> able to encrypt multiple data items under a single key exchange.
>
>
>
> So starting from the result of the key agreement, I add in a per envelope
> salt which is typically 128 bits. That allows for erasure of the message by
> overwriting the salt value. The main data content is encrypted under a KDF
> with the IKM and envelope salt. If additional encrypted data sequences are
> required, they are encrypted under the IKM, salt and an additional counter.
>
>
>
> Now I can fix my designs, but others won't. Considering the EDS counter to
> be an extension to the key led to the unexpected result that the EDS and
> content were encrypted with the same key. Now, it is arguably better
> considered to be a part of the salt which is where I think the current code
> 

Re: [TLS] Call for consensus: Removing DHE-based 0-RTT

2016-03-31 Thread Hugo Krawczyk
On Tue, Mar 29, 2016 at 9:11 AM, Sean Turner  wrote:

> All,
>
> To make sure we’ve got a clear way forward coming out of our BA sessions,
> we need to make sure there’s consensus on a couple of outstanding issues.
> So...
>
> There also seems to be (rougher) consensus not to support 0-RTT via DHE
> (i.e., semi-static DHE) in TLS 1.3 at this time leaving the only 0-RTT mode
> as PSK. The security properties of PSK-based 0-RTT and DHE-based 0-RTT are
> almost identical,


I am not offering an opinion about what the WG should decide regarding
keeping DHE-based 0-RTT in the base TLS 1.3 document, but just wanted to
note that the above claim "The security properties of PSK-based 0-RTT and
DHE-based 0-RTT are almost identical" is not quite right (nothing I say
here is new, I just felt that I had to "object" to this statement as
written).

There are some significant differences - in some cases even "fundamental
differences" - between keeping secret state (in the PSK case) and keeping
non-secret state (in the DHE case), or even not keeping state at all (in
the DHE case) and retrieving the server key g^s from some external source
(with integrity but not secrecy). In addition, using DHE 0-RTT would
require the client to send a key share g^x, leading to a PFS 1-RTT
exchange, while with PSK it may be "tempting" to omit PFS. Moreover, if the
server's configuration key g^s is refreshed often (say every 5 minutes)
then the g^xs key used by the client to protect its 0-RTT data already has
some good level of forward secrecy (the attacker has a 5-minute window to
find s, and after that forward secrecy is guaranteed). The latter point
touches on an important aspect, which is the key management complexity of
ticket encryption/decryption keys (as needed in the PSK case) vs. managing
the secret DH key s (in the DHE case). I am not sure which would be done
better (more securely) in practice.

But really it seems that the discussion boils down to identifying cases of
enough interest where avoiding the original 1-RTT trip for establishing a
session ticket is important. I am puzzled by the fact that the Google team
seems ok with something that essentially voids the main feature and design
basis of QUIC.

Hugo

> but 0-RTT PSK has better performance properties and is simpler to specify
> and implement. Note that this does not permanently preclude supporting
> DHE-based 0-RTT in a future extension, but it would not be in the initial
> TLS 1.3 RFC.
>
> If you think that we should keep DHE-based 0-RTT please indicate so now
> and provide your rationale.
>
> J&S
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] Call for consensus: Removing DHE-based 0-RTT

2016-04-01 Thread Hugo Krawczyk
On Thu, Mar 31, 2016 at 11:49 PM, Eric Rescorla  wrote:

>
>
> On Thu, Mar 31, 2016 at 8:39 PM, Hugo Krawczyk 
> wrote:
>
>>
>>
>> On Tue, Mar 29, 2016 at 9:11 AM, Sean Turner  wrote:
>>
>>> All,
>>>
>>> To make sure we’ve got a clear way forward coming out of our BA
>>> sessions, we need to make sure there’s consensus on a couple of outstanding
>>> issues.  So...
>>>
>>> There also seems to be (rougher) consensus not to support 0-RTT via DHE
>>> (i.e., semi-static DHE) in TLS 1.3 at this time leaving the only 0-RTT mode
>>> as PSK. The security properties of PSK-based 0-RTT and DHE-based 0-RTT are
>>> almost identical,
>>
>>
>> ​I am not offering an opinion about what the WG should decide regarding
>> keeping
>> DHE-based 0-RTT in the base TLS 1.3 document, but just wanted to note
>> that the
>> above claim "The security properties of PSK-based 0-RTT and DHE-based
>> 0-RTT are
>> almost identical" is not quite right (nothing I say here is new, I just
>> felt
>> that I had to "object" to this statement as written).
>>
>> There are some significant differences - in some cases even "fundamental
>> differences" - between keeping secret state (in the PSK case) and keeping
>> non-secret state (in the DHE case) or even not keeping state at all (in
>> the
>> DHE case) and retrieving the server key g^s from some external source
>> (with
>> integrity but not secrecy).  In addition, using DHE 0-RTT would require
>> the
>> client to send a key share g^x leading to a PFS 1-RTT exchange while with
>> PSK
>> it may be "tempting" to omit PFS.
>>
>
> The current plan of record is to allow the server to specify which cipher
> suites it is
> willing to accept, so it could refuse this
>
>
I was just wondering whether this is what will happen in practice.
But I should have separated this consideration (and the next) from the more
fundamental point of public vs. secret state.


>
>
> Moreover,  if the server's configuration
>> key g^s is refreshed often (say each 5 minutes) then the g^xs key used by
>> the
>> client to protect its 0-RTT data already has some good level of forward
>> secrecy (the attacker has a 5 minute window to find s and after that
>> forward
>> security is guaranteed).  The latter point touches on an important aspect
>> which is the key management complexity of ticket encryption/decryption
>> keys
>> (as needed in the PSK case) vs managing secret DH key s (in the DHE
>> case).
>> I am not sure what would be done better (more secure) in practice.
>>
>
> Can you expand on the difference here? Say that the server implements
> tickets
> by storing a DH private key and then encrypting the ticket under the
> corresponding
> public key. How does this provide different PFS properties?
>

It doesn't. And you don't need a public key for this. If the server
rotates the ticket-encrypting key often (say every 5 minutes, as in the
example) then you get the same effect.
The point was about which of the two, g^s or a symmetric ticket-encryption
key, will be managed better in practice. I don't have an answer.
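
For concreteness, a toy sketch of the kind of ticket-key rotation being
compared (purely illustrative; not a recommendation, an existing API, or
production logic): the server keeps only the current and previous keys, so
a compromise exposes roughly one rotation window of tickets at most.

    import os, time

    ROTATION_SECONDS = 300   # e.g. 5 minutes, as in the example above

    class TicketKeyStore:
        def __init__(self):
            self.current = os.urandom(32)
            self.previous = None
            self.rotated_at = time.monotonic()

        def encryption_key(self):
            if time.monotonic() - self.rotated_at >= ROTATION_SECONDS:
                self.previous, self.current = self.current, os.urandom(32)
                self.rotated_at = time.monotonic()
            return self.current

        def decryption_keys(self):
            # Accept tickets only from the current and previous window.
            return [k for k in (self.current, self.previous) if k is not None]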

Hugo


> -Ekr
>
>
>> But really it seems that the discussion boils down to identifying cases of
>> enough interest where avoiding the original 1-RTT trip for establishing a
>> session ticket is important. I am puzzled by the fact that the Google team
>> seems ok with something that essentially voids the main feature and design
>> basis of QUIC.
>>
>> Hugo​
>>
>> ​​
>>
>>> but 0-RTT PSK has better performance properties and is simpler to
>>> specify and implement. Note that this does not permanently preclude
>>> supporting DHE-based 0-RTT in a future extension, but it would not be in
>>> the initial TLS 1.3 RFC.
>>>
>>> If you think that we should keep DHE-based 0-RTT please indicate so now
>>> and provide your rationale.
>>>
>>> J&S
>>>
>>> ___
>>> TLS mailing list
>>> TLS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/tls
>>>
>>
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
>>
>


Re: [TLS] Consensus call for keys used in handshake and data messages

2016-06-17 Thread Hugo Krawczyk
I am abstaining on the choice between alternatives 1 and 2 since I do not
understand well enough the engineering considerations and ramifications of
the different choices. Also, I have not put any thought into the privacy
issues related to hiding the content type, and I certainly did not do any
formal analysis of these aspects.

I do want to say that, from a cryptographic design and analysis point of
view, key separation is of great importance. It allows for more modular
analysis and design, and for proving composability properties of protocols
and applications.
I believe it has significant practical advantages particularly with respect
to
"maintaining" a protocol, namely, making sure that changes and extensions
to a
protocol are secure and do not jeopardize its security. The more modular the
design the easier it is to reason about changes and additions to the
protocol
(and for a protocol like TLS, future changes and adaptations to different
settings are unavoidable).

As for the specific cryptographic considerations in TLS 1.3, I would have
"fought" greatly to preserve key separation at the level of the basic
handshake
protocol (first three flights). It guarantees that the application key is
*fresh* at its first use for protecting application data. Freshness means
that
the key can serve any purpose for which a shared random secret key can be
used.
(In contrast, a non-fresh key, e.g. one used during the handshake itself,
can
only be used with applications that are aware of how exactly the key was
used
in the handshake.) Since my understanding is that the current base-handshake
specification does preserve the freshness of the application key, I am happy
with that design.

The issue of using the application key to protect post-handshake messages is
more involved. First, post-handshake client authentication authenticates ("a
posteriori") the application key but only after this key has already been
used.
In this sense this mechanism cannot possibly achieve key freshness for the
application key. The best one can hope for is that this post-authentication
authenticates the key without jeopardizing its use in the particular
application
it is intended for, namely, protecting TLS record layer data. Luckily, this
can
be proved. So now the question is whether using the application key to
encrypt
the very messages that provide post-handshake authentication (e.g., client's
signature) may lower the security of this key. The answer is that it does
not.
That is, the security of the key for protecting record layer data is not
jeopardized by using it to encrypt post-handshake messages.

I feel moderately confident about the above paragraph as I have been
working on
the analysis of client authentication, including (encrypted) post-handshake
authentication. On the other hand, I have not studied the effects of
encrypting
other post-handshake messages such as New Ticket or re-keying messages so I
don't have an "educated conclusion" here. I do expect that there are
analytical
advantages for not using the application key for encrypting these messages
but I
cannot say for sure.

In all, it is good and prudent practice (not just theory) to enforce key
separation wherever possible, and I would be happier if there was no
instance
where the application key is applied to non-application data. But I also
know
that one has to weigh other engineering considerations and in this case the
trade-off does not seem obvious to me. Hence, as said, I abstain.

Hugo


On Mon, Jun 13, 2016 at 3:00 PM, Joseph Salowey  wrote:

> For background please see [1].
>
> Please respond to this message indicating which of the following options
> you prefer by Monday June, 20, 2016
>
> 1. Use the same key for handshake and application traffic (as in the
> current draft-13)
>
> or
>
> 2. Restore a public content type and different keys
>
> Thanks,
>
> J&S
>
>
> [1] https://www.ietf.org/mail-archive/web/tls/current/msg20241.html
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] judging consensus on keys used in handshake and data messages

2016-07-07 Thread Hugo Krawczyk
I do not have an objection to option 1 if re-phrased as:
Option 1 - use the same key for protecting both *post*-handshake and
application messages.

I believe this is what was intended by that option anyway. Let me clarify.

I understand the question as relating *only* to post-handshake messages and
not
to the main handshake (three initial flights). For the latter we have key
separation in the sense that none of these main-handshake messages is
encrypted
under the application key but rather under dedicated handshake keys. This
should
not be changed as it provides key indistinguishability to the application
key, a
desirable design and analysis (=proof) modularity property.

On the other hand, for post-handshake messages, and particularly for
encrypting
post-handshake client authentication messages, preserving key
indistinguishability is not relevant since at the time of post-handshake
client authentication, the application key has already lost its
indistinguishability
by the mere fact that the key was used to encrypt application data. Key
indistinguishability is the main reason to insist on key separation, and
this principle no longer applies here, which removes the objection to
option 1.

I'd note that the best one could hope for in the post-handshake setting is
that
as a result of post-handshake client authentication the application key
becomes
a secure mutually-authenticated key for providing "secure channels"
security.
As pointed out by others in previous posts I have an analysis showing that
this
delayed mutual authentication guarantee is achieved even if one uses the
application key to encrypt the post-handshake messages. I have circulated a
preliminary version of the paper among cryptographers working on TLS 1.3
and I will post a public copy next week so this can be scrutinized further.

Hugo


On Thu, Jul 7, 2016 at 1:10 AM, Karthikeyan Bhargavan <
karthik.bharga...@gmail.com> wrote:

> If we are left with 1 or 3, the miTLS team would prefer 1.
>
> On the cryptographic side, Hugo has a recent (draft) paper that seems to
> provide
> some more justification for (1), at least for client authentication.
>
> I know this is a bit off-topic, but the miTLS team would also like to get
> rid of 0-RTT ClientFinished
> if that is the only message left in the 0-RTT encrypted handshake flight.
> That should remove
> another Handshake/Data key separation from the protocol, leaving only 3
> keys: 0-RTT data,
> 1-RTT handshake, and 1-RTT data.
>
> Best,
> -Karthik
>
>
> On 07 Jul 2016, at 02:49, David Benjamin  wrote:
>
> On Wed, Jul 6, 2016 at 5:39 PM Eric Rescorla  wrote:
>
>> On Wed, Jul 6, 2016 at 5:24 PM, Dave Garrett 
>> wrote:
>>
>>> On Wednesday, July 06, 2016 06:19:29 pm David Benjamin wrote:
>>> > I'm also curious which post-handshake messages are the problem. If we
>>> were
>>> > to rename "post-handshake handshake messages" to "post-handshake bonus
>>> > messages" with a distinct bonus_message record type, where would there
>>> > still be an issue? (Alerts and application data share keys and this
>>> seems
>>> > to have been fine.)
>>>
>>> Recasting all the post-handshake handshake messages as not something
>>> named "handshake" does make a degree of sense, on its own. (bikeshedding:
>>> I'd name it something more descriptive like "secondary negotiation"
>>> messages or something, though.) Even if this doesn't directly help with the
>>> issue at hand here, does forking these into a new ContentType sound like a
>>> useful move, in general?
>>
>>
>> I'm not sure what this would accomplish.
>>
>
> Me neither. To clarify, I mention this not as a suggestion, but to
> motivate asking about the type of message. If the only reason the proofs
> want them in the handshake bucket rather than the application data bucket
> is that they say "handshake" in them then, sure, let's do an
> inconsequential re-spelling and move on from this problem.
>
> But presumably something about the messages motivate this key separation
> issue and I'd like to know what they are.
>
> David
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Should exporter keys be updated with post-handshake authentication and/or KeyUpdate?

2016-07-11 Thread Hugo Krawczyk
On Mon, Jul 11, 2016 at 9:13 PM, Martin Thomson 
wrote:

> Not taking any position on the question, which I think is a fine thing
> to ask, but...
>
> I'd just like to point out that the example is flawed in the sense
> that the system permits both users to share state.  When Alice logs
> out, that needs to include any state that might have been accumulated
> to Alice.  This necessarily includes any sessions, including that TLS
> connection.
>
> If we imagine that this is a browser, then you also need to flush
> caches and remove cookies before the system is usable by another user.
> There might be operating system level things as well.  Machines in
> internet cafes often create temporary accounts, or even rebuild the
> entire machine between users for this reason.
>
> Back to the question...
> One challenge with this is that exporters are often used to compare
> things.  For instance, one side signs an exported value, the other
> validates the signature by independently exporting the same value.
> Getting different values for a particular exporter will cause some
> classes of things to fail in subtle ways.
>

This is unrelated to the issues raised by Douglas, but if the exporter
*key* is intended for use as a unique session identifier (or a sort of
"channel binding") then calling it a "key" is misleading.
For example, while a key of 128 bits is perfectly fine (e.g. for AES-128),
such length is insufficient as a channel binding string (where resistance
to birthday attacks seems necessary). I do not see a note on this in the
TLS document or RFC 5705.
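
As a rough illustration of the birthday concern (a small Python sketch, not
from this thread): an n-bit binding string is expected to collide after
about 2^(n/2) random values, so 128 bits gives only ~64-bit collision
resistance, whereas a 256-bit value keeps ~128 bits.

    import math

    def values_until_collision(bits, probability=0.5):
        # Birthday bound: number of uniformly random n-bit values needed
        # before some pair collides with the given probability.
        return math.sqrt(2 * (2.0 ** bits) * math.log(1 / (1 - probability)))

    for n in (128, 256):
        print("%d-bit value: collision after about 2^%.1f samples"
              % (n, math.log2(values_until_collision(n))))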

Hugo


>
> On 12 July 2016 at 05:39, Douglas Stebila  wrote:
> > Some of the discussions I've had with people about post-handshake client
> authentication have raised the question of whether application traffic
> secrets should be updated automatically upon post-handshake client
> authentication: the thinking being that every change in context should be
> accompanied by a change in keying material.  I used to think that was a
> good idea for TLS 1.3, although it was recently argued to me that if we
> view the application traffic secrets as being "internal" to the TLS
> protocol, then the change in client authentication status doesn't change
> the confidential or integrity properties of the record layer, it just
> serves as a "marker" to the application that certain portions of the
> application data were associated with certain authentication contexts.  I
> was convinced that this can be safely accomplished without a change in
> application traffic secret key material.
> >
> > But I'm not sure that the same applies to *exporter* keys.  Should
> exported keying material change as the authentication context changes?
> >
> > Consider a long-lived TLS connection, where different users come and
> go.  For example, a web browser on a public terminal may have established a
> long-lived TLS connection to a particular website, and send subsequent
> requests to the same website over the same TLS connection.  Now imagine two
> users use the terminal one after another:
> >
> > 1: initial handshake on a public terminal
> > 2: [time passes]
> > 3: Alice starts browsing
> > 4: Alice does post-handshake client authentication
> > 5: Alice purchases something
> > 6: Alice hits "logout" at the application layer
> > 7: [time passes]
> > 8: Bob starts using the terminal
> > 9: Bob does post-handshake client authentication
> > 10: Bob purchases something
> > 11: Bob hits "logout" at the application layer
> >
> > TLS 1.3 will tell the application about events 4 and 9.  Events 6 and 11
> happen at the application layer rather than the TLS layer (since I don't
> think TLS 1.3 has a client-de-authentication option).  But putting this all
> together, the application will learn all the correct authentication
> contexts: 1-3 is anonymous, 4-5 is Alice, 6-8 is anonymous, 9-10 is Bob,
> 11-onwards is anonymous.
> >
> > Now imagine that we use keying material exporters in on lines 5 and 10:
> >
> > 1: initial handshake on a public terminal
> > 2: [time passes]
> > 3: Alice starts browsing
> > 4: Alice does post-handshake client authentication
> > *5: Alice presses the "export keying material" button
> > 6: Alice hits "logout" at the application layer
> > 7: [time passes]
> > 8: Bob starts using the terminal
> > 9: Bob does post-handshake client authentication
> > *10: Bob presses the "export keying material" button
> > 11: Bob hits "logout" at the application layer
> >
> > Since the exporter master secret is not updated when client
> authentication changes, Alice and Bob will export the same keying material
> at steps 5 and 10.  If the intended goal of this exported key is for Alice
> to obtain confidentiality in some other use, this will not be achieved,
> since Bob will obtain the same exported key.
> >
> > Now, a proviso is that RFC 5705 allows for the application to mix a
> "context value" into the export, which could mitigate this, but that is
> optional.
> >
> > So it seems t

Re: [TLS] Why is resumption_context hashed?

2016-07-16 Thread Hugo Krawczyk

Here are some (second) thoughts on the derivation of resumption_context.

The purpose of this value is to bind the resumed session to the data in the
original connection, namely, to "ClientHello...Client Finished" (and, in
particular, to the server's identity).
The right way to do this binding is by defining
resumption_context = CR(ClientHello...Client Finished)
where CR is a collision resistant function.

This CR can be the TLS hash function Hash or it can be implemented by a
series
of HKDF computations as currently done.
Specifically, we now have two steps:

resumption_secret = Derive-Secret(Master Secret, "resumption master secret",
   ClientHello...Client Finished)
resumption_context = HKDF-Expand-Label(resumption_secret,
   "resumption context", "", Hash.Length)

Due to the use of HMAC in HKDF, if the underlying Hash is collision
resistant, then one can argue that the above two-step derivation is also
collision resistant (as long as there is no truncation anywhere in the
derivation process!).
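
For concreteness, a sketch in Python of the two-step derivation above,
assuming the draft-13-style HkdfLabel encoding (the exact label prefix and
field layout may differ between draft versions) and SHA-256 as the handshake
hash; the master secret and transcript bytes are placeholders:

    import hashlib, hmac, struct

    HASH = hashlib.sha256
    HASH_LEN = HASH().digest_size

    def hkdf_expand(prk, info, length):
        # HKDF-Expand as in RFC 5869
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
            out += block
            counter += 1
        return out[:length]

    def hkdf_expand_label(secret, label, hash_value, length):
        # Assumed HkdfLabel layout: uint16 length, opaque label, opaque hash_value
        full_label = b"TLS 1.3, " + label
        info = (struct.pack(">H", length)
                + bytes([len(full_label)]) + full_label
                + bytes([len(hash_value)]) + hash_value)
        return hkdf_expand(secret, info, length)

    def derive_secret(secret, label, messages):
        return hkdf_expand_label(secret, label, HASH(messages).digest(), HASH_LEN)

    master_secret = b"\x00" * HASH_LEN                    # placeholder
    transcript = b"ClientHello...Client Finished"         # placeholder bytes

    resumption_secret = derive_secret(master_secret,
                                      b"resumption master secret", transcript)
    resumption_context = hkdf_expand_label(resumption_secret,
                                           b"resumption context", b"", HASH_LEN)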

Still, it would be nicer if the collision resistance property were more
direct/explicit. The most explicit way would be just to define:
resumption_context = Hash(ClientHello...Client Finished).
However, this deterministic derivation from the original handshake
transcript
has disadvantages, including possible leakage on the handshake transcript
values
and the danger of reusing the same value for other purposes.

The next alternative would be to define resumption_context as a "sibling"
of
resumption_secret, namely,
resumption_context = Derive-Secret(Master Secret, "resumption context",
   ClientHello...Client Finished)
This does not make explicit the use of a collision resistant hash function
in
the derivation but at least shows the direct relationship of
resumption_context
to the handshake transcript. Collision resistance can still be argued on the
basis of HKDF properties.

Finally, we can stick to the current definition which, as said, can be
argued
via collision resistance properties of HKDF, but with one more level of
indirection (and less explicit relation to the handshake transcript).

In all cases, we need some text in the security considerations about the
collision
resistance assumption on HKDF (which is not an integral part of its key
derivation
definition); this applies also to the exporter key as mentioned in a
previous email
and any key/value that can be used as a "binding value".

Hugo

PS: To the question of whether we should apply Hash to resumption_context
when
concatenating it to other hashes, the answer is that this does not make a
difference. It does not add or subtract from the collision resistance
property
of resumption_context.


On Fri, Jul 15, 2016 at 7:59 AM, Eric Rescorla  wrote:

> On Fri, Jul 15, 2016 at 11:39 AM, David Benjamin 
> wrote:
>
>> Every time resumption_context is used, it's fed into the PRF hash.
>> Handshake Context gets hashed since that actually expands to the full
>> concatenation and we want to be able to maintain a rolling hash.
>> But resumption_context is always a short value and is already the size of
>> the PRF hash. (If not resuming, it is the zero key, which is sized
>> appropriately. If resuming, it is the size of the PRF hash of the original
>> connection. But we require that resumptions use the same PRF, so that
>> too will be the right size.)
>>
>> Was there some other reason we needed to hash it, or is a guarantee of
>> constant size sufficient to use it directly? If it still needs to be
>> hashed, it seems we ought to redefine resumption_context to be
>> Hash(HKDF-Expand-Label(...)) instead, mostly as a hint to implementors that
>> one may as well store the final value in the ticket.
>>
>
> I didn't have a good reason. It was just giving me the heebie jeebies
> (technical term) to append something that wasn't hashed to something that
> was.
>
> -Ekr
>
>
>>
>> David
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
>>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] judging consensus on keys used in handshake and data messages

2016-07-18 Thread Hugo Krawczyk
On Thu, Jul 7, 2016 at 6:44 AM, Hugo Krawczyk 
wrote:

> I do not have an objection to option 1 if re-phrased as
> Option 1 - use the same key for protecting both *post*-handshake and
> applications messages..
>
> I believe this is what was intended by that option anyway. Let me clarify.
>
> I understand the question as relating *only* to post-handshake messages
> and not
> to the main handshake (three initial flights). For the latter we have key
> separation in the sense that none of these main-handshake messages is
> encrypted
> under the application key but rather under dedicated handshake keys. This
> should
> not be changed as it provides key indistinguishability to the application
> key, a
> desirable design and analysis (=proof) modularity property.
>
> On the other hand, for post-handshake messages, and particularly for
> encrypting
> post-handshake client authentication messages, preserving key
> indistinguishability is not relevant since at the time of post-handshake
> client authentication, the application key has already lost its
> indistinguishability
> by the mere fact that the key was used to encrypt application data. Key
> indistinguishability is the main reason to insist in key separation and
> this
> principle does not apply here anymore hence removing the objection to 1.
>
> I'd note that the best one could hope for in the post-handshake setting is
> that
> as a result of post-handshake client authentication the application key
> becomes
> a secure mutually-authenticated key for providing "secure channels"
> security.
> As pointed out by others in previous posts I have an analysis showing that
> this
> delayed mutual authentication guarantee is achieved even if one uses the
> application key to encrypt the post-handshake messages. I have circulated a
> preliminary version of the  paper among cryptographers working on TLS 1.3
> and  I will post a public copy next week so this can be scrutinized
> further.
>

Here is the promised paper, now posted:
http://eprint.iacr.org/2016/711
"A Unilateral-to-Mutual Authentication Compiler for Key Exchange (with
Applications to Client Authentication in TLS 1.3)"

Its analysis of client authentication is relevant to the different modes of
TLS 1.3 and, in particular, to post-handshake authentication. As said
above, the results support the use of the application key to encrypt the
post-handshake messages, hence avoiding the need to use a dedicated key for
this and the consequent need for trial decryption (or related techniques).

Comments on and off list are welcome.

Hugo


>
> Hugo
>
>
> On Thu, Jul 7, 2016 at 1:10 AM, Karthikeyan Bhargavan <
> karthik.bharga...@gmail.com> wrote:
>
>> If we are left with 1 or 3, the miTLS team would prefer 1.
>>
>> On the cryptographic side, Hugo has a recent (draft) paper that seems to
>> provide
>> some more justification for (1), at least for client authentication.
>>
>> I know this is a bit off-topic, but the miTLS team would also like to get
>> rid of 0-RTT ClientFinished
>> if that is the only message left in the 0-RTT encrypted handshake flight.
>> That should remove
>> another Handshake/Data key separation from the protocol, leaving only 3
>> keys: 0-RTT data,
>> 1-RTT handshake, and 1-RTT data.
>>
>> Best,
>> -Karthik
>>
>>
>> On 07 Jul 2016, at 02:49, David Benjamin  wrote:
>>
>> On Wed, Jul 6, 2016 at 5:39 PM Eric Rescorla  wrote:
>>
>>> On Wed, Jul 6, 2016 at 5:24 PM, Dave Garrett 
>>> wrote:
>>>
>>>> On Wednesday, July 06, 2016 06:19:29 pm David Benjamin wrote:
>>>> > I'm also curious which post-handshake messages are the problem. If we
>>>> were
>>>> > to rename "post-handshake handshake messages" to "post-handshake bonus
>>>> > messages" with a distinct bonus_message record type, where would there
>>>> > still be an issue? (Alerts and application data share keys and this
>>>> seems
>>>> > to have been fine.)
>>>>
>>>> Recasting all the post-handshake handshake messages as not something
>>>> named "handshake" does make a degree of sense, on its own. (bikeshedding:
>>>> I'd name it something more descriptive like "secondary negotiation"
>>>> messages or something, though.) Even if this doesn't directly help with the
>>>> issue at hand here, does forking these into a new ContentType sound like a
>>>> useful move, in general?
>>>
>>>
>>> I'm not sure what this would accomplish.
>

Re: [TLS] Resumption Contexts and 0-RTT Finished

2016-07-19 Thread Hugo Krawczyk
Without taking a position on the implementation issues, I am in favor of
Option A with a dedicated context value (and an explicit name, "PSK
Context"), as this makes the function of this value clear. Relying on
Finished makes it more fragile and leaves it open to being dropped in the
future when its binding role is "forgotten" or when some other protocol or
variant is derived.

Also, I insist on the need to remark explicitly on the collision
resistance required of any value with a binding functionality. If such a
value is produced with HKDF (or HMAC, as in the case of Finished), the need
for collision resistance is not explicit and can invite truncation (which
is perfectly fine when just deriving keys or when used with a regular PRF
functionality).

Actually, I would suggest that for any such value we add "collision
resistance" to the label for that derivation - this would apply to the
resumption/PSK context and to the Exporter key (and possibly others).

Hugo

On Tue, Jul 19, 2016 at 10:45 AM, Antoine Delignat-Lavaud <
anto...@delignat-lavaud.fr> wrote:

> Dear all,
>
> Here is an extended summary of the early Finished / resumption context
> discussion at the WG.
>
> 1. Signature forwarding with external PSK
>
> Currently, resumption context is only defined for resumption-based PSK,
> which means that external PSKs are not protected against transcript
> synchronization attacks.
>
> At a high level, this means that there is no crypographic binding between
> the handshake log (what is signed for authentication) and the PSK used in
> the key exchange.
> In the resumption case, the resumption context fills this role (which is
> the
> reason why it was introduced); however, all external PSKs currently have
> their resumption context set to 0 (and thus, all logs are equivalent w.r.t.
> all non-resumption PSK).
> Since the handshake log is what get signed during certificate-based
> authentication, the lack of binding
> means that a signature from a session s1 can be used in a session s2 as
> long
> as the PSK identifier are the same, which is a very weak constraint.
>
> Before Cas Cremer's paper, this was particularly bad because the Finished
> (which is bound cryptographically to the PSK) was not part of the handshake
> log; therefore, all signatures could potentially be replayed.
> However, even though Finished are now part of the log, the server signature
> in PSK 1-RTT remains vulnerable (because the log is then ClientHello,
> ServerHello, Certificate and thus has no Finished message to save us from
> transcript synchronization). For those of you who were at TRON, you may
> remember that Karthik made this point quite clearly in his talk (he even
> proposed to swap CV and Finished for this very reason!).
>
> An attack against external PSKs easily follows: we assume that an attacker
> A
> wants to impersonate a server S to some
> IoT-like client C that can only do pure PSK for key exchange.
>
> 1. C registers to A under the PSK identifier Xc. The PSK between C and A is
> Kca
> 2. A, acting as a client, registers to S under the same identifier Xc. The
> PSK between A and S is Kas
> 3. C connects to A using (Xc, Kca). A forwards the ClientHello to S
> 4. S sends back ServerHello, [EE, Cert(S), CertVerify, Finished]_hs where
> hs
> depends on Kas and the log [ClientHello, ServerHello]
> 5. A forwards ServerHello from S to A and re-encrypts [EE, Cert(S),
> CertVerify] under hs' which depends on Kca and [ClientHello, ServerHello]
> 6. Since all logs are synchronized and nothing in the CertVerify depends on
> Kca or Kas, C authenticates A as S.
>
> 2. Proposals to bind log and PSK
>
> As pointed out by Karthik months ago (and by myself years ago, in the
> context of Triple Handshake), implicit authentication (such as PSK or
> resumption) is difficult to mix correctly with transcript-based signatures.
>
> We have considered several ways to ensure uniform security for all PSK
> modes.
> The simplest solution (let's call it option A) is to rely on the draft 13
> resumption context infrastructure also in the case of external PSK.
> Put it simply, externally-provided PSKs are treated as if they were
> resumption master secrets: if K is the external key, we first compute
> Ek = extract(K, 0), then PSK=expand(Ek, "external psk") and RC=expand(Ek,
> "external psk context").
> Resumption context is renamed PSK context and PSK and RC are used as in the
> current draft13 key schedule.
>
> While option A does the job, we think it is over-complicated, as we observe
> that the early finished can actually play exactly the same role as the
> current resumption context (saving the concatenation with rc in every
> expansion with an handshake log).
>
> As an option B, we propose to always include an early finished in the log,
> regardless of the handshake mode.
> This comes with some complications that were discussed during the WG
> meeting. Very notably, if this early finished is assumed to always come
> after the ClientHello, then it must either a. mangled with the Cli

Re: [TLS] Finished stuffing

2016-09-07 Thread Hugo Krawczyk
I don't understand the proposal.
Are you proposing to eliminate resumption_context (RC) from all its current
uses and replace it with the hello_finished extension? Or is this to affect
only certain uses of RC? Which ones?

One important property of RC is that it serves as a binding with the
original context that generated a resumption PSK, in particular a binding
with the server's identity (certificate). This is not achieved by the
hello_finished extension, is it?

I also have a problem with names. "Resumption context" is very explicit
about providing, well, resumption context.
"Hello_Finished", in turn, means nothing.
Also, RC may better match the notion of "binder", hence more naturally
requiring collision resistance, while all Finished uses in TLS (1.3 and
before) have a MAC functionality (for which, say, 128 bits is good enough),
and it would be better not to abuse them for other uses.

Anyways, maybe this is just the result of my misunderstanding of the
proposal.

Hugo


On Wed, Sep 7, 2016 at 11:26 AM, Eric Rescorla  wrote:

>
>
> On Wed, Sep 7, 2016 at 8:25 AM, David Benjamin 
> wrote:
>
>> On Wed, Sep 7, 2016 at 11:11 AM Eric Rescorla  wrote:
>>
>>> On Wed, Sep 7, 2016 at 6:54 AM, Antoine Delignat-Lavaud <
>>> anto...@delignat-lavaud.fr> wrote:
>>>
 Regarding whether the placeholder zeros should be part of the
 transcript for the stuffed finished, an argument against it is that it
 violates the incremental nature of the session hash. If the hash stops
 before the placeholder, it can be resumed with the computed finished;
 otherwise, it must be rolled back.

>>>
>>> This isn't a big deal for me (or I think any other implementor) either
>>> way, because of the actual way we compute the hash.
>>>
>>
>> To expand on that, because the final PRF hash is not known at the time we
>> send ClientHello, most implementations I've seen just buffer the full
>> transcript before this point.
>>
>> But one could also keep a rolling hash of all the supported PRFs (there's
>> all of two of them if you lose TLS 1.1 and below), so I think that's a good
>> argument for using the prefix rather than zeros. For implementations
>> keeping a buffer, I don't think it matters, so let's keep both strategies
>> happy.
>>
>
> This is certainly fine with me.
>
> -Ekr
>
>
>>
>> David
>>
>>
>>
>>> -Ekr
>>>
>>>

 Best,

 Antoine


 Le 2016-09-07 05:49, Joseph Salowey a écrit :

> Hi Folks,
>
> The chairs want to make sure this gets some proper review.   Please
> respond with comments by Friday so we can make some progress on this
> issue.
>
> Thanks,
>
> J&S
>
> On Tue, Sep 6, 2016 at 11:57 AM, David Benjamin
>  wrote:
>
> I think this is a good idea. It's kind of weird, but it avoids
>> giving the early Finished such a strange relationship with the
>> handshake transcript. Also a fan of doing away with multiple PSK
>> identities if we don't need it.
>>
>> As a bonus, this removes the need to route a "phase" parameter into
>> the traffic key calculation since we'll never derive more than one
>> epoch off of the same traffic secret. Combine that with the
>> two-ladder KeyUpdate and we no longer need any concatenation or
>> other label-munging at all. Simply use labels "key" and "iv" and the
>> record-layer just exposes a single UseTrafficSecret function which
>> saves the traffic secret (for KeyUpdate), derives the traffic keys,
>> and engages the new AEAD in one swoop without mucking about with
>> phases, traffic directions, whether we are client or server, etc.
>>
>> David
>>
>> On Thu, Sep 1, 2016 at 6:19 PM Eric Rescorla  wrote:
>>
>> I should also mention that this makes the implementation a fair bit
>> simpler because:
>>
>> 1. You can make all the decisions on the server side immediately
>> upon receiving the ClientHello
>> without waiting for Finished.
>> 2. You don't need to derive early handshake traffic keys.
>>
>> >From an implementor's perspective, this outweighs the messing around
>> with the ClientHello buffer.
>> -Ekr
>>
>> On Thu, Sep 1, 2016 at 3:04 PM, Eric Rescorla  wrote:
>>
>> Folks,
>>
>> I have just posted a WIP PR for what I'm calling "Finished Stuffing"
>>
>> https://github.com/tlswg/tls13-spec/pull/615 [1]
>>
>>
>> I would welcome comments on this direction and whether I am missing
>> anything important.
>>
>> OVERVIEW
>> This PR follows on a bunch of discussions we've had about the
>> redundancy
>> of Finished and resumption_ctx. This PR makes the following changes:
>>
>> - Replace the 0-RTT Finished with an extension you send in the
>> ClientHello *whenever* you do PSK.
>> - Get rid of resumption context (because it is now replaced by
>> the ClientHello.hello_finished.
>>
>> RATIONALE
>>>

Re: [TLS] Finished stuffing

2016-09-07 Thread Hugo Krawczyk
On Wed, Sep 7, 2016 at 7:18 PM, Eric Rescorla  wrote:

>
>
> On Wed, Sep 7, 2016 at 4:02 PM, Hugo Krawczyk 
> wrote:
>
>> I don't  understand the proposal.
>> Are you proposing to eliminate resumption_context (RC) from All its
>> current uses and replace it with the hello_finished extension?
>>
>
> Yes.
>
>
>
>> Or is this to affect only certain uses of RC? Which ones?
>>
> One important property of RC is that it serves as a binding with the
>> original context that generated a resumption PSK, in particular a binding
>> with the server's identity (certificate). This is not achieved by the
>> hello_finished extension, is it?
>>
>
> It is supposed to do so. The reasoning is:
>
> - RMS is unchanged and therefore derived from the server certificate
> - The HelloFinished computation is HKDF(F(RMS), ) and
> therefore also is derived from the server certificate.
>
> Is that insufficient.
>

Maybe it is, but in a very convoluted way. There is no crypto-logic reason
in the world to think of RMS as a collision resistant binding value. These
non-explicit requirements/assumptions are very dangerous. A recipe for
future trouble.


>
> I also have a problem with names. "Resumption context" is very explicit
>> about providing, well, resumption context.
>> "Hello_Finished", in turn, means nothing.
>> Also, RC may better match the notion of "binder" hence more naturally
>> requiring collision resistance, while all Finished uses in TLS (1.3 and
>> before) have a MAC functionality (for which, say, 128 bits are good enough)
>> and it would be better not to abuse them for other uses.
>>
>
> Two points about this:
>
> 1. The Finished in TLS 1.3 is always Hash.length, and our minimum hash is
> SHA-256, so I believe we have enough strength here. We could of course
> require a minimum size.
>

If you keep this, you definitely need to have a minimum size specification
in boldface.


> 2. I wouldn't object to changing names here, of course.
>

I think that's a must. "Finished" says absolutely nothing about the
functionality of this extension (it may actually mislead one into thinking
of it as a MAC of some sort).
Call it something that can be understood as "PSK Creation Binder", and make
sure to specify (and explain in English) that all the values in the key
derivation chain leading to this value are collision resistant mappings of
the original handshake context (including the server's certificate).

BTW, what would this "original handshake context" be in a setting where
everything is derived from a PSK without server certificates?

Hugo

PS: A mostly unrelated comment: Finished in 0-RTT was potentially useful
for enabling client authentication of 0-RTT traffic, but since the latter
is not in (current) scope, I guess this should not be a consideration for
keeping that Finished message (which can be added when deciding to do such
client authentication).



>
> Best,
> -Ekr
>
>
>> Anyways, maybe this is just the result of my misunderstanding of the
>> proposal.
>>
>> Hugo
>>
>>
>> On Wed, Sep 7, 2016 at 11:26 AM, Eric Rescorla  wrote:
>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 8:25 AM, David Benjamin 
>>> wrote:
>>>
>>>> On Wed, Sep 7, 2016 at 11:11 AM Eric Rescorla  wrote:
>>>>
>>>>> On Wed, Sep 7, 2016 at 6:54 AM, Antoine Delignat-Lavaud <
>>>>> anto...@delignat-lavaud.fr> wrote:
>>>>>
>>>>>> Regarding whether the placeholder zeros should be part of the
>>>>>> transcript for the stuffed finished, an argument against it is that it
>>>>>> violates the incremental nature of the session hash. If the hash stops
>>>>>> before the placeholder, it can be resumed with the computed finished;
>>>>>> otherwise, it must be rolled back.
>>>>>>
>>>>>
>>>>> This isn't a big deal for me (or I think any other implementor) either
>>>>> way, because of the actual way we compute the hash.
>>>>>
>>>>
>>>> To expand on that, because the final PRF hash is not known at the time
>>>> we send ClientHello, most implementations I've seen just buffer the full
>>>> transcript before this point.
>>>>
>>>> But one could also keep a rolling hash of all the supported PRFs
>>>> (there's all of two of them if you lose TLS 1.1 and below), so I think
>>>> that's a good argument for using the prefix rather than zeros. For
>>>> implementa

Re: [TLS] Finished stuffing

2016-09-08 Thread Hugo Krawczyk
On Thu, Sep 8, 2016 at 5:29 AM, Ilari Liusvaara 
wrote:

> On Wed, Sep 07, 2016 at 07:43:53PM -0400, Hugo Krawczyk wrote:
> > On Wed, Sep 7, 2016 at 7:18 PM, Eric Rescorla  wrote:
> >
> > >
> > >
> > > On Wed, Sep 7, 2016 at 4:02 PM, Hugo Krawczyk 
> > > wrote:
> > >
> > > I also have a problem with names. "Resumption context" is very explicit
> > >> about providing, well, resumption context.
> > >> "Hello_Finished", in turn, means nothing.
> > >> Also, RC may better match the notion of "binder" hence more naturally
> > >> requiring collision resistance, while all Finished uses in TLS (1.3
> and
> > >> before) have a MAC functionality (for which, say, 128 bits are good
> enough)
> > >> and it would be better not to abuse them for other uses.
> > >>
> > >
> > > Two points about this:
> > >
> > > 1. The Finished in TLS 1.3 is always Hash.length, and our minimum hash
> is
> > > SHA-256, so I believe we have enough strength here. We could of course
> > > require a minimum size.
> > >
> >
> > ​If you keep this, you definitely need to have a minimum size
> specification
> > in boldface.
>
> Well, the PRF hash is already assumed to be CR, and if HKDF is used with
> certain restrictions, it preserves CR:
>
> - The hash has output length at most input length (true for all SHA-2
> variants)
>

Just curious: Can you explain the need for this property? Note that if a
key to HMAC is larger than the (compression) function output size, then
this key is first hashed into a full output, hence preserving CR.
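
As a concrete check of the key pre-hashing (a small Python sketch, not from
the thread; per RFC 2104 the pre-hashing happens for keys longer than the
hash block size, 64 bytes for SHA-256):

    import hashlib, hmac

    msg = b"some message"
    long_key = b"K" * 100                     # longer than the 64-byte block size
    prehashed = hashlib.sha256(long_key).digest()

    # RFC 2104: an over-long key is replaced by its hash before padding,
    # so HMAC(long_key, msg) equals HMAC(Hash(long_key), msg); two over-long
    # keys that act identically in HMAC must therefore collide under the hash.
    assert hmac.new(long_key, msg, hashlib.sha256).digest() == \
           hmac.new(prehashed, msg, hashlib.sha256).digest()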


> - HKDF-extract salt length is constant (in current draft, always hash_olen)
> - HKDF-expand PRK length is constant (in current draft, always hash_olen)
> - The HKDF-expand output output length is at least hash output length
>   (in current draft, hash_olen except in key expansions).
>

These are a lot of restrictions that no one has spelled out as conditions
on the KDF, and they do not follow from the natural properties of KDFs.
Collision resistance is never needed, as far as I can tell, for generating
keys or for computing PRF and/or MAC values (e.g., for the original
functionality of Finished, which is essentially a MAC/PRF on the
transcript). The reason we find ourselves considering the CR properties of
HKDF is that we are using it to derive *strings* that serve as
binders/digests of past transcripts. Luckily, HKDF with the right hash
functions can provide that functionality, but it is not a native KDF
functionality.

I do not mean this as an academic discussion (although we could have that
too) but as a warning about future (if not present) misuse and an obstacle
to replacing HKDF in the future.
I would be much happier if we had a clear distinction between PRF/MAC
computations, KDF computations, and digest computations, even if we
currently use HKDF for all these functions.


> Furthermore Finished construction uses HMAC. There for CR-preserving,
> one needs key to be of constant length (it always is hash_olen).
>
>
> Then there's things that are "nonces":
>
> - Exporter master secret
> - Resumption master secret
> - hello_finished
> - Some outputs of TLS exporter (depending on application).
>
> So I would be more concerned about some future extension changing the
> way things are computed, breaking CR-preserving, or someone adding a
> weak PRF hash. ​
>
>
Agreed.


>
> (Of course, if SHA-2 breaks, we have really messy practical problem
> too...)
>
>
> > > 2. I wouldn't object to changing names here, of course.
> > >
> >
> > ​I think that's a must. "Finished" says absolutely nothing about the
> > functionality of this extension (it may actually mislead to think of it
> as
> > a MAC of some sorts).
> > Call it something that can be understood as  "PSK Creation Binder" and
> make
> > sure to specify (and explain in English) that all the values in the key
> > derivation chain to lead to this value are collision resistant mappings
> of
> > the original handshake context (including the server's certificate).
>
> It is a bit more problematic than that:
>
> The hello_finished / "PSK Creation Binder" derives from the PSK key.
>
> Deriving separate value in context of dynamic PSK provisioning does not
> work properly as "static" PSKs lack this value, and if one then tries to
> use such key with combined authentication, you got an attack[1].
>
>
> [1] Apparently combined authentication is to be in separate spec[2], but
> the main spec needs to be safe with it.
>
> [2] There apparently the server signature_algorithms to "support" that,
> except I can't figure out _any_ use for that except a footgun[3].
>
> [3] If using PSK, it has ill-defined semantics. If not, it pretty much
> only useful for attacking the client.
>

I could not follow this argument.

Hugo


>
> ​
>
>
> -Ilari
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Finished stuffing

2016-09-09 Thread Hugo Krawczyk
On Fri, Sep 9, 2016 at 4:22 AM, Ilari Liusvaara 
wrote:

> On Thu, Sep 08, 2016 at 09:59:22PM -0400, Hugo Krawczyk wrote:
> > On Thu, Sep 8, 2016 at 5:29 AM, Ilari Liusvaara <
> ilariliusva...@welho.com>
> > wrote:
> >
> > > On Wed, Sep 07, 2016 at 07:43:53PM -0400, Hugo Krawczyk wrote:
> > >
> > > - The hash has output length at most input length (true for all SHA-2
> > > variants)
> > >
> >
> > Just curious: Can you explain the need for this property? Note that if a
> > key to HMAC is  ​larger than the (compression) function output size then
> > this key is first hashed into a full output hence preserving CR.
>
> Simply me not bothering to figure out what the heck HMAC does if this
> isn't true (or if it is even well-defined in all cases).
>
> > - HKDF-extract salt length is constant (in current draft, always
> hash_olen)
> > > - HKDF-expand PRK length is constant (in current draft, always
> hash_olen)
> > > - The HKDF-expand output output length is at least hash output length
> > >   (in current draft, hash_olen except in key expansions).
> > >
> >
> > These are a lot of restrictions that no one has spelled out as conditions
> > on the KDF and they do not follow from the natural properties of KDFs.
> > Collision resistance is never needed as far as I can tell for generation
> of
> > keys or to compute PRF and/or MAC values (e.g., for the original
> > functionality of Finished that is essentially a MAC/PRF on the
> transcript).
> > The reason we find ourselves considering the CR properties of HKDF is
> that
> > we are using it to derive *strings* that serve as binders/digests of past
> > transcripts. Luckily, HKDF with the right hash functions can provide that
> > functionality but it is not a native KDF functionality.
>
> So I presume some more text for Security Considerations...
>
> > I do not mean this as an academic discussion (although we could have that
> > too) but as a warning for future (if not present) misuse and an obstacle
> in
> > replacing HKDF in the future.
> > I would be much happier if we had a clear distinction between PRF/MAC
> > ​computations, KDF computations and digest computations., even if we
> > currently used HKDF for all these functions.
>
> Unfortunately there are things like exporter outputs, that need to be
> both "secret" (i.e. "keys") and "nonces" (i.e. "binders"). At least if
> application wants so... Dropping either would cause MAJOR security
> problems.
>
> I think those and the dynamically provisioned PSKs are the only ones
> that have that property of being both key and binder.
>

​I would much prefer to have two elements associated with such keys. One is
the key itself and the other is a binder (or whatever other name one
chooses for it) that consists of a context string or digest associated to
that key. Then, you would use the key to key crypto algorithms and use the
descriptor as a binder to the key's original context, usually as input to a
crypto algorithm (and not as a key). This will make the functionality of
each element (key or binder) more explicit and will make it clear when is
that we need collision resistance and when we don't.

Hugo


> > > It is a bit more problematic than that:
> > >
> > > The hello_finished / "PSK Creation Binder" derives from the PSK key.
> > >
> > > Deriving separate value in context of dynamic PSK provisioning does not
> > > work properly as "static" PSKs lack this value, and if one then tries
> to
> > > use such key with combined authentication, you got an attack[1].
> > >
> >
> > ​I could not follow this argument.
>
> Basically, one can't make a distinction between static ("non-resumption)
> and dynamic ("resumption") PSKs here. Because such distinction would
> run into security problems with some other features.
>
>
>
> -Ilari
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Industry Concerns about TLS 1.3

2016-09-22 Thread Hugo Krawczyk
If the problem is the use of forward secrecy, then there is a simple
solution: don't use it.
That is, you can, as a server, have a fixed key_share for which the secret
exponent becomes the private key, exactly as in the RSA case. It does
require some careful analysis, though.
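
Purely for illustration of what such a fixed key_share would look like
operationally (a sketch using the Python 'cryptography' package; the file
path is hypothetical, and this is not an endorsement nor how any TLS stack
exposes it): the server loads one long-term X25519 scalar and reuses it for
every connection, so whoever escrows that scalar can later recompute each
connection's DH secret, much as with a long-term RSA decryption key.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import x25519

    # One long-term scalar loaded at startup instead of a fresh ephemeral
    # per connection (path is hypothetical).
    with open("/etc/tls/static_share.key", "rb") as f:
        static_private = x25519.X25519PrivateKey.from_private_bytes(f.read())

    # This public value would be sent as the server's key_share on every handshake.
    static_key_share = static_private.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def shared_secret(client_key_share: bytes) -> bytes:
        # Same computation as an ephemeral exchange, but the server scalar
        # never changes, so the secret is recoverable by anyone holding it.
        client_pub = x25519.X25519PublicKey.from_public_bytes(client_key_share)
        return static_private.exchange(client_pub)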

But maybe I misunderstood the problem and maybe I should not be putting
these surveillance-friendly ideas in people's minds...

Hugo




On Thu, Sep 22, 2016 at 1:19 PM, BITS Security <
bitssecur...@fsroundtable.org> wrote:

> To:  IETF TLS 1.3 Working Group Members
>
> My name is Andrew Kennedy and I work at BITS, the technology policy
> division of the Financial Services Roundtable (
> http://www.fsroundtable.org/bits).  My organization represents
> approximately 100 of the top 150 US-based financial services companies
> including banks, insurance, consumer finance, and asset management firms.
>
> I manage the Technology Cybersecurity Program, a CISO-driven forum to
> investigate emerging technologies; integrate capabilities into member
> operations; and advocate member, sector, cross-sector, and private-public
> collaboration.
>
> While I am aware and on the whole supportive of the significant
> contributions to internet security this important working group has made in
> the last few years I recently learned of a proposed change that would
> affect many of my organization's member institutions:  the deprecation of
> RSA key exchange.
>
> Deprecation of the RSA key exchange in TLS 1.3 will cause significant
> problems for financial institutions, almost all of whom are running TLS
> internally and have significant, security-critical investments in
> out-of-band TLS decryption.
>
> Like many enterprises, financial institutions depend upon the ability to
> decrypt TLS traffic to implement data loss protection, intrusion detection
> and prevention, malware detection, packet capture and analysis, and DDoS
> mitigation.  Unlike some other businesses, financial institutions also rely
> upon TLS traffic decryption to implement fraud monitoring and surveillance
> of supervised employees.  The products which support these capabilities
> will need to be replaced or substantially redesigned at significant cost
> and loss of scalability to continue to support the functionality financial
> institutions and their regulators require.
>
> The impact on supervision will be particularly severe.  Financial
> institutions are required by law to store communications of certain
> employees (including broker/dealers) in a form that ensures that they can
> be retrieved and read in case an investigation into improper behavior is
> initiated.  The regulations which require retention of supervised employee
> communications initially focused on physical and electronic mail, but now
> extend to many other forms of communication including instant message,
> social media, and collaboration applications.  All of these communications
> channels are protected using TLS.
>
> The impact on network diagnostics and troubleshooting will also be
> serious.  TLS decryption of network packet traces is required when
> troubleshooting difficult problems in order to follow a transaction through
> multiple layers of infrastructure and isolate the fault domain.   The
> pervasive visibility offered by out-of-band TLS decryption can't be
> replaced by MITM infrastructure or by endpoint diagnostics.  The result of
> losing this TLS visibility will be unacceptable outage times as support
> groups resort to guesswork on difficult problems.
>
> Although TLS 1.3 has been designed to meet the evolving security needs of
> the Internet, it is vital to recognize that TLS is also being run
> extensively inside the firewall by private enterprises, particularly those
> that are heavily regulated.  Furthermore, as more applications move off of
> the desktop and into web browsers and mobile applications, dependence on
> TLS is increasing.
>
> Eventually, either security vulnerabilities in TLS 1.2, deprecation of TLS
> 1.2 by major browser vendors, or changes to regulatory standards will force
> these enterprises - including financial institutions - to upgrade to TLS
> 1.3.  It is vital to financial institutions and to their customers and
> regulators that these institutions be able to maintain both security and
> regulatory compliance during and after the transition from TLS 1.2 to TLS
> 1.3.
>
> At the current time viable TLS 1.3-compliant solutions to problems like
> DLP, NIDS/NIPS, PCAP, DDoS mitigation, malware detection, and monitoring of
> regulated employee communications appear to be immature or nonexistent.
> There are serious cost, scalability, and security concerns with all of the
> currently proposed alternatives to the existing out-of-band TLS decryption
> architecture:
>
> -  End point monitoring: This technique does not replace the pervasive
> network visibility that private enterprises will lose without the RSA key
> exchange.  Ensuring that every endpoint has a monitoring agent 

Re: [TLS] Industry Concerns about TLS 1.3

2016-09-22 Thread Hugo Krawczyk
One of the most interesting chapters in the ultra-interesting history of
public key cryptography is that all of the fathers of public key
cryptography, Diffie, Hellman, Rivest, Shamir and Adleman, missed the
observation that from an (unauthenticated) DH key exchange you can get an
encryption scheme just by fixing one of the exponents. It was Taher
ElGamal, a few years later, who made that observation, and that is why this
encryption is known as ElGamal encryption.
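
A toy sketch of that observation (Python, textbook ElGamal over a
deliberately tiny prime group; insecure parameters, no encoding or padding,
purely to show how fixing one DH exponent yields an encryption scheme):

    import secrets

    p, g = 2087, 5                     # toy group; far too small for real use

    # "Fix one exponent": the recipient's DH value g^s becomes a public key.
    s = secrets.randbelow(p - 2) + 1   # long-term secret exponent
    y = pow(g, s, p)                   # public key (the fixed DH share)

    def encrypt(m):
        x = secrets.randbelow(p - 2) + 1              # sender's ephemeral DH exponent
        return pow(g, x, p), (m * pow(y, x, p)) % p   # (g^x, m * g^(xs))

    def decrypt(c1, c2):
        shared = pow(c1, s, p)                    # g^(xs), the DH secret
        return (c2 * pow(shared, -1, p)) % p      # needs Python 3.8+ for modular inverse

    assert decrypt(*encrypt(1234)) == 1234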

As for the comment below:


On Thu, Sep 22, 2016 at 7:50 PM, Colm MacCárthaigh 
wrote:

>
>
> On Thu, Sep 22, 2016 at 4:41 PM, Hugo Krawczyk 
> wrote:
>
>> If the problem is the use of forward secrecy then there is a simple
>> solution, don't use it.
>> That is, you can, as a server, have a fixed key_share for which the
>> secret exponent becomes the private key exactly as in the RSA case. It does
>> require some careful analysis, though.
>>
>
> I think that this may be possible for TLS1.3 0-RTT data, but not for other
> data where an ephemeral key will be generated based also on a parameter
> that the client chooses.
>

The key_share contributed by the client is indeed ephemeral and it replaces
the random key chosen by the client in the RSA-based scheme.

Hugo​



> --
> Colm
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Finished stuffing/PSK Binders

2016-10-09 Thread Hugo Krawczyk
On Fri, Oct 7, 2016 at 1:08 PM, Eric Rescorla  wrote:

>
>
> On Fri, Oct 7, 2016 at 10:03 AM, Ilari Liusvaara  > wrote:
>
>> On Fri, Oct 07, 2016 at 09:35:40AM -0700, Eric Rescorla wrote:
>> > On Fri, Oct 7, 2016 at 8:26 AM, Ilari Liusvaara <
>> ilariliusva...@welho.com>
>> > wrote:
>> >
>> > > On Fri, Oct 07, 2016 at 08:01:43AM -0700, Eric Rescorla wrote:
>> > > > 4. I've taken a suggestion from David Benjamin to move the
>> negotiation
>> > > > of the PSK key exchange parameters out of the PSK itself and into a
>> > > > separate message. This cleans things up and also lets us drop the
>> > > > currently non-useful auth_mode parameter.
>> > >
>> > > Eeh... From the text, it seems to currently require the kex modes
>> > > extension if PSK extension is present. Which seems worse than useless
>> > > if the meaning is to get rid of the kex mode parameter from PSK
>> > > extension (since you will have the value anyway, but need to dig it
>> > > from another extension... Blech).
>> >
>> > I guess this is a matter of taste, but what convinced me was that:
>> >
>> > 1. It put all the logic on the server side.
>> > 2. It removed the auth mod parameter.
>> >
>> > Maybe david can say more.
>>
>> I mean if server is to accept PSK, it must now go fishing for another
>> extension, check that it is present and pay attention to values there.
>> As opposed to having the data in where it is needed.
>>
>
> This is a reasonable argument (and the reason I stuffed the binder here).
> However, David's argument was that this applied to *all* PSKs even new
> ones.
>
>
> -Ekr
>
> > > Also, didn't notice what prevents pathology like this (I presume this
>> > > is not allowed):
>> > >
>> > > (Assume PSK with 0RTT allowed, using AES-128-GCM-SHA256)
>> > >
>> > > ClientHello[Ciphers=CHACHA20-POLY1305-SHA256, EarlyDataIndication]
>> --->
>> > > [0-RTT data, encrypted using AES-128-GCM-SHA256]
>> > > <-- ServerHello[Cipher=CHACHA20-POLY1305-SHA256]
>> > > <-- EncryptedExtensions[EarlyDataIndication]
>> > >
>> > > Note the record protection algorithm mismatch.
>> > >
>> >
>> > Yes, this is forbidden by the combination of:
>> >
>> > "The parameters for the 0-RTT data (symmetric cipher suite,
>> > ALPN, etc.) are the same as those which were negotiated in the
>> connection
>> > which established the PSK.  The PSK used to encrypt the early data
>> > MUST be the first PSK listed in the client's "pre_shared_key"
>> extension."
>> > (though I think I just recently added cipher suite).
>> >
>> > and:
>> > "Any ticket MUST only be resumed with a cipher suite that is identical
>> > to that negotiated connection where the ticket was established."
>>
>> If 0-RTT is used with manually provisioned PSKs (might not be allowed
>> currently, but might be allowed soon), does that still hold?
>>
>> Also, I think it is problematic that externally provisioned PSKs can
>> be used with any protection with given prf-hash, while NST-provisioned
>> PSKs can only be used with one protection and prf-hash.
>>
>> 0-RTT requirements are separate matter, since those would apply to all.
>>
>> The original purpose of resumption-as-PSK was AFAIK to unify the two
>> mechanisms to simplify things. Therefore those two should be as similar
>> as possible.
>>
>> >
>> > Also, to straightforwardly prove that collision resistance of HKDF and
>> > > HMAC (as used) follows from collision resistance of the underlying
>> hash
>> > > function, yon need to take the output to be at least the hash output
>> > > size. As otherwise it is not guaranteed that any collision in HKDF or
>> > > HMAC can be reduced into collision of the underlying hash.
>> > >
>> >
>> > Right. I have some text here but please feel free to suggest more.
>>
>> Yes, but the text says 256 bit output is enough. One isn't guaranteed
>> to be able to reduce such collision to collision of >256 bit hash.
>>
>> (In fact, if the hash is e.g. 384 bit, 256-bit collisions are extremely
>> unlikely to reduce).
>>
>
> Right. I can update.
>


I think that allowing truncation (e.g. for SHA-512) with at least 256-bit
output should be fine too without forcing implementations to work with,
say, 512-bit keys.
While I agree that we don't have generic reductions from collision
resistance of a hash function to its truncations, such (long enough)
truncations are believed to inherit collision resistance. For example,
SHA-512 is "officially" allowed to be truncated and it is the way SHA-384
is defined. Also, a collision on a 256-bit truncated output would be a
MAJOR weakness for any hash function, in particular "breaking" the
treatment of the function as a random oracle (such weakness must lead to
abandoning that hash function).

What do cryptanalysts think?

Hugo


>
> -Ekr
>
>
>>
>>
>>
>> -Ilari
>>
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/

Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-21 Thread Hugo Krawczyk
If it weren't that we don't need more noise in this discussion, I would
have suggested SSL 5.0, which seems to be the logical conclusion from the
reasoning people are using. Clearly, everyone thinks that the battle of
replacing "SSL" with "TLS" in the popular and technical references to the
standard has been lost and there is not much hope of winning it in the
future. So if the mountain won't come to Muhammad, then go back to SSL and
call it SSL 5.0, leaving SSL 4.0 as a historic parallel/re-naming of TLS
1.0. (Also note that the two 'S's of SSL already hint at the number 5, and
L is 50 in Roman numerals.)

On a more serious note, I would keep a minor version number in whatever is
chosen (e.g. 4.0). The reason is that I can see more resistance in the
future to minor revisions if such a revision needs to be called TLS 5
rather than 4.1. However, minor but crucial revisions may be needed sooner
than one hopes, and delaying them until more changes have accumulated is
not a good thing.

Hugo
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [SUSPECTED URL!]Re: Requiring that (EC)DHE public values be fresh

2017-01-01 Thread Hugo Krawczyk
There is more than one way to "backdoor" the use of DH (i.e., to not
enforce forward secrecy) and some of these ways are completely undetectable
(in particular, they would not repeat DH values). One has to be careful not
to give a false sense of security by the illusion that not detecting DH
repetition guarantees forward secrecy (FS). The value of a MUST NOT that
cannot be enforced is debatable even though it may serve as a strong
message to implementers that FS is an important property to comply with
(which is the domain of SHOULD). Another consideration is that there are
applications where FS is of little value, e.g. anything where the
requirement is authentication and not secrecy or where the secrecy
requirement itself is ephemeral.

Just to make it clear: I have been a supporter and promoter of forward
secrecy since the early days of IPsec and I think it is the most important
new feature of TLS 1.3, so the above should be understood as lukewarm
feelings towards FS. Just that we have to understand that we cannot have an
"anti-backdoor" guarantee and that there may be applications with
legitimate reasons not to use FS.
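
As a rough illustration of the "undetectable" point, here is a hedged
sketch (all names and parameters are hypothetical, not taken from any
implementation): a server can derive each "ephemeral" DH private key
deterministically from a long-term escrow key, so public values never
repeat and no repetition check fires, yet whoever holds the escrow key can
recompute the private keys later and recorded traffic loses forward
secrecy:

    import hmac, hashlib, os

    escrow_key = os.urandom(32)  # long-term secret held by whoever installs the backdoor

    def backdoored_ephemeral_scalar(connection_nonce):
        # Deterministic "ephemeral" private key; this value would be used as the
        # (EC)DHE secret scalar for the connection identified by connection_nonce.
        return hmac.new(escrow_key, b"ecdhe-escrow" + connection_nonce, hashlib.sha256).digest()

    # Different connections yield different scalars (hence different, never-repeating
    # public shares), yet each is recoverable later from escrow_key plus on-the-wire data.
    s1 = backdoored_ephemeral_scalar(b"client_random_1")
    s2 = backdoored_ephemeral_scalar(b"client_random_2")
    assert s1 != s2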

On Sat, Dec 31, 2016 at 1:36 PM, Adam Langley 
wrote:

> (Note: I'm reordering Brian's paragraphs here so that I can address
> all of those on a single topic at a time.)
>
> On Thu, Dec 29, 2016 at 4:29 PM, Brian Smith  wrote:
> > It would make self-hosting a small site that can survive a spike in traffic
> > ("slashdot effect") more difficult, thus encouraging sites to use
> > CDNs, which decrease the integrity, confidentiality, and privacy
> > properties of all connections between the client and the origin
> > server. Unfortunately I don't know how to quantify that.
> >
> > Another unintended consequence is that it makes the PSK modes
> > relatively more attractive. For some very small devices, doing ECDH
> > agreement on every connection while only occasionally doing an
> > ECDH keygen could be at the edge of their performance/power limits,
> > and needing to do the ECDH keygen for every connection could easily
> > make using ECDH prohibitive. OTOH, we don't want these devices to
> > have a fixed ECDH key burned into their ROM, or similar, either.
> >
> > Another unintended consequence is that it makes resumption (formulated
> > as a sort of server-generated PSK in TLS 1.3) more necessary. Although
> > many implementations will probably implement resumption, I think there
> > are a lot of cases where one might prefer to avoid implementing and/or
> > enabling it. Also, even when it is enabled, making ECDH more expensive
> > relative to PSK/resumption would encourage building more complex
> > server software to distribute PSKs across machines. And/or the server
> > may choose to reuse PSKs longer. And/or the server may choose to use
> > the non-ECDHE form of PSK-based resumption (IIUC). Again, though, I
> > can't quantify the effect here.
> >
> > With all this in mind, and absent other information: in the case of
> > servers hosting websites, I do think a limit of less than a minute is
> > reasonable. However, I think for the small device (IoT) case, the
> > limit should be longer, perhaps even hours.
>
> In practice, at the moment, sites are generally using RSA
> certificates, so a small device getting slashdotted would be dominated
> by the RSA signing costs rather than the ECDHE.
>
> But let's posit EdDSA certificates, where ECDHE generation might then
> account for a third of the handshake cost. You make a good point that we
> don't want to push low-end devices into PSK. Also, small devices are
> less likely to be able to afford the tables that make fixed-base
> operations fast.
>
> I don't actually object to allowing servers to cache an ECDHE public
> value for a small amount of time. (QUIC currently does so for 30
> seconds.) But I was worried about the arguments over the duration if I
> specified one :)
>
> Consider the motivations here:
>
> 1) We know that some implementations have gotten this wrong with TLS
> 1.2 and cached values for far too long. Presumably if they were to be
> naively extended to TLS 1.3 this issue would carry over.
>
> 2) We probably disagree with this banking industry desire to be able
> to backdoor their TLS connections, but we're not the police and fixing
> DH values is probably how they would do it. If it's going to be A
> Thing then it's much more likely that things will get misconfigured
> and it'll be enabled in places where it shouldn't be. If we have no
> detection mechanism then what we'll probably end up with is a Blackhat
> talk in a few years time about how x% of banks botched forward
> security at their frontends.
>
> Say that a value of an hour makes sense for some device and we feel
> that an hour's forward-security window is reasonable for security. The
> issue is that it significantly diminishes our detection ability
> because clients need to remember more than an hour's worth of values
> and I don't know if we can dedicate that amount of storage

Re: [TLS] [SUSPECTED URL!]Re: Requiring that (EC)DHE public values be fresh

2017-01-02 Thread Hugo Krawczyk
Typo correction below.

On Jan 1, 2017 12:43 PM, "Hugo Krawczyk"  wrote:

There is more than one way to "backdoor" the use of DH (i.e., to not
enforce forward secrecy) and some of these ways are completely undetectable
(in particular, they would not repeat DH values). One has to be careful not
to give a false sense of security by the illusion that not detecting DH
repetition guarantees forward secrecy (FS). The value of a MUST NOT that
cannot be enforced is debatable even though it may serve as a strong
message to implementers that FS is an important property to comply with
(which is the domain of SHOULD). Another consideration is that there are
applications where FS is of little value, e.g. anything where the
requirement is authentication and not secrecy or where the secrecy
requirement itself is ephemeral.

Just to make it clear: I have been a supporter and promoter of forward
secrecy since the early days of IPsec and I think it is the most important
new feature of TLS 1.3, so the above should be understood as lukewarm
feelings towards FS


I guess it is clear from the context, but the above sentence is missing a NOT:
"The above should NOT be understood as..."

. Just that we have to understand that we cannot have an "anti-backdoor"
guarantee and that there may be applications with legitimate reasons not to
use FS.

On Sat, Dec 31, 2016 at 1:36 PM, Adam Langley 
wrote:

> (Note: I'm reordering Brian's paragraphs here so that I can address
> all of those on a single topic at a time.)
>
> On Thu, Dec 29, 2016 at 4:29 PM, Brian Smith  wrote:
> > It would make self-hosting a small site that can survive a spike in traffic
> > ("slashdot effect") more difficult, thus encouraging sites to use
> > CDNs, which decrease the integrity, confidentiality, and privacy
> > properties of all connections between the client and the origin
> > server. Unfortunately I don't know how to quantify that.
> >
> > Another unintended consequence is that it makes the PSK modes
> > relatively more attractive. For some very small devices, doing ECDH
> > agreement on every connection while only occasionally doing an
> > ECDH keygen could be at the edge of their performance/power limits,
> > and needing to do the ECDH keygen for every connection could easily
> > make using ECDH prohibitive. OTOH, we don't want these devices to
> > have a fixed ECDH key burned into their ROM, or similar, either.
> >
> > Another unintended consequence is that it makes resumption (formulated
> > as a sort of server-generated PSK in TLS 1.3) more necessary. Although
> > many implementations will probably implement resumption, I think there
> > are a lot of cases where one might prefer to avoid implementing and/or
> > enabling it. Also, even when it is enabled, making ECDH more expensive
> > relative to PSK/resumption would encourage building more complex
> > server software to distribute PSKs across machines. And/or the server
> > may choose to reuse PSKs longer. And/or the server may choose to use
> > the non-ECDHE form of PSK-based resumption (IIUC). Again, though, I
> > can't quantify the effect here.
> >
> > With all this in mind, and absent other information: in the case of
> > servers hosting websites, I do think a limit of less than a minute is
> > reasonable. However, I think for the small device (IoT) case, the
> > limit should be longer, perhaps even hours.
>
> In practice, at the moment, sites are generally using RSA
> certificates, so a small device getting slashdotted would be dominated
> by the RSA signing costs rather than the ECDHE.
>
> But let's posit EdDSA certificates, where ECDHE generation might then
> account for a third of the handshake cost. You make a good point that we
> don't want to push low-end devices into PSK. Also, small devices are
> less likely to be able to afford the tables that make fixed-base
> operations fast.
>
> I don't actually object to allowing servers to cache an ECDHE public
> value for a small amount of time. (QUIC currently does so for 30
> seconds.) But I was worried about the arguments over the duration if I
> specified one :)
>
> Consider the motivations here:
>
> 1) We know that some implementations have gotten this wrong with TLS
> 1.2 and cached values for far too long. Presumably if they were to be
> naively extended to TLS 1.3 this issue would carry over.
>
> 2) We probably disagree with this banking industry desire to be able
> to backdoor their TLS connections, but we're not the police and fixing
> DH values is probably how they would do it. If it's going to be A
> Thing then it's much more likely that things will get misconfigured
> and it'll be enabled in places where it shouldn't be.

Re: [TLS] PR#875: Additional Derive-Secret Stage

2017-02-22 Thread Hugo Krawczyk
On Thu, Feb 9, 2017 at 4:15 PM, Eric Rescorla  wrote:

> I've just posted a pull request which slightly adjusts the structure of
> key derivation.
> PR#875 adds another Derive-Secret stage to the left side of the key ladder
> between each pair of HKDF-Extracts. There are two reasons for this:
>
> - Address a potential issue raised by Trevor Perrin where an attacker
>   somehow forces the IKM value to match the label value for Derive-Secret,
>   in which case the output of HKDF-Extract would match the derived secret.
>   This doesn't seem like it should be possible for any of the DH variants
>   we are using, and it's not clear that it would lead to any concrete
>   attack, but in the interest of cleanliness, it seemed good to address.
>
> - Restore Extract/Expand parity which gives us some flexibility in
>   case we want to replace HKDF.
>

I want to stress, also as advice for future uses of HKDF, that the
recommended practice is to always follow HKDF-Extract with HKDF-Expand.
That's how HKDF is defined, and departing from it should be done with
utmost care. The issue raised by Trevor is an example of such subtleties.
In particular, note that HKDF-Extract does not carry an "info" input while
HKDF-Expand does, and that field is almost always essential for key
separation and for tying derived keys to some particular context.
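
A minimal sketch of that discipline (plain RFC 5869 calls with made-up
labels and contexts, not the draft's exact HKDF-Expand-Label encoding): the
Extract output is treated purely as a PRK, and every key handed to the rest
of the protocol comes out of an Expand whose info separates it from its
siblings:

    import hmac, hashlib

    def extract(salt, ikm):
        # HKDF-Extract: PRK = HMAC-Hash(salt, IKM)
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def expand_one_block(prk, info):
        # Single-block HKDF-Expand (sufficient for outputs up to one hash length):
        # OKM = HMAC(PRK, info || 0x01)
        return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

    prk = extract(b"\x00" * 32, b"shared secret from the key exchange")
    handshake_key = expand_one_block(prk, b"hypothetical handshake label" + b"<context>")
    traffic_key   = expand_one_block(prk, b"hypothetical traffic label" + b"<context>")
    assert handshake_key != traffic_key  # distinct info -> independent-looking keys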

Hugo


> I don't expect this change to be controversial and I'll merge it on Monday
> unless I hear objections.
>
> Thanks,
> -Ekr
>
>
>
>
>
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adding an additional step to exporters

2017-02-24 Thread Hugo Krawczyk
Martin,

Which of these two derivation schemes are you proposing?
Are you assuming that all uses of the exporter_secret are known at the end
of
the handshake? If not, you still need to keep an exporter_secret beyond the
handshake.

Master Secret
  |
  +-> Derive-Secret(., "exporter master secret 1",
  |                 ClientHello...Server Finished)
  |                 = exporter_secret_1
  |
  +-> Derive-Secret(., "exporter master secret 2",
                    ClientHello...Server Finished)
                    = exporter_secret_2

Or:

Master Secret
  |
  +-> Derive-Secret(., "exporter master secret",
                    ClientHello...Server Finished)
                    = exporter_secret
                      |
                      +-> Derive-Secret(., "exporter secret 1",
                      |                 what_exactly)
                      |                 = exporter_secret_1
                      |
                      +-> Derive-Secret(., "exporter secret 2",
                                        what_exactly)
                                        = exporter_secret_2


(I wrote "what exactly" since I am not sure what do you plan to include
there.)

Regarding Ilari's comment on HKDF pairings, I have said that an Extract
operation always needs to be followed by an Expand operation, but an Expand
operation may be followed by another Expand operation. The point is that
the output of Expand is (typically) intended as a key to another
cryptographic function. Such a function can be HKDF-Expand itself (which is
just a specific implementation of a variable-length input/output PRF).

Thus, both of the above possible derivations are OK from the point of view
of HKDF.
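
To make the second ladder concrete, here is a hedged sketch (single-block
Expand, placeholder labels, and a stand-in for the what_exactly context;
not spec text) of Expand feeding Expand: exporter_secret is the output of
one Expand and then serves as the PRF key for the per-use exporter values:

    import hmac, hashlib

    def expand_one_block(prk, info):
        # Single-block HKDF-Expand: OKM = HMAC(PRK, info || 0x01)
        return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

    master_secret   = b"\x11" * 32  # stand-in value for the handshake's master secret
    exporter_secret = expand_one_block(master_secret,
                                       b"exporter master secret" + b"<transcript hash>")

    # Second-level Expands: one per exporter use, separated by label + context.
    exporter_1 = expand_one_block(exporter_secret, b"exporter secret 1" + b"<what_exactly>")
    exporter_2 = expand_one_block(exporter_secret, b"exporter secret 2" + b"<what_exactly>")
    assert exporter_1 != exporter_2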

Hugo


On Fri, Feb 24, 2017 at 12:40 AM, Martin Thomson 
wrote:

> On 24 February 2017 at 16:01, Sean Turner  wrote:
> > So this isn’t entirely novel right I mean we did something similar wrt
> other key schedules?
>
> I certainly hope it isn't novel.  I'm just applying the same
> technique: keep independent keys independent.
>
> On 24 February 2017 at 16:09, Felix Günther 
> wrote:
> > just to clarify: you add an additional HKDF.Expand step, not
> > HKDF.Extract, right?
>
> Yes, you are right, I should have said expand.  You need to use expand
> to get the label-based separation on type.
>
> I don't know how I got confused about that.  If we need to maintain
> extract and expand in pairs (as we have already been burned by), then
> I will defer to cryptographers on that.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls