[TLS] DSA support in TLS 1.3.

2015-08-28 Thread Dang, Quynh
Hi all,


DSA is supported in previous versions of TLS. It would be nice if someone who uses
DSA could use it in TLS 1.3 as well.


People who don't use DSA simply won't use it. For people who use DSA correctly, it
should be fine for them to keep using it.


I don't see a convincing reason to remove support of DSA in TLS 1.3.


Quynh.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] DSA support in TLS 1.3.

2015-08-31 Thread Dang, Quynh
Hi all,


I thank everyone who took time to think about the issue.


My message below asked for a discussion of "allowed"/optional support for DSA with
key sizes of 2048 bits or larger, so support for it would not be required.


There are a number of validated DSA implementations out there with 2048-bit keys
(http://csrc.nist.gov/groups/STM/cavp/documents/dss/dsanewval.htm) (of course, I don't
know the number of implementations without validations).  DSA with 2048-bit or larger
keys was added to FIPS 186 in June 2009 (FIPS 186-3).  TLS is used in more places than
just public servers and common browsers. For the people who use DSA with TLS, it would
be nice if they could run TLS 1.3 with DSA if they choose to do so.


Quynh.



From: TLS  on behalf of Dang, Quynh 
Sent: Friday, August 28, 2015 3:17 PM
To: e...@rtfm.com; tls@ietf.org
Subject: [TLS] DSA support in TLS 1.3.


Hi all,


DSA is supported in previous versions of TLS. It would be nice if someone who uses
DSA could use it in TLS 1.3 as well.


People who don't use DSA simply won't use it. For people who use DSA correctly, it
should be fine for them to keep using it.


I don't see a convincing reason to remove support of DSA in TLS 1.3.


Quynh.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] DSA support in TLS 1.3.

2015-09-04 Thread Dang, Quynh
Hi Ilari,

From all of the RFCs about Suite B that I have read, DSA has never been a part of it.

RSA can be used for signatures and key wrap/transport.

Quynh. 


From: TLS  on behalf of Ilari Liusvaara 

Sent: Wednesday, September 2, 2015 1:49 PM
To: Salz, Rich
Cc: tls@ietf.org
Subject: Re: [TLS] DSA support in TLS 1.3.

On Tue, Sep 01, 2015 at 05:58:33PM +, Salz, Rich wrote:
> There is a third option:  you don't get to use TLS 1.3 until the
> government requirements are updated.
>
> I'm fine with that.

I think they already have, with NSA seemingly saying RSA3k is OK for
up to TOP SECRET (unless I misunderstood).

The same table from NSA that mentions RSA (and the 3k limit) does
not mention DSA (the only other signature algo is ECDSA with
384 limit).


So maybe even US govt. is not using DSA?


-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] '15 TLS Fall Interim Minutes

2015-09-23 Thread Dang, Quynh
I am just curious: why do we need the content type here?

Quynh. 


From: TLS  on behalf of Dave Garrett 

Sent: Tuesday, September 22, 2015 7:45 PM
To: Sean Turner
Cc: tls@ietf.org
Subject: Re: [TLS] '15 TLS Fall Interim Minutes

On Tuesday, September 22, 2015 07:27:35 pm Sean Turner wrote:
> I’ve gone ahead and posted the minutes/list of decisions to:
>
> https://www.ietf.org/proceedings/interim/2015/09/21/tls/minutes/minutes-interim-2015-tls-3

That has this:

> For padding, we reached a very rough consensus to start with the content type 
> followed by all zeros (insert reasons why) over the explicit length option 
> (insert reasons why).  DKG to propose a PR that we'll then fight out on the 
> list.  See PR #253.

The "reasons why" that were discussed were not inserted. ;)


Dave

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Encrypted SNI (was: Privacy considerations - identity hiding from eavesdropping in (D)TLS)

2015-09-25 Thread Dang, Quynh
How about defining a fixed length for each message type, and then padding each message
with 0x01 followed by optional 0x00s? A rough sketch is below.
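
For illustration, a tiny Python sketch of one way this padding could look (the
64-byte fixed length is just an arbitrary example, not something proposed here):

def pad_to(msg: bytes, fixed_len: int) -> bytes:
    # Pad up to the fixed per-type length: a 0x01 marker, then zero bytes.
    if len(msg) >= fixed_len:
        raise ValueError("message too long for the fixed length")
    return msg + b"\x01" + b"\x00" * (fixed_len - len(msg) - 1)

def unpad(padded: bytes) -> bytes:
    # The marker is the last 0x01 byte; everything after it is zero padding.
    return padded[: padded.rindex(b"\x01")]

assert unpad(pad_to(b"client_hello...", 64)) == b"client_hello..."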

Quynh. 


From: TLS  on behalf of Dave Garrett 

Sent: Friday, September 25, 2015 2:11 PM
To: tls@ietf.org; m...@sap.com
Subject: Re: [TLS] Encrypted SNI (was: Privacy considerations - identity hiding 
from eavesdropping in (D)TLS)

On Friday, September 25, 2015 01:10:37 pm Martin Rex wrote:
> Because it is not necessarily immediately obvious, you will need
> padding also for the Server Certificate handshake messages.
> And, because the key exchange is side-effected by properties of
> the Server Certificate, you may additionally need padding for the
> ServerKeyExchange and ClientKeyExchange handshake messages, so
> that the protocol doesn't leak whether one of the services uses
> an RSA certificate and the other uses an ECDSA (or EdDSA) certificate.

This sounds like a good argument to come up with a default padding scheme for all
handshake messages, even for clients that don't use application data padding.


Dave

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Collision issue in ciphertexts.

2015-11-01 Thread Dang, Quynh
Hi Eric,


Regarding your question about how many ciphertext blocks should be safe under a
single key, I think it is safe to have 2^96 blocks under a given key if the IV
(counter) is 96 bits.


When there is a collision between two ciphertext blocks produced with two different
counter values, the chance that the same plaintext block was used twice is 1/2^128.
Collisions start to happen frequently when the number of ciphertext blocks rises above
2^64. However, each collision just reveals that the corresponding plaintext blocks are
probably different.
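
For illustration, a rough Python sketch of the birthday effect described above (my
own back-of-the-envelope numbers, not from any specification): the probability of at
least one collision among q random 128-bit blocks is roughly 1 - exp(-q(q-1)/2^129).

import math

def collision_prob(q_log2: float, block_bits: int = 128) -> float:
    # Birthday approximation: P(collision) ~ 1 - exp(-q*(q-1)/2^(block_bits+1))
    q = 2.0 ** q_log2
    return -math.expm1(-q * (q - 1) / 2.0 ** (block_bits + 1))

for q_log2 in (32, 48, 64, 66):
    print(f"2^{q_log2} blocks -> collision probability ~ {collision_prob(q_log2):.3g}")
# negligible at 2^32 and 2^48 blocks, about 0.39 at 2^64, near 1 above that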



Quynh.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Collision issue in ciphertexts.

2015-11-02 Thread Dang, Quynh
Now, you talked about a MAC function (with AES). I previously talked about 
encryption.


If I am the only person using the MAC key and I generate more than 2^64 MAC values
(let's say each MAC value is 96 bits), I will have many colliding MAC pairs. But I am
the only one (besides the person(s) verifying my MACs) who knows the MAC key needed to
generate those verified MAC values.


If the MAC length is k bits and an attacker is allowed to send 2^n failed
verifications, his or her chance of success is approximately 2^n / 2^k. If n is 64 and
k is 96, the success chance is 1/2^32, which is practically ZERO!


If I were an attacker, I would choose a message that I want to be verified and keep
changing the MAC key to generate different MAC values with different keys, hoping one
of them gets verified.  Let's assume the MAC key is effectively 96 bits (96 random
bits; the other 32 bits are known). In theory, when I get close to 2^96 attempts, I
would expect some chance of success. To deal with this attacker, one would change the
MAC key when the number of failed attempts gets close to a number that you don't want.
For example, if you don't want the success chance of an attack to be above 1/2^32,
then you need to change your MAC key when the number of failed verifications reaches
2^64, given a 96-bit MAC length.
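
As a quick numeric illustration of the argument above (my own arithmetic, in Python):

def forgery_success_log2(attempts_log2: int, tag_bits: int) -> int:
    # Success chance after 2^attempts_log2 tries with a tag_bits-bit MAC
    # is roughly 2^attempts_log2 / 2^tag_bits.
    return attempts_log2 - tag_bits

print(forgery_success_log2(64, 96))   # -32, i.e. a success chance of about 1/2^32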


After you change the MAC key, I ( the attacker) will have to start everything 
again because all of the failed MACs I generated before are useless now.


From: Watson Ladd 
Sent: Monday, November 2, 2015 5:07 AM
To: Dang, Quynh
Cc: tls@ietf.org; c...@ietf.org; Eric Rescorla
Subject: Re: [Cfrg] Collision issue in ciphertexts.


On Nov 2, 2015 2:14 AM, "Dang, Quynh" <quynh.d...@nist.gov> wrote:
>
> Hi Eric,
>
>
> As you asked the question about how many ciphertext blocks should be safe 
> under a single key, I think it is safe to have 2^96 blocks under a given key 
> if the IV (counter) is 96 bits.

This is wrong for PRP, right for PRF. It's not that hard to find the right 
result.

>
>
> When there is a collision between two ciphertext blocks when two different 
> counter values are used , the chance of the same plaintext was used twice is 
> 1/2^128.  Collisions start to happen a lot when the number of ciphertext blocks 
> are above 2^64. However, each collision just reveals that the corresponding 
> plaintext blocks are probably different ones.

Which breaks IND-$. Let's not be clever, but stick to ensuring proven 
definitions are true.

>
>
>
> Quynh.
>
>
> ___
> Cfrg mailing list
> c...@irtf.org
> https://www.irtf.org/mailman/listinfo/cfrg
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Data limit for GCM under a given key.

2015-11-04 Thread Dang, Quynh
Hi Eric and all,


The limit of 2^48 packets under a given key for GCM you mentioned today is the 
limit for SRTP 
(https://tools.ietf.org/html/draft-ietf-avtcore-srtp-aes-gcm-17#section-6). The 
nonce space of the IV construction is only 48 bits and that is why it has the 
limit of 2^48. The limit here should be 2^48 blocks, not records as stated in 
the document.


As I explained before, GCM uses counter mode for encryption. For a given key, if the
nonce never repeats globally, then the confidentiality of the encrypted data is
preserved. When the nonce space has 2^n values, then 2^n message blocks can have
secure confidentiality protection.


Regarding authentication, as I explained before, if the tag size is n, then you have
a collision issue among the tags when the number of tags approaches 2^(n/2), which is
not a good thing; but strictly speaking, this does not break your authentication.


However, rekeying often is a good thing, which could help keep a problem from turning
into a disaster if there is something wrong with the IV or the key.


Quynh.




From: Dang, Quynh
Sent: Monday, November 2, 2015 3:00 PM
To: Watson Ladd
Cc: tls@ietf.org; c...@ietf.org; Eric Rescorla
Subject: Re: [Cfrg] Collision issue in ciphertexts.


Now, you talked about a MAC function (with AES). I previously talked about 
encryption.


If I , the only person, uses the MAC key, when I generate more than 2^64 MAC 
values (Let's say each MAC value is 96 bits), I have many collided MAC pairs. 
But, I am the only one (beside the person(s) verifying my MACs) who knows the 
MAC key in order to generate those  verified MAC values.


If the MAC length is k bits, an attacker is allowed to send 2^n failed 
verifications, his or her chance of success is approximately 2^n / 2^k. Let's 
imagine n is 64 and k is 96, the success chance is 1/2^32 which is practically 
ZERO!


If I am an attacker, I would choose a message that I want to be verified, and I 
keep changing the MAC key to generate different MAC values with different keys 
and hope one of them will get verified.  Let's assume the MAC key to be 96 bits 
( 96 bits of random bits, the other 32 bits are known). In theory, when I get 
close to 2^96 attempts, I would expect some chance of success. To deal with 
this attacker, one would change the MAC key when the number of failed attempts 
gets close to a number that you don't want. For example, if you don't want a 
success chance of an attack to be above 1 / 2^32, then you need to change your 
MAC key when the number of failed verifications reaches 2^64 when your MAC 
length is 96 bits.


After you change the MAC key, I ( the attacker) will have to start everything 
again because all of the failed MACs I generated before are useless now.


From: Watson Ladd 
Sent: Monday, November 2, 2015 5:07 AM
To: Dang, Quynh
Cc: tls@ietf.org; c...@ietf.org; Eric Rescorla
Subject: Re: [Cfrg] Collision issue in ciphertexts.


On Nov 2, 2015 2:14 AM, "Dang, Quynh" <quynh.d...@nist.gov> wrote:
>
> Hi Eric,
>
>
> As you asked the question about how many ciphertext blocks should be safe 
> under a single key, I think it is safe to have 2^96 blocks under a given key 
> if the IV (counter) is 96 bits.

This is wrong for PRP, right for PRF. It's not that hard to find the right 
result.

>
>
> When there is a collision between two ciphertext blocks when two different 
> counter values are used , the chance of the same plaintext was used twice is 
> 1/2^128.  Collisions start to happen a lot when the number of ciphertext 
> are above 2^64. However, each collision just reveals that the corresponding 
> plaintext blocks are probably different ones.

Which breaks IND-$. Let's not be clever, but stick to ensuring proven 
definitions are true.

>
>
>
> Quynh.
>
>
> ___
> Cfrg mailing list
> c...@irtf.org
> https://www.irtf.org/mailman/listinfo/cfrg
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Data limit for GCM under a given key.

2015-11-04 Thread Dang, Quynh
I was not talking under the indistinguishability framework. My discussion was about
confidentiality protection and authentication.

Quynh.

From: Watson Ladd 
Sent: Wednesday, November 4, 2015 3:17:00 PM
To: Dang, Quynh
Cc: Eric Rescorla; tls@ietf.org
Subject: Re: [TLS] Data limit for GCM under a given key.

On Wed, Nov 4, 2015 at 2:29 PM, Dang, Quynh  wrote:
> Hi Eric and all,
>
>
> The limit of 2^48 packets under a given key for GCM you mentioned today is
> the limit for SRTP
> (https://tools.ietf.org/html/draft-ietf-avtcore-srtp-aes-gcm-17#section-6).
> The nonce space of the IV construction is only 48 bits and that is why it
> has the limit of 2^48. The limit here should be 2^48 blocks, not records as
> stated in the document.
>
>
> As I explained before, GCM is counter mode for encryption. For a given key,
> the nonce never repeats globally, then confidentiality of the encrypted data
> is preserved. When the nonce space is 2^n values, then 2^n message blocks
> can have secure confidentiality protection.

This is completely untrue. If you actually understood the definitions,
and thought about the matter for 15 minutes, you would realize that
permutations are distinguishable from functions after 2^(n/2) queries
with high probabilities, and this breaks IND-$. This is an elementary
result found on page 134 of Boneh-Shoup.

>
>
> Regarding to authentication, as I explained before, if the tag size is n,
> then you have collision issue among the tags when the number of tags goes
> around 2^(n/2) which is not a good thing, but strictly speaking, this does
> not break your authentication.

Carter-Wegman security results are weaker than for PRF-based MACs.
>
>
> However, rekeying often is a good thing which could help prevent disaster to
> keep go on if there is something wrong with the IV or the key.
>
>
> Quynh.
>
>
>
>
> 
> From: Dang, Quynh
> Sent: Monday, November 2, 2015 3:00 PM
> To: Watson Ladd
> Cc: tls@ietf.org; c...@ietf.org; Eric Rescorla
> Subject: Re: [Cfrg] Collision issue in ciphertexts.
>
>
> Now, you talked about a MAC function (with AES). I previously talked about
> encryption.
>
>
> If I , the only person, uses the MAC key, when I generate more than 2^64 MAC
> values (Let's say each MAC value is 96 bits), I have many collided MAC
> pairs. But, I am the only one (beside the person(s) verifying my MACs) who
> knows the MAC key in order to generate those  verified MAC values.
>
>
> If the MAC length is k bits, an attacker is allowed to send 2^n failed
> verifications, his or her chance of success is approximately 2^n / 2^k.
> Let's imagine n is 64 and k is 96, the success chance is 1/2^32 which is
> practically ZERO!
>
>
> If I am an attacker, I would choose a message that I want to be verified,
> and I keep changing the MAC key to generate different MAC values with
> different keys and hope one of them will get verified.  Let's assume the MAC
> key to be 96 bits ( 96 bits of random bits, the other 32 bits are known). In
> theory, when I get close to 2^96 attempts, I would expect some chance of
> success. To deal with this attacker, one would change the MAC key when the
> number of failed attempts gets close to a number that you don't want. For
> example, if you don't want a success chance of an attack to be above 1 /
> 2^32, then you need to change your MAC key when the number of failed
> verifications reaches 2^64 when your MAC length is 96 bits.
>
>
> After you change the MAC key, I ( the attacker) will have to start
> everything again because all of the failed MACs I generated before are
> useless now.
>
>
> ________
> From: Watson Ladd 
> Sent: Monday, November 2, 2015 5:07 AM
> To: Dang, Quynh
> Cc: tls@ietf.org; c...@ietf.org; Eric Rescorla
> Subject: Re: [Cfrg] Collision issue in ciphertexts.
>
>
>
> On Nov 2, 2015 2:14 AM, "Dang, Quynh"  wrote:
>>
>> Hi Eric,
>>
>>
>> As you asked the question about how many ciphertext blocks should be safe
>> under a single key, I think it is safe to have 2^96 blocks under a given key
>> if the IV (counter) is 96 bits.
>
> This is wrong for PRP, right for PRF. It's not that hard to find the right
> result.
>
>>
>>
>> When there is a collision between two ciphertext blocks when two different
>> counter values are used , the chance of the same plaintext was used twice is
>> 1/2^128.  Collisions start to happen a lot when the number of ciphertext
>> blocks are above 2^64. However, each collision just reveals that the
>> corresponding plaintext blocks are probably different ones.
>
>

Re: [TLS] Data limit for GCM under a given key.

2015-11-06 Thread Dang, Quynh
Tony,


You are correct. An indistinguishability bound promises that no attack will do better
than the bound, assuming the claimed property(ies) of the underlying function in the
construction (mode) hold(s).


A distinguishing attack below the bound tells you that the construction or the
underlying function is not as strong or ideal as you would like, but it does not
directly (100%) lead to a break of plaintext confidentiality or authenticity.  Here,
confidentiality protection of the plaintext(s) means that an attacker who does not
know the key cannot find out any part of the plaintext(s) by decryption. I explained
this point in the previous emails.


Under the indistinguishability framework, one should not even go to 2^32 blocks with
GCM when the IV space is 2^64, because there is a high probability of a ciphertext
collision among 2^32 ciphertext blocks.
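
A quick numeric check of that statement (my own arithmetic): with an effective space
of 2^64 values, the birthday approximation already gives a large collision
probability at 2^32 blocks.

import math
q = 2.0 ** 32
print(1 - math.exp(-q * (q - 1) / 2.0 ** 65))   # ~0.39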


Quynh.





From: Tony Arcieri 
Sent: Friday, November 6, 2015 7:59 PM
To: Watson Ladd
Cc: Dang, Quynh; tls@ietf.org
Subject: Re: [TLS] Data limit for GCM under a given key.

On Friday, November 6, 2015, Watson Ladd <watsonbl...@gmail.com> wrote:
On Wed, Nov 4, 2015 at 3:43 PM, Dang, Quynh  wrote:
> I did not talk  under indistinguishability framework. My discussion was about 
> confidentiality protection and authentication.

What is the definition of "confidentiality protection" being used here?

I too am confused by Quynh's statement. Indistinguishability is the modern bar 
for confidentiality and authentication.

Quynh, are you talking about anything less than IND-CCA2? If you are, that is 
less than the modern bar I would personally consider acceptable.


--
Tony Arcieri

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Data volume limits

2015-12-16 Thread Dang, Quynh
Hi Eric,


I explained the issue before, and some other people recently explained it again on
this thread. AES has a 128-bit block. Therefore, when there are 2^64 or more
ciphertext blocks, collisions among the ciphertext blocks become likely (the collision
probability increases rapidly once the number of ciphertext blocks exceeds 2^64, or
2^(n/2) in generic terms).


However, the only information the attacker can gain from ANY pair of collided
ciphertext blocks is that their corresponding plaintext blocks are probably different,
because the chance of them being the same is 1/2^128 (1/2^n in generic terms), which
is NOT better than a random guess. So, you don't actually lose anything.


As a pseudorandom function, AES completely fails under any mode when the number of
ciphertext blocks gets above 2^64.  When the counter is effectively only 64 bits
(instead of 96 bits as in TLS 1.3), the data complexity should be below 2^32 blocks,
because the same input block and the same key can be repeated 2^32 times to find a
collision in the ciphertext blocks.  If you want a negligible collision probability,
the number of data blocks should be well below 2^32 in this situation.


However, the confidentiality of the plaintext blocks is not lost at all as long 
as the counter number does not repeat.


Quynh.





From: TLS  on behalf of Eric Rescorla 
Sent: Wednesday, December 16, 2015 6:17 AM
To: Simon Josefsson
Cc: tls@ietf.org
Subject: Re: [TLS] Data volume limits



On Wed, Dec 16, 2015 at 12:44 AM, Simon Josefsson <si...@josefsson.org> wrote:
Eric Rescorla <e...@rtfm.com> writes:

> Watson kindly prepared some text that described the limits on what's safe
> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
> limit (2^{36} bytes), even though ChaCha doesn't have the same
> restriction.

Can we see a brief writeup explaining the 2^36 number?

I believe Watson provided one a while back at:
https://www.ietf.org/mail-archive/web/tls/current/msg18240.html

I don't like re-keying.  It is usually a sign that your primitives are
too weak and you are attempting to hide that fact.  To me, it is similar
to discard the first X byte of RC4 output.

To be clear: I would prefer not to rekey either, but the consensus at IETF 
Yokohama
was that we were close enough to the limit that we probably had to. Would be
happy to learn that we didn't.

-Ekr



If AES-GCM cannot provide confidentiality beyond 64GB (which would
surprise me somewhat), I believe we ought to be careful about
recommending it.

Of course, the devil is in the details: if the risk is that the secret
key is leaked, that's fatal; if the risk is that the attacker can tell
whether two particular plaintext 128 byte blocks are the same or not in
the entire file, that can be a risk we can live with (similar to the
discard X bytes of RC4 fix).

I believe 64GB is within the range that people download in a web browser
these days.  More data intensive longer-running protocols often transfer
significantly more.

/Simon

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Data volume limits

2015-12-18 Thread Dang, Quynh
The collision probability of ciphertext blocks also depends on the size of the
plaintext (the record size in a TLS implementation) in each call of the GCM encryption
function.  Let's say each plaintext is 2^x 128-bit blocks.

TLS 1.3 uses 96-bit IV. 

If someone wants the collision probability to stay below 1/2^y, such as 1/2^24 or
1/2^32 (2^24 = 16,777,216 and 2^32 = 4,294,967,296), the total number of plaintext
blocks under a given key must be 2^((96 + x - y)/2) or lower.

So, 2^((96 + x - y)/2) 128-bit blocks is the limit to achieve IND-* with GCM.

If someone does not need the IND-* property, the above restriction is not needed.
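
For illustration, evaluating the stated limit 2^((96 + x - y)/2) for a couple of
example parameter choices (my own examples, in Python):

def block_limit_log2(x: int, y: int) -> float:
    # x = log2 of 128-bit blocks per record, y = -log2 of the acceptable collision probability
    return (96 + x - y) / 2

for x, y in [(10, 32), (10, 24)]:      # 2^14-byte records are about 2^10 blocks each
    blocks = block_limit_log2(x, y)
    print(f"x={x}, y={y}: 2^{blocks:.0f} blocks, or about 2^{blocks - x:.0f} records")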

Quynh. 



From: TLS  on behalf of Yoav Nir 
Sent: Thursday, December 17, 2015 6:07 AM
To: Nikos Mavrogiannopoulos
Cc: tls@ietf.org; Simon Josefsson
Subject: Re: [TLS] Data volume limits

> On 17 Dec 2015, at 10:19 AM, Nikos Mavrogiannopoulos  wrote:
>
> On Wed, 2015-12-16 at 09:57 -1000, Brian Smith wrote:
>
>> Therefore, I think we shouldn't add the rekeying mechanism as it is
>> unnecessary and it adds too much complexity.
>
> Any arbitrary limit for a TLS connection is almost guaranteed to cause
> problems in the future. We cannot predict whether 2^x should be
> sufficient for everyone, and I'm pretty sure this will prove to be a
> terrible mistake. TLS is already being used for VPNs and transferring
> larger amounts of data in long lived connections is a reality even
> today. The rekey today happens using the reauthentication mechanism,
> which has very complex semantics. Converting these to a simpler and
> predictable rekey mechanism would be an improvement.

Agreed. The alternative to having a rekey mechanism is to push the complexity 
to the application protocol, requiring it to be able to use more than one 
connection to transfer all the data, which may require some sort of session 
layer to maintain state between connections.

So unless we can guarantee or require that every algorithm we are going to use 
is good for some ridiculous amount of data (2^64 bytes may be enough), we need 
rekeying.

Yoav

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Data volume limits

2015-12-29 Thread Dang, Quynh
Hi all,

Rekeying more often than necessary does not increase cryptographic security. In
addition, it could create other cryptographic issues for the system: the first issue
is a key-collision risk when AES-128 is used, and the second could theoretically be a
multi-target (multi-key) risk.

Therefore, I would suggest not rekeying (as currently specified) more often than
necessary.

I think providing a data-limit guidance subsection under the Security Considerations
section is one good option to consider. Users would just follow the guidance to set
their own data limit(s).

Quynh. 


From: TLS  on behalf of Dang, Quynh 
Sent: Friday, December 18, 2015 10:49 AM
To: tls@ietf.org
Subject: Re: [TLS] Data volume limits

The collision probability of ciphertext blocks also depends on the size of the 
plaintext (record size  in a TLS implementation) in each call of the GCM 
encryption function.  Let's call each plaintext  to be 2^x 128-bit blocks.

TLS 1.3 uses 96-bit IV.

If someone wants the collision probability below 1/2^y such as 1/2^24 or 1/2^32 
(2^32 = 4,294,967,296 and 2^24 = 16,777,216 ), the total number of plaintext 
blocks under a given key must be 2^((96 + x - y)/2) or lower.

So, 2^((96 + x - y)/2) 128-bit blocks are the limit to achieve  IND-* with GCM.

If someone does not need IND-* property, the above restriction is not needed.

Quynh.



From: TLS  on behalf of Yoav Nir 
Sent: Thursday, December 17, 2015 6:07 AM
To: Nikos Mavrogiannopoulos
Cc: tls@ietf.org; Simon Josefsson
Subject: Re: [TLS] Data volume limits

> On 17 Dec 2015, at 10:19 AM, Nikos Mavrogiannopoulos  wrote:
>
> On Wed, 2015-12-16 at 09:57 -1000, Brian Smith wrote:
>
>> Therefore, I think we shouldn't add the rekeying mechanism as it is
>> unnecessary and it adds too much complexity.
>
> Any arbitrary limit for a TLS connection is almost guaranteed to cause
> problems in the future. We cannot predict whether 2^x should be
> sufficient for everyone, and I'm pretty sure this will prove to be a
> terrible mistake. TLS is already being used for VPNs and transferring
> larger amounts of data in long lived connections is a reality even
> today. The rekey today happens using the reauthentication mechanism,
> which has very complex semantics. Converting these to a simpler and
> predictable rekey mechanism would be an improvement.

Agreed. The alternative to having a rekey mechanism is to push the complexity 
to the application protocol, requiring it to be able to use more than one 
connection to transfer all the data, which may require some sort of session 
layer to maintain state between connections.

So unless we can guarantee or require that every algorithm we are going to use 
is good for some ridiculous amount of data (2^64 bytes may be enough), we need 
rekeying.

Yoav

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RSA-PSS in TLS 1.3

2016-03-03 Thread Dang, Quynh (Fed)
Hi all,

Why don't we use an even more elegant RSA signature called "full-domain hash RSA
signature"?

As you know, a SHAKE (as a variable output-length hash function) naturally produces a
hash value that fits any given modulus size. Therefore, no padding is needed, which
avoids any potential padding issues, and the signature algorithm would be very simple.
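
To make the idea concrete, here is a toy Python sketch of a full-domain-hash RSA
signature using SHAKE-256 from hashlib. The tiny RSA key and the exact way the digest
is mapped into the modulus range are my own illustrative choices, not a specification
(a real scheme would hash to just below a real modulus and use a proper key):

import hashlib

# Toy, insecure demo RSA key (for illustration only).
p, q = 61, 53
n = p * q            # modulus = 3233
e, d = 17, 413       # e*d = 1 mod lcm(p-1, q-1)

def fdh_hash(message: bytes, modulus: int) -> int:
    # SHAKE-256 output sized to the modulus length, reduced into [0, modulus).
    nbytes = (modulus.bit_length() + 7) // 8
    digest = hashlib.shake_256(message).digest(nbytes)
    return int.from_bytes(digest, "big") % modulus

def sign(message: bytes) -> int:
    return pow(fdh_hash(message, n), d, n)

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == fdh_hash(message, n)

sig = sign(b"hello")
print(verify(b"hello", sig), verify(b"hullo", sig))   # True False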

Regards,
Quynh. 


From: TLS  on behalf of Dave Garrett 

Sent: Wednesday, March 2, 2016 4:16 PM
To: tls@ietf.org
Subject: Re: [TLS] RSA-PSS in TLS 1.3

On Wednesday, March 02, 2016 01:57:48 am Viktor Dukhovni wrote:
> adaptive attacks are I think a greater potential
> threat against interactive TLS than against a bunch of CA-authored
> bits at rest.

+1

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RSA-PSS in TLS 1.3

2016-03-03 Thread Dang, Quynh (Fed)
Hi Hanno,

I think PSS uses a random salt to make the hashing probabilistic.

A customized version of a SHAKE can take a domain-separation string and/or a random
salt.

Quynh. 


From: TLS  on behalf of Hanno Böck 
Sent: Thursday, March 3, 2016 8:49 AM
To: tls@ietf.org
Subject: Re: [TLS] RSA-PSS in TLS 1.3

On Thu, 3 Mar 2016 13:35:46 +
"Dang, Quynh (Fed)"  wrote:

> Why don't we use an even more elegant RSA signature called "
> full-domain hash RSA signature" ?

Full Domain Hashing was originally developed by Rogaway and Bellare and
then later dismissed because they found that they could do better. Then
they developed PSS.

See
http://web.cs.ucdavis.edu/~rogaway/papers/exact.pdf

So in essence FDH is a predecessor of PSS and the authors of both
schemes came to the conclusion that PSS is the superior scheme.


> As you know, a SHAKE (as a variable output-length hash function)
> naturally produces a hash value which fits any given modulus size.
> Therefore, no paddings are needed which avoids any potential issues
> with the paddings and the signature algorithm would be very simple.

You could also use SHAKE in PSS to replace MGF1. This is probably
desirable if you intent to use PSS with SHA-3.

PSS doesn't really have any padding in the traditional sense. That is,
all the padding is somehow either hashed or xored with a hashed value.
I don't think any of the padding-related issues apply in any way to
PSS, if you disagree please explain.

(shameless plug: I wrote my thesis about PSS, in case anyone wants to
read it: https://rsapss.hboeck.de/ - it's been a while, don't be too
hard on me if I made mistakes)


--
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RSA-PSS in TLS 1.3

2016-03-03 Thread Dang, Quynh (Fed)
PSS+SHAKE128/512+SHAKE128 or PSS+SHAKE256/512+SHAKE256 (with the SHAKEs used as the
MGF) would be more efficient options. NIST is working on a formal specification for
the SHAKEs used as fixed output-length hash functions, such as SHAKE128/256,
SHAKE128/512 and SHAKE256/512.

Prepending a random salt of a fixed 512-bit length to the message should also work,
and this is very simple and efficient.
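
A small Python sketch of that salting idea (illustration only, not from any standard);
the salt would have to be carried with the signature so the verifier can recompute
the hash:

import os, hashlib

def salted_shake(message: bytes, out_len_bytes: int):
    salt = os.urandom(64)                                   # fixed-length 512-bit salt
    digest = hashlib.shake_256(salt + message).digest(out_len_bytes)
    return salt, digest

salt, h = salted_shake(b"message to be signed", 256)        # e.g. sized for a 2048-bit modulus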

Quynh. 




From: TLS  on behalf of Hanno Böck 
Sent: Thursday, March 3, 2016 11:11 AM
To: Blumenthal, Uri - 0553 - MITLL; tls@ietf.org
Subject: Re: [TLS] RSA-PSS in TLS 1.3

On Thu, 3 Mar 2016 15:29:37 +
"Blumenthal, Uri - 0553 - MITLL"  wrote:

> Also, wasn't PSS ‎developed before SHA3 and SHAKE were known, let
> alone available?

Yeah, more than 10 years before.
It's more the other way around: PSS and other constructions showed the
need for hash functions with a defined output length. SHAKE is such a
function. PSS uses a construction called MGF1, which essentially takes
an existing fixed-output-length hash, combines that with a counter and
produces some construction. SHAKE deprecates the need for such a
workaround.

So instead of using PSS+SHA256+MGF1-with-SHA256 you could say you use
PSS+SHA-3-256+SHAKE256. I don't think this changes a whole lot in
regards to security (as long as we assume both sha256 and sha-3-256 are
very secure algorithms).

> It may be worth asking the authors what's their opinion of FDH vs PSS
> in view of the state of the art *today*.

You may do that, but I doubt that changes much.

I think FDH really is not an option at all here. It may very well be
that there are better ways to do RSA-padding, but I don't think that
this is viable for TLS 1.3 (and I don't think FDH is better).
PSS has an RFC (3447) and has been thoroughly analyzed by research. I
think there has been far less analyzing effort towards FDH (or any
other construction) and it is not in any way specified in a standards
document. If one would want to use FDH or anything else one would imho
first have to go through some standardization process (which could be
CFRG or NIST or someone else) and call for a thorough analysis of it
by the cryptographic community. Which would take at least a couple of
years.

Given that there probably is no long term future for RSA anyway (people
want ECC and postquantum is ahead) I doubt anything else than the
primitives we already have in standards will ever be viable.


--
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] call for consensus: changes to IANA registry rules for cipher suites

2016-03-31 Thread Dang, Quynh (Fed)
Hi Sean and all,

I support the first condition: a spec gets a "Y" when it has IETF consensus.

Regards,
Quynh. 


From: TLS  on behalf of Hannes Tschofenig 

Sent: Thursday, March 31, 2016 9:45 AM
To: Sean Turner; 
Subject: Re: [TLS] call for consensus: changes to IANA registry rules for 
cipher suites

Hi Sean,

What is the requirement for adding a spec to the list with the value
IETF Recommended = "Y" (or to change an entry from "Y" to "N")?

You mention two conditions:

 * IETF has consensus
 * Are reasonably expected to be supported by widely used
implementations such as open-source libraries

Of course, with all our work we expect them to be supported by widely
used implementations. The future is unpredictable and therefore not a
good item for making a judgement. I rarely find document authors who have
little interest in getting their stuff deployed.

Getting IETF consensus on specifications has turned out to be easier than
most people expect, and the IETF has published RFCs that have not received a
lot of review. A large amount of review is not a precondition for consensus.

While your idea sounds good it suffers from practical issues. I am
worried that the process will not be too fair and may favor a certain
type of community.

Ciao
Hannes


On 03/30/2016 03:53 AM, Sean Turner wrote:
> Hi!
>
> In Yokohama, we discussed changing the IANA registry assignment rules for 
> cipher suites to allow anyone with a stable, publicly available, peer 
> reviewed reference document to request and get a code point and to add an 
> “IETF Recommended” column to the registry.  This change is motivated by the 
> large # of requests received for code points [0], the need to alter the 
> incorrect perception that getting a code point somehow legitimizes the 
> suite/algorithm, and to help implementers out.  We need to determine whether 
> we have consensus on this plan, which follows:
>
> 1. The IANA registry rules for the TLS cipher suite registry [1] will be 
> changed to specification required.
>
> 2. A new “IETF Recommended” column will be added with two values: “Y” or “N”. 
>  Y and N have the following meaning:
>
>  Cipher suites marked with a “Y” the IETF has consensus on
>  and are reasonably expected to be supported by widely
>  used implementations such as open-source libraries.  The
>  IETF takes no position on the cipher suites marked with an
>  “N”.  Not IETF recommended does not necessarily (but can)
>  mean that the ciphers are not cryptographically sound (i.e.,
>  are bad).  Cipher suites can be recategorized from N to Y
>  (e.g., Curve448) and vice versa.
>
> 3. We will add a “Note" to the IANA registry itself (i.e., on [0]) that 
> matches the above so that the same information is available to those who 
> don’t read the IANA considerations section of the RFC.
>
> Please indicate whether or not you could support this plan.
>
> Thanks,
>
> J&S
>
> [0] In the last year, the chairs have received requests for:
>
> PSK: https://datatracker.ietf.org/doc/draft-mattsson-tls-ecdhe-psk-aead/
> AES-OCB: https://www.ietf.org/archive/id/draft-zauner-tls-aes-ocb-03.txt
> Kcipher2: https://datatracker.ietf.org/doc/draft-kiyomoto-kcipher2-tls/
> dragonfly: https://datatracker.ietf.org/doc/draft-ietf-tls-pwd/
> NTRU:  http://www.ietf.org/id/draft-whyte-qsh-tls12-01.txt
> JPAKE: not sure they got around to publishing a draft.
>
> [1] 
> https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] call for consensus: changes to IANA registry rules for cipher suites

2016-04-06 Thread Dang, Quynh (Fed)
Hi Sean,

I would like to express my opinion again.

I think the first requirement is great and sufficient. 

I have great support, appreciation and respect for the open-source communities.
However, the second requirement means that, in theory, an IETF consensus could carry
no weight, and that does not sound right to me.

Regards,
Quynh. 


From: TLS  on behalf of Hannes Tschofenig 

Sent: Thursday, March 31, 2016 9:45 AM
To: Sean Turner; 
Subject: Re: [TLS] call for consensus: changes to IANA registry rules for 
cipher suites

Hi Sean,

What is the requirement for adding a spec to the list with the value
IETF Recommended = "Y" (or to change an entry from "Y" to "N")?

You mention two conditions:

 * IETF has consensus
 * Are reasonably expected to be supported by widely used
implementations such as open-source libraries

Of course, with all our work we expect them to be supported by widely
used implementations. The future is unpredictable and therefore not a
good item for making a judgement. I rarely find document authors who have
little interest in getting their stuff deployed.

Getting IETF consensus on specifications has turned out to be easier than
most people expect, and the IETF has published RFCs that have not received a
lot of review. A large amount of review is not a precondition for consensus.

While your idea sounds good it suffers from practical issues. I am
worried that the process will not be too fair and may favor a certain
type of community.

Ciao
Hannes


On 03/30/2016 03:53 AM, Sean Turner wrote:
> Hi!
>
> In Yokohama, we discussed changing the IANA registry assignment rules for 
> cipher suites to allow anyone with a stable, publicly available, peer 
> reviewed reference document to request and get a code point and to add an 
> “IETF Recommended” column to the registry.  This change is motivated by the 
> large # of requests received for code points [0], the need to alter the 
> incorrect perception that getting a code point somehow legitimizes the 
> suite/algorithm, and to help implementers out.  We need to determine whether 
> we have consensus on this plan, which follows:
>
> 1. The IANA registry rules for the TLS cipher suite registry [1] will be 
> changed to specification required.
>
> 2. A new “IETF Recommended” column will be added with two values: “Y” or “N”. 
>  Y and N have the following meaning:
>
>  Cipher suites marked with a “Y” the IETF has consensus on
>  and are reasonably expected to be supported by widely
>  used implementations such as open-source libraries.  The
>  IETF takes no position on the cipher suites marked with an
>  “N”.  Not IETF recommended does not necessarily (but can)
>  mean that the ciphers are not cryptographically sound (i.e.,
>  are bad).  Cipher suites can be recategorized from N to Y
>  (e.g., Curve448) and vice versa.
>
> 3. We will add a “Note" to the IANA registry itself (i.e., on [0]) that 
> matches the above so that the same information is available to those who 
> don’t read the IANA considerations section of the RFC.
>
> Please indicate whether or not you could support this plan.
>
> Thanks,
>
> J&S
>
> [0] In the last year, the chairs have received requests for:
>
> PSK: https://datatracker.ietf.org/doc/draft-mattsson-tls-ecdhe-psk-aead/
> AES-OCB: https://www.ietf.org/archive/id/draft-zauner-tls-aes-ocb-03.txt
> Kcipher2: https://datatracker.ietf.org/doc/draft-kiyomoto-kcipher2-tls/
> dragonfly: https://datatracker.ietf.org/doc/draft-ietf-tls-pwd/
> NTRU:  http://www.ietf.org/id/draft-whyte-qsh-tls12-01.txt
> JPAKE: not sure they got around to publishing a draft.
>
> [1] 
> https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Alexey Melnikov's Yes on draft-ietf-tls-chacha20-poly1305-04: (with COMMENT)

2016-05-05 Thread Dang, Quynh (Fed)

Hi Stephen,

The one below can be used.

[FIPS 180-4]  Federal Information Processing Standards Publication
(FIPS PUB) 180-4, Secure Hash Standard (SHS), August 2015.

Regards,
Quynh.

From: TLS  on behalf of Stephen Farrell 

Sent: Thursday, May 5, 2016 10:14:19 AM
To: Alexey Melnikov; The IESG
Cc: draft-ietf-tls-chacha20-poly1...@ietf.org; tls-cha...@ietf.org; tls@ietf.org
Subject: Re: [TLS] Alexey Melnikov's Yes on 
draft-ietf-tls-chacha20-poly1305-04: (with COMMENT)

On 24/04/16 17:23, Alexey Melnikov wrote:
> Alexey Melnikov has entered the following ballot position for
> draft-ietf-tls-chacha20-poly1305-04: Yes
>
> When responding, please keep the subject line intact and reply to all
> email addresses included in the To and CC lines. (Feel free to cut this
> introductory paragraph, however.)
>
>
> Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
> for more information about IESG DISCUSS and COMMENT positions.
>
>
> The document, along with other ballot positions, can be found here:
> https://datatracker.ietf.org/doc/draft-ietf-tls-chacha20-poly1305/
>
>
>
> --
> COMMENT:
> --
>
> Nit: SHA-256 probably needs a normative reference.

I added an RFC editor note. If someone has the right reference
to add handy I can add that too. Or I'll get to it in a bit,

Cheers,
S.

>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Comments on nonce construction and cipher text size restriction.

2016-05-24 Thread Dang, Quynh (Fed)
Hi Eric,

1. For this text: "plus the length of the output of the signing algorithm" in the
last paragraph of Section 4.8.1, did you mean "plus the output of the signing
algorithm"?

2. "The length (in bytes) of the following TLSCiphertext.fragment. The length 
MUST NOT exceed 2^14 + 256. An endpoint that receives a record that exceeds 
this length MUST generate a fatal "record_overflow" alert. " . There could be a 
cipher that generates ciphertext longer than plaintext in some cases plus the 
tag. If the tag was 256 bits, then this requirement would disallow that cipher 
unnecessarily when a record size is 2^14.

3. "The padded sequence number is XORed with the static client_write_iv or 
server_write_iv, depending on the role." I think the ivs are not needed.



4. The current way the nonce is specified would disallow ciphers that use any other
way of generating the nonce, such as random nonces.
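
For reference, a minimal Python sketch of the nonce construction being discussed in
point 3, as I read the draft (the example IV value is arbitrary):

def per_record_nonce(write_iv: bytes, seq: int) -> bytes:
    # Left-pad the 64-bit sequence number to the IV length, then XOR with the static IV.
    padded_seq = seq.to_bytes(len(write_iv), "big")
    return bytes(iv ^ s for iv, s in zip(write_iv, padded_seq))

iv = bytes.fromhex("0123456789abcdef01234567")   # example 96-bit write IV
print(per_record_nonce(iv, 0).hex())             # record 0: equal to the IV itself
print(per_record_nonce(iv, 1).hex())             # record 1: differs only in the last byte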



Regards,

Quynh.



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Comments on nonce construction and cipher text size restriction.

2016-05-24 Thread Dang, Quynh (Fed)


On 5/24/16, 12:13 PM, "Martin Thomson"  wrote:

>On 24 May 2016 at 08:20, Dang, Quynh (Fed)  wrote:
>> 1. For this text:  "plus the length of the output of the signing
>>algorithm.
>> " in the last paragraph of Section 4.8.1, did you mean "plus the output
>>of
>> the signing algorithm." ?
>
>The text is correct.  It is talking about the length of the structure,
>not its contents.

>
>> 2. "The length (in bytes) of the following TLSCiphertext.fragment. The
>> length MUST NOT exceed 2^14 + 256. An endpoint that receives a record
>>that
>> exceeds this length MUST generate a fatal "record_overflow" alert. " .
>>There
>> could be a cipher that generates ciphertext longer than plaintext in
>>some
>> cases plus the tag. If the tag was 256 bits, then this requirement would
>> disallow that cipher unnecessarily when a record size is 2^14.
>
>The value 256 is octets, not bits.  If you are aware of a need for an
>authentication tag longer than 256 octets, now would be a great time
>to tell all of us.

My misreading of the text. Thanks.
>
>
>> 3. "The padded sequence number is XORed with the static client_write_iv
>>or
>> server_write_iv, depending on the role." I think the ivs are not needed.
>
>We discussed this at quite some length.  I originally took your
>position, but the IVs add an extra layer of safety at very little
>cost.

I don't see any extra layer here.

>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Comments on nonce construction and cipher text size restriction.

2016-05-24 Thread Dang, Quynh (Fed)


On 5/24/16, 12:58 PM, "ilariliusva...@welho.com on behalf of Ilari
Liusvaara"  wrote:

>On Tue, May 24, 2016 at 03:20:17PM +, Dang, Quynh (Fed) wrote:
>> Hi Eric,
>> 
>> 1. For this text:  "plus the length of the output of the signing
>> algorithm. " in the last paragraph of Section 4.8.1, did you mean
>> "plus the output of the signing algorithm." ?
>
>The paragraph seems to talk about the length, so plus length of seems
>correct.
>
>> 2. "The length (in bytes) of the following TLSCiphertext.fragment.
>> The length MUST NOT exceed 2^14 + 256. An endpoint that receives a
>> record that exceeds this length MUST generate a fatal "record_overflow"
>> alert. " . There could be a cipher that generates ciphertext longer
>> than plaintext in some cases plus the tag. If the tag was 256 bits,
>> then this requirement would disallow that cipher unnecessarily when
>> a record size is 2^14.
>
>It is not in bits, it is in bytes. So to blow the limit, you would need
>cipher that expands the plaintext by 256 bytes (remember the typebyte
>counts as plaintext input here).
>
>And what algorithm would have >2040 bits of expansion?
>
>Variable-tau MRAEs could have larger expansions, but practical
>parameters limit expansion much below that value.
>
>> 3. "The padded sequence number is XORed with the static client_write_iv
>> or server_write_iv, depending on the role." I think the ivs are not
>> needed.
>
>Oh, they are needed (as in, security will be degraded if IVs are
>removed).

I don't think so.

> 
>> 4. The current way nonce is specified would disallow ciphers that
>> use any other ways of generating the nonce such as random nonces.
>
>None of the present algorithms is able to handle a random nonce,
>you would need much longer nonces.

True. 
>
>And also, it is much easier to just count than try to randomly
>generate "nonces" (and as far as it is known, more secure,
>due to random "nonces" having tendency to repeat).

I did not recommend using short random nonces. If the current construction of nonces
is intended for GCM, then that would be fine and recommended. But currently it is
written as if it applies to all ciphers.  If the construction of the nonce were not
fixed, then each encryption mode would just need to specify its own way of
generating nonces.

>
>And TLS breaks in truly catastrophic way if GCM nonce ever
>repeats (and wasn't there just such problem in multiple TLS

See comment above. 
>implmenentations)?
>
>
>-Ilari

Quynh. 
>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Comments on nonce construction and cipher text size restriction.

2016-05-24 Thread Dang, Quynh (Fed)


On 5/24/16, 2:42 PM, "Martin Thomson"  wrote:

>On 24 May 2016 at 10:46, Dang, Quynh (Fed)  wrote:
>>>We discussed this at quite some length.  I originally took your
>>>position, but the IVs add an extra layer of safety at very little
>>>cost.
>>
>> I don't see any extra layer here.
>
>
>The argument here is that there are only 2^128 keys and some protocols
>have predictable plaintext.  A predictable nonce would allow an
>attacker to do some pre-calculation with a large number of keys to get
>a chance of a collision (and a break).  It's a long bow, but not
>entirely implausible.

Ciphers that use nonces are designed and proven to be secure even when the nonces are
predictable: nonces are not required to be random values.

>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Dang, Quynh (Fed)
Hi Eric and all,

In my opinion, we should give better information about the data limit for AES-GCM in
TLS 1.3 than what is currently in draft 14.

In this paper, http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf, what is called a
confidentiality attack is a known-plaintext distinguishing attack in which the
attacker has/chooses two plaintexts and sends them to the AES encryption oracle.  The
oracle encrypts one of them, then sends the ciphertext to the attacker.  After seeing
the ciphertext, the attacker has some success probability of telling which plaintext
was encrypted, and this success probability is in the column called "Attack Success
Probability" in Table 1.  This attack does not break confidentiality.

If the attack above breaks one of the security goals of your individual system, then
making the success probability of that attack at most 2^(-32) is enough. In that case,
the maximum number of records is around 2^38.
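
As a rough back-of-the-envelope check of the 2^38 figure (my own approximation,
assuming the single-key distinguishing advantage scales roughly as
(total 128-bit blocks)^2 / 2^128, in the spirit of the note above):

import math

def max_records_log2(record_bytes: int, target_adv_log2: float) -> float:
    blocks_per_record = record_bytes // 16                  # 128-bit blocks per record
    max_blocks_log2 = (128 + target_adv_log2) / 2           # from adv ~ blocks^2 / 2^128
    return max_blocks_log2 - math.log2(blocks_per_record)

print(max_records_log2(2 ** 14, -32))    # ~38: about 2^38 full-size records for adv 2^-32
print(max_records_log2(2 ** 14, -60))    # ~24: the limit shrinks quickly for a 2^-60 target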


Regards,
Quynh.

From: TLS <tls-boun...@ietf.org> on behalf of Eric Rescorla <e...@rtfm.com>
Date: Monday, July 11, 2016 at 3:08 PM
To: "tls@ietf.org" <tls@ietf.org>
Subject: [TLS] New draft: draft-ietf-tls-tls13-14.txt

Folks,

I've just submitted draft-ietf-tls-tls13-14.txt and it should
show up on the draft repository shortly. In the meantime you
can find the editor's copy in the usual location at:

  http://tlswg.github.io/tls13-spec/

The major changes in this document are:

* A big restructure to make it read better. I moved the Overview
  to the beginning and then put the document in a more logical
  order starting with the handshake and then the record and
  alerts.

* Totally rewrote the section which used to be called "Security
  Analysis" and is now called "Overview of Security Properties".
  This section is still kind of a hard hat area, so PRs welcome.
  In particular, I know I need to beef up the citations for the
  record layer section.

* Removed the 0-RTT EncryptedExtensions and moved ticket_age
  into the ClientHello. This quasi-reverts a change in -13 that
  made implementation of 0-RTT kind of a pain.

As usual, comments welcome.
-Ekr



* Allow cookies to be longer (*)

* Remove the "context" from EarlyDataIndication as it was undefined
  and nobody used it (*)

* Remove 0-RTT EncryptedExtensions and replace the ticket_age extension
  with an obfuscated version. Also necessitates a change to
  NewSessionTicket (*).

* Move the downgrade sentinel to the end of ServerHello.Random
  to accomodate tlsdate (*).

* Define ecdsa_sha1 (*).

* Allow resumption even after fatal alerts. This matches current
  practice.

* Remove non-closure warning alerts. Require treating unknown alerts as
  fatal.

* Make the rules for accepting 0-RTT less restrictive.

* Clarify 0-RTT backward-compatibility rules.

* Clarify how 0-RTT and PSK identities interact.

* Add a section describing the data limits for each cipher.

* Major editorial restructuring.

* Replace the Security Analysis section with a WIP draft.

(*) indicates changes to the wire protocol which may require implementations
to update.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Dang, Quynh (Fed)
Hi Kenny,

The indistinguishability-based security notion in the paper is a stronger
security notion than the (old) traditional confidentiality notion.


(*) The indistinguishability notion (framework) guarantees that no other attack can
do better than the indistinguishability bound. Intuitively, you can't attack if you
can't even tell whether two things are different or not. So, being able to say
whether two things are different is the minimal condition needed to mount any attack.

The traditional confidentiality definition is that, knowing only the ciphertexts, the
attacker can't learn any content of the corresponding plaintexts with probability
greater than some value, and this value depends on the particular cipher. Of course,
the maximum amount of data must not exceed some limit under a given key, which also
depends on the cipher.

For example, take counter-mode AES-128 and say we encrypt 2^70 input blocks with a
single key. With the 2^70 ciphertext blocks alone (each block is 128 bits), I don't
think one can find out any content of any of the plaintexts. The chance of learning
any block of the plaintexts is 1/2^128 in this case.

I support the strongest indistinguishability notion mentioned in (*) above, but in my
opinion we should provide a good description to the users. That is why I support a
limit of around 2^38 records.

Regards,
Quynh. 

On 7/12/16, 10:03 AM, "Paterson, Kenny"  wrote:

>Hi Quynh,
>
>This indistinguishability-based security notion is the confidentiality
>notion that is by now generally accepted in the crypto community. Meeting
>it is sufficient to guarantee security against many other forms of attack
>on confidentiality, which is one of the main reasons we use it.
>
>You say that an attack in the sense implied by breaking this notion does
>not break confidentiality. Can you explain what you mean by
>"confidentiality", in a precise way? I can then try to tell you whether
>this notion will imply yours.
>
>Regards
>
>Kenny 
>
>On 12/07/2016 14:04, "TLS on behalf of Dang, Quynh (Fed)"
> wrote:
>
>>Hi Eric and all, 
>>
>>
>>In my opinion, we should give better information about data limit for
>>AES_GCM in TLS 1.3 instead of what is current in the draft 14.
>>
>>
>>In this paper: http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf,  what is
>>called confidentiality attack is the known-plaintext distinguishing
>>attack where
>> the attacker has/chooses two plaintexts, sends them to the AES-encryption
>>oracle.  The oracle encrypts one of them, then sends the ciphertext to
>>the attacker.  After seeing the ciphertext, the attacker has some success
>>probability of telling which plaintext
>> was encrypted and this success probability is in the column called
>>"Attack Success Probability" in Table 1.  This attack does not break
>>confidentiality. 
>>
>>
>>If the attack above breaks one of security goal(s) of your individual
>>system, then making success probability of that attack at 2^(-32) max is
>>enough. In that case, the Max number of records is around 2^38.
>>
>>
>>
>>
>>Regards,
>>Quynh. 
>>
>>
>>
>>
>>
>>
>>Date: Monday, July 11, 2016 at 3:08 PM
>>To: "tls@ietf.org" 
>>Subject: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>>
>>
>>
>>Folks,
>>
>>
>>I've just submitted draft-ietf-tls-tls13-14.txt and it should
>>show up on the draft repository shortly. In the meantime you
>>can find the editor's copy in the usual location at:
>>
>>
>>  http://tlswg.github.io/tls13-spec/
>>
>>
>>The major changes in this document are:
>>
>>
>>* A big restructure to make it read better. I moved the Overview
>>  to the beginning and then put the document in a more logical
>>  order starting with the handshake and then the record and
>>  alerts.
>>
>>
>>* Totally rewrote the section which used to be called "Security
>>  Analysis" and is now called "Overview of Security Properties".
>>  This section is still kind of a hard hat area, so PRs welcome.
>>  In particular, I know I need to beef up the citations for the
>>  record layer section.
>>
>>
>>* Removed the 0-RTT EncryptedExtensions and moved ticket_age
>>  into the ClientHello. This quasi-reverts a change in -13 that
>>  made implementation of 0-RTT kind of a pain.
>>
>>
>>As usual, comments welcome.
>>-Ekr
>>
>>
>>
>>
>>
>>
>>* Allow cookies to be longer (*)
>>
>>
>>* Remove the "context" from EarlyDataIndication as it was undefined
>>

Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Dang, Quynh (Fed)
Hi Kenny, 

On 7/12/16, 12:33 PM, "Paterson, Kenny"  wrote:

>Unfortunately, that's not quite the right interpretation. The bounds one
>obtains depend on both the total amount of data encrypted AND the number
>of encryption queries the adversary is allowed to make to AES-GCM under
>the (single) target key.
>
>We assumed each record was 2^14 bytes in size to simplify the ensuing
>analysis, and to enable us to focus on how the security bounds then depend
>on the number of records encrypted. See equation (5) and Table 2 in the
>note at 
>
>   http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf.
>
>In short, the security bound does not necessarily hold for ANY 2^52
>encrypted data bytes. For example, if the attacker encrypted 2^52 records
>of size 1 (!) then equation (5) would tell us almost nothing useful at all
>about security.
>
>Finally, you write "to come to the 2^38 record limit, they assume that
>each record is the maximum 2^14 bytes". For clarity, we did not recommend
>a limit of 2^38 records. That's Quynh's preferred number, and is
>unsupported by our analysis.

What is the problem with my suggestion, even with the record size being
the maximum value?


>
>Cheers,
>
>Kenny

Regards,
Quynh. 
> 
>
>
>On 12/07/2016 16:45, "Scott Fluhrer (sfluhrer)" 
>wrote:
>
>>Actually, a more correct way of viewing the limit would be 2^52 encrypted
>>data bytes. To come to the 2^38 record limit, they assume that each
>>record is the maximum 2^14 bytes.  Of course, at a 1Gbps rate, it'd take
>>over a year to encrypt that much data...
>>
>>> -Original Message-
>>> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Dang, Quynh (Fed)
>>> Sent: Tuesday, July 12, 2016 11:12 AM
>>> To: Paterson, Kenny; Dang, Quynh (Fed); Eric Rescorla; tls@ietf.org
>>> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>>> 
>>> Hi Kenny,
>>> 
>>> The indistinguishability-based security notion in the paper is a
>>>stronger
>>> security notion than the (old) traditional confidentiality notion.
>>> 
>>> 
>>> (*) Indistinguishability notion (framework) guarantees no other attacks
>>>can
>>> be better than the indistinguishability bound. Intuitively, you can't
>>>attack if
>>> you can't even tell two things are different or not. So, being able to
>>>say two
>>> things are different or not is the minimal condition to lead to any
>>>attack.
>>> 
>>> The traditional confidentiality definition is that knowing only the
>>>ciphertexts,
>>> the attacker can't know any content of the corresponding plaintexts
>>>with a
>>> greater probability than some value and this value depends on the
>>>particular
>>> cipher. Of course, the maximum amount of data must not be more than
>>> some limit under a given key which also depends on the cipher.
>>> 
>>> For example, with counter mode AES_128, let's say encrypting 2^70 input
>>> blocks with a single key. With the 2^70 ciphertext blocks alone (each
>>>block is
>>> 128 bits), I don't think one can find out any content of any of the
>>>plaintexts.
>>> The chance for knowing any block of the plaintexts is
>>> 1/(2^128) in this case.
>>> 
>>> I support the strongest indistinguishability notion mentioned in (*)
>>>above,
>>> but in my opinion we should provide good description to the users.
>>> That is why I support the limit around 2^38 records.
>>> 
>>> Regards,
>>> Quynh.
>>> 
>>> On 7/12/16, 10:03 AM, "Paterson, Kenny" 
>>> wrote:
>>> 
>>> >Hi Quynh,
>>> >
>>> >This indistinguishability-based security notion is the confidentiality
>>> >notion that is by now generally accepted in the crypto community.
>>> >Meeting it is sufficient to guarantee security against many other
>>>forms
>>> >of attack on confidentiality, which is one of the main reasons we use
>>>it.
>>> >
>>> >You say that an attack in the sense implied by breaking this notion
>>> >does not break confidentiality. Can you explain what you mean by
>>> >"confidentiality", in a precise way? I can then try to tell you
>>>whether
>>> >this notion will imply yours.
>>> >
>>> >Regards
>>> >
>>> >Kenny
>>> >
>>> >On 12/07/2016 14:04, "TLS on behalf of Dang, Quynh (Fed)"
>>> 

Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Dang, Quynh (Fed)
Hi Kenny, 

On 7/12/16, 1:05 PM, "Paterson, Kenny"  wrote:

>Hi
>
>On 12/07/2016 16:12, "Dang, Quynh (Fed)"  wrote:
>
>>Hi Kenny,
>>
>>The indistinguishability-based security notion in the paper is a stronger
>>security notion than the (old) traditional confidentiality notion.
>
>Well, indeed, I'm somewhat aware of the notion and its emergence over the
>years. Indeed, I have had the very real pleasure of writing a few research
>papers using indistinguishability-based security notions! Resisting the
>temptation to give you chapter and verse on your analysis of the notions
>and how to interpret them...
>
>>
>>(*) Indistinguishability notion (framework) guarantees no other attacks
>>can be better than the indistinguishability bound. Intuitively, you can't
>>attack if you can't even tell two things are different or not. So, being
>>able to say two things are different or not is the minimal condition to
>>lead to any attack.
>>The traditional confidentiality definition is that knowing only the
>>ciphertexts, the attacker can't know any content of the corresponding
>>plaintexts with a greater probability than some value and this value
>>depends on the particular cipher.
>>Of course, the maximum amount of data
>>must not be more than some limit under a given key which also depends on
>>the cipher. 
>>
>>For example, with counter mode AES_128, let's say encrypting 2^70 input
>>blocks with a single key. With the 2^70 ciphertext blocks alone (each
>>block is 128 bits), I don't think one can find out any content of any of
>>the plaintexts. The chance for knowing any block of the plaintexts is
>>1/(2^128) in this case.
>
>>I support the strongest indistinguishability notion mentioned in (*)
>>above, but in my opinion we should provide good description to the users.
>
>OK, I think now we are at the heart of your argument. You support our
>choice of security definition and method of analysis after all.
>
>And we can agree that good descriptions can only help.
>
>>That is why I support the limit around 2^38 records.
>
>I don't see how changing 2^24.5 (which is in the current draft) to 2^38
>provides a better description to users.
>
>Are you worried they won't know what a decimal in the exponent means?
>
>Or, more seriously, are you saying that 2^{-32} for single key attacks is
>a big enough security margin? If so, can you say what that's based on?

It would not make sense to ask people to rekey unnecessarily. 1 in 2^32 is
1 in 4,294,967,296 for the indistinguishability attack.

>Cheers,
>
>Kenny

Regards,
Quynh. 

> 
>
>
>>
>>Regards,
>>Quynh. 
>>
>>On 7/12/16, 10:03 AM, "Paterson, Kenny" 
>>wrote:
>>
>>>Hi Quynh,
>>>
>>>This indistinguishability-based security notion is the confidentiality
>>>notion that is by now generally accepted in the crypto community.
>>>Meeting
>>>it is sufficient to guarantee security against many other forms of
>>>attack
>>>on confidentiality, which is one of the main reasons we use it.
>>>
>>>You say that an attack in the sense implied by breaking this notion does
>>>not break confidentiality. Can you explain what you mean by
>>>"confidentiality", in a precise way? I can then try to tell you whether
>>>this notion will imply yours.
>>>
>>>Regards
>>>
>>>Kenny 
>>>
>>>On 12/07/2016 14:04, "TLS on behalf of Dang, Quynh (Fed)"
>>> wrote:
>>>
>>>>Hi Eric and all,
>>>>
>>>>
>>>>In my opinion, we should give better information about data limit for
>>>>AES_GCM in TLS 1.3 instead of what is current in the draft 14.
>>>>
>>>>
>>>>In this paper: http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf,  what is
>>>>called confidentiality attack is the known-plaintext distinguishing
>>>>attack where
>>>> the attacker has/chooses two plaintexts, sends them to the
>>>>AES-encryption
>>>>oracle.  The oracle encrypts one of them, then sends the ciphertext to
>>>>the attacker.  After seeing the ciphertext, the attacker has some
>>>>success
>>>>probability of telling which plaintext
>>>> was encrypted and this success probability is in the column called
>>>>"Attack Success Probability" in Table 1.  This attack does not break
>>>>confidentiality.
>>>>
>>>>
>>>>If the attack above breaks one of security goal(s) of your

Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Dang, Quynh (Fed)
Hi Kenny, 

On 7/12/16, 1:39 PM, "Paterson, Kenny"  wrote:

>Hi
>
>On 12/07/2016 18:12, "Dang, Quynh (Fed)"  wrote:
>
>>Hi Kenny, 
>>
>>On 7/12/16, 1:05 PM, "Paterson, Kenny"  wrote:
>>
>>>Hi
>>>
>>>On 12/07/2016 16:12, "Dang, Quynh (Fed)"  wrote:
>>>
>>>>Hi Kenny,
>>>>
>>>>I support the strongest indistinguishability notion mentioned in (*)
>>>>above, but in my opinion we should provide good description to the
>>>>users.
>>>
>>>OK, I think now we are at the heart of your argument. You support our
>>>choice of security definition and method of analysis after all.
>>>
>>>And we can agree that good descriptions can only help.
>>>
>>>>That is why I support the limit around 2^38 records.
>>>
>>>I don't see how changing 2^24.5 (which is in the current draft) to 2^38
>>>provides a better description to users.
>>>
>>>Are you worried they won't know what a decimal in the exponent means?
>>>
>>>Or, more seriously, are you saying that 2^{-32} for single key attacks
>>>is
>>>a big enough security margin? If so, can you say what that's based on?
>>
>>It would not make sense to ask people to rekey unnecessarily. 1 in 2^32
>>is
>>1 in 4,294,967,296 for the indistinguishability attack.
>
>I would agree that it does not make sense to ask TLS peers to rekey
>unnecessarily. I also agree that 1 in 2^32 is
>1 in 4,294,967,296. Sure looks like a big, scary number, don't it?
>
>Are you then arguing that 2^{-32} for single key attacks is a big enough
>security margin because we want to avoid rekeying?

Because it is safe, there is no need to rekey. I don't
recommend running another function/protocol when there is no need for it.
I don't see any particular reason for mentioning the single-key setting
in the indistinguishability attack here.

>Then do you have a
>specific concern about the security of rekeying? I could see various ways
>in which it might go wrong if not designed carefully.
>
>Or are you directly linking a fundamental security question to an
>operational one, by which I mean: are you saying we should trade security
>for avoiding the "cost" of rekeying for some notion of "cost"? If so, can
>you quantify the cost for the use cases that matter to you?
>
>Cheers,
>
>Kenny 

Regards,
Quynh. 


>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Dang, Quynh (Fed)
Hi Kenny, 

On 7/12/16, 12:33 PM, "Paterson, Kenny"  wrote:

>Unfortunately, that's not quite the right interpretation. The bounds one
>obtains depend on both the total amount of data encrypted AND the number
>of encryption queries the adversary is allowed to make to AES-GCM under
>the (single) target key.

My understanding is that the total is the data complexity limit (counting
block encryptions and queries, which are also block encryptions). To be
more precise, we should count data complexity by the number of block
encryptions (encrypting even 1 bit costs 1 block encryption). But for large
amounts of data in TLS, encryption uses essentially full blocks of plaintext.

If the attacker has 2^10 encrypted blocks and is allowed to query another
2^10 encryptions, then the total data complexity is 2^10 + 2^10 = 2^11
block encryptions. In TLS, it means that 2^11 AES block
encryptions happen.

>We assumed each record was 2^14 bytes in size to simplify the ensuing
>analysis, and to enable us to focus on how the security bounds then depend
>on the number of records encrypted. See equation (5) and Table 2 in the
>note at 
>
>   http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf.
>
>In short, the security bound does not necessarily hold for ANY 2^52
>encrypted data bytes. For example, if the attacker encrypted 2^52 records
>of size 1 (!) then equation (5) would tell us almost nothing useful at all
>about security.
>
>Finally, you write "to come to the 2^38 record limit, they assume that
>each record is the maximum 2^14 bytes". For clarity, we did not recommend
>a limit of 2^38 records. That's Quynh's preferred number, and is
>unsupported by our analysis.
>
>Cheers,
>
>Kenny

Regards,
Quynh. 














> 
>
>
>On 12/07/2016 16:45, "Scott Fluhrer (sfluhrer)" 
>wrote:
>
>>Actually, a more correct way of viewing the limit would be 2^52 encrypted
>>data bytes. To come to the 2^38 record limit, they assume that each
>>record is the maximum 2^14 bytes.  Of course, at a 1Gbps rate, it'd take
>>over a year to encrypt that much data...
>>
>>> -Original Message-
>>> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Dang, Quynh (Fed)
>>> Sent: Tuesday, July 12, 2016 11:12 AM
>>> To: Paterson, Kenny; Dang, Quynh (Fed); Eric Rescorla; tls@ietf.org
>>> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>>> 
>>> Hi Kenny,
>>> 
>>> The indistinguishability-based security notion in the paper is a
>>>stronger
>>> security notion than the (old) traditional confidentiality notion.
>>> 
>>> 
>>> (*) Indistinguishability notion (framework) guarantees no other attacks
>>>can
>>> be better than the indistinguishability bound. Intuitively, you can't
>>>attack if
>>> you can't even tell two things are different or not. So, being able to
>>>say two
>>> things are different or not is the minimal condition to lead to any
>>>attack.
>>> 
>>> The traditional confidentiality definition is that knowing only the
>>>ciphertexts,
>>> the attacker can't know any content of the corresponding plaintexts
>>>with a
>>> greater probability than some value and this value depends on the
>>>particular
>>> cipher. Of course, the maximum amount of data must not be more than
>>> some limit under a given key which also depends on the cipher.
>>> 
>>> For example, with counter mode AES_128, let's say encrypting 2^70 input
>>> blocks with a single key. With the 2^70 ciphertext blocks alone (each
>>>block is
>>> 128 bits), I don¹t think one can find out any content of any of the
>>>plaintexts.
>>> The chance for knowing any block of the plaintexts is
>>> 1/(2^128) in this case.
>>> 
>>> I support the strongest indistinguishability notion mentioned in (*)
>>>above,
>>> but in my opinion we should provide good description to the users.
>>> That is why I support the limit around 2^38 records.
>>> 
>>> Regards,
>>> Quynh.
>>> 
>>> On 7/12/16, 10:03 AM, "Paterson, Kenny" 
>>> wrote:
>>> 
>>> >Hi Quynh,
>>> >
>>> >This indistinguishability-based security notion is the confidentiality
>>> >notion that is by now generally accepted in the crypto community.
>>> >Meeting it is sufficient to guarantee security against many other
>>>forms
>>> >of attack on confidentiality, which is one of the main reasons we use
>>>it.
>>> >
>>> >

Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-13 Thread Dang, Quynh (Fed)
Good morning Kenny,

On 7/12/16, 3:03 PM, "Paterson, Kenny"  wrote:

>Hi,
>
>> On 12 Jul 2016, at 18:56, Dang, Quynh (Fed)  wrote:
>> 
>> Hi Kenny, 
>> 
>>> On 7/12/16, 1:39 PM, "Paterson, Kenny" 
>>>wrote:
>>> 
>>> Hi
>>> 
>>>> On 12/07/2016 18:12, "Dang, Quynh (Fed)"  wrote:
>>>> 
>>>> Hi Kenny, 
>>>> 
>>>>> On 7/12/16, 1:05 PM, "Paterson, Kenny" 
>>>>>wrote:
>>>>> 
>>>>> Hi
>>>>> 
>>>>>> On 12/07/2016 16:12, "Dang, Quynh (Fed)" 
>>>>>>wrote:
>>>>>> 
>>>>>> Hi Kenny,
>>>>>> 
>>>>>> I support the strongest indistinguishability notion mentioned in (*)
>>>>>> above, but in my opinion we should provide good description to the
>>>>>> users.
>>>>> 
>>>>> OK, I think now we are at the heart of your argument. You support our
>>>>> choice of security definition and method of analysis after all.
>>>>> 
>>>>> And we can agree that good descriptions can only help.
>>>>> 
>>>>>> That is why I support the limit around 2^38 records.
>>>>> 
>>>>> I don't see how changing 2^24.5 (which is in the current draft) to
>>>>>2^38
>>>>> provides a better description to users.
>>>>> 
>>>>> Are you worried they won't know what a decimal in the exponent means?
>>>>> 
>>>>> Or, more seriously, are you saying that 2^{-32} for single key
>>>>>attacks
>>>>> is
>>>>> a big enough security margin? If so, can you say what that's based
>>>>>on?
>>>> 
>>>> It would not make sense to ask people to rekey unnecessarily. 1 in
>>>>2^32
>>>> is
>>>> 1 in 4,294,967,296 for the indistinguishability attack.
>>> 
>>> I would agree that it does not make sense to ask TLS peers to rekey
>>> unnecessarily. I also agree that 1 in 2^32 is
>>> 1 in 4,294,967,296. Sure looks like a big, scary number, don't it?
>>> 
>>> Are you then arguing that 2^{-32} for single key attacks is a big
>>>enough
>>> security margin because we want to avoid rekeying?
>> 
>> Because it is safe, there is no need to rekey.
>
>Could you define "safe", please? Safe for what? For whom?
>
>Again, why are you choosing 2^-32 for your security bound? Why not 2^-40
>or even 2^-24? What's your rationale? Is it just finger in the air, or do
>you have a threat analysis, or ...?

I said it is safe because the chance of 1 in 4,294,967,296 practically
does not happen. I am not interested in talking about other numbers and
other questions. 

>> I don't
>> recommend running another function/protocol when there is no need for
>>it.
>> I don't see any particular reason for mentioning the single-key setting
>> in the indistinguishability attack here.
>> 
>
>Then please read a little further into the note that presents the
>analysis: a conservative but generic approach dictates that, when the
>attacker has multiple keys to attack, we should multiply the security
>bounds by the number of target keys.
>
>A better analysis for AES-GCM may eventually be forthcoming but we don't
>have it yet. 
>
>>> Then do you have a
>>> specific concern about the security of rekeying? I could see various
>>>ways
>>> in which it might go wrong if not designed carefully.
>>> 
>>> Or are you directly linking a fundamental security question to an
>>> operational one, by which I mean: are you saying we should trade
>>>security
>>> for avoiding the "cost" of rekeying for some notion of "cost"? If so,
>>>can
>>> you quantify the cost for the use cases that matter to you?
>
>I'd love to have your answer to these questions. I didn't see one yet.
>What is the cost metric you're using and how does it quantity for your
>use cases?

Again, I am not interested in other questions. I suggested the number
about 2^38 records because it is a safe data bound because Eric put in his
tls 1.3 draft the number 2^24.5 which is unnecessarily small.

Your paper is a nice one which gives users good information about choices.

>
>Cheers,
>
>Kenny
>
>>> 
>>> Cheers,
>>> 
>>> Kenny
>> 
>> Regards,
>> Quynh.

Regards,
Quynh. 
>> 
>> 
>> 
>> 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-13 Thread Dang, Quynh (Fed)
Hi Kenny, 

On 7/12/16, 3:03 PM, "Paterson, Kenny"  wrote:

>Hi,
>
>> On 12 Jul 2016, at 18:56, Dang, Quynh (Fed)  wrote:
>> 
>> Hi Kenny, 
>> 
>>> On 7/12/16, 1:39 PM, "Paterson, Kenny" 
>>>wrote:
>>> 
>>> Hi
>>> 
>>>> On 12/07/2016 18:12, "Dang, Quynh (Fed)"  wrote:
>>>> 
>>>> Hi Kenny, 
>>>> 
>>>>> On 7/12/16, 1:05 PM, "Paterson, Kenny" 
>>>>>wrote:
>>>>> 
>>>>> Hi
>>>>> 
>>>>>> On 12/07/2016 16:12, "Dang, Quynh (Fed)" 
>>>>>>wrote:
>>>>>> 
>>>>>> Hi Kenny,
>>>>>> 
>>>>>> I support the strongest indistinguishability notion mentioned in (*)
>>>>>> above, but in my opinion we should provide good description to the
>>>>>> users.
>>>>> 
>>>>> OK, I think now we are at the heart of your argument. You support our
>>>>> choice of security definition and method of analysis after all.
>>>>> 
>>>>> And we can agree that good descriptions can only help.
>>>>> 
>>>>>> That is why I support the limit around 2^38 records.
>>>>> 
>>>>> I don't see how changing 2^24.5 (which is in the current draft) to
>>>>>2^38
>>>>> provides a better description to users.
>>>>> 
>>>>> Are you worried they won't know what a decimal in the exponent means?
>>>>> 
>>>>> Or, more seriously, are you saying that 2^{-32} for single key
>>>>>attacks
>>>>> is
>>>>> a big enough security margin? If so, can you say what that's based
>>>>>on?
>>>> 
>>>> It would not make sense to ask people to rekey unnecessarily. 1 in
>>>>2^32
>>>> is
>>>> 1 in 4,294,967,296 for the indistinguishability attack.
>>> 
>>> I would agree that it does not make sense to ask TLS peers to rekey
>>> unnecessarily. I also agree that 1 in 2^32 is
>>> 1 in 4,294,967,296. Sure looks like a big, scary number, don't it?
>>> 
>>> Are you then arguing that 2^{-32} for single key attacks is a big
>>>enough
>>> security margin because we want to avoid rekeying?
>> 
>> Because it is safe, there is no need to rekey.
>
>Could you define "safe", please? Safe for what? For whom?
>
>Again, why are you choosing 2^-32 for your security bound? Why not 2^-40
>or even 2^-24? What's your rationale? Is it just finger in the air, or do
>you have a threat analysis, or ...?
>
>> I don't
>> recommend running another function/protocol when there is no need for
>>it.
>> I don't see any particular reason for mentioning the single-key setting
>> in the indistinguishability attack here.
>> 
>
>Then please read a little further into the note that presents the
>analysis: a conservative but generic approach dictates that, when the
>attacker has multiple keys to attack, we should multiply the security
>bounds by the number of target keys.

I don't think multiple target keys help the attacker's data complexity in the
context of TLS here for the distinguishing attack. Let's look at two situations
with multiple keys in TLS.

1) Different data sets with different keys and their respective bound such
as 2^38 records: (k1, dataset1, 2^38),…..(k10, dataset10, 2^38).

The attacker has a 10 times greater chance of success with 10 times the data
complexity.

2) The same data set with different keys: (k1, dataset, 2^38),…., (k10,
dataset, 2^38).

Even though the same data set is used with different keys, the data
complexity is 10 times higher in order for the attacker to be 10 times
more likely to succeed.
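
Here is a quick sketch of the generic multi-key accounting (the conservative
rule of multiplying the single-key bound by the number of target keys, as in
the note cited above). The 2^-32 single-key figure is the one under discussion,
and the key counts below are purely illustrative:

    import math

    def multi_key_bound(single_key_adv, num_keys):
        """Generic, conservative multi-key bound: num_keys * single-key advantage."""
        return num_keys * single_key_adv

    single = 2 ** -32                      # single-key figure at ~2^38 full-size records
    for keys in (1, 10, 1024):
        print("keys = %4d -> bound <= 2^%.1f"
              % (keys, math.log2(multi_key_bound(single, keys))))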

Regards,
Quynh. 



> 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-13 Thread Dang, Quynh (Fed)
Hi Atul, 

On 7/12/16, 3:50 PM, "Atul Luykx"  wrote:

>> To be clear, this probability is that an attacker would be able to
>> take a huge (4+ Petabyte) ciphertext, and a compatibly sized potential
>> (but incorrect) plaintext, and with probability 2^{-32}, be able to
>> determine that this plaintext was not the one used for the ciphertext
>> (and with probability 0.9767..., know nothing about whether
>> his guessed plaintext was correct or not).
>
>You need to be careful when making such claims. There are schemes for
>which when you reach the birthday bound you can perform partial key
>recovery.
>
>The probabilities we calculated guarantee that there won't be any
>attacks (with the usual assumptions...). Beyond the bounds, there are no
>guarantees. In particular, you cannot conclude that one, for example,
>loses 1 bit of security once beyond the birthday bound.

How can one use the distinguishing attack with the data complexity bound I
suggested for recovering 1 bit of the encryption key in the context of TLS
? 


Regards,
Quynh. 




>
>Atul
>
>On 2016-07-12 20:06, Scott Fluhrer (sfluhrer) wrote:
>>> -Original Message-
>>> From: Paterson, Kenny [mailto:kenny.pater...@rhul.ac.uk]
>>> Sent: Tuesday, July 12, 2016 1:17 PM
>>> To: Dang, Quynh (Fed); Scott Fluhrer (sfluhrer); Eric Rescorla;
>>> tls@ietf.org
>>> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>>> 
>>> Hi
>>> 
>>> On 12/07/2016 18:04, "Dang, Quynh (Fed)"  wrote:
>>> 
>>> >Hi Kenny,
>>> >
>>> >On 7/12/16, 12:33 PM, "Paterson, Kenny" 
>>> wrote:
>>> >
>>> >>Finally, you write "to come to the 2^38 record limit, they assume
>>>that
>>> >>each record is the maximum 2^14 bytes". For clarity, we did not
>>> >>recommend a limit of 2^38 records. That's Quynh's preferred number,
>>> >>and is unsupported by our analysis.
>>> >
>>> >What is the problem with my suggestion, even with the record size being
>>> >the maximum value?
>>> 
>>> There may be no problem with your suggestion. I was simply trying to
>>> make it
>>> clear that 2^38 records was your suggestion for the record limit and
>>> not ours.
>>> Indeed, if one reads our note carefully, one will find that we do not
>>> make any
>>> specific recommendations. We consider the decision to be one for the
>>> WG;
>>> our preferred role is to supply the analysis and help interpret it if
>>> people
>>> want that. Part of that involves correcting possible misconceptions
>>> and
>>> misinterpretations before they get out of hand.
>>> 
>>> Now 2^38 does come out of our analysis if you are willing to accept
>>> single key
>>> attack security (in the indistinguishability sense) of 2^{-32}. So in
>>> that limited
>>> sense, 2^38 is supported by our analysis. But it is not our
>>> recommendation.
>>> 
>>> But, speaking now in a personal capacity, I consider that security
>>> margin to be
>>> too small (i.e. I think that 2^{-32} is too big a success
>>> probability).
>> 
>> To be clear, this probability is that an attacker would be able to
>> take a huge (4+ Petabyte) ciphertext, and a compatibly sized potential
>> (but incorrect) plaintext, and with probability 2^{-32}, be able to
>> determine that this plaintext was not the one used for the ciphertext
>> (and with probability 0.9767..., know nothing about whether
>> his guessed plaintext was correct or not).
>> 
>> I'm just trying to get people to understand what we're talking about.
>> This is not "with probability 2^{-32}, he can recover the plaintext"
>> 
>> 
>>> 
>>> Regards,
>>> 
>>> Kenny
>> 
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-13 Thread Dang, Quynh (Fed)


On 7/13/16, 9:26 AM, "Watson Ladd"  wrote:

>On Wed, Jul 13, 2016 at 5:30 AM, Atul Luykx 
>wrote:
>> Hey Quynh,
>>
>>> How can one use the distinguishing attack with the data complexity
>>>bound I
>>> suggested for recovering 1 bit of the encryption key in the context of
>>>TLS
>>> ?
>>
>> You cannot recover any bits of the encryption key unless you attack AES.
>>
>> No-one, as far as I know, has analyzed what kind of attacks one can
>>perform
>> against GCM around and beyond the birthday bound (except for the forgery
>> attacks, which require repeated nonces or known forgeries). However,
>>for CTR
>> mode, the underlying encryption of GCM, David McGrew typed up a document
>> describing an attack one could perform to recover information about the
>> plaintext:
>> http://eprint.iacr.org/2012/623
>> He describes it for 64 bit block ciphers, but the attacks work equally
>>well
>> for 128 bit block ciphers, at a higher data complexity of course.
>>
>> Basically, there are a lot of unknowns, and it could be that the bounds
>>you
>> recommend will be good enough in practice. However, it's important to be
>> clear about the risks involved in venturing into unknown territory.
>>
>> Atul
>
>Furthermore the cost of avoiding this is trivial. The rekeying
>mechanism has been designed to have minimal code complexity.

GCM with data complexity of about 2^38 records is not vulnerable to that
plaintext recovery attack. Therefore, there is no need to rekey before
that data complexity is reached.

For counter mode, I think the attack works if there is a large set of
known plaintexts. In protocols such as TLS and IPsec, there are known
plaintexts, but I don't think the amount of known plaintext (even though
the amount of encrypted repeated plaintext can be big) is enough to
create risk for AES_128 via the targeted plaintext recovery attack. A known
plaintext can be encrypted multiple times with different keys, not with
the same key.

Regards,
Quynh. 


>>
>>
>> On 2016-07-13 13:14, Dang, Quynh (Fed) wrote:
>>>
>>> Hi Atul,
>>>
>>> On 7/12/16, 3:50 PM, "Atul Luykx"  wrote:
>>>
>>>>> To be clear, this probability is that an attacker would be able to
>>>>> take a huge (4+ Petabyte) ciphertext, and a compatibly sized
>>>>>potential
>>>>> (but incorrect) plaintext, and with probability 2^{-32}, be able to
>>>>> determine that this plaintext was not the one used for the ciphertext
>>>>> (and with probability 0.9767..., know nothing about whether
>>>>> his guessed plaintext was correct or not).
>>>>
>>>>
>>>> You need to be careful when making such claims. There are schemes for
>>>> which when you reach the birthday bound you can perform partial key
>>>> recovery.
>>>>
>>>> The probabilities we calculated guarantee that there won't be any
>>>> attacks (with the usual assumptions...). Beyond the bounds, there are
>>>>no
>>>> guarantees. In particular, you cannot conclude that one, for example,
>>>> loses 1 bit of security once beyond the birthday bound.
>>>
>>>
>>> How can one use the distinguishing attack with the data complexity
>>>bound I
>>> suggested for recovering 1 bit of the encryption key in the context of
>>>TLS
>>> ?
>>>
>>>
>>> Regards,
>>> Quynh.
>>>
>>>
>>>
>>>
>>>>
>>>> Atul
>>>>
>>>> On 2016-07-12 20:06, Scott Fluhrer (sfluhrer) wrote:
>>>>>>
>>>>>> -Original Message-
>>>>>> From: Paterson, Kenny [mailto:kenny.pater...@rhul.ac.uk]
>>>>>> Sent: Tuesday, July 12, 2016 1:17 PM
>>>>>> To: Dang, Quynh (Fed); Scott Fluhrer (sfluhrer); Eric Rescorla;
>>>>>> tls@ietf.org
>>>>>> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>>>>>>
>>>>>> Hi
>>>>>>
>>>>>> On 12/07/2016 18:04, "Dang, Quynh (Fed)" 
>>>>>>wrote:
>>>>>>
>>>>>> >Hi Kenny,
>>>>>> >
>>>>>> >On 7/12/16, 12:33 PM, "Paterson, Kenny" 
>>>>>> wrote:
>>>>>> >
>>>>>> >>Finally, you write "to come to the 2^38 record limit, they assume
>>>>>> 

[TLS] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-13 Thread Dang, Quynh (Fed)
Hi Eric and all,

Regardless of the actual record size, each 128-bit block encryption is 
performed with a unique 128-bit counter block (CB in NIST SP 800-38D), formed 
from the 96-bit IV and a 32-bit block counter, under a given key, as long as 
the number of encrypted records is not more than 2^64.

Assuming a user would like to keep the probability of a collision among 
128-bit ciphertext blocks under 1/2^32, the data limit for the ciphertext (or 
plaintext) is about 2^((128-32)/2) (= 2^48) 128-bit blocks, which is 2^52 bytes.
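
That figure can be reproduced with a short sketch: pick a target collision
probability and solve the birthday-style estimate q^2 / 2^128 <= target for q,
the number of 128-bit blocks. This is only the rough estimate described above,
not the exact GCM security bound:

    import math

    def block_limit(target_prob, block_bits=128):
        """Largest q (in blocks) satisfying the birthday-style estimate
        q^2 / 2^block_bits <= target_prob."""
        return math.sqrt(target_prob * 2 ** block_bits)

    q = block_limit(2 ** -32)
    print("limit ~ 2^%.0f blocks, ~ 2^%.0f bytes"
          % (math.log2(q), math.log2(q * 16)))    # -> 2^48 blocks, 2^52 bytes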

Reading the 2nd paragraph of Section 5.5, a user might feel that he/she needs 
to rekey much more often than necessary. Putting an unnecessarily low 
data limit of 2^24.5 full-size records (2^38.5 bytes) also creates an incorrect 
negative impression (in my opinion) about GCM.

I would like to request the working group to consider to revise the text.

Regards,
Quynh.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-13 Thread Dang, Quynh (Fed)
Hi Martin,


"very conservative" ? No. 1/2^32 and 1/2^57 are practically the same; they are 
both practically zero.


By your argument, if somebody wants to be more "conservative" and uses a 
margin of 1/2^75 instead, then he/she would need to stop using GCM.


Rekeying more often than needed would just create more room for issues for the 
connection/session without gaining any additional practical security at all.


Quynh.



From: Martin Thomson 
Sent: Sunday, November 13, 2016 6:54 PM
To: Dang, Quynh (Fed)
Cc: e...@rtfm.com; tls@ietf.org; c...@ietf.org
Subject: Re: [Cfrg] Data limit to achieve Indifferentiability for ciphertext 
with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

These are intentionally very conservative.  Having implemented this, I
find it OK.  The text cites its sources.  Reading those sources
corrects any misapprehension.

The key point here is that we want to ensure that the initial - maybe
uninformed - inferences need to be for the safe thing.  We don't want
to have the situation where we have looser text and that leads to
mistakes.

For instance, someone could deploy code that assumes a certain
"average" record size based on a particular deployment and hence a
larger limit.  If the deployment characteristics change without the
code changing we potentially have an issue.

You really need to demonstrate that there is harm with the current
text.  if rekeying happens on that timescale (which is still very
large), that's not harmful.  I'm concerned that we aren't going to
rekey often enough.  I don't agree that it will create any negative
perception of GCM.

On 14 November 2016 at 05:48, Dang, Quynh (Fed)  wrote:
> Hi Eric and all,
>
>
> Regardless of the actual record size, each 128-bit block encryption is
> performed with a unique 128-bit counter block (CB in NIST SP 800-38D), formed
> from the 96-bit IV and a 32-bit block counter, under a given
> key, as long as the number of encrypted records is not more than 2^64.
>
> Assuming a user would like to keep the probability of a collision among
> 128-bit ciphertext blocks under 1/2^32, the data limit for the ciphertext (or
> plaintext) is about 2^((128-32)/2) (= 2^48) 128-bit blocks, which is 2^52 bytes.
>
> Reading the 2nd paragraph of Section 5.5, a user might feel that he/she
> needs to rekey much more often than necessary. Putting an
> unnecessarily low data limit of 2^24.5 full-size records (2^38.5 bytes) also
> creates an incorrect negative impression (in my opinion) about GCM.
>
> I would like to request the working group to consider to revise the text.
>
> Regards,
> Quynh.
>
>
> ___
> Cfrg mailing list
> c...@irtf.org
> https://www.irtf.org/mailman/listinfo/cfrg
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-21 Thread Dang, Quynh (Fed)
Hi Ilari,


You were right, for testing, a smaller number should be used.


Quynh.





From: ilariliusva...@welho.com  on behalf of Ilari 
Liusvaara 
Sent: Monday, November 21, 2016 3:42 PM
To: Dang, Quynh (Fed)
Cc: Martin Thomson; tls@ietf.org; c...@ietf.org
Subject: Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for 
ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

On Mon, Nov 14, 2016 at 02:54:23AM +, Dang, Quynh (Fed) wrote:
>
> Rekeying more often than needed would just create more room for
> issues for the connection/session without gaining any additional
> practical security at all.

With regards to rekeying frequency I'm concerned about testability,
have it to be too rare and it is pretty much as good as nonexistent.

This is the reason why I set the rekey limit to 2M(!) records in
btls (with first rekey at 1k(!) records). These limits have absolutely
nothing to do with any sort of cryptographic reasoning[1][2].




[1] If they did, then Chacha rekey limit would be when RSN exhaustion
is imminent (since RSNs can't wrap, but can be reset).

[2] The 2M limit is chosen so that it is reached in ~1minute in fast
transfer tests.


-Ilari
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Dang, Quynh (Fed)
Hi Sean and all,


I agree with everyone that the text in (b) was not very good text.


The problem with (c) is that it is not precise in places and it leaves out a 
lot of informative discussion that users should know about.


The sentence "The maximum amount of plaintext data that can be safely encrypted 
with  AES-GCM in a session is 2^48 128-bit blocks (2^52 bytes), assuming  
probability of success at 1/2^32" is not clear.  What is the success here? And, 
with 2^48 (full or partial) blocks, the collision probability is below (not at) 
2^(-32).


And, "safely encrypted", what does this mean? I would prefer not to have a 
collision among 128-bit blocks of ciphertext, but I don't see any damage to the 
data owner who sends the encrypted data over a TLS session.


I copied the text that I later proposed under the discussion of PR#769 below.


" To use AES-GCM to provide authenticity of authenticated data, confidentiality 
of plaintext content, and information leakage [0] protection for the plaintext 
safely, the limit of total ciphertext under a single key is ( 
(TLSCipherText.length / 16) / ceiling (TLSCipherText.length / 16) ) times 2^48 
128-bit blocks.


When the data limit is reached, the chance of having a collision among 128-bit 
blocks of the ciphertext is below 2^(-32) which is negligible.

Since the block size of AES is 128 bits, there will be collisions among 
different sets of ciphertext from multiple sessions using GCM (or any other 
modes of AES) when the total amount of the ciphertext of all considered 
sessions is more than 2^64 128-bit blocks. This fact does not seem to create a 
practical security weakness of using AES GCM.

For ChaCha20/Poly1305, the record sequence number would wrap before the safety 
limit is reached. See [AEAD-LIMITS] for further analysis.

[0]: Information leakage in the context of TLS is a chosen-plaintext 
distinguishing attack where the attacker provides 2 128-bit plaintext blocks to 
a GCM encryption engine; after seeing the encrypted block for one of the 2 
plaintext blocks, the attacker knows which plaintext block was encrypted. Or, 
it means that there is a collision among 128-bit blocks of the ciphertext. "

  1.  The text above uses blocks instead of bytes or records of ciphertext.
  2.  The partial block situation is taken into account.

"


Or, using good text from PR769 provided by brainhub, the first paragraph could 
be replaced by the following.


"To use AES-GCM to provide authenticity of authenticated data, confidentiality 
of plaintext content, and information leakage [0] protection for the plaintext, 
the limit of total ciphertext under a single key is 2^48 128-bit blocks with 
the ciphertext size being rounded up to the next 16-byte boundary. "
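
Operationally, the rounding rule in that sentence amounts to charging
ceiling(length / 16) blocks per record against a per-key budget of 2^48 blocks
and rekeying before the budget is exhausted. A minimal sketch of that
bookkeeping (the class and method names are illustrative, not taken from any
TLS implementation):

    BLOCK_LIMIT = 2 ** 48                   # 128-bit blocks under a single key

    class BlockBudget:
        """Track ciphertext blocks, rounded up per record, and flag when to rekey."""
        def __init__(self):
            self.blocks_used = 0

        def record_encrypted(self, ciphertext_len):
            # Round each record up to the next 16-byte boundary, as in the text above.
            self.blocks_used += (ciphertext_len + 15) // 16

        def needs_rekey(self):
            return self.blocks_used >= BLOCK_LIMIT

    budget = BlockBudget()
    budget.record_encrypted(2 ** 14 + 17)   # e.g. a full record plus AEAD expansion
    print(budget.blocks_used, budget.needs_rekey())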


Regards,

Quynh.




From: Cfrg  on behalf of Sean Turner 
Sent: Friday, February 10, 2017 12:07:35 AM
To: 
Cc: IRTF CFRG
Subject: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

All,

We’ve got two outstanding PRs that propose changes to draft-ietf-tls-tls13 
Section 5.5 “Limits on Key Usage”.  As it relates to rekeying, these limits 
have been discussed a couple of times and we need to resolve once and for all 
whether the TLS WG wants to:

a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]

Please indicate you preference to the TLS mailing list before Feb 17.  Note 
that unless there’s clear consensus to change the text will remain as is (i.e., 
option a).

J&S

[0] https://tlswg.github.io/tls13-spec/#rfc.section.5.5
[1] https://github.com/tlswg/tls13-spec/pull/765
[2] https://github.com/tlswg/tls13-spec/pull/769
___
Cfrg mailing list
c...@irtf.org
https://www.irtf.org/mailman/listinfo/cfrg
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Dang, Quynh (Fed)
Hi Kenny,

From: TLS <tls-boun...@ietf.org> on behalf of 
"Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
Date: Friday, February 10, 2017 at 4:06 AM
To: Sean Turner <s...@sn3rd.com>
Cc: IRTF CFRG <c...@irtf.org>, 
"tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hi,

My preference is to go with the existing text, option a).

From the github discussion, I think option c) involves a less conservative
security bound (success probability for IND-CPA attacker bounded by
2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
aware of the weaker security guarantees it provides.

I do not understand option b). It seems to rely on an analysis of
collisions of ciphertext blocks rather than the established security proof
for AES-GCM.

My suggestion was based on counting.  I analyzed AES-GCM in TLS 1.3 as 
counter-mode encryption in which each counter block is a 96-bit nonce || 32-bit 
counter. I don't know if there is another kind of proof that is more precise 
than that.
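
To make that counting concrete, here is a small sketch of how the counter
blocks are formed for AES-GCM with a 96-bit IV, following the CB construction
of NIST SP 800-38D (counter value 1 is reserved for the tag computation, so the
first plaintext block uses counter value 2). It is an illustration, not
normative TLS code:

    import struct

    def gcm_counter_blocks(iv_96bit, num_blocks):
        """Yield the 128-bit counter blocks: 96-bit IV || 32-bit block counter."""
        assert len(iv_96bit) == 12
        for i in range(num_blocks):
            # Plaintext block i is encrypted under counter value i + 2.
            yield iv_96bit + struct.pack(">I", i + 2)

    iv = bytes(12)                          # illustrative all-zero 96-bit IV
    print([cb.hex() for cb in gcm_counter_blocks(iv, 3)])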

Regards,
Quynh.



Regards,

Kenny

On 10/02/2017 05:44, "Cfrg on behalf of Martin Thomson"
<cfrg-boun...@irtf.org on behalf of 
martin.thom...@gmail.com> wrote:

On 10 February 2017 at 16:07, Sean Turner 
<s...@sn3rd.com> wrote:
a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]


a) I'm happy enough with the current text (I've implemented that any
it's relatively easy).

I could live with c, but I'm opposed to b. It just doesn't make sense.
It's not obviously wrong any more, but the way it is written it is
very confusing and easily open to misinterpretation.

___
Cfrg mailing list
c...@irtf.org
https://www.irtf.org/mailman/listinfo/cfrg

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Dang, Quynh (Fed)
Hi Rene,

From: TLS <tls-boun...@ietf.org> on behalf of Rene 
Struik <rstruik@gmail.com>
Date: Friday, February 10, 2017 at 10:51 AM
To: Sean Turner <s...@sn3rd.com>, 
"tls@ietf.org" <tls@ietf.org>
Cc: IRTF CFRG <c...@irtf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Dear colleagues:

I would suggest adding the following paragraph at the end of Section 5.5:

[current text of Section 5.5]


There are cryptographic limits on the amount of plaintext which can be safely 
encrypted under a given set of keys. 
[AEAD-LIMITS] provides an 
analysis of these limits under the assumption that the underlying primitive 
(AES or ChaCha20) has no weaknesses. Implementations SHOULD do a key update 
Section 4.6.3 prior to reaching 
these limits.

For AES-GCM, up to 2^24.5 full-size records (about 24 million) may be encrypted 
on a given connection while keeping a safety margin of approximately 2^-57 for 
Authenticated Encryption (AE) security. For ChaCha20/Poly1305, the record 
sequence number would wrap before the safety limit is reached.

[suggested additional text]

The above upper limits do not take into account potential side channel attacks, 
which - in some implementations - have been shown to be successful at 
recovering keying material with a relatively small number of messages encrypted 
using the same key. While results are highly implementation-specific, thereby 
making it hard to provide precise guidance, prudence suggests that 
implementations should not reuse keys ad infinitum. Implementations SHALL 
therefore always implement the key update mechanism of Section 4.6.3.

{editorial note: perhaps, one should impose the limit 2^20, just to make sure 
people do not "forget" to implement key updates?}

How do you do side-channel attacks on TLS? Do these side-channel attacks apply 
only to AES-GCM in TLS 1.3?




See also my email of August 29, 2016:
https://mailarchive.ietf.org/arch/msg/cfrg/SUuLDg0wTvjR7H46oNyEtyGVdno

On 2/10/2017 12:07 AM, Sean Turner wrote:

All,

We’ve got two outstanding PRs that propose changes to draft-ietf-tls-tls13 
Section 5.5 “Limits on Key Usage”.  As it relates to rekeying, these limits 
have been discussed a couple of times and we need to resolve once and for all 
whether the TLS WG wants to:

a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]

Please indicate you preference to the TLS mailing list before Feb 17.  Note 
that unless there’s clear consensus to change the text will remain as is (i.e., 
option a).

J&S

[0] https://tlswg.github.io/tls13-spec/#rfc.section.5.5
[1] https://github.com/tlswg/tls13-spec/pull/765
[2] https://github.com/tlswg/tls13-spec/pull/769
___
Cfrg mailing list
c...@irtf.org
https://www.irtf.org/mailman/listinfo/cfrg



--
email: rstruik@gmail.com | Skype: rstruik
cell: +1 (647) 867-5658 | US: +1 (415) 690-7363
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Dang, Quynh (Fed)
Dear Kenny,


From: "Paterson, Kenny" 
<kenny.pater...@rhul.ac.uk>
Date: Friday, February 10, 2017 at 12:22 PM
To: 'Quynh' <quynh.d...@nist.gov>, Sean Turner 
<s...@sn3rd.com>
Cc: IRTF CFRG <c...@irtf.org>, 
"tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Dear Quynh,

On 10/02/2017 12:48, "Dang, Quynh (Fed)" 
<quynh.d...@nist.gov> wrote:

Hi Kenny,

Hi,


My preference is to go with the existing text, option a).


From the github discussion, I think option c) involves a less conservative
conservative
security bound (success probability for IND-CPA attacker bounded by
2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
aware of the weaker security guarantees it provides.


I do not understand option b). It seems to rely on an analysis of
collisions of ciphertext blocks rather than the established security
proof
for AES-GCM.




My suggestion was based on counting.  I analyzed AES-GCM in TLS 1.3  as
being a counter-mode encryption and each counter is a 96-bit nonce ||
32-bit counter. I don’t know if there is another kind of proof that is
more precise than that.

Thanks for explaining. I think, then, that what you are doing is (in
effect) accounting for the PRP/PRF switching lemma that is used (in a
standard way) as part of the IND-CPA security proof of AES-GCM. One can
obtain a greater degree of precision by using the proven bounds for
IND-CPA security of AES-GCM. These incorporate the "security loss" coming
from the PRP/PRF switching lemma. The current best form of these bounds is
due to Iwata et al.. This is precisely what we analyse in the note at
http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf - specifically, see
equations (5) - (7) on page 6 of that note.

I reviewed the paper more than once. I highly value the work. I suggested 
referencing your paper in the text.  I think the result in your paper is the 
same as what is being suggested when the allowed collision probability is 
2^(-32).

Regards,
Quynh.


Regards,

Kenny



Regards,
Quynh.


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Dang, Quynh (Fed)
Hi Rene,


I care about side-channel attacks in general as much as you do. But my question 
was how you would carry out those attacks on GCM in TLS 1.3 servers and 
clients. Do those side-channel attacks apply only to GCM in TLS 1.3?


Quynh.


From: Rene Struik 
Sent: Friday, February 10, 2017 2:02:14 PM
To: Dang, Quynh (Fed); Sean Turner; 
Cc: IRTF CFRG
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hi Quynh:

Not sure where to start (there is vast literature on side channel attacks and 
other implementation attacks). A good starting point would be the book [1], but 
one could also look at some NIST publications [2].

Side channel attacks differs from cryptanalytic attacks in that it does not 
merely study I/O behavior of crypto contructs, but also looks into what 
information can be obtained from what is going on "under the hood" of the 
computations (power consumption, radiation, timing, etc; or even invasive 
attacks). Most commonly one looks at crypto building blocks, but ultimately 
side channels can comprise any system behavior ("Lucky13" does, e.g., exploit 
this, if I remember correctly).

>From the last page of [2]: Finally, the most important conclusion from this 
>paper is that it is not only a necessity but also a must, in the coming 
>version of FIPS 140-3 standard, to evaluate cryptographic modules for their 
>resistivity against SCA attacks.

Ref:
[1] Stefan Mangard, Elisabeth Oswald, Thomas Popp, "Power Analysis Attacks - 
Revealing the Secrets of Smart Cards", Springer, 2007.
[2] 
http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-3/physec/papers/physecpaper19.pdf
[2]


On 2/10/2017 1:47 PM, Dang, Quynh (Fed) wrote:
Hi Rene,

From: TLS <tls-boun...@ietf.org> on behalf of Rene 
Struik <rstruik@gmail.com>
Date: Friday, February 10, 2017 at 10:51 AM
To: Sean Turner <s...@sn3rd.com>, 
"tls@ietf.org" <tls@ietf.org>
Cc: IRTF CFRG <c...@irtf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Dear colleagues:

I would suggest adding the following paragraph at the end of Section 5.5:

[current text of Section 5.5]


There are cryptographic limits on the amount of plaintext which can be safely 
encrypted under a given set of keys. 
[AEAD-LIMITS]<https://tlswg.github.io/tls13-spec/#AEAD-LIMITS> provides an 
analysis of these limits under the assumption that the underlying primitive 
(AES or ChaCha20) has no weaknesses. Implementations SHOULD do a key update 
Section 4.6.3<https://tlswg.github.io/tls13-spec/#key-update> prior to reaching 
these limits.

For AES-GCM, up to 2^24.5 full-size records (about 24 million) may be encrypted 
on a given connection while keeping a safety margin of approximately 2^-57 for 
Authenticated Encryption (AE) security. For ChaCha20/Poly1305, the record 
sequence number would wrap before the safety limit is reached.

[suggested additional text]

The above upper limits do not take into account potential side channel attacks, 
which - in some implementations - have been shown to be successful at 
recovering keying material with a relatively small number of messages encrypted 
using the same key. While results are highly implementation-specific, thereby 
making it hard to provide precise guidance, prudence suggests that 
implementations should not reuse keys ad infinitum. Implementations SHALL 
therefore always implement the key update mechanism of Section 4.6.3.

{editorial note: perhaps, one should impose the limit 2^20, just to make sure 
people do not "forget" to implement key updates?}

How do you do side-channel attacks on TLS? Do these side-channel attacks apply 
only to AES-GCM in TLS 1.3?




See also my email of August 29, 2016:
https://mailarchive.ietf.org/arch/msg/cfrg/SUuLDg0wTvjR7H46oNyEtyGVdno

On 2/10/2017 12:07 AM, Sean Turner wrote:

All,

We’ve got two outstanding PRs that propose changes to draft-ietf-tls-tls13 
Section 5.5 “Limits on Key Usage”.  As it relates to rekeying, these limits 
have been discussed a couple of times and we need to resolve once and for all 
whether the TLS WG wants to:

a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]

Please indicate you preference to the TLS mailing list before Feb 17.  Note 
that unless there’s clear consensus to change the text will remain as is (i.e., 
option a).

J&S

[0] https://tlswg.github.io/tls13-spec/#rfc.section.5.5
[1] https://github.com/tlswg/tls13-spec/pull/765
[2] https://github.com/tlswg/tls13-spec/pull/769
___
Cfrg mailing list
c...@irtf.org
https://www.irtf.org/mailman/listinfo/cfrg



--
email: rstruik@gmail.com | Skype: rstruik
cell: +1 (647) 867-5658 | US: +1 (415) 690-7363

Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-11 Thread Dang, Quynh (Fed)

Hi Kenny,


The AES permutation is a permutation.  But AES-GCM (AES in counter mode) is a PRF 
as long as the 128-bit IVs (counter inputs) are unique under the encryption key.  
The amount of plaintext is the same as the amount of ciphertext.


I originally talked about plaintext in my discussion, but several people asked 
me to talk about ciphertext instead (I thought maybe measuring ciphertext was 
easier than measuring plaintext in practice and that was why they asked me 
that).


The number of 128-bit blocks of plaintext is the same as the number of 
128-bit "one-time pad" keys produced by the AES key and the unique 128-bit IVs. 
These 128-bit "one-time pad" keys and the corresponding 128-bit ciphertext 
blocks are the same in the sense that they are both sets of pseudo-random 
128-bit blocks.  But the 128-bit "one-time pad" keys are not stored, so one has 
to measure either the amount of plaintext or the amount of ciphertext instead.
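
The "one-time pad" view can be shown directly: each keystream block is the AES
encryption of one counter block, and the ciphertext block is the plaintext
block XORed with it. Below is a minimal sketch using the pyca/cryptography
package, with AES in ECB mode only to expose the raw block permutation; the
key, nonce and plaintext are illustrative values:

    import struct
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(16)                          # illustrative all-zero AES-128 key
    nonce = bytes(12)                        # illustrative 96-bit IV
    plaintext = b"sixteen byte blk"          # exactly one 128-bit block

    aes = Cipher(algorithms.AES(key), modes.ECB(),
                 backend=default_backend()).encryptor()
    counter_block = nonce + struct.pack(">I", 2)   # first CTR input, as in GCM
    keystream = aes.update(counter_block)          # the 128-bit "one-time pad" key
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    print(ciphertext.hex())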


Regards,

Quynh.




From: Paterson, Kenny 
Sent: Friday, February 10, 2017 2:06:46 PM
To: Dang, Quynh (Fed); Sean Turner
Cc: IRTF CFRG; 
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hi,

On 10/02/2017 18:56, "Dang, Quynh (Fed)"  wrote:

>Dear Kenny,
>
>From: "Paterson, Kenny" 
>Date: Friday, February 10, 2017 at 12:22 PM
>To: 'Quynh' , Sean Turner 
>Cc: IRTF CFRG , "" 
>Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
>(#765/#769)
>
>
>
>>Dear Quynh,
>>
>>
>>On 10/02/2017 12:48, "Dang, Quynh (Fed)"  wrote:
>>
>>
>>>Hi Kenny,
>>>
>>>
>>>>Hi,
>>>>
>>>>
>>>>
>>>>
>>>>My preference is to go with the existing text, option a).
>>>>
>>>>
>>>>
>>>>
>>>>From the github discussion, I think option c) involves a less
>>>>conservative
>>>>security bound (success probability for IND-CPA attacker bounded by
>>>>2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
>>>>aware of the weaker security guarantees it provides.
>>>>
>>>>
>>>>
>>>>
>>>>I do not understand option b). It seems to rely on an analysis of
>>>>collisions of ciphertext blocks rather than the established security
>>>>proof
>>>>for AES-GCM.
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>My suggestion was based on counting.  I analyzed AES-GCM in TLS 1.3  as
>>>being a counter-mode encryption and each counter is a 96-bit nonce ||
>>>32-bit counter. I don’t know if there is another kind of proof that is
>>>more precise than that.
>>
>>
>>Thanks for explaining. I think, then, that what you are doing is (in
>>effect) accounting for the PRP/PRF switching lemma that is used (in a
>>standard way) as part of the IND-CPA security proof of AES-GCM. One can
>>obtain a greater degree of precision by using the proven bounds for
>>IND-CPA security of AES-GCM. These incorporate the "security loss" coming
>>from the PRP/PRF switching lemma. The current best form of these bounds
>>is
>>due to Iwata et al.. This is precisely what we analyse in the note at
>>http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf - specifically, see
>>equations (5) - (7) on page 6 of that note.
>>
>
>I reviewed the paper more than once. I highly value the work. I suggested
>to reference  your paper in the text.  I think the result in your paper
>is the same with what is being suggested when the collision probability
>allowed is 2^(-32).

Thanks for this feedback. I guess my confusion arises from wondering what
you mean by collision probability and why you care about it. There are no
collisions in the block cipher's outputs per se, because AES is a
permutation for each choice of key. And collisions in the ciphertext
blocks output by AES-GCM are irrelevant to its formal security analysis.

On the other hand, when in the proof of IND-CPA security of AES-GCM one
switches from a random permutation (which is how we model AES) to a random
function (which is what we need to argue in the end that the plaintext is
masked by a one-time pad, giving indistinguishability), then one needs to
deal with the probability that collisions occur in the function's outputs
but not in the permutation's. This ends up being the main contribution to
the security bound in the proof for IND-CPA security.

Is that what you are getting at?

If so, then we are on the same page, and what remains is to decide whether
a 2^{-32} bound is a good enough security margin.
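
As a very rough back-of-the-envelope check (the precise bounds, with their lower-order terms, are equations (5)-(7) of the AE bounds note referenced above; this sketch only captures the dominant sigma^2 / 2^128 switching term):

    def approx_ind_cpa_bound(total_blocks):
        # Crude approximation of the PRP/PRF switching contribution for AES-GCM,
        # with total_blocks = number of AES-GCM input blocks under one key.
        return total_blocks**2 / 2**128

    print(approx_ind_cpa_bound(2**34.5))  # ~2^24.5 full-size records: on the order of 2^-60
    print(approx_ind_cpa_bound(2**48))    # the 2^48-block proposal: on the order of 2^-32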

Regards,

Kenny


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-13 Thread Dang, Quynh (Fed)
Hi Markulf,

The probability of a bad thing happening is actually below (or about) 2^(-33). 
It practically won’t happen when the chance is 1 in 2^32. And, to reach even that 
chance, you must collect 2^48 128-bit blocks.

Regards,
Quynh.

From: TLS <tls-boun...@ietf.org> on behalf of Markulf Kohlweiss <mark...@microsoft.com>
Date: Monday, February 13, 2017 at 10:34 AM
To: "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>, Sean Turner <s...@sn3rd.com>
Cc: Antoine Delignat-Lavaud <an...@microsoft.com>, IRTF CFRG <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hello,

Our analysis of miTLS also supports option a)

A security level of 2^-32 seems too low from a provable security point of view, 
especially for a confidentiality bound.

We verified an implementation of the TLS 1.3 record 
(https://eprint.iacr.org/2016/1178, to appear at Security & Privacy 2017) where 
we arrive at a combined bound for authenticity and confidentiality that is 
compatible with the Iwata et al. bound.

Regards,
Markulf (for the miTLS team)

Hi,

My preference is to go with the existing text, option a).

From the github discussion, I think option c) involves a less conservative
security bound (success probability for IND-CPA attacker bounded by
2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
aware of the weaker security guarantees it provides.

I do not understand option b). It seems to rely on an analysis of
collisions of ciphertext blocks rather than the established security proof
for AES-GCM.

Regards,

Kenny

On 10/02/2017 05:44, "Cfrg on behalf of Martin Thomson"
 wrote:

On 10 February 2017 at 16:07, Sean Turner  wrote:
a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]


a) I'm happy enough with the current text (I've implemented that and
it's relatively easy).

I could live with c, but I'm opposed to b. It just doesn't make sense.
It's not obviously wrong any more, but the way it is written it is
very confusing and easily open to misinterpretation.

___
Cfrg mailing list
Cfrg at irtf.org
https://www.irtf.org/mailman/listinfo/cfrg

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-14 Thread Dang, Quynh (Fed)
Hi Markulf and all,

I provided more explanation below.

From: 'Quynh' <quynh.d...@nist.gov>
Date: Monday, February 13, 2017 at 10:45 AM
To: Markulf Kohlweiss <mark...@microsoft.com>, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>, Sean Turner <s...@sn3rd.com>
Cc: Antoine Delignat-Lavaud <an...@microsoft.com>, IRTF CFRG <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hi Markulf,

The probability of a bad thing to happen is actually below (or about) 2^(-33). 
It practically won’t happen when the chance is 1 in 2^32. And, to achieve that 
chance, you must collect 2^48 128-bit blocks.

Regards,
Quynh.

From: TLS <tls-boun...@ietf.org> on behalf of Markulf Kohlweiss <mark...@microsoft.com>
Date: Monday, February 13, 2017 at 10:34 AM
To: "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>, Sean Turner <s...@sn3rd.com>
Cc: Antoine Delignat-Lavaud <an...@microsoft.com>, IRTF CFRG <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hello,

Our analysis of miTLS also supports option a)

A security level of 2^-32 seems too low from a provable security point of view, 
especially for a confidentiality bound.

When someone says AES-128 has 128 bits of security he or she means that 2^128 
AES operations will break the cipher with probability 100%: finding the key and 
the plaintext.  It does not mean that attackers have only 2^(-128) chance of 
success. If an attacker could run 2^100 AES operations, his or her chance of 
success is way below 2^(-32): this does not mean that AES has a security level 
of 2^(-32) or  2^(-28).

The success probability 1/2^32 means that after 2^48 AES operations, the 
attacker has a success probability of 2^-32 which is practically zero.

Also, many users don’t know what “confidentiality bound” means.

The current text Eric wrote talks about a limit of 2^24.5 full-size records. 
In many situations, records are not full size but come in varying sizes. My 
latest suggested text does not assume full-size records; it covers variable 
record sizes by simply counting AES input blocks, i.e., AES operations.
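
A sketch of what counting AES input blocks per connection could look like (the 2^48 limit is the one proposed in this thread; whether to also charge one block per record for the GHASH mask is discussed later in the thread and is left out here):

    import math

    class ConnectionBlockCounter:
        LIMIT = 2**48  # block limit proposed in this thread

        def __init__(self):
            self.blocks_used = 0

        def record_sent(self, plaintext_len_bytes):
            # Count a partial final block as a full AES input block.
            self.blocks_used += math.ceil(plaintext_len_bytes / 16)
            return self.blocks_used < self.LIMIT  # False means it is time to rekey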

Regards,
Quynh.






We verified an implementation of the TLS 1.3 record 
(https://eprint.iacr.org/2016/1178, to appear at Security & Privacy 2017) where 
we arrive at a combined bound for authenticity and confidentiality that is 
compatible with the Iwata et al. bound.

Regards,
Markulf (for the miTLS team)

Hi,

My preference is to go with the existing text, option a).

From the github discussion, I think option c) involves a less conservative
security bound (success probability for IND-CPA attacker bounded by
2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
aware of the weaker security guarantees it provides.

I do not understand option b). It seems to rely on an analysis of
collisions of ciphertext blocks rather than the established security proof
for AES-GCM.

Regards,

Kenny

On 10/02/2017 05:44, "Cfrg on behalf of Martin Thomson"
 wrote:

On 10 February 2017 at 16:07, Sean Turner  wrote:
a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]


a) I'm happy enough with the current text (I've implemented that and
it's relatively easy).

I could live with c, but I'm opposed to b. It just doesn't make sense.
It's not obviously wrong any more, but the way it is written it is
very confusing and easily open to misinterpretation.

___
Cfrg mailing list
Cfrg at irtf.org
https://www.irtf.org/mailman/listinfo/cfrg

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-14 Thread Dang, Quynh (Fed)
Hi Atul,

From: Atul Luykx <atul.lu...@esat.kuleuven.be>
Date: Tuesday, February 14, 2017 at 11:17 AM
To: 'Quynh' <quynh.d...@nist.gov>
Cc: Markulf Kohlweiss <mark...@microsoft.com>, Antoine Delignat-Lavaud <an...@microsoft.com>, IRTF CFRG <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hey Quynh,

When someone says AES-128 has 128 bits of security he or she means
that 2^128 AES operations will break the cipher with probability 100%:
finding the key and the plaintext.
The claim is stronger: regardless of the number of plaintext-ciphertext
pairs available to the adversary, it will still take roughly 2^128
operations to recover the key with AES.

Actually, it is the same claim: my claim did not involve any data requirement 
beyond a single ciphertext block.

This contrasts with any mode of
operation, where adversarial success probability increases according to
the amount of data available and the computational complexity required
to perform such an attack is not the limiting factor (which is the core
of the problem we're discussing right now).

IND-CPA is important. That is why I have always been supporting it. Data is 
equivalent to computation in the sense that data are produced by computation. 
2^x blocks = 2^x AES operations.

With 2^48 AES operations/input blocks, the actual margin is below 2^(-33). And 
1 in 2^32 is 1 in 4,294,967,296, which is safe.

Quynh.


Regardless, correct me if I'm wrong Quynh, but you seem to have two
issues with Eric's text:
1. the data limit recommendation is too strict, and
2. it only gives a data limit in terms of full records.

For point 1 it seems like most people would rather err on the side of
caution instead of recommending that people switch when adversaries have
success probability 2^{-32}. I don't see the discussion progressing on
this point, and basically a decision needs to be made.

I don't think point 2 is a problem because it gives people a good enough
heuristic, however this can be fixed easily by minimally modifying the
original text.

Atul


On 2017-02-14 03:59, Dang, Quynh (Fed) wrote:
Hi Markulf and all,
I provided more explanation below.
  From: 'Quynh' mailto:quynh.d...@nist.gov>>
Date: Monday, February 13, 2017 at 10:45 AM
To: Markulf Kohlweiss mailto:mark...@microsoft.com>>, 
"Paterson, Kenny"
mailto:kenny.pater...@rhul.ac.uk>>, Sean Turner 
mailto:s...@sn3rd.com>>
Cc: Antoine Delignat-Lavaud mailto:an...@microsoft.com>>, 
IRTF CFRG
mailto:c...@irtf.org>>, "mailto:tls@ietf.org>>" 
mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
(#765/#769)
Hi Markulf,
The probability of a bad thing to happen is actually below (or
about) 2^(-33). It practically won’t happen when the chance is 1
in 2^32. And, to achieve that chance, you must collect 2^48 128-bit
blocks.
Regards,
Quynh.
From: TLS mailto:tls-boun...@ietf.org>> on behalf of 
Markulf Kohlweiss
mailto:mark...@microsoft.com>>
Date: Monday, February 13, 2017 at 10:34 AM
To: "Paterson, Kenny" 
mailto:kenny.pater...@rhul.ac.uk>>, Sean Turner
mailto:s...@sn3rd.com>>
Cc: Antoine Delignat-Lavaud mailto:an...@microsoft.com>>, 
IRTF CFRG
mailto:c...@irtf.org>>, "mailto:tls@ietf.org>>" 
mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage"
PRs (#765/#769)
Hello,
Our analysis of miTLS also supports option a)
A security level of 2^-32 seems too low from a provable security
point of view, especially for a confidentiality bound.
When someone says AES-128 has 128 bits of security he or she means
that 2^128 AES operations will break the cipher with probability 100%:
finding the key and the plaintext.  It does not mean that attackers
have only 2^(-128) chance of success. If an attacker could run 2^100
AES operations, his or her chance of success is way below 2^(-32):
this does not mean that AES has a security level of 2^(-32) or
2^(-28).
The success probability 1/2^32 means that after 2^48 AES operations,
the attacker has a success probability of 2^-32 which is practically
zero.
Also, many users don’t know what “confidentiality bound” means.
The current text Eric wrote talks about a number 2^24.5 of full-size
records. In many situations, the record sizes are not full size, but
different sizes. My latest suggestion text does not assume full size
records, it covers variable record sizes, it just counts AES-input
blocks or AES operations.
Regards,
Quynh.
We verified an implementation of the TLS 1.3 record
(https://eprint.iacr.org/2016/1178, to appear at Security & Privacy
2017) where we arrive at a combined bound for authenticity and
confidential

Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-14 Thread Dang, Quynh (Fed)
Hi Sean and all,


Beside my suggestion at 
https://www.ietf.org/mail-archive/web/tls/current/msg22381.html, I have a 
second suggestion below.


Just replace this sentence: "

For AES-GCM, up to 2^24.5 full-size records (about 24 million) may be
   encrypted on a given connection while keeping a safety margin of
   approximately 2^-57 for Authenticated Encryption (AE) security.

" in Section 5.5 by this sentence: " For AES-GCM, up to 2^48 (partial or full) 
input blocks may be encrypted with one key. For other suggestions and analysis, 
see the referred paper above."
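
For readers trying to relate the two phrasings, a quick (approximate) conversion, ignoring per-record overhead blocks:

    blocks_per_full_size_record = 2**14 // 16                    # 1024 = 2^10
    records_equivalent = 2**48 // blocks_per_full_size_record    # 2^38 full-size records
    approx_margin = (2**48)**2 / 2**128                          # ~2.3e-10, i.e. roughly 2^-32
    # versus the current text: 2^24.5 full-size records ~ 2^34.5 blocks,
    # giving a margin on the order of 2^-60.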


Regards,

Quynh.

________
From: Dang, Quynh (Fed)
Sent: Tuesday, February 14, 2017 1:20:12 PM
To: Atul Luykx; Dang, Quynh (Fed)
Cc: Markulf Kohlweiss; Antoine Delignat-Lavaud; IRTF CFRG; tls@ietf.org
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hi Atul,

From: Atul Luykx <atul.lu...@esat.kuleuven.be>
Date: Tuesday, February 14, 2017 at 11:17 AM
To: 'Quynh' <quynh.d...@nist.gov>
Cc: Markulf Kohlweiss <mark...@microsoft.com>, Antoine Delignat-Lavaud <an...@microsoft.com>, IRTF CFRG <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hey Quynh,

When someone says AES-128 has 128 bits of security he or she means
that 2^128 AES operations will break the cipher with probability 100%:
finding the key and the plaintext.
The claim is stronger: regardless of the number of plaintext-ciphertext
pairs available to the adversary, it will still take roughly 2^128
operations to recover the key with AES.

Actually the same claim: my claim did not require any data requirement: just 
one ciphertext block.

This contrasts with any mode of
operation, where adversarial success probability increases according to
the amount of data available and the computational complexity required
to perform such an attack is not the limiting factor (which is the core
of the problem we're discussing right now).

IND-CPA is important. That is why I have always been supporting it. Data is 
equivalent to computation in the sense that data are produced by computation. 
2^x blocks = 2^x AES operations.

With 2^48 AES operations/input blocks, the actual margin is below 2^(-33). And 
1 in 2^32 is 1 in 4,294,967,296, which is safe.

Quynh.


Regardless, correct me if I'm wrong Quynh, but you seem to have two
issues with Eric's text:
1. the data limit recommendation is too strict, and
2. it only gives a data limit in terms of full records.

For point 1 it seems like most people would rather err on the side of
caution instead of recommending that people switch when adversaries have
success probability 2^{-32}. I don't see the discussion progressing on
this point, and basically a decision needs to be made.

I don't think point 2 is a problem because it gives people a good enough
heuristic, however this can be fixed easily by minimally modifying the
original text.

Atul


On 2017-02-14 03:59, Dang, Quynh (Fed) wrote:
Hi Markulf and all,
I provided more explanation below.
  From: 'Quynh' mailto:quynh.d...@nist.gov>>
Date: Monday, February 13, 2017 at 10:45 AM
To: Markulf Kohlweiss mailto:mark...@microsoft.com>>, 
"Paterson, Kenny"
mailto:kenny.pater...@rhul.ac.uk>>, Sean Turner 
mailto:s...@sn3rd.com>>
Cc: Antoine Delignat-Lavaud mailto:an...@microsoft.com>>, 
IRTF CFRG
mailto:c...@irtf.org>>, "mailto:tls@ietf.org>>" 
mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
(#765/#769)
Hi Markulf,
The probability of a bad thing to happen is actually below (or
about) 2^(-33). It practically won’t happen when the chance is 1
in 2^32. And, to achieve that chance, you must collect 2^48 128-bit
blocks.
Regards,
Quynh.
From: TLS mailto:tls-boun...@ietf.org>> on behalf of 
Markulf Kohlweiss
mailto:mark...@microsoft.com>>
Date: Monday, February 13, 2017 at 10:34 AM
To: "Paterson, Kenny" 
mailto:kenny.pater...@rhul.ac.uk>>, Sean Turner
mailto:s...@sn3rd.com>>
Cc: Antoine Delignat-Lavaud mailto:an...@microsoft.com>>, 
IRTF CFRG
mailto:c...@irtf.org>>, "mailto:tls@ietf.org>>" 
mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage"
PRs (#765/#769)
Hello,
Our analysis of miTLS also supports option a)
A security level of 2^-32 seems too low from a provable security
point of view, especially for a confidentiality bound.
When someone says AES-128 has 128 bits of security he or she means
that 2^128 AES operations will break the cipher with probability 100%:
finding the key and the plaintext.  It does not mean that attackers
have only 

Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-15 Thread Dang, Quynh (Fed)
Hi Atul,

I hope you had a happy Valentine!

From: Atul Luykx <atul.lu...@esat.kuleuven.be>
Date: Tuesday, February 14, 2017 at 4:52 PM
To: Yoav Nir <ynir.i...@gmail.com>
Cc: 'Quynh' <quynh.d...@nist.gov>, IRTF CFRG <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Why is that 2^48 input blocks rather than 2^34.5 input blocks?
Because he wants to lower the security level.

I respectfully disagree. 2^-32, 2^-33, 2^-57, 2^-60, 2^-112 are practically the 
same: they are practically zero.  And, 2^-32 is an absolute chance in this case 
meaning that all attackers can’t improve their chance: no matter how much 
computational power the attacker has.

I don’t understand why the number 2^-60 is your specially chosen number for 
this? In your “theory”, 2^-112 would be “higher” security than 2^-60.

Quynh.


The original text
recommends switching at 2^{34.5} input blocks, corresponding to a
success probability of 2^{-60}, whereas his text recommends switching at
2^{48} blocks, corresponding to a success probability of 2^{-32}.

Atul

On 2017-02-14 11:45, Yoav Nir wrote:
Hi, Quynh
On 14 Feb 2017, at 20:45, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>>
wrote:
Hi Sean and all,
Beside my suggestion at
https://www.ietf.org/mail-archive/web/tls/current/msg22381.html [1],
I have a second suggestion below.
Just replacing this sentence: "
For AES-GCM, up to 2^24.5 full-size records (about 24 million) may
be
encrypted on a given connection while keeping a safety margin of
approximately 2^-57 for Authenticated Encryption (AE) security.
" in Section 5.5 by this sentence: " For AES-GCM, up to 2^48
(partial or full) input blocks may be encrypted with one key. For
other suggestions and analysis, see the referred paper above."
Regards,
Quynh.
I like the suggestion, but I’m probably missing something pretty
basic about it.
2^24.5 full-size records is 2^24.5 records of 2^14 bytes each, or
(since an AES block is 16 bytes or 2^4 bytes) 2^24.5 records of 2^10
blocks.
Why is that 2^48 input blocks rather than 2^34.5 input blocks?
Thanks
Yoav
Links:
--
[1] https://www.ietf.org/mail-archive/web/tls/current/msg22381.html
___
TLS mailing list
TLS@ietf.org<mailto:TLS@ietf.org>
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-16 Thread Dang, Quynh (Fed)
Hi Kenny,

I am glad to see that you enjoyed the discussion more than what you had planned 
for your time on vacation.  We love crypto and the IETF!

From: "Paterson, Kenny" 
mailto:kenny.pater...@rhul.ac.uk>>
Date: Wednesday, February 15, 2017 at 8:46 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>
Cc: Atul Luykx 
mailto:atul.lu...@esat.kuleuven.be>>, Yoav Nir 
mailto:ynir.i...@gmail.com>>, IRTF CFRG 
mailto:c...@irtf.org>>, "tls@ietf.org<mailto:tls@ietf.org>" 
mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Hi Quynh,

I'm meant to be on vacation, but I'm finding this on-going discussion 
fascinating, so I'm chipping in again.

On 15 Feb 2017, at 21:12, Dang, Quynh (Fed) <quynh.d...@nist.gov> wrote:

Hi Atul,

I hope you had a happy Valentine!

From: Atul Luykx 
mailto:atul.lu...@esat.kuleuven.be>>
Date: Tuesday, February 14, 2017 at 4:52 PM
To: Yoav Nir mailto:ynir.i...@gmail.com>>
Cc: 'Quynh' mailto:quynh.d...@nist.gov>>, IRTF CFRG 
mailto:c...@irtf.org>>, "tls@ietf.org<mailto:tls@ietf.org>" 
mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Why is that 2^48 input blocks rather than 2^34.5 input blocks?
Because he wants to lower the security level.

I respectfully disagree. 2^-32, 2^-33, 2^-57, 2^-60, 2^-112 are practically the 
same: they are practically zero.

I'm not clear what you mean by "practically" here.

As far as I know, such chance has not happened in history for any targeted 
search where the chance for hitting the target is 1 in 2^32.

They're clearly not the same as real numbers. And if we are being conservative 
about security, then the extremes in your list are a long way apart.

And, 2^-32 is an absolute chance in this case meaning that all attackers can’t 
improve their chance: no matter how much computational power the attacker has.

A sufficiently powerful adversary could carry out an exhaustive key search for 
GCM's underlying AES key. So I'm not sure what you're claiming here when you 
speak of "absolute chance".

I did not describe my point in the best way, sorry. For key recovery, if an attacker 
can do 2^96 AES operations, his chance of finding the key is 2^-32, but this 
chance will get improved if the attacker does more computation. On the 
contrary, the chance for the distinguishing attack won’t change with the 
proposed data limit.


I don’t understand why the number 2^-60 is your special chosen number for this ?

This is a bit subtle, but I'll try to explain in simple terms.

We can conveniently prove a bound of about this size (actually 2^-57) for 
INT-CTXT for a wide range of parameters covering both TLS and DTLS (where many 
verification failures may be permitted). Then, since we're ultimately 
interested in AE security, we would like to (roughly) match this for IND-CPA 
security, to get as good a bound as we can for AE security (the security bounds 
for the two notions sum to give an AE security bound - see page 2 of the "AE 
bounds" note).

In view of the INT-CTXT bound there's no point pushing the IND-CPA bound much 
lower than 2^-60 if the ultimate target is AE security. It just hurts the data 
limits more without significantly improving AE security.
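
In other words (a simplification of page 2 of the note, constants and lower-order terms omitted):

    ind_cpa  = 2.0**-60   # confidentiality term for roughly 2^24.5 full-size records
    int_ctxt = 2.0**-57   # forgery (INT-CTXT) term from the note's analysis
    ae_bound = ind_cpa + int_ctxt   # about 2^-56.8, i.e. the "approximately 2^-57" AE margin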

I just checked the paper. There is a small error, I think. AES-GCM in TLS 1.3 is 
a PRF. Under a given key, whether one encrypts 2^35 distinct input blocks or the 
same block repeated 2^35 times, the resulting ciphertext blocks are 2^35 
pseudo-random 128-bit blocks, assuming the key has 128 bits of entropy. If there 
is a collision among the ciphertext blocks, it does not mean anything because it 
does not say anything about the plaintext blocks.



Finally, 2^-60 is not *our* special chosen number. We wrote a note that 
contained a table of values, and it's worth noting that we did not make a 
specific recommendation in our note for which row of the table to select.

(Naturally, though, we'd like security to be as high as possible without making 
rekeying a frequent event. It's a continuing surprise to me that you are 
pushing for an option that actually reduces security when achieving higher 
security does not seem to cause any problems for implementors.)

I respectfully disagree. As I explained before, 2^-32, 2^-57 and 2^-60 are all 
safe choices. If someone wants to rekey sooner (or more often) for operational 
or any other reasons, that would be just fine. I just hope that we don’t 
have text which might imply that 2^-32 is not a safe choice.  In our 
guidelines, we basically indicate that 2^-32 or below is safe.



In your “theory”, 2^-112 would be in “higher” security than 2^-60.

It certainly would, if it were achievable (which it is not for GCM without 
putting some quite extreme limits on da

Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-02-25 Thread Dang, Quynh (Fed)
Hi Sean, Joe, Eric and all,


I would like to address my thoughts/suggestions on 2 issues in option a.


1) The data limit should be addressed in terms of blocks, not records. When the 
record size is not the full size, some users might not know what to do. When the 
record size is 1 block, the limit of 2^24.5 records (here also 2^24.5 blocks) is 
unnecessarily low for the margin of 2^-60.  In that case, 2^34.5 one-block records 
is the limit which still achieves the margin of 2^-60.


2) To achieve the margin of 2^-57 as the current text says, the limit number 
should be 2^36 blocks.


Regards,

Quynh.



From: Cfrg  on behalf of Sean Turner 
Sent: Friday, February 10, 2017 12:07 AM
To: 
Cc: IRTF CFRG
Subject: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

All,

We’ve got two outstanding PRs that propose changes to draft-ietf-tls-tls13 
Section 5.5 “Limits on Key Usage”.  As it relates to rekeying, these limits 
have been discussed a couple of times and we need to resolve once and for all 
whether the TLS WG wants to:

a) Close these two PRs and go with the existing text [0]
b) Adopt PR#765 [1]
c) Adopt PR#769 [2]

Please indicate your preference to the TLS mailing list before Feb 17.  Note 
that unless there’s clear consensus to change, the text will remain as is (i.e., 
option a).

J&S

[0] https://tlswg.github.io/tls13-spec/#rfc.section.5.5
[1] https://github.com/tlswg/tls13-spec/pull/765
[2] https://github.com/tlswg/tls13-spec/pull/769
___
Cfrg mailing list
c...@irtf.org
https://www.irtf.org/mailman/listinfo/cfrg
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-03-01 Thread Dang, Quynh (Fed)


From: Aaron Zauner <a...@azet.org>
Date: Wednesday, March 1, 2017 at 8:11 AM
To: 'Quynh' <quynh.d...@nist.gov>
Cc: Sean Turner <s...@sn3rd.com>, "tls@ietf.org" <tls@ietf.org>, IRTF CFRG <c...@irtf.org>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).


On 25 Feb 2017, at 14:28, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>> wrote:
Hi Sean, Joe, Eric and all,
I would like to address my thoughts/suggestions on 2 issues in option a.
1) The data limit should be addressed in term of blocks, not records. When the 
record size is not the full size, some user might not know what to do. When the 
record size is 1 block, the limit of 2^24.5 blocks (records) is way too low 
unnecessarily for the margin of 2^-60.  In that case, 2^34.5 1-block records is 
the limit which still achieves the margin of 2^-60.

I respectfully disagree. TLS deals in records not in blocks, so in the end any 
semantic change here will just confuse implementors, which isn't a good idea in 
my opinion.

Over the discussion of the PRs, the preference was blocks.

Quynh.



Aaron

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-03-01 Thread Dang, Quynh (Fed)


From: "Paterson, Kenny" 
mailto:kenny.pater...@rhul.ac.uk>>
Date: Wednesday, March 1, 2017 at 9:38 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>, Aaron Zauner 
mailto:a...@azet.org>>
Cc: IRTF CFRG mailto:c...@irtf.org>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769).

Hi,

On 01/03/2017 14:31, "TLS on behalf of Dang, Quynh (Fed)" <tls-boun...@ietf.org on behalf of quynh.d...@nist.gov> wrote:
From: Aaron Zauner mailto:a...@azet.org>>
Date: Wednesday, March 1, 2017 at 9:24 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>
Cc: Sean Turner mailto:s...@sn3rd.com>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>, IRTF
CFRG mailto:c...@irtf.org>>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
(#765/#769).





On 01 Mar 2017, at 13:18, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>> wrote:
From: Aaron Zauner mailto:a...@azet.org>>
Date: Wednesday, March 1, 2017 at 8:11 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>
Cc: Sean Turner mailto:s...@sn3rd.com>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>, IRTF
CFRG mailto:c...@irtf.org>>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
(#765/#769).
On 25 Feb 2017, at 14:28, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>>
wrote:
Hi Sean, Joe, Eric and all,
I would like to address my thoughts/suggestions on 2 issues in option
a.
1) The data limit should be addressed in term of blocks, not records.
When the record size is not the full size, some user might not know
what to do. When the record size is 1 block, the limit of 2^24.5
blocks (records) is way too low unnecessarily for
the margin of 2^-60.  In that case, 2^34.5 1-block records is the
limit which still achieves the margin of 2^-60.
I respectfully disagree. TLS deals in records not in blocks, so in the
end any semantic change here will just confuse implementors, which
isn't a good idea in my opinion.
Over the discussion of the PRs, the preference was blocks.


I don't see a clear preference. I see Brian Smith suggested switching to
blocks to be more precise in a PR. But in general it seems to me that
"Option A" was preferred in this thread anyhow - so these PRs aren't
relevant? I'm not sure that text on key-usage
limits in blocks in a spec that fundamentally deals in records is less
confusing, quite the opposite (at least to me). As I pointed out
earlier: I strongly recommend that any changes to the spec are as clear
als possible to engineers (non-crypto/math people)
-- e.g. why the spec is suddenly dealing in blocks instead of records
et cetera. Again; I really don't see any reason to change text here - to
me all suggested changes are even more confusing.




Hi Aaron,


The  technical reasons I explained are reasons for using records. I don’t
see how that is confusing.


If you like records, then the record number = the total blocks / the
record size in blocks: this is simplest already.


That formula does not correctly compute how many records have been sent on
a connection, because the record size in blocks is variable, not constant.
You can modify it to get bounds on the total number of records sent, but
the bounds are sloppy because some records only consume 2 blocks (one for
encryption, one for masking in GHASH) while some consume far more.

It's simpler for an implementation to count how many records have been
sent on a connection  by using the connection's sequence number. This
puts less burden on the implementation/implementer.
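
To see how sloppy a record-to-block conversion can get, here is an illustrative comparison (the traffic mix is made up purely for the example; the +1 charges the extra AES call used to mask the GHASH output, as mentioned above):

    import math

    # Illustrative (made-up) traffic mix: some bulk records, many small writes.
    record_sizes = [2**14] * 1000 + [100] * 50000 + [16] * 10000   # plaintext bytes

    records_sent = len(record_sizes)    # what the connection's sequence number gives you
    blocks_used = sum(math.ceil(s / 16) + 1 for s in record_sizes) # exact per-block accounting

    # A records-only limit has to assume every record was full-size:
    worst_case_blocks = records_sent * (2**10 + 1)

    print(records_sent, blocks_used, worst_case_blocks)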

I think the record size is configurable and it does not change regularly within 
the same session (or connection).  Somebody correct me here if I am wrong!

Quynh.



Cheers

Kenny



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-03-01 Thread Dang, Quynh (Fed)


From: Aaron Zauner <a...@azet.org>
Date: Wednesday, March 1, 2017 at 9:24 AM
To: 'Quynh' <quynh.d...@nist.gov>
Cc: Sean Turner <s...@sn3rd.com>, "tls@ietf.org" <tls@ietf.org>, IRTF CFRG <c...@irtf.org>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).


On 01 Mar 2017, at 13:18, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>> wrote:
From: Aaron Zauner mailto:a...@azet.org>>
Date: Wednesday, March 1, 2017 at 8:11 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>
Cc: Sean Turner mailto:s...@sn3rd.com>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>, IRTF 
CFRG mailto:c...@irtf.org>>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).
On 25 Feb 2017, at 14:28, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>> wrote:
Hi Sean, Joe, Eric and all,
I would like to address my thoughts/suggestions on 2 issues in option a.
1) The data limit should be addressed in term of blocks, not records. When the 
record size is not the full size, some user might not know what to do. When the 
record size is 1 block, the limit of 2^24.5 blocks (records) is way too low 
unnecessarily for the margin of 2^-60.  In that case, 2^34.5 1-block records is 
the limit which still achieves the margin of 2^-60.
I respectfully disagree. TLS deals in records not in blocks, so in the end any 
semantic change here will just confuse implementors, which isn't a good idea in 
my opinion.
Over the discussion of the PRs, the preference was blocks.

I don't see a clear preference. I see Brian Smith suggested switching to blocks 
to be more precise in a PR. But in general it seems to me that "Option A" was 
preferred in this thread anyhow - so these PRs aren't relevant? I'm not sure 
that text on key-usage limits in blocks in a spec that fundamentally deals in 
records is less confusing, quite the opposite (at least to me). As I pointed 
out earlier: I strongly recommend that any changes to the spec are as clear als 
possible to engineers (non-crypto/math people) -- e.g. why the spec is suddenly 
dealing in blocks instead of records et cetera. Again; I really don't see any 
reason to change text here - to me all suggested changes are even more 
confusing.

Hi Aaron,

The  technical reasons I explained are reasons for using records. I don’t see 
how that is confusing.

If you like records, then the record number = the total blocks / the record 
size in blocks: this is simplest already.

Quynh.





Aaron


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-03-01 Thread Dang, Quynh (Fed)


From: Watson Ladd <watsonbl...@gmail.com>
Date: Wednesday, March 1, 2017 at 1:36 PM
To: 'Quynh' <quynh.d...@nist.gov>
Cc: "tls@ietf.org" <tls@ietf.org>, "c...@irtf.org" <c...@irtf.org>, Aaron Zauner <a...@azet.org>, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769).

That is not how HTTP works. Lots of records are small

OK. What is the percentage? Even if all records were small, providing a correct 
number would be a good thing. If someone wants to rekey often, I am not 
suggesting against that.

Quynh.

because they result from small writes.

On Mar 1, 2017 6:48 AM, "Dang, Quynh (Fed)" 
mailto:quynh.d...@nist.gov>> wrote:


From: "Paterson, Kenny" 
mailto:kenny.pater...@rhul.ac.uk>>
Date: Wednesday, March 1, 2017 at 9:38 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>, Aaron Zauner 
mailto:a...@azet.org>>
Cc: IRTF CFRG mailto:c...@irtf.org>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769).

Hi,

On 01/03/2017 14:31, "TLS on behalf of Dang, Quynh (Fed)"
mailto:tls-boun...@ietf.org> on behalf of 
quynh.d...@nist.gov<mailto:quynh.d...@nist.gov>> wrote:
From: Aaron Zauner mailto:a...@azet.org>>
Date: Wednesday, March 1, 2017 at 9:24 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>
Cc: Sean Turner mailto:s...@sn3rd.com>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>, IRTF
CFRG mailto:c...@irtf.org>>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
(#765/#769).





On 01 Mar 2017, at 13:18, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>> wrote:
From: Aaron Zauner mailto:a...@azet.org>>
Date: Wednesday, March 1, 2017 at 8:11 AM
To: 'Quynh' mailto:quynh.d...@nist.gov>>
Cc: Sean Turner mailto:s...@sn3rd.com>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>, IRTF
CFRG mailto:c...@irtf.org>>
Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
(#765/#769).
On 25 Feb 2017, at 14:28, Dang, Quynh (Fed) 
mailto:quynh.d...@nist.gov>>
wrote:
Hi Sean, Joe, Eric and all,
I would like to address my thoughts/suggestions on 2 issues in option
a.
1) The data limit should be addressed in term of blocks, not records.
When the record size is not the full size, some user might not know
what to do. When the record size is 1 block, the limit of 2^24.5
blocks (records) is way too low unnecessarily for
the margin of 2^-60.  In that case, 2^34.5 1-block records is the
limit which still achieves the margin of 2^-60.
I respectfully disagree. TLS deals in records not in blocks, so in the
end any semantic change here will just confuse implementors, which
isn't a good idea in my opinion.
Over the discussion of the PRs, the preference was blocks.


I don't see a clear preference. I see Brian Smith suggested switching to
blocks to be more precise in a PR. But in general it seems to me that
"Option A" was preferred in this thread anyhow - so these PRs aren't
relevant? I'm not sure that text on key-usage
limits in blocks in a spec that fundamentally deals in records is less
confusing, quite the opposite (at least to me). As I pointed out
earlier: I strongly recommend that any changes to the spec are as clear
als possible to engineers (non-crypto/math people)
-- e.g. why the spec is suddenly dealing in blocks instead of records
et cetera. Again; I really don't see any reason to change text here - to
me all suggested changes are even more confusing.




Hi Aaron,


The  technical reasons I explained are reasons for using records. I don’t
see how that is confusing.


If you like records, then the record number = the total blocks / the
record size in blocks: this is simplest already.


That formula does not correctly compute how many records have been sent on
a connection, because the record size in blocks is variable, not constant.
You can modify it to get bounds on the total number of records sent, but
the bounds are sloppy because some records only consume 2 blocks (one for
encryption, one for masking in GHASH) while some consume far more.

It's simpler for an implementation to count how many records have been
sent on a connection  by using the connection's sequence number. This
puts less burden on the implementation/implementer.

I think the record size is configurable and it does not change regularly in the 
same session (or connection).  Somebody corrects me here!

Quynh.



Cheers

Kenny




___
TLS mailing list
TLS@ietf.org<mailto:TLS@ietf.org>
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-03-02 Thread Dang, Quynh (Fed)


From: Martin Thomson <martin.thom...@gmail.com>
Date: Wednesday, March 1, 2017 at 4:18 PM
To: 'Quynh' <quynh.d...@nist.gov>
Cc: Watson Ladd <watsonbl...@gmail.com>, "c...@irtf.org" <c...@irtf.org>, "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769).

On 2 March 2017 at 05:44, Dang, Quynh (Fed) <quynh.d...@nist.gov> wrote:
OK. What is the percentage ? Even all records were small, providing a
correct number would be a good thing. If someone wants to rekey a lot often,
I am not suggesting against that.

It will vary greatly depending on circumstance.  Most of the time the
record size matches the MTU.  Other times it matches the write size,
which can be only a small number of octets.  For bulk transfers it can
approach the record maximum.  All on the same connection sometimes.

I really don't know what you are suggesting here.  The point is the
accounting in terms of records doesn't really give you any insight
into the number of blocks.

Hi Martin,

Thank you for the information.

In the PRs’ discussions, I saw that Brian and Rich wanted blocks. You, Eric and 
other people were comfortably discussing the issue in terms of blocks. 
Implementing and running TLS is your career, so I made suggestions based on 
blocks.

Aaron wanted records, so I gave him the equation to figure that out. I did not 
mean to suggest using records.

Quynh.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Requesting working group adoption of draft-stebila-tls-hybrid-design

2020-02-13 Thread Dang, Quynh H. (Fed)
This website has a good summary of the candidates: 
https://pqc-wiki.fau.edu/w/Special:DatabaseHome  .

Quynh.

From: TLS  on behalf of Martin Thomson 

Sent: Wednesday, February 12, 2020 4:57 PM
To: Blumenthal, Uri - 0553 - MITLL ; tls@ietf.org 

Subject: Re: [TLS] Requesting working group adoption of 
draft-stebila-tls-hybrid-design

On Thu, Feb 13, 2020, at 08:44, Blumenthal, Uri - 0553 - MITLL wrote:
> You saw the key sizes that the NIST PQC candidates require? How would
> you suggest dealing with them unless there's support for larger public
> keys?

Only a few of them.  Some are OK, but the number is few, I agree.  I haven't 
found a good summary of the second round candidates and I don't have time to 
dig into all of the candidates.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] NIST crypto group and HKDF (and therefore TLS 1.3)

2020-05-11 Thread Dang, Quynh H. (Fed)
Hi Rich, Sean and all,

1) Traditionally, a HKDF-Extract is used to extract entropy from a DH type 
shared secret. However, the first HKDF-Extract in the key schedule takes a PSK 
instead of a DH shared secret.

We don't see security problems with this instance in TLS 1.3. NIST requires the 
PSK to have a sufficient amount of entropy (to achieve a security strength 
required by NIST) when it is externally generated. When it is externally 
generated, one of NIST's approved random bit generation methods in the SP 800-90 
series must be used.

When the PSK is a resumption key, then its original key exchange and its key 
derivation function(s) must meet the security strength desired/required for the 
PSK.

NIST plans to allow/approve the function in SP 800-133r2, Section 6.3, item # 3 
on pages 22 and 23: 
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-133r2-draft.pdf

2) Traditionally, HKDF is extract-then-expand. However, in TLS 1.3, we have 
extract-then-multiple expands.

We don't see security problems for this new version of HKDF as specified in TLS 
1.3.  NIST plans to approve a general method for this approach in SP 800-56C 
revision 2, section 5.3: 
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Cr2-draft.pdf

NIST plans to handle the issues above that way to avoid repeating the work when 
one or both of the same HKDF instances or new variant(s) for one or both of 
them is/are used in different application(s).

The other KDFs are already compliant with NIST's existing KDFs.

Regards,
Quynh.


From: Cfrg  on behalf of Salz, Rich 

Sent: Friday, May 8, 2020 4:21 PM
To: tls@ietf.org ; c...@ietf.org 
Subject: [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

If you don’t care about FIPS-140, just delete this message, and avoid the 
temptation to argue how bad it is.

NIST SP 800-56C (Recommendation for Key-Derivation Methods in Key-Establishment 
Schemes) is currently a draft in review. The document is at 
https://csrc.nist.gov/publications/detail/sp/800-56c/rev-2/draft  Email 
comments can be sent to 800-56c_comme...@nist.gov with a deadline of May 15.  
That is not a lot of time.  The NIST crypto group is currently unlikely to 
include HKDF, which means that TLS 1.3 would not be part of FIPS. The CMVP 
folks at NIST understand this, and agree that this would be bad; they are 
looking at adding it, perhaps via an Implementation Guidance update.

If you have a view of HKDF (and perhaps TLS 1.3), I strongly encourage you to 
comment at the above address.  Please do not comment here. I know that many 
members of industry and academia have been involved with TLS 1.3, and performed 
security analysis of it. If you are one of those people, *please* send email 
and ask the NIST Crypto Team to reconsider.

Thank you.
/r$



___
Cfrg mailing list
c...@irtf.org
https://www.irtf.org/mailman/listinfo/cfrg
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

2020-05-12 Thread Dang, Quynh H. (Fed)
Hi Torsten,

Thank you for the review. I think the review helps many people to understand 
the HKDF's spec and its NIST's approval better.

In SP 800-108 
(https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-108.pdf, 
at the end of Section 5. (before 5.1), it says that "  Alternative orders for 
the input data fields may be used
for different KDFs. " .

And, at the end of the paragraph before that, it says "
One or more of these fixed input data fields may be omitted unless required for
certain purposes as discussed in Section 7.5 and Section 7.6.".

After an extraction step, the output is a pseudorandom key. The KDFs in SP 
800-108 are NIST's approved KDFs to derive key(s) from a pseudorandom key.  The 
purpose of any of these KDFs in SP 800-108 is the same as the purpose of the 
expansion step. Therefore, they are allowed to be used as expansion steps.

Regards,
Quynh.



From: "Torsten Schütze" 
Sent: Tuesday, May 12, 2020 7:39 AM
To: Hugo Krawczyk 
Cc: Dang, Quynh H. (Fed) ; c...@ietf.org ; 
tls@ietf.org ; rs...@akamai.com 
Subject: Re: [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

Hi Hugo, hi Quynh,

on Monday, 2020-05-11 Hugo Krawzcyk wrote:

> I haven't looked at the revisions. But in previous versions you needed lawyer 
> skills to go through the language to see that RFC 5869 was indeed compliant 
> with the NIST recommendation. It would be nice if this time it would make 
> very explicit that RFC 5869 is compliant with this Recommendation.

Indeed. In SP800-56C Rev. 2 draft we have in lines 545, 546:

"[RFC 5869] specifies a version of the above extraction-then-expansion 
key-derivation procedure using HMAC for both the extraction and expansion 
steps."  so one would assume that HKDF according to RFC 5869 is compliant with 
SP800-56CR2.

However, for key expansion it refers in line 533, 532 to

"2. Call KDF( K_DK, L, {IV,} FixedInfo ) to obtain DerivedKeyingMaterial or an 
error indicator (see [SP 800-108] for details)."

Everything would be fine if we find KDF( K_DK, L, {IV}, FixedInfo) as

HKDF-Expand(PRK, info, L) -> OKM

The output OKM is calculated as follows:

   N = ceil(L/HashLen)
   T = T(1) | T(2) | T(3) | ... | T(N)
   OKM = first L octets of T

   where:
   T(0) = empty string (zero length)
   T(1) = HMAC-Hash(PRK, T(0) | info | 0x01)
   T(2) = HMAC-Hash(PRK, T(1) | info | 0x02)
   T(3) = HMAC-Hash(PRK, T(2) | info | 0x03)

i.e. the definitions of RFC 5869 in SP800-108. Unfortunately, the closest one 
could find in SP800-108 is

5.2 KDF in Feedback Mode

1.  n: = \ceil{L/h}.
2.  If n > 2^{32} -1, then indicate an error and stop.
3.  result(0):= ∅ and K(0):= IV.
4.  For i = 1 to n, do
a.
K(i) := PRF (KI, K(i-1) {|| [i]2 }|| Label || 0x00 || Context || [L]2)
b.
result(i) := result(i-1) || K(i)
5. Return: K_O := the leftmost L bits of result(n).

With the substitutions PRK = KI, HashLen = h, N = n, T(i) = K(i), 0x01, 0x02, ... = 
[i]_2, OKM = K_O and info = Label || 0x00 || Context || [L]_2, one is almost 
there, EXCEPT

- the counter 0x01, 0x02, 0x03 is at the end of the string in HKDF RFC 5869 and 
right-after the K(i-1), respectively T(i), in SP800-108. At least this gives 
different results. (This is what already Dan Brown wrote in a recent mail). I 
don't think this has security implications, but I'm no expert.

- With HKDF, it is only allowed to iterate up to N = 255 as L \le 255 HashLen 
while in SP800-108 we have n \le 2^{32}-1.

So, with this interpretation I don't see that HKDF RFC5869 is a concrete 
instantiation of SP800-56C rev2 draft + SP800-108. At least I couldn't find any 
official CAVP test vectors for such an HKDF-HMAC-SHA-256 construct. BTW, while 
we have such test vectors in RFC 5869 for SHA-256 (and SHA-1), there are no such 
things for SHA-384 or SHA-512, i.e. higher security levels. As a practitioner I 
would first test whether my HKDF RFC 5869 implementation allows iterating 
above N = 255. BTW, I don't have a good feeling about extracting up to 2^{32}-1 
keys from a single IKM.
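
To make the counter-placement difference concrete, here is a small side-by-side sketch (SP 800-108 leaves the counter and length-field widths as parameters; 32-bit big-endian values and an illustrative label/context split are assumed here):

    import hmac, hashlib, math

    def hkdf_expand(prk, info, L):                    # RFC 5869: counter at the END
        out, t = b"", b""
        for i in range(1, math.ceil(L / 32) + 1):
            t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
            out += t
        return out[:L]

    def kdf_feedback(ki, label, context, L, iv=b""):  # SP 800-108: counter right after K(i-1)
        out, k = b"", iv
        L_bits = (L * 8).to_bytes(4, "big")
        for i in range(1, math.ceil(L / 32) + 1):
            data = k + i.to_bytes(4, "big") + label + b"\x00" + context + L_bits
            k = hmac.new(ki, data, hashlib.sha256).digest()
            out += k
        return out[:L]

    prk = b"\x0b" * 32
    # Even with info chosen as Label || 0x00 || Context || [L]_2, the two outputs
    # differ because of where (and how wide) the counter is encoded:
    print(hkdf_expand(prk, b"label" + b"\x00" + b"ctx" + (64 * 8).to_bytes(4, "big"), 64).hex())
    print(kdf_feedback(prk, b"label", b"ctx", 64).hex())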

I would like to hear from NIST if there are any plans to provide CAVP test 
vectors for HKDF-HMAC-SHA-2 according to RFC 5869. In my opinion, SP800-56C 
rev2 draft is suboptimal as it refers for a very important component, i.e. key 
expansion, to another, quite old document.

Torsten
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

2020-05-14 Thread Dang, Quynh H. (Fed)
Hi Torsten,

HKDF is one of the KDFs approved for use together with an approved key exchange, 
as specified in 56C.

At this moment, a standalone HKDF is not approved yet.

Draft version 2 of SP 800-133 (Section 6.3, item# 3: 
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-133r2-draft.pdf
 )specifies an option for a HKDF's extraction step when the IKM is a shared 
secret generated from a NIST's approved random bit generator in SP 800-90 
series (like external pre-shared key in TLS 1.3) or when the IKM is a 
pseudorandom key derived from a previous approved key exchange (like a 
resumption in TLS 1.3).




When/if that extraction-step option is officially approved (meaning the HKDF 
instances currently approved within key exchanges in SP 800-56C would become 
NIST-approved standalone HKDFs), we'll publish their test vectors.

Regards,
Quynh.


From: "Torsten Schütze" 
Sent: Tuesday, May 12, 2020 8:36 AM
To: Dang, Quynh H. (Fed) 
Cc: Hugo Krawczyk ; c...@ietf.org ; 
tls@ietf.org ; rs...@akamai.com 
Subject: Aw: Re: [Cfrg] NIST crypto group and HKDF (and therefore TLS 1.3)

Hi Quynh,

thank you for your quick response. I knew that omitting some fields was 
allowed, but not that permutations are allowed, too. Okay, this makes HKDF RFC 
5869 definitely a NIST SP800-56C rev 2 compliant KDF. But what to do about 
the CAVP tests or approved test vectors? Couldn't NIST provide approved test 
vectors for the very widely used RFC 5869 HKDF? I couldn't find any, only some 
for older, application-specific KDFs. Of course, I can generate them myself 
with an independent implementation, but I'm talking about evaluation/approval 
business here.

Regards

Torsten



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Re: X25519MLKEM768 in draft-kwiatkowski-tls-ecdhe-mlkem-02

2024-10-17 Thread Dang, Quynh H. (Fed)
Hi Kris and Deirdre,

We talked about cases of  X||Y internally at NIST.

1) X is a raw shared secret such as Z in 56C and Y is the output of a 
NIST-approved PQ KEM. Their order can be reversed. This should be approved as a 
PQ security solution.

2) X is an output of a NIST-approved classical key establishment/agreement 
method (not a raw shared secret, not Z) and Y is the output of a NIST-approved 
PQ KEM. Their order can be reversed. This should be approved as a PQ security 
solution.

3) Both X and Y are outputs of NIST-approved PQ-secure KEMs. Their order can be 
reversed. This should be approved as a PQ security solution.

4) Both X and Y are outputs of NIST-approved PQ secure KEMs. Later, if only one 
of them is still NIST-approved PQ secure KEM.

But I don’t know an official policy from my management at this moment.
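
For reference, the concatenation being discussed is plain byte-string concatenation of the two shares, with the order fixed per NamedGroup; a sketch with dummy placeholder values (no real KEM or ECDH calls):

    # Dummy 32-byte values standing in for the real outputs:
    mlkem768_ss = b"\x11" * 32   # K from ML-KEM-768 encaps/decaps
    x25519_ss   = b"\x22" * 32   # X25519 shared secret
    p256_ss     = b"\x33" * 32   # ECDH(P-256) shared secret

    # Per draft-kwiatkowski-tls-ecdhe-mlkem-02 as discussed in this thread:
    x25519mlkem768_shared_secret    = mlkem768_ss + x25519_ss  # ML-KEM share first
    secp256r1mlkem768_shared_secret = p256_ss + mlkem768_ss    # P-256 share first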

Regards,
Quynh.






From: Deirdre Connolly 
Sent: Thursday, October 17, 2024 9:24 AM
To: Kris Kwiatkowski 
Cc: CJ Tjhai ; 
draft-kwiatkowski-tls-ecdhe-mlkem.auth...@ietf.org; TLS List 
Subject: [TLS] Re: X25519MLKEM768 in draft-kwiatkowski-tls-ecdhe-mlkem-02

Just to clarify Kris, you are _asking_ if there is a plan? I don't know if 
Quynh can comment but

NIST have said publicly that they plan to clarify hybrid KEMs in the 
forthcoming SP 800-227:
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/6_D0mMSYJZY/m/3DwwIAJXAwAJ

> is there a plan to change SP800-56Cr2, so that it allows to
> use combination of two shared secrets where one was generated by FIPS-approved
> technique, BUT concatenated in any order.

On Thu, Oct 17, 2024 at 9:10 AM Kris Kwiatkowski <k...@amongbytes.com> wrote:

Indeed, that would be good inside.

Additionally, is there a plan to change SP800-56Cr2, so that it allows to
use combination of two shared secrets where one was generated by FIPS-approved
technique, BUT concatenated in any order.

I understand it is potentially more complicated for ACVP testing, but it
seems it would solve a problem. Does order matter from the security perspective?
On 17/10/2024 13:53, Eric Rescorla wrote:
Can we get a ruling on this from NIST? Quynh?

-Ekr


On Thu, Oct 17, 2024 at 2:32 AM Joseph Birr-Pixton <jpix...@gmail.com> wrote:
Please could we... not?

It certainly is one interpretation of that section in SP800-56C. Another is 
that TLS1.3 falls outside SP800-56C, because while HKDF kinda looks like 
section 5, none of the allowed options for key expansion specified in SP800-108 
(and revs) are the same as HKDF-Expand. "KDF in Feedback Mode" gets close, but 
(ironically) the order and width of inputs are different. Given people have 
shipped FIPS-approved TLS1.3 many times by now (with approved HKDF 
implementations under SP800-56C!), we can conclude that FIPS approval is simply 
not sensitive to these sorts of details.

I also note that tls-hybrid-design says:

> The order of shares in the concatenation
> MUST be the same as the order of algorithms indicated in the
> definition of the NamedGroup.

So we're not even being consistent with something past WGLC?

Thanks,
Joe

On Thu, 17 Oct 2024 at 08:58, Kris Kwiatkowski 
mailto:k...@amongbytes.com>> wrote:

Yes, we switched the order. We want MLKEM before X25519, as that presumably can 
be FIPS-certified.
According to 
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Cr2.pdf, 
section 2,
the shared secret from the FIPS-approved algorithm must precede the one that is 
not approved. X25519
is not FIPS-approved hence MLKEM goes first. P-256 is FIPS-approved.

The ordering was mentioned a few times, and there was some discussion on github 
[1] about it. But,
maybe the conclusion should be just to change the name X25519MLKEM768 -> 
MLKEM768X25519 (any opinion?)
That would be just a name change, so the code point value should stay the same.

Cheers,
Kris

[1] 
https://github.com/open-quantum-safe/oqs-provider/issues/503#issuecomment-2349478942
On 17/10/2024 08:24, Watson Ladd wrote:

Did we really switch the order gratuitously on the wire between them?



On Thu, Oct 17, 2024 at 12:02 AM CJ Tjhai


 wrote:

Hello,



The X25519MLKEM768 scheme defined in the document is a concatenation of 
MLKEM768 and X25519, why is it not named MLKEM768X25519 instead?



For SecP256r1MLKEM768, the naming makes sense since it's a concatenation of 
P256 and MLKEM768.



Apologies if this has already been asked before.



Cheers,

CJ










[TLS] Re: X25519MLKEM768 in draft-kwiatkowski-tls-ecdhe-mlkem-02

2024-10-17 Thread Dang, Quynh H. (Fed)
Hi Eric and all,

NIST allows the shared secret key, called K, of an ML-KEM to be used in the 
place of Z (a raw shared secret) in 56C.

In 56C, Z can be X||Y, but X must be either a raw shared secret generated by a 
NIST-approved key agreement/establishment method or K.

An X25519 raw shared secret, or a pseudorandom key generated from it, is not a 
NIST-approved X.

A NIST-approved ECDH method produces shared secret key(s), not a raw DH shared 
secret, because it always includes a KDF that outputs the desired key(s) (the 
desired keying material) that the application on top of it needs.

Regards,
Quynh.

From: Eric Rescorla 
Sent: Thursday, October 17, 2024 8:54 AM
To: Joseph Birr-Pixton 
Cc: CJ Tjhai ; 
draft-kwiatkowski-tls-ecdhe-mlkem.auth...@ietf.org; TLS List 
Subject: [TLS] Re: X25519MLKEM768 in draft-kwiatkowski-tls-ecdhe-mlkem-02

Can we get a ruling on this from NIST? Quynh?

-Ekr


On Thu, Oct 17, 2024 at 2:32 AM Joseph Birr-Pixton 
mailto:jpix...@gmail.com>> wrote:
Please could we... not?

It certainly is one interpretation of that section in SP800-56C. Another is 
that TLS1.3 falls outside SP800-56C, because while HKDF kinda looks like 
section 5, none of the allowed options for key expansion specified in SP800-108 
(and revs) are the same as HKDF-Expand. "KDF in Feedback Mode" gets close, but 
(ironically) the order and width of inputs are different. Given people have 
shipped FIPS-approved TLS1.3 many times by now (with approved HKDF 
implementations under SP800-56C!), we can conclude that FIPS approval is simply 
not sensitive to these sorts of details.

I also note that tls-hybrid-design says:

> The order of shares in the concatenation
> MUST be the same as the order of algorithms indicated in the
> definition of the NamedGroup.

So we're not even being consistent with something past WGLC?

Thanks,
Joe

On Thu, 17 Oct 2024 at 08:58, Kris Kwiatkowski 
mailto:k...@amongbytes.com>> wrote:

Yes, we switched the order. We want MLKEM before X25519, as that presumably can 
be FIPS-certified.
According to 
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Cr2.pdf, 
section 2,
the shared secret from the FIPS-approved algorithm must precede the one that is 
not approved. X25519
is not FIPS-approved hence MLKEM goes first. P-256 is FIPS-approved.

The ordering was mentioned a few times, and there was some discussion on github 
[1] about it. But,
maybe the conclusion should be just to change the name X25519MLKEM768 -> 
MLKEM768X25519 (any opinion?)
That would be just a name change, so the code point value should stay the same.

Cheers,
Kris

[1] 
https://github.com/open-quantum-safe/oqs-provider/issues/503#issuecomment-2349478942
On 17/10/2024 08:24, Watson Ladd wrote:

Did we really switch the order gratuitously on the wire between them?



On Thu, Oct 17, 2024 at 12:02 AM CJ Tjhai


 wrote:

Hello,



The X25519MLKEM768 scheme defined in the document is a concatenation of 
MLKEM768 and X25519, why is it not named MLKEM768X25519 instead?



For SecP256r1MLKEM768, the naming makes sense since it's a concatenation of 
P256 and MLKEM768.



Apologies if this has already been asked before.



Cheers,

CJ









___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: X25519MLKEM768 in draft-kwiatkowski-tls-ecdhe-mlkem-02

2024-10-23 Thread Dang, Quynh H. (Fed)
Hi all,

Please note that in this discussion I don't speak on behalf of anybody else, 
including the NIST PQC team, and my message is not intended to imply any 
technical policies in the future. I just want to explain some technical 
matters.

In SP 800-56C, we specified Z = Z' || T as an option where Z' is generated by a 
NIST-approved Key establishment scheme (either a key agreement such as DH or a 
key transport like RSA).  My impression is that some people here are not happy 
about this.

For the one-step KDFs, either with a hash function or with HMAC, the message 
input to the KDF is always counter||Z||FixedInfo, where FixedInfo is optional 
and can be a public value. So Z = Z'||T keeps the status quo (it does not 
change the analysis of the use of the one-step KDFs), because one can treat T, 
which is not a NIST-approved shared secret value, as a constant that is part 
of FixedInfo. In fact, for the one-step KDFs, replacing Z by Z'||T is exactly 
the same as changing FixedInfo to FixedInfo' = T||FixedInfo, an option that 
was already available to all implementations of the one-step KDFs in 56C.
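
As a quick illustration of that equivalence (a toy sketch of my own, with 
random stand-ins for the secrets and SHA-256 as the one-step KDF hash), the 
byte string hashed by the KDF is identical whether T is viewed as the tail of 
Z or as the head of FixedInfo:

    import hashlib, os

    Z_prime = os.urandom(32)       # stand-in for an approved shared secret Z'
    T = os.urandom(32)             # stand-in for the additional secret T
    fixedinfo = b"example FixedInfo"
    counter = b"\x00\x00\x00\x01"  # 32-bit counter for the first output block

    # View 1: Z = Z' || T, FixedInfo unchanged.
    view1 = hashlib.sha256(counter + (Z_prime + T) + fixedinfo).digest()

    # View 2: Z = Z', FixedInfo' = T || FixedInfo.
    view2 = hashlib.sha256(counter + Z_prime + (T + fixedinfo)).digest()

    assert view1 == view2  # the hashed bytes are the same either way
    print(view1.hex())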

Why did we not also allow Z to be T||Z' in 56C? We did not see any security 
reason for doing it the other way around, given our assumption that Z' is 
generated by a secure method and T by a broken one. So we did not see a need 
for a new variant. We also had a technical reason for not wanting combos like 
(ML-KEM-512 || RSA-3072) and (RSA-3072 || ML-KEM-512) to co-exist.

Let's say we have two protocols that use the reverse order of each other: 
Z = Z1||T1 and Z' = T2||Z2. There are three parties, A, B and M. A and M do a 
key exchange, B and M do a key exchange, and M is the bad guy, the attacker.

M needs to send data to A and B to establish Z1, Z2, T1 and T2. For example, M 
uses ML-KEM to generate Z1 and Z2, then takes Z2 and Z1 to be T1 and T2 for 
M's key exchanges with A and B respectively by doing RSA key transports (Z2 
and Z1 are the shared secrets of the two RSA key transports). So, in this case 
Z = Z1||T1 = Z1||Z2 and Z' = T2||Z2 = Z1||Z2, i.e. Z = Z'. Because of the 
existence of ML-KEM, A and B might expect that their shared secret keys with M 
from the hybrids are "random" and "unique". So, allowing both orders breaks 
that property.

If T1 and T2 are (EC)DH, the attack won’t work.
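
To illustrate the RSA key-transport example above, here is a toy sketch (my 
own illustration; os.urandom stands in for the ML-KEM shared secrets, and the 
"key transport" step simply reuses those values):

    import os

    # M's ML-KEM shared secrets with A and B (random stand-ins).
    Z1 = os.urandom(32)   # ML-KEM shared secret between A and M
    Z2 = os.urandom(32)   # ML-KEM shared secret between B and M

    # M "transports" Z2 to A and Z1 to B, so T1 = Z2 and T2 = Z1.
    T1, T2 = Z2, Z1

    Z_A = Z1 + T1   # protocol 1: Z  = Z1 || T1
    Z_B = T2 + Z2   # protocol 2: Z' = T2 || Z2

    assert Z_A == Z_B   # both equal Z1 || Z2, so the two hybrid secrets collide
    print(Z_A == Z_B)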

If T1 and T2 are pre-shared keys, then the attack is more complicated because 
M must generate Z1 and Z2 as ML-KEM shared secret keys with A and B, then make 
A and B agree to Z2 and Z1 as their pre-shared secrets T1 and T2 respectively 
before doing the actual key exchanges with them. So, in theory this also works.

If T1 and T2 are generated by another KEM, then the attack is possible 
depending on how T1 and T2 are generated by this KEM. For example, if this KEM 
computes (K, r) ← G(message), then the attacker M takes m||H(ek) from ML-KEM 
as "message" (ML-KEM computes (K, r) ← G(m||H(ek))). So, after generating Z1 
and Z2 from ML-KEM, the attacker can generate T1 and T2 to be Z2 and Z1 
respectively from this KEM. In this case, the attack works. This bad KEM works 
as a key transport because the Encaps side wholly decides the value of the 
shared secret keys T1 and T2.

A counterargument to the importance of the attack above is that when M decides 
to do bad things, M won't bother with this one because there are many worse 
things M can do.

We understand that the attack above does not work for TLS 1.3 because the 
inputs to the Derive-Secret functions contain unique byte strings such as "c ap 
traffic" and the transcript.
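
For reference, here is a rough sketch (mine, using SHA-256 and the structures 
from RFC 8446, Section 7.1) of how Derive-Secret binds the label and the 
transcript hash into the HKDF-Expand input, which is the uniqueness referred 
to above:

    import hashlib, hmac

    HASH = hashlib.sha256
    HASH_LEN = HASH().digest_size

    def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
        # HKDF-Expand per RFC 5869.
        t, okm, i = b"", b"", 1
        while len(okm) < length:
            t = hmac.new(prk, t + info + bytes([i]), HASH).digest()
            okm += t
            i += 1
        return okm[:length]

    def hkdf_expand_label(secret: bytes, label: bytes, context: bytes, length: int) -> bytes:
        # HkdfLabel = uint16 length || opaque label<7..255> || opaque context<0..255>
        full_label = b"tls13 " + label
        hkdf_label = (length.to_bytes(2, "big")
                      + bytes([len(full_label)]) + full_label
                      + bytes([len(context)]) + context)
        return hkdf_expand(secret, hkdf_label, length)

    def derive_secret(secret: bytes, label: bytes, messages: bytes) -> bytes:
        # Derive-Secret(Secret, Label, Messages), RFC 8446, Section 7.1.
        return hkdf_expand_label(secret, label, HASH(messages).digest(), HASH_LEN)

    toy_secret = b"\x01" * HASH_LEN  # stand-in for a key-schedule secret
    print(derive_secret(toy_secret, b"c ap traffic", b"toy transcript").hex())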

When we have a decision on this matter, our team will share it with you.

Regards,
Quynh.


From: David Benjamin 
Sent: Wednesday, October 23, 2024 11:52 AM
To: Joseph Birr-Pixton 
Cc: Salz, Rich ; John Mattsson 
; tls@ietf.org
Subject: [TLS] Re: X25519MLKEM768 in draft-kwiatkowski-tls-ecdhe-mlkem-02

Oh I definitely agree there is a deeper problem here. It seems like NIST wrote 
something over-restrictive and then folks did a preemptive compliance maneuver 
to try to satisfy it. This is not a good way to do protocol development and 
hampers what should ultimately have been the goal for everyone: making TLS a 
good protocol to secure network communications.

But, regrettable as it is, the damage has been done for X25519MLKEM768 and we 
shouldn't change it now. Fixing the underlying problem is for the future. The 
compliance schemes should be fixed so that they reflect the actual security 
needs here, and there is no reason to have an opinion on whether any particular 
compliance scheme's preferred secrets go first or second. After all, the entire 
point of this hybrid thing is that you don't have to care about the other input.

On Wed, Oct 23, 2024 at 11:39 AM Joseph Birr-Pixton 
mailto:jpix...@gmail.com>> wrote:
On Fri, 18 Oct 2024 at 15:12, Salz, Rich 
mailto:40akamai@dmarc.ietf.org>> wrote:
To me, this trumps geek esthetics about making things line up.

To be clear, my objection is not aesthetic or even about implementati

[TLS] Re: [EXTERNAL] Disallowing reuse of ephemeral keys

2024-12-16 Thread Dang, Quynh H. (Fed)
KeyGen is very cheap (generally). So, for many use cases, it seems it does not 
make sense to keep a private key around for later uses.

In very constrained environments, if the goal is to save computation as much as 
possible, some users could decide to reuse keys.

Regards,
Quynh.

From: Richard Barnes 
Sent: Monday, December 16, 2024 9:11 AM
To: Scott Fluhrer (sfluhrer) 
Cc: Andrei Popov ; IETF TLS 

Subject: [TLS] Re: [EXTERNAL] Disallowing reuse of ephemeral keys

You’re technically correct, but if you look at how TLS stacks work in practice, 
the amount of state they keep across connections is tiny, basically just what 
is needed to support resumption, if that. So tracking which public keys have 
been seen would be a big lift.

On the other hand, if a couple of widespread clients started enforcing 
uniqueness, it could be enough to make the ecosystem inimical to reuse.

On the third hand, enforcing means failing connections that would otherwise 
work, so you would need a substantial security benefit to get a critical mass 
to enforce.  Which I’m not sure is there.

—Richard



On Mon, Dec 16, 2024 at 09:03 Scott Fluhrer (sfluhrer) 
mailto:40cisco@dmarc.ietf.org>> wrote:
Might I remind people that ML-KEM public key reuse is detectable?

The ML-KEM public key is in the client hello, which is either in the clear, or 
(in the case of ECH) is readable by the server.  Hence, if the same ML-KEM key 
is reused, then (in the worst case) the server can detect that.

And, if it is externally visible, I believe that the TLS WG can forbid it.  
Whether it should or not is what we are debating, but I believe the debate 
can't be closed on that basis.

Has anyone considered the open questions I gave a few days ago?

> -Original Message-
> From: Alicja Kario mailto:hka...@redhat.com>>
> Sent: Monday, December 16, 2024 8:42 AM
> To: Christian Huitema mailto:huit...@huitema.net>>
> Cc: Andrei Popov 
> mailto:40microsoft@dmarc.ietf.org>>;
>  IETF TLS
> mailto:tls@ietf.org>>
> Subject: [TLS] Re: [EXTERNAL] Disallowing reuse of ephemeral keys
>
> No, the specification definitely can, and should, specify behaviours that are
> unenforceable.
>
> When there are preferred or recommended ways of doing something, we
> should absolutely put that in writing.
>
> On Thursday, 12 December 2024 21:07:03 CET, Christian Huitema wrote:
> > I like keeping things as they are. Disallowing only makes sense if that
> > prohibition can be enforced, and one of the peer refuses the
> > connection if it detects key reuse. Would we want to do that? And,
> > even if we wanted to accept the cost of refusing connections, could
> > individual nodes actually detect reuse by a peer?
> >
> > -- Christian Huitema
> >
> > On Dec 12, 2024, at 10:11 AM, Andrei Popov
> > mailto:40microsoft@dmarc.ietf.org>>
> >  wrote:
> >
> >
> > +1 in favor of option1.
> >
> > Cheers,
> >
> > Andrei
> >
> > From: Russ Housley mailto:hous...@vigilsec.com>>
> > Sent: Thursday, December 12, 2024 9:43 AM
> > To: Joe Salowey mailto:j...@salowey.net>>
> > Cc: IETF TLS mailto:tls@ietf.org>>
> > Subject: [EXTERNAL] [TLS] Re: Disallowing reuse of ephemeral keys
> >
> > I prefer option 1.
> >
> > Russ
> >
> >
> > On Dec 12, 2024, at 12:35 PM, Joseph Salowey 
> > mailto:j...@salowey.net>> wrote:
> >
> > Currently RFC 8446 (and RFC8446bis) do not forbid the reuse of
> > ephemeral keys.  This was the consensus of the working group during
> > the development of TLS 1.3.  There has been more recent discussion on
> > the list to forbid reuse for ML-KEM/hybrid key exchange.  There are
> > several possible options here:
> >
> > 1. Keep things as they are (i.e. say nothing, as was done in previous TLS
> >    versions, to forbid the reuse of ephemeral keys) - this is the default
> >    action if there is no consensus.
> > 2. Disallow reuse for specific ciphersuites.  It doesn’t appear that there
> >    is any real difference in this matter between MLKEM/hybrids and ECDH
> >    here except that there are many more ECDH implementations (some of
> >    which may reuse a keyshare).
> > 3. Update 8446 to disallow reuse of ephemeral keyshares in general.  This
> >    could be done by revising RFC 8446bis or with a separate document that
> >    updates RFC 8446/bis.
> >
> > We would like to know if there are folks who think the reuse of
> > keyshares is important for HTTP or non-HTTP use cases.
> >
> >
> > Thanks,
> >
> >
> > Joe, Deirdre and Sean
> >
>
> --
> Regards,
> Alicja (nee Hubert) Kario
> Principal Quality Engineer, RHEL Crypto team
> Web: www.cz.redhat.com
> Red Hat Czech s.r.o., Purkyňova 115, 612 00, Brno, Czech Republic
>
> ___
> TLS mailing list -- tls@ietf.org
> To unsubscribe send an email to tls-le...@ietf.org
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org

[TLS] Re: Changing WG Mail List Reputation

2025-01-14 Thread Dang, Quynh H. (Fed)
Hi all,

It is sad to learn that many people would like to join in the discussions but 
decide not to do so because they anticipate the pain they would get and the 
time they would need to spend.

There are ways to help the situation.  For example, the chairs could decide 
that 80% agreement on something is defined as consensus.

The chairs can open a thread to discuss a technical matter, then at some point 
make a consensus call: yes/no (reasons not required, because the matter has 
already been discussed).

One of the things I am concerned about with this method is that every email 
counts as a vote.

Maybe consensus calls can only be made and completed at the in-person meetings?

Regards,
Quynh.

From: Filippo Valsorda 
Sent: Tuesday, January 14, 2025 1:48 PM
To: tls@ietf.org
Subject: [TLS] Re: Changing WG Mail List Reputation

2024-10-25 14:30 GMT+02:00 Sean Turner mailto:s...@sn3rd.com>>:
• Repetition of arguments without providing substantive new information
• Requesting an unreasonable amount of work to provide information

Personally, the reason I find the list (and generally the IETF) unwelcoming is 
that arguments can easily prevail by attrition. Some participants have the time 
and determination to reply to every email, nitpick every argument, 
systematically reiterate their position, attack other's positions and 
motivations, and demand explanation of every assertion, while others don't.

I know at least a few implementers that don't engage with the IETF because they 
don't have time for all that. Myself I go months without opening the list inbox 
because I know engaging is a tiny campaign every time.

Two participants sending a dozen emails in support of solution A, and six 
participants sending one email each in support of solution B can look a lot 
like there is no consensus, or that there is consensus for solution A, 
especially if not all objections to solution B are painstakingly addressed.

I think this is what these two points in the reminder are getting at, but I am 
curious how moderating such behavior would look like, because every individual 
instance can be defended by arguing (probably at length!) that actually there 
is new information in each post, or that the amount of work being demanded is 
perfectly appropriate.

I want to acknowledge this is a common and difficult problem to solve. 
Famously, Wikipedia suffers from the same pathology. Maybe it's just the 
downside of open forums and it should be accepted, but if the goal is improving 
the reputation of the list, I feel there needs to be willingness to engage 
these behaviors, which will not make everyone happy.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] Re: FW: I-D Action: draft-kwiatkowski-tls-ecdhe-mlkem-03.txt

2025-03-10 Thread Dang, Quynh H. (Fed)
The server can detect a reused encapsulation key if it saves the keys which 
have been received and checks each newly received key against the list of its 
saved keys. The server could just save hashes of the keys, or a "small" 
portion of each key, as key IDs.  My guess is that this would be an expensive 
operation, for many reasons.
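
A minimal sketch of that bookkeeping (my own illustration, not a 
recommendation), keeping only a truncated hash of each received encapsulation 
key and flagging repeats:

    import hashlib

    seen_key_ids = set()  # would need bounding/expiry in a real server

    def check_key_share(encapsulation_key: bytes) -> bool:
        """Return True if this encapsulation key has been seen before (reuse)."""
        key_id = hashlib.sha256(encapsulation_key).digest()[:16]  # truncated hash as key ID
        if key_id in seen_key_ids:
            return True
        seen_key_ids.add(key_id)
        return False

    # Toy usage with an arbitrary byte string standing in for a client key share:
    print(check_key_share(b"\x42" * 1184))  # False: first sighting
    print(check_key_share(b"\x42" * 1184))  # True: reuse detected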

Regards,
Quynh. 
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org