[TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
Watson kindly prepared some text describing the limits on what's safe
for AES-GCM, and it restricts all algorithms in TLS 1.3 to that lower
limit (2^{36} bytes), even though ChaCha doesn't have the same
restriction.

I wanted to get people's opinions on whether that's actually what we want
or whether we should (as is my instinct) allow people to use ChaCha
for longer periods.

-Ekr


Re: [TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
For context, see:
https://github.com/tlswg/tls13-spec/pull/372

On Tue, Dec 15, 2015 at 1:14 PM, Eric Rescorla  wrote:

> Watson kindly prepared some text that described the limits on what's safe
> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
> limit (2^{36} bytes), even though ChaCha doesn't have the same
> restriction.
>
> I wanted to get people's opinions on whether that's actually what we want
> or whether we should (as is my instinct) allow people to use ChaCha
> for longer periods.
>
> -Ekr
>
>


Re: [TLS] Data volume limits

2015-12-15 Thread Watson Ladd
I don't think that's what I intended: I think the limit should be
ciphersuite specific. Unfortunately that requires more work.

On Tue, Dec 15, 2015 at 4:15 PM, Eric Rescorla  wrote:
> For context, see:
> https://github.com/tlswg/tls13-spec/pull/372
>
> On Tue, Dec 15, 2015 at 1:14 PM, Eric Rescorla  wrote:
>>
>> Watson kindly prepared some text that described the limits on what's safe
>> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
>> limit (2^{36} bytes), even though ChaCha doesn't have the same
>> restriction.
>>
>> I wanted to get people's opinions on whether that's actually what we want
>> or whether we should (as is my instinct) allow people to use ChaCha
>> for longer periods.
>>
>> -Ekr
>>
>
>



-- 
"Man is born free, but everywhere he is in chains".
--Rousseau.



Re: [TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
On Tue, Dec 15, 2015 at 1:17 PM, Watson Ladd  wrote:

> I don't think that's what I intended: I think the limit should be
> ciphersuite specific. Unfortunately that requires more work.
>

That makes sense. Do you think you'll be able to provide that in the not
too distant future? I can just leave this on ice till then...

-Ekr


> On Tue, Dec 15, 2015 at 4:15 PM, Eric Rescorla  wrote:
> > For context, see:
> > https://github.com/tlswg/tls13-spec/pull/372
> >
> > On Tue, Dec 15, 2015 at 1:14 PM, Eric Rescorla  wrote:
> >>
> >> Watson kindly prepared some text that described the limits on what's
> safe
> >> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
> >> limit (2^{36} bytes), even though ChaCha doesn't have the same
> >> restriction.
> >>
> >> I wanted to get people's opinions on whether that's actually what we
> want
> >> or whether we should (as is my instinct) allow people to use ChaCha
> >> for longer periods.
> >>
> >> -Ekr
> >>
> >
> >
>
>
> --
> "Man is born free, but everywhere he is in chains".
> --Rousseau.
>


Re: [TLS] Data volume limits

2015-12-15 Thread Dave Garrett
Personally, I think a hard requirement to rekey every 64GiB is reasonable 
enough to just use it for every cipher. I don't think cipher-specific 
requirements are worth the effort/complexity. Something like a MUST for AES-GCM 
and a SHOULD for ChaCha seems fine, though, if really desired.


Dave


On Tuesday, December 15, 2015 04:17:34 pm Watson Ladd wrote:
> I don't think that's what I intended: I think the limit should be
> ciphersuite specific. Unfortunately that requires more work.
> 
> On Tue, Dec 15, 2015 at 4:15 PM, Eric Rescorla  wrote:
> > For context, see:
> > https://github.com/tlswg/tls13-spec/pull/372
> >
> > On Tue, Dec 15, 2015 at 1:14 PM, Eric Rescorla  wrote:
> >>
> >> Watson kindly prepared some text that described the limits on what's safe
> >> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
> >> limit (2^{36} bytes), even though ChaCha doesn't have the same
> >> restriction.
> >>
> >> I wanted to get people's opinions on whether that's actually what we want
> >> or whether we should (as is my instinct) allow people to use ChaCha
> >> for longer periods.
> >>
> >> -Ekr



Re: [TLS] Data volume limits

2015-12-15 Thread Benjamin Beurdouche

> On 15 Dec 2015, at 22:17, Watson Ladd  wrote:
> 
> I don't think that's what I intended: I think the limit should be
> ciphersuite specific. Unfortunately that requires more work.
> 
> On Tue, Dec 15, 2015 at 4:15 PM, Eric Rescorla  wrote:
>> 
>>> I wanted to get people's opinions on whether that's actually what we want
>>> or whether we should (as is my instinct) allow people to use ChaCha
>>> for longer periods.

IMHO, if we differentiate the limit depending on the ciphersuite, it will be 
more complex to handle and cause problems at some point.
I would rather have a single value in the spec that is safe for all allowed
ciphersuites, rekey more frequently, and let people take their own risks by
setting higher limits if they do not negotiate AES-GCM (for example).

B.


Re: [TLS] Data volume limits

2015-12-15 Thread Scott Fluhrer (sfluhrer)
Might I enquire about the cryptographic reason behind such a limit?

Is this the limit on the size of a single record?  GCM does have a limit 
approximately there on the size of a single plaintext it can encrypt.  For TLS, 
it encrypts a record as a single plaintext, and so this would apply to 
extremely huge records.
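
(For reference, a rough sketch of where that single-plaintext limit comes
from: GCM's CTR component uses a 32-bit block counter, so NIST caps a single
invocation at 2^39 - 256 bits of plaintext, i.e. roughly 2^32 - 2 blocks of
16 bytes. The arithmetic below is only illustrative and is not taken from
the TLS draft.)

    # Sketch: approximate GCM single-plaintext limit implied by its 32-bit
    # block counter (NIST SP 800-38D caps plaintext at 2^39 - 256 bits).
    blocks = 2**32 - 2
    max_plaintext_bytes = blocks * 16
    print(max_plaintext_bytes)   # 68719476704, just under 2^36 bytes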

Or is this a limit on the total amount of traffic that can go through a 
connection over multiple records?  If this is the issue, what is the security 
concern that you would have if that limit is exceeded?

Thank you.

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Eric Rescorla
Sent: Tuesday, December 15, 2015 4:15 PM
To: tls@ietf.org
Subject: [TLS] Data volume limits

Watson kindly prepared some text that described the limits on what's safe
for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
limit (2^{36} bytes), even though ChaCha doesn't have the same
restriction.

I wanted to get people's opinions on whether that's actually what we want
or whether we should (as is my instinct) allow people to use ChaCha
for longer periods.

-Ekr



Re: [TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
On Tue, Dec 15, 2015 at 2:01 PM, Scott Fluhrer (sfluhrer) <
sfluh...@cisco.com> wrote:

> Might I enquire about the cryptographical reason behind such a limit?
>
>
>
> Is this the limit on the size of a single record?  GCM does have a limit
> approximately there on the size of a single plaintext it can encrypt.  For
> TLS, it encrypts a record as a single plaintext, and so this would apply to
> extremely huge records.
>
>
>
> Or is this a limit on the total amount of traffic that can go through a
> connection over multiple records?  If this is the issue, what is the
> security concern that you would have if that limit is exceeded?
>

Watson provided these, so perhaps he can elaborate.

It would be good to have a value we all agree on.

Thanks,
-Ekr


>
> Thank you.
>
>
>
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Eric Rescorla
> Sent: Tuesday, December 15, 2015 4:15 PM
> To: tls@ietf.org
> Subject: [TLS] Data volume limits
>
>
>
> Watson kindly prepared some text that described the limits on what's safe
>
> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
>
> limit (2^{36} bytes), even though ChaCha doesn't have the same
>
> restriction.
>
>
>
> I wanted to get people's opinions on whether that's actually what we want
>
> or whether we should (as is my instinct) allow people to use ChaCha
>
> for longer periods.
>
>
>
> -Ekr
>
>
>


Re: [TLS] Data volume limits

2015-12-15 Thread Russ Housley

On Dec 15, 2015, at 4:14 PM, Eric Rescorla wrote:

> Watson kindly prepared some text that described the limits on what's safe
> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
> limit (2^{36} bytes), even though ChaCha doesn't have the same
> restriction.
> 
> I wanted to get people's opinions on whether that's actually what we want
> or whether we should (as is my instinct) allow people to use ChaCha
> for longer periods.

Perhaps the algorithm registration can provide the limit, allowing
implementations to use the full period for each algorithm.

Russ




Re: [TLS] Data volume limits

2015-12-15 Thread Watson Ladd
On Tue, Dec 15, 2015 at 5:18 PM, Russ Housley  wrote:
>
> On Dec 15, 2015, at 4:14 PM, Eric Rescorla wrote:
>
>> Watson kindly prepared some text that described the limits on what's safe
>> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
>> limit (2^{36} bytes), even though ChaCha doesn't have the same
>> restriction.
>>
>> I wanted to get people's opinions on whether that's actually what we want
>> or whether we should (as is my instinct) allow people to use ChaCha
>> for longer periods.
>
> Perhaps the algorithm registration can provide the limit, allowing 
> implementations use the full period for each algorithm.

That makes sense, but people might ignore these values in the registry,
and registry entries might not be reviewed as carefully as they would be
if the limits were stated in the relevant RFCs.
>
> Russ
>
>



-- 
"Man is born free, but everywhere he is in chains".
--Rousseau.



Re: [TLS] Data volume limits

2015-12-15 Thread Hanno Böck
On Tue, 15 Dec 2015 13:14:30 -0800
Eric Rescorla  wrote:

> Watson kindly prepared some text that described the limits on what's
> safe for AES-GCM and restricting all algorithms with TLS 1.3 to that
> lower limit (2^{36} bytes), even though ChaCha doesn't have the same
> restriction.
> 
> I wanted to get people's opinions on whether that's actually what we
> want or whether we should (as is my instinct) allow people to use
> ChaCha for longer periods.

Let me state the opinion that is unlikely to get adopted: isn't that a
good reason to reconsider whether GCM is a good mode in the first place?

How about: let's use ChaCha20, let's not set any limits because we don't
have to, and let's deprecate algorithms that can't keep up with that?

(I generally think that even though TLS 1.3 deprecates a lot of stuff, there
is still far too much variation. Let's keep things simpler; let's reduce
the algorithm zoo.)

-- 
Hanno Böck
http://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42




Re: [TLS] Data volume limits

2015-12-15 Thread Watson Ladd
On Tue, Dec 15, 2015 at 5:01 PM, Scott Fluhrer (sfluhrer)
 wrote:
> Might I enquire about the cryptographical reason behind such a limit?
>
>
>
> Is this the limit on the size of a single record?  GCM does have a limit
> approximately there on the size of a single plaintext it can encrypt.  For
> TLS, it encrypts a record as a single plaintext, and so this would apply to
> extremely huge records.

The issue is the bounds in the Iwata-Ohashi-Minematsu paper, which show
a confidentiality loss that grows quadratically in the total volume sent.
This is an exploitable issue.
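
(Roughly, the dominant term at issue is the PRP-to-PRF switching loss: for
sigma total 16-byte blocks protected under one AES key, the distinguishing
advantage grows on the order of

    sigma^2 / 2^{128}

i.e. quadratically in the volume sent. That is only the shape of the bound;
the paper's exact bounds carry additional terms and constants.)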

>
>
>
> Or is this a limit on the total amount of traffic that can go through a
> connection over multiple records?  If this is the issue, what is the
> security concern that you would have if that limit is exceeded?
>
>
>
> Thank you.
>
>
>
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Eric Rescorla
> Sent: Tuesday, December 15, 2015 4:15 PM
> To: tls@ietf.org
> Subject: [TLS] Data volume limits
>
>
>
> Watson kindly prepared some text that described the limits on what's safe
>
> for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
>
> limit (2^{36} bytes), even though ChaCha doesn't have the same
>
> restriction.
>
>
>
> I wanted to get people's opinions on whether that's actually what we want
>
> or whether we should (as is my instinct) allow people to use ChaCha
>
> for longer periods.
>
>
>
> -Ekr
>
>
>
>



-- 
"Man is born free, but everywhere he is in chains".
--Rousseau.



[TLS] Barry Leiba's No Objection on draft-ietf-tls-cached-info-20: (with COMMENT)

2015-12-15 Thread Barry Leiba
Barry Leiba has entered the following ballot position for
draft-ietf-tls-cached-info-20: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)


Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.


The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-tls-cached-info/



--
COMMENT:
--

I have two comments about Section 8.2:

1. The Standards Action range starts at 0, and you've assigned 1 and 2,
but not 0.  Is it intended that 0 should remain reserved and unassigned? 
If so, you should say that.

2. For the Specification Required range, is there any guidance you
can/should give to the designated expert?  What do you expect the DE to
look for when evaluating requests?  Why might the DE not approve a
request?




Re: [TLS] Data volume limits

2015-12-15 Thread Scott Fluhrer (sfluhrer)


> -Original Message-
> From: Watson Ladd [mailto:watsonbl...@gmail.com]
> Sent: Tuesday, December 15, 2015 5:38 PM
> To: Scott Fluhrer (sfluhrer)
> Cc: Eric Rescorla; tls@ietf.org
> Subject: Re: [TLS] Data volume limits
> 
> On Tue, Dec 15, 2015 at 5:01 PM, Scott Fluhrer (sfluhrer)
>  wrote:
> > Might I enquire about the cryptographical reason behind such a limit?
> >
> >
> >
> > Is this the limit on the size of a single record?  GCM does have a
> > limit approximately there on the size of a single plaintext it can
> > encrypt.  For TLS, it encrypts a record as a single plaintext, and so
> > this would apply to extremely huge records.
> 
> The issue is the bounds in Iwata-Ohashai-Minematsu's paper, which show a
> quadratic confidentiality loss after a total volume sent. This is an 
> exploitable
> issue.

Actually, the main result of that paper was that GCM with nonces other than 96
bits was less secure than previously thought (or, rather, that the previous
proofs were wrong, and what they can prove is considerably worse; whether their
proof is tight is an open question).  They address 96-bit nonces as well;
however, the results they get are effectively unchanged from the original GCM
paper.  I had thought that TLS used 96-bit nonces (constructed from a 32-bit salt
and a 64-bit counter); were the security guarantees from the original paper too
weak?  If not, what has changed?
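
(For reference, a sketch of the TLS 1.2 AES-GCM nonce layout being described
here, per RFC 5288: a 4-byte implicit salt from the key block concatenated
with an 8-byte explicit part carried in each record. Using the 64-bit record
sequence number as the explicit part is a common implementation choice rather
than a requirement; the helper below is illustrative.)

    # Sketch of the RFC 5288 (TLS 1.2) GCM nonce: salt || nonce_explicit.
    def tls12_gcm_nonce(salt: bytes, seq_num: int) -> bytes:
        assert len(salt) == 4                     # implicit part, from key block
        return salt + seq_num.to_bytes(8, "big")  # 12-byte (96-bit) nonce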

The quadratic behavior in the security proofs is there for just about any
block cipher mode, and is the reason why you want to stay well below the
birthday bound.  However, that's as true for (say) CBC mode as it is for GCM.



Re: [TLS] Data volume limits

2015-12-15 Thread Watson Ladd
On Dec 15, 2015 6:08 PM, "Scott Fluhrer (sfluhrer)" 
wrote:
>
>
>
> > -Original Message-
> > From: Watson Ladd [mailto:watsonbl...@gmail.com]
> > Sent: Tuesday, December 15, 2015 5:38 PM
> > To: Scott Fluhrer (sfluhrer)
> > Cc: Eric Rescorla; tls@ietf.org
> > Subject: Re: [TLS] Data volume limits
> >
> > On Tue, Dec 15, 2015 at 5:01 PM, Scott Fluhrer (sfluhrer)
> >  wrote:
> > > Might I enquire about the cryptographical reason behind such a limit?
> > >
> > >
> > >
> > > Is this the limit on the size of a single record?  GCM does have a
> > > limit approximately there on the size of a single plaintext it can
> > > encrypt.  For TLS, it encrypts a record as a single plaintext, and so
> > > this would apply to extremely huge records.
> >
> > The issue is the bounds in Iwata-Ohashai-Minematsu's paper, which show a
> > quadratic confidentiality loss after a total volume sent. This is an
> > exploitable issue.
>
> Actually, the main result of that paper was that GCM with nonces other
> than 96 bits were less secure than previous thought (or, rather, that the
> previous proofs were wrong, and what they can prove is considerably worse;
> whether their proof is tight is an open question).  They address 96 bit
> nonces as well, however the results they get are effectively unchanged from
> the original GCM paper.  I had thought that TLS used 96 bit nonces
> (constructed from 32 bit salt and a 64 bit counter); were the security
> guarantees from the original paper too weak?  If not, what has changed?
>
> The quadratic behavior in the security proofs are there for just
> about any block cipher mode, and is the reason why you want to stay
> well below the birthday bound.  However, that's as true for (say) CBC
> mode as it is for GCM

That's correct. And when we crunch the numbers, treating a success probability
of one in 2^60 as negligible, out comes 2^36 bytes. This doesn't hold for ChaCha20.
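
(A quick sanity check of that order of magnitude: treat the dominant term as
sigma^2 / 2^{128} for sigma 16-byte blocks and ask when it stays below
2^{-60}. Depending on the exact constant in the bound and on whether blocks
or records are counted, this lands in the 2^{36}-2^{38} byte range; the
numbers below are a back-of-the-envelope, not the bound Watson used.)

    # Largest volume keeping sigma^2 / 2^128 below 2^-60 (illustrative only).
    from math import isqrt
    max_blocks = isqrt(2**(128 - 60))   # sigma with sigma^2 / 2^128 <= 2^-60
    print(max_blocks == 2**34)          # True: about 2^34 blocks per key
    print(max_blocks * 16)              # 274877906944 bytes, i.e. 2^38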


Re: [TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
On Tue, Dec 15, 2015 at 3:08 PM, Scott Fluhrer (sfluhrer) <
sfluh...@cisco.com> wrote:

>
>
> > -Original Message-
> > From: Watson Ladd [mailto:watsonbl...@gmail.com]
> > Sent: Tuesday, December 15, 2015 5:38 PM
> > To: Scott Fluhrer (sfluhrer)
> > Cc: Eric Rescorla; tls@ietf.org
> > Subject: Re: [TLS] Data volume limits
> >
> > On Tue, Dec 15, 2015 at 5:01 PM, Scott Fluhrer (sfluhrer)
> >  wrote:
> > > Might I enquire about the cryptographical reason behind such a limit?
> > >
> > >
> > >
> > > Is this the limit on the size of a single record?  GCM does have a
> > > limit approximately there on the size of a single plaintext it can
> > > encrypt.  For TLS, it encrypts a record as a single plaintext, and so
> > > this would apply to extremely huge records.
> >
> > The issue is the bounds in Iwata-Ohashai-Minematsu's paper, which show a
> > quadratic confidentiality loss after a total volume sent. This is an
> exploitable
> > issue.
>
> Actually, the main result of that paper was that GCM with nonces other
> than 96 bits were less secure than previous thought (or, rather, that the
> previous proofs were wrong, and what they can prove is considerably worse;
> whether their proof is tight is an open question).  They address 96 bit
> nonces as well, however the results they get are effectively unchanged from
> the original GCM paper.  I had thought that TLS used 96 bit nonces
> (constructed from 32 bit salt and a 64 bit counter);


TLS 1.3 uses a 96-bit nonce constructed by masking the counter with a random
value.
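
(A sketch of that construction as it appears in the draft: the 64-bit record
sequence number is left-padded to the IV length and XORed with the per-key
write IV; names below are illustrative.)

    # Sketch of the TLS 1.3 per-record nonce: pad the sequence number to the
    # IV length (12 bytes for AES-GCM and ChaCha20) and XOR with the write IV.
    def tls13_nonce(write_iv: bytes, seq_num: int) -> bytes:
        padded = seq_num.to_bytes(len(write_iv), "big")
        return bytes(iv ^ sq for iv, sq in zip(write_iv, padded))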

> were the security guarantees from the original paper too weak?  If not,
> what has changed?
>
> The quadratic behavior in the security proofs are there for just about any
> block cipher mode, and is the reason why you want to stay well below the
> birthday bound.


The birthday bound here is 2^{64}, right?

-Ekr


>   However, that's as true for (say) CBC mode as it is for GCM
>
>


Re: [TLS] Data volume limits

2015-12-15 Thread Brian Smith
Watson Ladd  wrote:

> The issue is the bounds in Iwata-Ohashai-Minematsu's paper, which show
> a quadratic confidentiality loss after a total volume sent. This is an
> exploitable issue.
>

Please explain in more detail how you got "2^36 bytes" for a nonce size of
96 bits from the Iwata-Ohashi-Minematsu paper [1].

[1] https://eprint.iacr.org/2012/438.pdf

Also, does the Niwa-Ohashi-Minematsu-Iwata follow-up paper [2] change things in
any way? In particular, note that it concludes "The new security bounds
improve the security bounds in [11] by a factor of 2^17, and they show that
the security of GCM is actually close to what was originally claimed in
[17,18]."

A factor of 2^17 difference is pretty significant as far as this is
concerned, AFAICT.

[2] https://eprint.iacr.org/2015/214.pdf

Cheers,
Brian
--
https://briansmith.org/


Re: [TLS] Data volume limits

2015-12-15 Thread Henrick Hellström

On 2015-12-16 00:48, Eric Rescorla wrote:
> On Tue, Dec 15, 2015 at 3:08 PM, Scott Fluhrer (sfluhrer)
> <sfluh...@cisco.com> wrote:
>> The quadratic behavior in the security proofs are there for just
>> about any block cipher mode, and is the reason why you want to stay
>> well below the birthday bound.
>
> The birthday bound here is 2^{64}, right?
>
> -Ekr
>
>> However, that's as true for (say) CBC mode as it is for GCM


Actually, no.

Using the sequence number as part of the effective nonce means that it
won't collide. There is no relevant bound for collisions in the nonces 
or in the CTR state, because they simply won't happen (unless there is 
an implementation flaw). There won't be any potentially exploitable 
collisions.


However, theoretically, the GHASH state might collide with a 2^{64} 
birthday bound. This possibility doesn't seem entirely relevant, though.




Re: [TLS] Data volume limits

2015-12-15 Thread Watson Ladd
On Dec 15, 2015 7:09 PM, "Henrick Hellström"  wrote:
>
> On 2015-12-16 00:48, Eric Rescorla wrote:
>>
>>
>>
>> On Tue, Dec 15, 2015 at 3:08 PM, Scott Fluhrer (sfluhrer)
>> <sfluh...@cisco.com> wrote:
>> The quadratic behavior in the security proofs are there for just
>> about any block cipher mode, and is the reason why you want to stay
>> well below the birthday bound.
>>
>>
>> The birthday bound here is 2^{64}, right?
>>
>> -Ekr
>>
>>However, that's as true for (say) CBC mode as it is for GCM
>
>
> Actually, no.
>
> Using the sequence number as part of the effective nonce, means that it
> won't collide. There is no relevant bound for collisions in the nonces or
> in the CTR state, because they simply won't happen (unless there is an
> implementation flaw). There won't be any potentially exploitable collisions.

You don't understand the issue. The issue is that a PRP does not collide,
whereas a PRF can.

>
> However, theoretically, the GHASH state might collide with a 2^{64}
> birthday bound. This possibility doesn't seem entirely relevant, though.
>
>


Re: [TLS] Data volume limits

2015-12-15 Thread Andrey Jivsov


On 12/15/2015 04:08 PM, Henrick Hellström wrote:
> On 2015-12-16 00:48, Eric Rescorla wrote:
>>
>>
>> On Tue, Dec 15, 2015 at 3:08 PM, Scott Fluhrer (sfluhrer)
>> <sfluh...@cisco.com> wrote:
>> The quadratic behavior in the security proofs are there for just
>> about any block cipher mode, and is the reason why you want to stay
>> well below the birthday bound.
>>
>>
>> The birthday bound here is 2^{64}, right?
>>
>> -Ekr
>>
>>However, that's as true for (say) CBC mode as it is for GCM
>
> Actually, no.
>
> Using the sequence number as part of the effective nonce, means that
> it won't collide. There is no relevant bound for collisions in the
> nonces or in the CTR state, because they simply won't happen (unless
> there is an implementation flaw). There won't be any potentially
> exploitable collisions.

>

Here is one attack that exploits such a collision:
https://www.ietf.org/mail-archive/web/openpgp/current/msg08345.html


> However, theoretically, the GHASH state might collide with a 2^{64}
> birthday bound. This possibility doesn't seem entirely relevant, though.

>


Re: [TLS] Data volume limits

2015-12-15 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Henrick Hellström
> Sent: Tuesday, December 15, 2015 7:09 PM
> To: tls@ietf.org
> Subject: Re: [TLS] Data volume limits
> 
> On 2015-12-16 00:48, Eric Rescorla wrote:
> >
> >
> > On Tue, Dec 15, 2015 at 3:08 PM, Scott Fluhrer (sfluhrer)
> > <sfluh...@cisco.com> wrote:
> > The quadratic behavior in the security proofs are there for just
> > about any block cipher mode, and is the reason why you want to stay
> > well below the birthday bound.
> >
> >
> > The birthday bound here is 2^{64}, right?
> >
> > -Ekr
> >
> >However, that's as true for (say) CBC mode as it is for GCM
> 
> Actually, no.
> 
> Using the sequence number as part of the effective nonce, means that it
> won't collide. There is no relevant bound for collisions in the nonces or in
> the CTR state, because they simply won't happen (unless there is an
> implementation flaw). There won't be any potentially exploitable collisions.
> 
> However, theoretically, the GHASH state might collide with a 2^{64} birthday
> bound. This possibility doesn't seem entirely relevant, though.

That is a good point, and deserves to be examined more.

With CBC mode, there's a probability that two different ciphertext blocks will 
happen to be identical; when that unlikely event happens, the attacker can 
determine the bitwise difference between the corresponding plaintext blocks 
(and thereby leak a small amount of plaintext)

This doesn't happen with GCM.  Instead, the distinguisher is of this form: the 
attacker with a potential plaintext can compute the internal CTR values for 
GCM; if he sees a duplicate value, he can deduce that that potential plaintext 
wasn't the real one (because the internal CTR values never repeat).
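
(In symbols, a sketch of the two effects being contrasted, with E_K the block
cipher and XOR the bytewise exclusive-or:

    CBC:  C_i = E_K(P_i XOR C_{i-1});  if C_i = C_j then P_i XOR P_j = C_{i-1} XOR C_{j-1}
    CTR:  C_i = P_i XOR E_K(ctr_i);    the ctr_i are distinct, so C_i XOR P_i never repeats

so a CBC collision leaks the XOR of two plaintext blocks outright, while in
GCM/CTR an attacker can only rule out candidate plaintexts that would force a
repeated keystream block.)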

Assuming that they cannot distinguish AES with a random key from a random 
permutation, that's the only thing they can learn.

That is, when they prove that there is no distinguisher with better than 
2^{-64} advantage, what they are referring to (in practice) is that the 
attacker could eliminate a tiny fraction (1 out of 2^{64}) of the possible 
plaintexts; they gain no more information than that.




Re: [TLS] Data volume limits

2015-12-15 Thread Henrick Hellström

On 2015-12-16 01:31, Watson Ladd wrote:

> You don't understand the issue. The issue is PRP not colliding, whereas
> PRF can.


Oh, but I concur. This means that if you observe two same valued cipher 
text blocks, you know that the corresponding key stream blocks can't be 
identical, and deduce that the corresponding plain text blocks have to 
be different. Such observations consequently leak information about the 
plain text, in the rare and unlikely event they actually occur.


However, calling it an exploitable weakness is a bit of a stretch.
AES-CBC is likely to lose confidentiality slightly faster, for typical
plain texts.




Re: [TLS] Data volume limits

2015-12-15 Thread Martin Thomson
On 16 December 2015 at 08:14, Eric Rescorla  wrote:
>
> I wanted to get people's opinions on whether that's actually what we want
> or whether we should (as is my instinct) allow people to use ChaCha
> for longer periods.


Whatever the actual limits are, I think that implementations should be
more strongly encouraged to rekey.

If 2^36 is the number, then I can see that being reached in some
applications.  That means that we need the rekey feature to exist.  If
we are going to have that feature, then we need to make sure that it
works.  And suggesting a stupidly high limit (e.g., ChaCha being
greater than 2^96) leaves people thinking that they can skip
implementation and testing of the rekey facility; or it just goes
unused.  If it's not in use, then we'll have a good chance of creating
a protocol feature we can't rely on if it really is needed.

In light of that, the actual limits don't matter that much to me.  As
David McGrew suggested, set a limit at 2^32 and avoid having to think
too hard about how close to the failure point you might be.



Re: [TLS] Data volume limits

2015-12-15 Thread Brian Smith
Martin Thomson  wrote:

> Whatever the actual limits are, I think that implementatios should be
> encouraged to rekey more strongly.
>

Why?


> And suggesting a stupidly high limit (e.g., ChaCha being
> greater than 2^96) leaves people thinking that they can skip
> implementation and testing of the rekey facility; or it just goes
> unused.  If it's not in use, then we'll have a good chance of creating
> a protocol feature we can't rely on if it really is needed.
>

I think this is exactly why the limit matters. If we are not in danger of
reaching the limit, then we don't need the rekeying mechanism and it should
be removed. The rekeying mechanism adds considerable complexity and that
complexity needs to be justified.

Alternatively, we'd need a new justification for the rekeying mechanism.
The new justification would affect the design of the rekeying mechanism.
For example, if the purpose of the rekeying mechanism is to get similar
effects to, say, the Axolotl ratchet, then we should ensure that the
rekeying mechanism actually has similar properties to Axolotl.

> In light of that, the actual limits don't matter that much to me.  As
> David McGrew suggested, set a limit at 2^32 and avoid having to think
> too hard about how close to the failure point you might be.


First, let's figure out why TLS 1.3 needs a rekeying mechanism, if it does.
Then we can figure out how frequently we should suggest rekeying occur
based on sound reasoning.

Cheers,
Brian
-- 
https://briansmith.org/


Re: [TLS] Data volume limits

2015-12-15 Thread Martin Thomson
On 16 December 2015 at 14:01, Brian Smith  wrote:
> Martin Thomson  wrote:
> Why?

If there were a stupidly high limit, then I would argue for no
rekeying facility.

But the numbers Watson ran suggested that GCM starts to look shaky at
2^36.  That's too low for some applications.

For the rest of the argument I suggest you reread my last mail.



Re: [TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
On Tue, Dec 15, 2015 at 4:59 PM, Henrick Hellström 
wrote:

> On 2015-12-16 01:31, Watson Ladd wrote:
>
>> You don't understand the issue. The issue is PRP not colliding, whereas
>> PRF can.
>>
>
> Oh, but I concur. This means that if you observe two same valued cipher
> text blocks, you know that the corresponding key stream blocks can't be
> identical,


That assumes that the plaintext is identical, no? That may be true in some
limited cases, but isn't generally true.

-Ekr

> and deduce that the corresponding plain text blocks have to be different.
> Such observations consequently leak information about the plain text, in
> the rare and unlikely event they actually occur.
>
> However, calling it an exploitable weakness is a bit of a stretch. AES-CBC
> is likely to loose confidentiality slightly faster, for typical plain texts.
>
>


Re: [TLS] Data volume limits

2015-12-15 Thread Watson Ladd
On Tue, Dec 15, 2015 at 7:59 PM, Henrick Hellström  wrote:
> On 2015-12-16 01:31, Watson Ladd wrote:
>>
>> You don't understand the issue. The issue is PRP not colliding, whereas
>> PRF can.
>
>
> Oh, but I concur. This means that if you observe two same valued cipher text
> blocks, you know that the corresponding key stream blocks can't be
> identical, and deduce that the corresponding plain text blocks have to be
> different. Such observations consequently leak information about the plain
> text, in the rare and unlikely event they actually occur.
>
> However, calling it an exploitable weakness is a bit of a stretch. AES-CBC
> is likely to loose confidentiality slightly faster, for typical plain texts.

The problem is that once you stack enough of those negligible
probabilities together, you end up with something big. Push up to
2^{63} bytes, and the collision probability is 1/4 or 1/2 (I didn't
recompute it just now). And while the definition seems to involve only
a minor loss of security, that's the definition people use for
security.

Using 2^{-64} as a success probability ensures that attackers who can
exploit multiple connections are still defended against. Could we do
better with 2^{-32}? Sure. But at this point we're saying you can
transport over 16 Gbyte of data before rekeying: I think that's enough
for almost all purposes.

-- 
"Man is born free, but everywhere he is in chains".
--Rousseau.



Re: [TLS] Data volume limits

2015-12-15 Thread Stephen Farrell

Hi Watson,

On 16/12/15 03:36, Watson Ladd wrote:
> The problem is that once you stack enough of those negligible
> probabilities together, you end up with something big. Push up to
> 2^{63} bytes, and the collision probability is 1/4 or 1/2 (I didn't
> recompute it just now). 

The collision probability of... what? (For that to be 0.50
my gut tells me it's something that's really not at all
likely to be worrisome, but I know I'm no expert here, hence
me asking :-)

Thanks,
S.



Re: [TLS] Data volume limits

2015-12-15 Thread Dave Garrett
On Tuesday, December 15, 2015 09:40:41 pm Martin Thomson wrote:
> In light of that, the actual limits don't matter that much to me.  As
> David McGrew suggested, set a limit at 2^32 and avoid having to think
> too hard about how close to the failure point you might be.

+1

In fact, if we're OK with setting this rather low threshold, then we could even 
get rid of the rekey signal entirely and just have an automatic rekey after 
every 4GiB for all ciphers. That'd be one less complexity to deal with. Rekeys 
would be routine.


Dave



Re: [TLS] Data volume limits

2015-12-15 Thread Martin Thomson
On 16 December 2015 at 14:57, Dave Garrett  wrote:
> In fact, if we're OK with setting this rather low threshold, then we could 
> even get rid of the rekey signal entirely and just have an automatic rekey 
> after every 4GiB for all ciphers. That'd be one less complexity to deal with. 
> Rekeys would be routine.

I don't like automatic rekey (though I almost like the per-record
rekeying that I think was semi-facetiously suggested by someone).  An
explicit rekey allows for two things:
 - testing
 - reducing the limit if we find that the cipher is more busted than
we originally thought (with respect to key overuse)



Re: [TLS] Data volume limits

2015-12-15 Thread Eric Rescorla
On Tue, Dec 15, 2015 at 7:59 PM, Martin Thomson 
wrote:

> On 16 December 2015 at 14:57, Dave Garrett  wrote:
> > In fact, if we're OK with setting this rather low threshold, then we
> could even get rid of the rekey signal entirely and just have an automatic
> rekey after every 4GiB for all ciphers. That'd be one less complexity to
> deal with. Rekeys would be routine.
>
> I don't like automatic rekey (though I almost like the per-record
> rekeying that I think was semi-facetiously suggested by someone).  An
> explicit rekey allows for two things:
>  - testing
>  - reducing the limit if we find that the cipher is more busted than
> we originally thought (with respect to key overuse)
>

Also, allows each side to have their own opinion.

Not a fan of automatic rekey.

-Ekr


Re: [TLS] Data volume limits

2015-12-15 Thread Bill Frantz
So we have to trade off the risks of too much data vs. the risks
of a complex rekey protocol vs. the risks of having the big data
applications build new connections every 2**36 or so bytes.


If we don't have rekeying, then the big data applications are 
the only ones at risk. If we do, it may be a wedge which can 
compromise all users.


Cheers - Bill

-
Bill Frantz        | Re: Hardware Management Modes: | Periwinkle
(408)356-8506      | If there's a mode, there's a   | 16345 Englewood Ave
www.pwpconsult.com | failure mode. - Jerry Leichter | Los Gatos, CA 95032




Re: [TLS] Data volume limits

2015-12-15 Thread Dave Garrett
On Tuesday, December 15, 2015 10:59:35 pm Martin Thomson wrote:
> On 16 December 2015 at 14:57, Dave Garrett  wrote:
> > In fact, if we're OK with setting this rather low threshold, then we could 
> > even get rid of the rekey signal entirely and just have an automatic rekey 
> > after every 4GiB for all ciphers. That'd be one less complexity to deal 
> > with. Rekeys would be routine.
> 
> I don't like automatic rekey (though I almost like the per-record
> rekeying that I think was semi-facetiously suggested by someone).  An
> explicit rekey allows for two things:
>  - testing
>  - reducing the limit if we find that the cipher is more busted than
> we originally thought (with respect to key overuse)

On Tuesday, December 15, 2015 11:01:41 pm Eric Rescorla wrote:
> On Tue, Dec 15, 2015 at 7:59 PM, Martin Thomson 
> wrote:
> Also, allows each side to have their own opinion.

We could just make the threshold a configurable parameter, with default/maximum
at 2^32 bytes. Each endpoint could just provide its threshold in a new
extension. Both get to specify what they want, and it could be lowered
arbitrarily for testing or a panic fix.


Dave



Re: [TLS] Data volume limits

2015-12-15 Thread Martin Thomson
On 16 December 2015 at 15:08, Dave Garrett  wrote:
> We could just make the threshold a configurable parameter, with 
> default/maximum at 2^32 bytes. Each endpoint could just provide its threshold 
> in a new extension. Both get to specify what they want and it could be 
> lowered arbitrarily for testing or panic fix.

That sounds more complex than the current option.



Re: [TLS] Data volume limits

2015-12-15 Thread Dave Garrett
On Tuesday, December 15, 2015 11:11:36 pm Martin Thomson wrote:
> On 16 December 2015 at 15:08, Dave Garrett  wrote:
> > We could just make the threshold a configurable parameter, with 
> > default/maximum at 2^32 bytes. Each endpoint could just provide its 
> > threshold in a new extension. Both get to specify what they want and it 
> > could be lowered arbitrarily for testing or panic fix.
> 
> That sounds more complex than the current option.

It's the difference between one signal in the handshake followed by predictable 
rekeying and an arbitrary number of signals at arbitrary points after the 
handshake.


Dave



Re: [TLS] Data volume limits

2015-12-15 Thread Andrey Jivsov

On 12/15/2015 03:47 PM, Watson Ladd wrote:



> On Dec 15, 2015 6:08 PM, "Scott Fluhrer (sfluhrer)" <sfluh...@cisco.com>
> wrote:
> >
> > > -Original Message-
> > > From: Watson Ladd [mailto:watsonbl...@gmail.com]
> > > Sent: Tuesday, December 15, 2015 5:38 PM
> > > To: Scott Fluhrer (sfluhrer)
> > > Cc: Eric Rescorla; tls@ietf.org
> > > Subject: Re: [TLS] Data volume limits
> > >
> > > On Tue, Dec 15, 2015 at 5:01 PM, Scott Fluhrer (sfluhrer)
> > > <sfluh...@cisco.com> wrote:
> > > > Might I enquire about the cryptographical reason behind such a limit?
> > > >
> > > > Is this the limit on the size of a single record?  GCM does have a
> > > > limit approximately there on the size of a single plaintext it can
> > > > encrypt.  For TLS, it encrypts a record as a single plaintext, and so
> > > > this would apply to extremely huge records.
> > >
> > > The issue is the bounds in Iwata-Ohashai-Minematsu's paper, which show a
> > > quadratic confidentiality loss after a total volume sent. This is an
> > > exploitable issue.
> >
> > Actually, the main result of that paper was that GCM with nonces
> > other than 96 bits were less secure than previous thought (or, rather,
> > that the previous proofs were wrong, and what they can prove is
> > considerably worse; whether their proof is tight is an open
> > question).  They address 96 bit nonces as well, however the results
> > they get are effectively unchanged from the original GCM paper.  I had
> > thought that TLS used 96 bit nonces (constructed from 32 bit salt and
> > a 64 bit counter); were the security guarantees from the original
> > paper too weak?  If not, what has changed?
> >
> > The quadratic behavior in the security proofs are there for just
> > about any block cipher mode, and is the reason why you want to stay
> > well below the birthday bound.  However, that's as true for (say) CBC
> > mode as it is for GCM
>
> That's correct. And when we crunch the numbers assuming 2^60 is
> negligible out comes 2^36 bytes. This doesn't hold for ChaCha20.




If 2^36 above is about confidentiality, I am getting q < 2^34, assuming a
probability of collision lower than p = 2^-60:

2^34*(2^34-1) < (2^(128-60))/0.316 (that's the formula with 'e', not C(q,2)).

q = 2^34 blocks is 2^38 bytes (256 GiB). Close enough, although p = 2^-60
could have been set higher to allow more total bytes before rekeying.
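
(For what it's worth, the arithmetic checks out; a quick check below, reading
the 0.316 as the (1 - 1/e)/2-style constant in the standard birthday
approximation that the parenthetical about 'e' suggests.)

    # Check 2^34*(2^34-1) < 2^(128-60)/0.316 and the resulting byte count.
    q = 2**34
    print(q * (q - 1) < 2**(128 - 60) / 0.316)   # True (~2.95e20 < ~9.34e20)
    print(q * 16)                                # 274877906944 bytes = 256 GiB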






Re: [TLS] Data volume limits

2015-12-15 Thread Ryan Carboni
How often does TLS rekey anyway? I know RC4 rekeys per packet, but I've
read and searched a fair amount of documentation, and haven't found
anything on the subject. Perhaps I'm looking for the wrong terms or through
the wrong documents.


Re: [TLS] Data volume limits

2015-12-15 Thread Paterson, Kenny
RC4 does not rekey per application layer fragment in TLS. The same key is used 
for the duration of a connection. 

Other protocols using RC4 do rekey per packet, eg WEP and WPA/TKIP. 

Cheers

Kenny

> On 16 Dec 2015, at 16:37, Ryan Carboni  wrote:
> 
> How often does TLS rekey anyway? I know RC4 rekeys per packet, but I've read 
> and searched a fair amount of documentation, and haven't found anything on 
> the subject. Perhaps I'm looking for the wrong terms or through the wrong 
> documents.