On 12/16/15, 10:50, "Watson Ladd" <watsonbl...@gmail.com> wrote:

>On Wed, Dec 16, 2015 at 10:44 AM, Blumenthal, Uri - 0553 - MITLL
><u...@ll.mit.edu> wrote:
>> OK, let me try an extremely naïve approach.
>>
>> Say an adversary observes a ton of TLS traffic between A and B. Using
>> the approach that Watson and others outlined, he can now tell that this
>> is not a truly random stream but a bunch of encrypted data. My question
>> is, from a practical, real-world point of view - so what? (Of course,
>> beyond the ability to publish a real paper showing that IND-* has been
>> compromised. :)
>>
>> If there are practical consequences, like loss of confidentiality, I'm
>> dying to hear the outline of a practical attack.
>
>The problem is that people design systems assuming something like
>indistinguishability. And so when you violate that assumption, all
>bets are off.

I don’t buy this. AFAIK, TLS has not been designed based on that
assumption. And I’m not making any bets. :)

But:

>What is the actual problem with either implementing rekeying, or
>limiting data to something like 128 GByte per connection?

As far as I’m concerned - none (no problem). And if there’s a way to
enforce rekeying sooner than that (limiting data-under-one-key to a MUCH
smaller amount), I’d be in favor of that too. I just don’t accept the
above justification for it, and I want to be shown how loss of the IND
property in the TLS context leads to a practical attack.
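
For what it's worth, the sender-side bookkeeping needed to enforce an
earlier rekey is trivial. Here is a minimal Python sketch of just that
counting logic (the class name and the 2^36-byte default are my own,
chosen to match the limit discussed below; this is not TLS 1.3's actual
KeyUpdate machinery):

    class RekeyTracker:
        """Count bytes protected under the current key and flag when a
        rekey is due, well before any birthday-bound limit is reached."""

        def __init__(self, limit_bytes=2 ** 36):
            self.limit_bytes = limit_bytes
            self.bytes_sent = 0

        def record(self, n):
            """Account for n bytes protected under the current key."""
            self.bytes_sent += n

        def rekey_due(self):
            return self.bytes_sent >= self.limit_bytes

        def rekeyed(self):
            """Call after a fresh key has been installed."""
            self.bytes_sent = 0

A sender would call record() for each record it protects and trigger its
rekey mechanism once rekey_due() returns True; the hard question is the
threshold, not the mechanics.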


>> From: TLS <tls-boun...@ietf.org> on behalf of "Dang, Quynh"
>> <quynh.d...@nist.gov>
>> Date: Wednesday, December 16, 2015 at 07:21
>> To: Eric Rescorla <e...@rtfm.com>, "tls@ietf.org" <tls@ietf.org>
>> Subject: Re: [TLS] Data volume limits
>>
>> Hi Eric,
>>
>>
>> I explained the issue before, and some other people recently explained
>> it again on this thread. AES has a 128-bit block. Therefore, when there
>> are 2^64 or more ciphertext blocks, collisions among the ciphertext
>> blocks become likely (the collision probability rises rapidly once the
>> number of ciphertext blocks passes 2^64, or 2^(n/2) in generic terms).
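
The generic birthday approximation in play here is p ~ q(q-1)/2^(n+1)
for q uniformly random n-bit blocks. A quick Python sketch of just that
arithmetic (nothing TLS-specific assumed):

    from fractions import Fraction

    def collision_prob(q, n=128):
        """Birthday approximation: chance that q uniformly random n-bit
        blocks contain at least one collision, p ~ q*(q-1)/2^(n+1)."""
        return Fraction(q * (q - 1), 2 ** (n + 1))

    print(float(collision_prob(2 ** 64)))  # ~0.5 by this approximation
    print(float(collision_prob(2 ** 32)))  # ~2^-65, negligible

At 2^64 blocks the approximation puts the collision probability around
one half (the exact value is a bit lower), which is the "likely
collisions" point above.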
>>
>>
>> However, the only information the attacker can gain from ANY pair of
>> collided ciphertext blocks is that their corresponding plaintext blocks
>> are probably different, because the chance of them being the same is
>> 1/2^128 (1/2^n in generic terms), which is NOT better than a random
>> guess. So you don't actually lose anything.
>>
>>
>> As a pseudorandom function, AES fails completely, in any mode, once the
>> number of ciphertext blocks exceeds 2^64.  When the counter is
>> effectively only 64 bits (instead of 96 bits as in TLS 1.3), the data
>> complexity should stay below 2^32 blocks, because the same input block
>> under the same key can be repeated 2^32 times to find a collision among
>> the ciphertext blocks.  If you want a negligible collision probability,
>> the number of data blocks should be way below 2^32 in this situation.
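
Running those numbers the other way: for a target collision probability
of 2^d, the same approximation gives a block budget of roughly
2^((n+1+d)/2). A sketch of the generic arithmetic only; the 2^36-byte
TLS 1.3 figure comes from a stricter, GCM-specific analysis (see
Watson's writeup linked below):

    import math

    def max_blocks(d, n=128):
        """Largest q with q^2 / 2^(n+1) <= 2^d: the block budget for a
        target collision probability of 2^d (d negative, e.g. -32)."""
        return math.floor(2 ** ((n + 1 + d) / 2))

    q = max_blocks(-32)
    print(math.log2(q))       # ~48.5 -> about 2^48.5 blocks
    print(math.log2(16 * q))  # ~52.5 -> bytes, at 16 bytes per block

Modeling the 64-bit-counter case the same way, max_blocks(-32, n=64) is
only about 2^16.5 blocks, which is one way to read "way below 2^32".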
>>
>>
>> However, the confidentiality of the plaintext blocks is not lost at all
>> as long as the counter value does not repeat.
>>
>>
>> Quynh.
>>
>>
>>
>>
>>
>> ________________________________
>> From: TLS <tls-boun...@ietf.org> on behalf of Eric Rescorla <e...@rtfm.com>
>> Sent: Wednesday, December 16, 2015 6:17 AM
>> To: Simon Josefsson
>> Cc: tls@ietf.org
>> Subject: Re: [TLS] Data volume limits
>>
>>
>>
>> On Wed, Dec 16, 2015 at 12:44 AM, Simon Josefsson <si...@josefsson.org>
>> wrote:
>>>
>>> Eric Rescorla <e...@rtfm.com> writes:
>>>
>>> > Watson kindly prepared some text describing the limits on what's
>>> > safe for AES-GCM, and restricting all algorithms in TLS 1.3 to that
>>> > lower limit (2^{36} bytes), even though ChaCha doesn't have the same
>>> > restriction.
>>>
>>> Can we see a brief writeup explaining the 2^36 number?
>>
>>
>> I believe Watson provided one a while back at:
>> https://www.ietf.org/mail-archive/web/tls/current/msg18240.html
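
For anyone skimming, the unit conversion is worth writing down: 2^36
bytes is 64 GiB, and at 16 (= 2^4) bytes per AES block that is 2^32
blocks, which is how the byte limit lines up with the block counts
above. A sketch of the arithmetic only; the derivation of the limit
itself is in Watson's message:

    BYTES_PER_AES_BLOCK = 16         # 128-bit blocks

    limit_bytes = 2 ** 36            # the proposed TLS 1.3 limit, 64 GiB
    limit_blocks = limit_bytes // BYTES_PER_AES_BLOCK
    assert limit_blocks == 2 ** 32

    # Generic birthday collision probability at q = 2^32, n = 128:
    print(2 * 32 - 129)              # log2(q^2 / 2^(n+1)) = -65

The plain birthday probability at that point is tiny (~2^-65); as I
understand it, the 2^36 figure reflects GCM-specific bounds and a chosen
safety margin, not this generic estimate alone.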
>>
>>>
>>> I don't like re-keying.  It is usually a sign that your primitives are
>>> too weak and you are attempting to hide that fact.  To me, it is
>>> similar to discarding the first X bytes of RC4 output.
>>
>>
>> To be clear: I would prefer not to rekey either, but the consensus at
>> IETF Yokohama was that we were close enough to the limit that we
>> probably had to.  Would be happy to learn that we didn't.
>>
>> -Ekr
>>
>>
>>
>>> If AES-GCM cannot provide confidentiality beyond 64GB (which would
>>> surprise me somewhat), I believe we ought to be careful about
>>> recommending it.
>>>
>>> Of course, the devil is in the details: if the risk is that the secret
>>> key is leaked, that's fatal; if the risk is that the attacker can tell
>>> whether two particular 128-bit plaintext blocks anywhere in the file
>>> are the same or not, that is a risk we can live with (similar to the
>>> discard-X-bytes-of-RC4 fix).
>>>
>>> I believe 64GB is within the range that people download in a web
>>> browser these days.  More data-intensive, longer-running protocols
>>> often transfer significantly more.
>>>
>>> /Simon
>>
>>
>>
>>
>
>
>
>-- 
>"Man is born free, but everywhere he is in chains".
>--Rousseau.
>
