What is especially cool about counter mode encryption is how its real
world security degrades more gracefully than CBC mode encryption.  I
am not sure that the FSE paper did a good job of saying it in English
as opposed to math (except for the last sentence of Section 4), but
even though CTR may be just as distinguishable as CBC after some
amount of known plaintext is encrypted, counter mode in practice gives
away much less information.

Just to be precise, no attack has been found which illustrates that CTR mode's security degrades like CBC's. Nevertheless, it might be possible to formalize your intuition.
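[Editor's note: one concrete way to see the CBC-versus-CTR difference is the classic CBC collision leak: when two CBC ciphertext blocks collide, the attacker learns the XOR of two plaintext blocks, whereas CTR keystream blocks under one key never collide, so no comparable plaintext relation leaks. Below is a toy sketch using a deliberately insecure 8-bit "block cipher" (just a fixed random permutation) so that the collision is actually reachable; it illustrates the algebra only, not any real cipher.]

```python
import random

# Toy 8-bit "block cipher": a fixed random permutation of 0..255.
# Deliberately insecure; it only makes the collision reachable.
rng = random.Random(1)
perm = list(range(256))
rng.shuffle(perm)

def cbc_encrypt(blocks, iv):
    """CBC mode: C[i] = E(P[i] XOR C[i-1]), with C[-1] = iv."""
    out, prev = [], iv
    for p in blocks:
        c = perm[p ^ prev]
        out.append(c)
        prev = c
    return out

def leak_from_collision(ct, iv):
    """If C[i] == C[j], then since E is a permutation,
    P[i] XOR P[j] = C[i-1] XOR C[j-1]: plaintext information leaks."""
    seen = {}
    for i, c in enumerate(ct):
        if c in seen:
            j = seen[c]
            prev_i = ct[i - 1] if i else iv
            prev_j = ct[j - 1] if j else iv
            return i, j, prev_i ^ prev_j
        seen[c] = i
    return None

# 300 blocks over a 2^8 block space: a collision is guaranteed by
# pigeonhole (it is already likely near the ~2^4-block birthday bound).
pt = [rng.randrange(256) for _ in range(300)]
iv = rng.randrange(256)
ct = cbc_encrypt(pt, iv)
hit = leak_from_collision(ct, iv)
i, j, xor = hit
assert pt[i] ^ pt[j] == xor   # the attacker learned a plaintext XOR
```

With real 128-bit blocks the same algebra applies near the 2^64-block birthday bound. In CTR mode, by contrast, distinct counters through a permutation yield distinct keystream blocks under one key, so the only thing an attacker observes at the birthday bound is the *absence* of collisions, which distinguishes but reveals no plaintext XOR.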

Atul

On 2016-07-18 23:11, David McGrew wrote:
Hi Quynh,

On Jul 13, 2016, at 9:58 AM, Dang, Quynh (Fed) <quynh.d...@nist.gov> wrote:



On 7/13/16, 9:26 AM, "Watson Ladd" <watsonbl...@gmail.com> wrote:

On Wed, Jul 13, 2016 at 5:30 AM, Atul Luykx <atul.lu...@esat.kuleuven.be>
wrote:
Hey Quynh,

How can one use the distinguishing attack with the data complexity
bound I suggested for recovering 1 bit of the encryption key in the
context of TLS?

You cannot recover any bits of the encryption key unless you attack AES.

No one, as far as I know, has analyzed what kind of attacks one can
perform against GCM around and beyond the birthday bound (except for
the forgery attacks, which require repeated nonces or known
forgeries). However, for CTR mode, the underlying encryption of GCM,
David McGrew typed up a document describing an attack one could
perform to recover information about the plaintext:
http://eprint.iacr.org/2012/623
He describes it for 64-bit block ciphers, but the attacks work equally
well for 128-bit block ciphers, at a higher data complexity of course.

Basically, there are a lot of unknowns, and it could be that the
bounds you recommend will be good enough in practice. However, it's
important to be clear about the risks involved in venturing into
unknown territory.

Atul

Furthermore the cost of avoiding this is trivial. The rekeying
mechanism has been designed to have minimal code complexity.

GCM with data complexity of about 2^38 records is not vulnerable to
that plaintext recovery attack. Therefore, there is no need to rekey
before that data complexity is reached.

That’s right, as long as the number of 16-byte blocks per record is
guaranteed to be significantly below 2^26.
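[Editor's note: the arithmetic behind these figures is straightforward birthday-bound accounting. A quick sketch, assuming full-size 2^14-byte records and the standard CTR-mode distinguishing bound of roughly sigma^2 / 2^128 for sigma encrypted blocks; the constants in the published analysis differ slightly.]

```python
BLOCK_BYTES = 16                                  # AES block size
RECORD_BYTES = 2 ** 14                            # maximum TLS record size

records = 2 ** 38                                 # suggested record limit
blocks_per_record = RECORD_BYTES // BLOCK_BYTES   # 2^10 blocks per record
sigma = records * blocks_per_record               # 2^48 blocks of keystream
total_bytes = sigma * BLOCK_BYTES                 # 2^52 bytes, about 4 PiB

# Standard CTR distinguishing bound (approximate): sigma^2 / 2^128
advantage = sigma ** 2 / 2 ** 128                 # = 2^-32
```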


For counter mode, I think the attack works if there is a large set of
known plaintexts. In protocols such as TLS and IPsec, there are known
plaintexts, but I don't think the amount of known plaintext (even
though the amount of encrypted repeated plaintext can be big) is
enough to create risk for AES-128 by the targeted plaintext recovery
attack.

What is especially cool about counter mode encryption is how its real
world security degrades more gracefully than CBC mode encryption.  I
am not sure that the FSE paper did a good job of saying it in English
as opposed to math (except for the last sentence of Section 4), but
even though CTR may be just as distinguishable as CBC after some
amount of known plaintext is encrypted, counter mode in practice gives
away much less information.

best

David

A known plaintext can be encrypted multiple times with different keys,
not with the same key.

Regards,
Quynh.




On 2016-07-13 13:14, Dang, Quynh (Fed) wrote:

Hi Atul,

On 7/12/16, 3:50 PM, "Atul Luykx" <atul.lu...@esat.kuleuven.be> wrote:

To be clear, this probability is that an attacker would be able to
take a huge (4+ petabyte) ciphertext, and a compatibly sized potential
(but incorrect) plaintext, and with probability 2^{-32} be able to
determine that this plaintext was not the one used for the ciphertext
(and with probability 0.999999999767..., know nothing about whether
his guessed plaintext was correct or not).
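[Editor's note: a trivial sanity check connecting the two probabilities quoted above.]

```python
# Ruling out one wrong plaintext guess succeeds with probability 2^-32;
# otherwise the attacker learns nothing about the guess.
p_rule_out = 2.0 ** -32
p_learn_nothing = 1.0 - p_rule_out

# Matches the 0.999999999767... figure quoted above
assert abs(p_learn_nothing - 0.999999999767) < 1e-12
```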


You need to be careful when making such claims. There are schemes for
which, when you reach the birthday bound, you can perform partial key
recovery.

The probabilities we calculated guarantee that there won't be any
attacks (with the usual assumptions...). Beyond the bounds, there are
no guarantees. In particular, you cannot conclude that one, for
example, loses 1 bit of security once beyond the birthday bound.


How can one use the distinguishing attack with the data complexity
bound I suggested for recovering 1 bit of the encryption key in the
context of TLS?


Regards,
Quynh.





Atul

On 2016-07-12 20:06, Scott Fluhrer (sfluhrer) wrote:

-----Original Message-----
From: Paterson, Kenny [mailto:kenny.pater...@rhul.ac.uk]
Sent: Tuesday, July 12, 2016 1:17 PM
To: Dang, Quynh (Fed); Scott Fluhrer (sfluhrer); Eric Rescorla;
tls@ietf.org
Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

Hi

On 12/07/2016 18:04, "Dang, Quynh (Fed)" <quynh.d...@nist.gov>
wrote:

Hi Kenny,

On 7/12/16, 12:33 PM, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
wrote:

Finally, you write "to come to the 2^38 record limit, they assume
that each record is the maximum 2^14 bytes". For clarity, we did not
recommend a limit of 2^38 records. That's Quynh's preferred number,
and is unsupported by our analysis.

What is the problem with my suggestion, even with the record size
being the maximum value?

There may be no problem with your suggestion. I was simply trying to
make it clear that 2^38 records was your suggestion for the record
limit and not ours. Indeed, if one reads our note carefully, one will
find that we do not make any specific recommendations. We consider the
decision to be one for the WG; our preferred role is to supply the
analysis and help interpret it if people want that. Part of that
involves correcting possible misconceptions and misinterpretations
before they get out of hand.

Now 2^38 does come out of our analysis if you are willing to accept
single-key attack security (in the indistinguishability sense) of
2^{-32}. So in that limited sense, 2^38 is supported by our analysis.
But it is not our recommendation.

But, speaking now in a personal capacity, I consider that security
margin to be too small (i.e. I think that 2^{-32} is too big a
success probability).


To be clear, this probability is that an attacker would be able to
take a huge (4+ petabyte) ciphertext, and a compatibly sized potential
(but incorrect) plaintext, and with probability 2^{-32} be able to
determine that this plaintext was not the one used for the ciphertext
(and with probability 0.999999999767..., know nothing about whether
his guessed plaintext was correct or not).

I'm just trying to get people to understand what we're talking about.
This is not "with probability 2^{-32}, he can recover the plaintext".



Regards,

Kenny


_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls





--
"Man is born free, but everywhere he is in chains".
--Rousseau.


