Colm MacCárthaigh <c...@allcosts.net> writes:

> On Mon, Mar 14, 2016 at 11:04 AM, Subodh Iyengar <sub...@fb.com> wrote:
> >
> > Like Kyle mentioned the thing that 0-RTT adds to this is infinite
> > replayability. As mentioned in the other thread we have ways to reduce the
> > impact of infinite replayable data for TLS, making it reasonably replay
> > safe.
> >
> 
> That too is a mis-understanding. The deeper problem is that a third party
> can do the replay, and that forward secrecy is gone for what likely is
> sensitive data. Neither is the case with ordinary retries.

Just to expand on this:

HTTP GET is idempotent (and "safe") and so replayable, correct?  That
is, if you send two GET requests in a row, you should get the same
result, no state should change on the server side, and the attacker
learns nothing new, even if the attacker could not have issued the
original GET.

However, what holds for two sequential GET requests may not hold for
a series of requests: for example, a GET followed by a PUT followed
by another GET.  If the second GET is replayed by an attacker, it
might reveal that the PUT has occurred, and the new size of the
result.
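To make that concrete, here is a minimal sketch (plain Python, no real
TLS; the in-memory store and handler are invented for illustration) of
how re-executing a captured GET after an intervening PUT yields a
different, state-revealing response:

```python
# Toy origin server: a replayed request is re-executed verbatim,
# which is what third-party replay of 0-RTT early data amounts to.
store = {}  # path -> body

def handle(method, path, body=""):
    if method == "PUT":
        store[path] = body
        return "201 Created"
    if method == "GET":
        return f"200 OK content-length={len(store.get(path, ''))}"
    return "405 Method Not Allowed"

captured_get = ("GET", "/doc")       # attacker records this request
r1 = handle(*captured_get)           # client's original GET
handle("PUT", "/doc", "new contents")
r2 = handle(*captured_get)           # attacker replays the same GET
# The differing response reveals that the PUT occurred, and the
# new size of the resource.
assert r1 != r2
```

The replayed GET changed nothing on the server, yet its response still
leaked the effect of the PUT to the party doing the replaying.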

Further issues can arise depending on the application.  For example,
if the response contains a timestamp and some sensitive numeric data,
and is compressed, then repeated queries will leak information about
the numeric data at a higher rate than if the attacker had to rely on
passive monitoring alone.
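The compression point is the same length-oracle effect exploited by
CRIME/BREACH-style attacks.  A minimal sketch (the secret value, the
response layout, and the attacker-influenced "echo" field are all
invented for illustration): when an attacker's guess matches the
secret in the same compressed response, DEFLATE finds a longer
back-reference and the ciphertext gets shorter, so repeated replays
let the attacker rank guesses by observed length:

```python
import zlib

SECRET = "447-26-1593"  # hypothetical sensitive numeric data

def response_length(guess: str) -> int:
    # Response body with a timestamp, the secret, and an
    # attacker-influenced field, compressed before encryption.
    body = f"time=2016-03-14T11:04:00 secret={SECRET} echo={guess}"
    return len(zlib.compress(body.encode()))

# A correct guess repeats the substring "secret=447-26-1593", so the
# compressor emits one long match instead of many literals, and the
# observable (encrypted) length is smaller.
correct = response_length("secret=447-26-1593")
wrong = response_length("secret=123-45-6789")
assert correct < wrong
```

Each forced replay with a fresh guess gives the attacker another
length measurement, which is why attacker-controlled replay leaks
faster than passively watching the client's own traffic.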

So, I don't think HTTP is generally safe against attacker-forced
replay, and I would suggest great caution in allowing it.  Perhaps we
could say, in the TLS RFC or a new RFC covering the topic, that it
should only be allowed by servers and clients when serving/requesting
immutable static data, that is, for requests that will only ever
return one result.

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls