* TLS records are carried over TCP segments. What if an attacker can change
the way records are divided into segments, and thereby trigger a bug in the
record parser?
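The scenario above can be expressed as a differential test: the same bytes, split at attacker-chosen boundaries, must always yield the same output from a streaming consumer. A minimal sketch, using Python's zlib streaming interface purely as a stand-in for whatever decompressor or parser sits behind the record layer:

```python
# Differential check: identical input, different segmentations,
# identical output. A streaming decompressor whose result depends
# on chunk boundaries has exactly the kind of bug this threat needs.
import zlib

def decompress_chunked(data, splits):
    """Feed `data` to a streaming decompressor, cut at `splits`."""
    d = zlib.decompressobj()
    out = b""
    prev = 0
    for pos in sorted(splits) + [len(data)]:
        out += d.decompress(data[prev:pos])
        prev = pos
    return out + d.flush()

payload = zlib.compress(b"certificate bytes " * 100)
one_shot = zlib.decompress(payload)
# Any segmentation an attacker induces must yield the same plaintext.
for splits in ([], [1], [7, 8, 9], list(range(1, len(payload)))):
    assert decompress_chunked(payload, splits) == one_shot
```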
Why do you think this is possible? The size of the record is part of what’s
secured.
It seems to me that if this is a valid threat model, then all software is
potentially vulnerable.
On Fri, Apr 20, 2018 at 9:40 AM, V
Did we ever reach any agreement about what to do here?
For me, the threat model here seems fairly far-fetched and infeasible, in
the sense that the threat only exists for some very specific bugs in a
multithreaded decompressor.
I'd be less reluctant to do this if it were not for the fact that all
so
On Thu, Mar 22, 2018 at 07:10:00PM +0200, Ilari Liusvaara wrote:
> I think BearSSL processes messages chunk-by-chunk. I think it even can
> process individual X.509 certificates chunk-by-chunk.
That's correct. In fact, it can process a complete handshake, including
the X.509 certificate chain, eve
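The chunk-by-chunk processing described here can be sketched as a byte-at-a-time state machine (a toy illustration, not BearSSL's actual code): because the parser consumes exactly one byte per step, the caller's segmentation cannot influence the result.

```python
# Minimal incremental parser for a stream of
# {1-byte type, 2-byte big-endian length, body} records.
class RecordParser:
    def __init__(self):
        self.header = b""
        self.body = b""
        self.need = 0
        self.records = []

    def feed(self, chunk):
        for b in chunk:                      # byte-at-a-time: chunking is invisible
            if len(self.header) < 3:
                self.header += bytes([b])
                if len(self.header) == 3:
                    self.need = int.from_bytes(self.header[1:3], "big")
                    if self.need == 0:       # zero-length body: emit immediately
                        self._emit()
            else:
                self.body += bytes([b])
                if len(self.body) == self.need:
                    self._emit()

    def _emit(self):
        self.records.append((self.header[0], self.body))
        self.header = b""
        self.body = b""

stream = bytes([11, 0, 3]) + b"abc" + bytes([22, 0, 1]) + b"z"
p1, p2 = RecordParser(), RecordParser()
p1.feed(stream)                              # one shot
for i in range(len(stream)):                 # worst-case segmentation
    p2.feed(stream[i:i + 1])
assert p1.records == p2.records == [(11, b"abc"), (22, b"z")]
```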
On Thu, Mar 22, 2018 at 05:11:33PM +0000, Subodh Iyengar wrote:
> Ya I think you have the right idea. The former attack is the one
> that I'm more concerned about, i.e. compression libraries almost
> always provide streaming interfaces. Another case is that TLS
> implementations which are the users
From: Subodh Iyengar
Sent: Thursday, March 22, 2018 10:11:33 AM
To: David Benjamin
Cc: tls@ietf.org
Subject: Re: [TLS] potential attack on TLS cert compression
Ya I think you have the right idea. The former attack is the one that I'm more
concerned about, i.e. compression libraries almost always provide streaming
interfaces.
From: David Benjamin
Sent: Thursday, March 22, 2018 9:58:57 AM
To: Subodh Iyengar
Cc: tls@ietf.org
Subject: Re: [TLS] potential attack on TLS cert compression
To make sure I understand the issue, the concern is that your decompression
function provides a chunk-by-chunk interface, there's a bug and the
division into chunks produces a different result? Or are you suggesting
that, with the same chunking pattern, the result is still non-deterministic
somehow?
On Thu, Mar 22, 2018 at 04:58:57PM +0000, David Benjamin wrote:
> To make sure I understand the issue, the concern is that your decompression
> function provides a chunk-by-chunk interface, there's a bug and the
> division into chunks produces a different result? Or are you suggesting
> that, with the same chunking pattern, the result is still non-deterministic
> somehow?
To make sure I understand the issue, the concern is that your decompression
function provides a chunk-by-chunk interface, there's a bug and the
division into chunks produces a different result? Or are you suggesting
that, with the same chunking pattern, the result is still non-deterministic
somehow?
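To illustrate the first reading of the concern, here is a toy streaming decoder with a deliberately planted chunk-boundary bug (hypothetical code, not from any real library), together with the differential check that exposes it:

```python
# Run-length decoder over (count, byte) pairs. BUG: each chunk is
# processed independently, so a pair split across two chunks is
# mishandled and the output depends on the segmentation.
class BuggyDecoder:
    def __init__(self):
        self.out = b""

    def feed(self, chunk):
        # Drops a trailing unpaired byte instead of carrying it over.
        for i in range(0, len(chunk) - 1, 2):
            self.out += bytes([chunk[i + 1]]) * chunk[i]

def decode(encoded, splits):
    """Feed `encoded` to the decoder, cut at `splits`."""
    d = BuggyDecoder()
    prev = 0
    for pos in sorted(splits) + [len(encoded)]:
        d.feed(encoded[prev:pos])
        prev = pos
    return d.out

encoded = bytes([2, ord("a"), 3, ord("b")])   # decodes to "aabbb"
assert decode(encoded, []) == b"aabbb"
assert decode(encoded, [1]) != b"aabbb"       # split mid-pair changes the result
```

A decoder written correctly (buffering the unpaired byte between calls) would pass this check for every split, which is exactly the determinism property the thread is debating.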