On Wed, Jun 14, 2017 at 2:17 AM Petr Špaček <[email protected]> wrote:

>
>
> On 13.6.2017 22:55, Ilari Liusvaara wrote:
> > On Tue, Jun 13, 2017 at 06:57:05PM +0000, Andrei Popov wrote:
> >> Regarding RFC language, I think we could be more specific:
> >>
> >>
> >>
> >> 1. A TLS implementation SHOULD/MUST only send 0-RTT application data if
> the application has explicitly opted in;
> >>
> >> 2. A TLS implementation SHOULD/MUST only accept 0-RTT application data
> if the application has explicitly opted in;
> >>
> >> 3. When delivering 0-RTT application data to the application, a TLS
> implementation SHOULD/MUST provide a way for the application to distinguish
> it from the rest of the application data.
> >
> > First of these has to be MUST, or you get problems like I outlined
> > earlier.
> >
> > And to implement checking for client only sending "safe" data, you need
> > the second and third.
>
> I support MUST for the three points above.
>

The third one is not practical as one moves up the layers. Instead, I believe
we can get the same benefit with a simpler signal.

TLS is fundamentally a transformation from a vaguely TCP-like transport to
another vaguely TCP-like transport. Consider TLS records: TLS could have
decided that record boundaries were meaningful and let applications use them
in their framing layers, but instead TLS exposes a byte stream, because it
intentionally looks like TCP.

Of course, 0-RTT unavoidably must stretch the “vaguely”. Suppose there is a
semantically meaningful difference between 0-RTT and 1-RTT data. I can see
why this is attractive. It moves the problem out of TLS. But this signal is
pointless if applications don’t use it. If everyone did the following, we
haven’t solved anything:

/* Mechanically forwards both cases; the 0-RTT distinction is erased. */
if (InEarlyData()) {
  return EarlyDataRead(...);
} else {
  return NormalRead(...);
}

So we must consider how this signal would be used. Consider HTTP/2. The goal
may be to tag requests as “0-RTT” because we wish to reject, say, 0-RTT POSTs.

What if the server receives a request where the 0-RTT boundary falls in the
middle of an HTTP/2 frame? Is that a 0-RTT request? 1-RTT? Invalid? If I’m
parsing that, I have to concatenate across the boundary, and we’re back to
the if/else strawman above. HTTP/2 is arguably an easy case. Maybe my
protocol is a compressed stream. Carrying a low-level byte boundary through
layers of application data parsing and processing is not practical.

We could say that the application profile should modify the protocol to
reject such cases, but then we’re taking on complexity in every protocol
specification and parser.

It also brings complexity on the sender side. Perhaps I am halfway through
writing an HTTP/2 frame and, in parallel, I receive the ServerHello. We
moved 0-RTT data out of a ClientHello extension way back in draft -07 so
0-RTT data can be streamed while waiting for ServerHello. This is
especially useful for HTTP/2 where reads and writes flow in parallel. This
means the sender must synchronize with the TLS socket to delay the 1-RTT
transition.

Now suppose the TLS stack is handed that atomic write and it doesn’t fit in
the 0-RTT send limit. That won’t work, so the sender must query the remaining
early-data budget and send over 1-RTT if the write doesn’t fit.
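As a sketch of what that sender logic ends up looking like (all names here
are hypothetical; no real TLS API is implied), the write path has to check
the 0-RTT budget and fall back to deferring the whole unit:

```python
# Hypothetical sender sketch: an atomic protocol unit either fits entirely
# within the remaining 0-RTT send limit or must wait for the 1-RTT point.


class FakeTLS:
    """Stand-in for a TLS socket; the attribute names are invented."""

    def __init__(self, early_limit):
        self.in_early_data = True
        self.early_remaining = early_limit
        self.sent_early = []
        self.queued_1rtt = []

    def write_early(self, data):
        self.sent_early.append(data)
        self.early_remaining -= len(data)

    def queue_for_1rtt(self, data):
        self.queued_1rtt.append(data)


def send_atomic(tls, data):
    """Never split an atomic protocol unit across the 0-RTT boundary."""
    if tls.in_early_data and len(data) <= tls.early_remaining:
        tls.write_early(data)      # fits: stream it in 0-RTT now
    else:
        tls.queue_for_1rtt(data)   # doesn't fit: hold until client Finished
```

Note that this per-unit bookkeeping is exactly the synchronization burden
described above, and it only grows once a unit spans multiple frames.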

Now suppose this HTTP request takes multiple frames to send. (In HTTP/2, a
large header block spans a HEADERS frame plus CONTINUATION frames.) That
won’t work either, so the synchronization actually needs to cover the entire
request. Maybe my request has a body. We need to cover that too, and we need
to know its size for send-limit purposes.

Now suppose assembling the HTTP request takes an asynchronous step after
connection binding. Maybe I’m signing something for tokbind. Maybe I have
some weird browser extension API. In this worldview, all the while, the
HTTP/2 multiplexer must lock out all other requests and the client
Finished, even if the ServerHello has already come in. That last one is
particularly nasty if the server is delaying an already-sent request until
the 1-RTT point (cf.
https://www.ietf.org/mail-archive/web/tls/current/msg21486.html).

Perhaps I keep the request assembling logic, HTTP/2 multiplexers, and TLS
sockets in different threads or processes. Now this synchronization must
span all of these. As one adds layers, the complexity grows.

The root problem here is that we’ve changed TLS’s core abstraction. One
might argue that all this complexity is warranted for 0-RTT; we are, after
all, trying to solve a real problem here. But I think we can solve it more
simply:

Our problem is that a server wishes not to process some HTTP requests (or
other protocol units) at 0-RTT and needs to detect this case. So we expose a
boolean signal for whether the connection has passed the 1-RTT point, and
check it before any unsafe processing. A “1-RTT request” will always see
1-RTT. A “0-RTT request” will usually see 0-RTT, but if it spans the boundary
or the processing pipeline was just slow, perhaps we don’t query until after
the client Finished. That’s actually fine: we get the replay protection we
need from the client Finished.
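As a sketch of the server-side check, assuming a hypothetical
handshake_confirmed flag and wait primitive (neither is a real API), unsafe
work is simply gated on the boolean:

```python
# Hypothetical server sketch: gate unsafe work on a single boolean rather
# than tracking a byte-level 0-RTT boundary through the parsing layers.


class FakeConn:
    """Stand-in connection; handshake_confirmed flips at client Finished."""

    def __init__(self, confirmed=False):
        self.handshake_confirmed = confirmed

    def wait_for_client_finished(self):
        # A real stack would block on the handshake here; we just flip the
        # flag to model the 1-RTT transition.
        self.handshake_confirmed = True


def handle_request(conn, request, is_safe, process):
    """Process safe requests immediately; defer unsafe ones to 1-RTT."""
    if not is_safe and not conn.handshake_confirmed:
        conn.wait_for_client_finished()  # replay protection comes from here
    return process(request)
```

The boolean is queried at the moment of unsafe processing, wherever in the
stack that happens, so nothing below it needs to carry a boundary around.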

This solves our problem without the complexity. Separate read APIs or an
explicit boundary marker are possible ways to surface this boolean, but the
boolean, unlike a hard boundary, is much easier to carry across higher-level
protocol layers and does not impose impractical constraints on the sender.

David
_______________________________________________
TLS mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/tls
