Dear Eric, Britta,

I am paraphrasing a long thread on this issue that we had within
the miTLS development team, and I am primarily commenting on the
analysis aspects. I also hope that it will clear up any remaining
gaps in my own understanding of the issue.

If we see EOED as a stream termination signal, then there seems
to be a performance penalty for conservative servers that want
to wait until they have received all 0RTT data before responding
to the client's request with 0.5RTT data.

Said otherwise, we want servers to be able to respond with application
data based on application data from the client, knowing that that
data was not truncated.

==Scenario with EOED as Handshake message==

After the Client sends 0RTT data, the Server gets

CH, APP_C1, APP_C2

and typically responds with 

SH, ... SFIN, APP_S1

But wait: the server doesn't actually know that early data is
terminated. A conservative server would instead send only

SH, ..., SFIN

and wait for EOED.

The Client receives SH, ..., SFIN.

Only now does the Client know that the server accepts 0RTT, and
only now can it send the EOED and include it in the transcript
hash.

The server receives EOED, CFIN

Only now can such a conservative Server send APP_S1.

This results in an additional round trip for conservative
servers, and thus discourages the use of EOED as a stream
termination signal.

---
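To make the cost of this flow concrete, here is a rough sketch in Python. It is purely illustrative and is my own assumption, not code from miTLS or any TLS stack: each flight crosses the wire in half a round trip, the next flight departs when the previous one arrives, and we ask when APP_S1 reaches the client.

```python
# Illustrative model of the conservative flow with EOED as a
# handshake message. Names mirror the flow above; the timing model
# (each one-way flight costs rtt / 2) is an assumption of this sketch.

def app_s1_arrival(flights, rtt=1.0):
    """Return the time, in RTTs after CH departs, at which the flight
    carrying APP_S1 arrives. Flights alternate direction, and each
    departs as soon as the previous one has arrived."""
    t = 0.0
    for _sender, msgs in flights:
        t += rtt / 2  # this flight crosses the wire
        if "APP_S1" in msgs:
            return t
    return None

handshake_eoed = [
    ("client", ["CH", "APP_C1", "APP_C2"]),  # 0RTT request
    ("server", ["SH", "...", "SFIN"]),       # conservative server holds APP_S1
    ("client", ["EOED", "CFIN"]),            # EOED only after seeing SFIN
    ("server", ["APP_S1"]),                  # response finally departs
]

print(app_s1_arrival(handshake_eoed))  # 2.0 RTTs after the client's CH
```

In this model the client's request is answered two full round trips after CH, one more than in the alert-based flow below.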


==Scenario with EOED as Alert==

With an Alert-based design for EOED we can instead have the
following flow, where the client sends the EOED immediately after
its request, without waiting for the SFIN. Here the client makes
the conscious choice not to send any additional early data.

The server gets

CH, APP_C1, APP_C2

and responds with

SH, ..., SFIN

The server continues reading, receives the EOED, and is notified
of the termination of 0RTT traffic.

The server can now immediately send APP_S1, ....

It is important that the server send SH regardless of whether it
has received EOED, to avoid deadlock with clients that wait until
receiving SFIN before sending EOED.

---
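Under the same illustrative timing model as before (my assumption: each flight crosses the wire in half a round trip, and the next flight departs when the previous one arrives), the alert-based flow can be sketched as:

```python
# Illustrative model of the alert-based flow: the client appends EOED
# to its very first flight, so a conservative server can respond with
# APP_S1 in its first flight instead of waiting an extra round trip.

alert_eoed = [
    ("client", ["CH", "APP_C1", "APP_C2", "EOED"]),  # no wait for SFIN
    ("server", ["SH", "...", "SFIN", "APP_S1"]),     # immediate 0.5RTT reply
]

# Each flight costs half a round trip; APP_S1 is in the second flight.
t = 0.0
for _sender, msgs in alert_eoed:
    t += 0.5
    if "APP_S1" in msgs:
        break

print(t)  # 1.0 RTT after the client's CH
```

APP_S1 reaches the client one round trip after CH, a full round trip earlier than the conservative handshake-message flow above.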


Maybe the implicit assumption is that applications, e.g., HTTP/2,
have to do their own termination of requests, and that EOED serves
a different purpose? What would be the justification for
providing such a mechanism for 1RTT traffic but not for 0RTT
traffic? And why not enable different usage profiles?

Overall, I find the design based on Alert messages cleaner and
more consistent. From colleagues I heard folklore that it would
also ease verification and reduce the implementation complexity
for modular implementations that want to separate Record Layer
and Handshake Layer concerns.

This is related to verifying the handshake without relying on
encryption at all, but I will let them comment on this if they
deem it necessary.

--markulf


> On 16. mai 2017 23:28, Eric Rescorla wrote:
>
>> On Tue, May 16, 2017 at 2:41 PM, Britta Hale <britta.h...@ntnu.no> wrote:
>>
>> EOED signals the end of data encrypted with early traffic keys, yes, 
>> and the next message is the Finished message encrypted with the 
>> handshake traffic key.
>> However,
>> the Finished message is not *data*, and use of the application traffic 
>> key is signaled by the Finished message, not EOED. The Finished 
>> message, like a KeyUpdate message, are handshake messages, and both 
>> signal the start of a new key use for application data.
>>
>> In comparison, EOED signals the end of key use for application data - which
>> correlates
>> to alert behavior.
>
> This seems like a point where reasonable people can differ, especially as 
> ultimately the motivation for this change 
> was some sense of architectural consistency.
>
> To go back to my earlier question: does this change actually present some 
> analytic difficulty, or do you just find it unaesthetic?
>
> -Ekr

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
