You seem to be raising a complaint with my option #1, the "1-bit" option.

You're correct -- in that scheme, a node can only assume that the
other side has read as many of its KeyUpdates as it has received in response.
If the client decides to send KeyUpdates every 5 minutes, but the
other side is only writing every 10 minutes and coalescing its
responses, the client's knowledge that its KeyUpdates have been read
will get further and further behind. In the interests of compromise, I
think we would say that in situations where the other side is only
writing very infrequently, we would just give up on trying to get
assurance that a KeyUpdate got read, since it was probably not very
useful anyway. (The other side might never write to its socket ever
again! Then it really is hopeless under any scheme.)
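
To make the bookkeeping concrete, here's a rough sketch (purely
illustrative -- the interval numbers, counters, and Python are mine,
not from any implementation) of how far the sender's knowledge falls
behind when the responder coalesces:

    # 1-bit scheme: each responding KeyUpdate lets the sender credit at
    # most one of its own KeyUpdates as "read"; it cannot tell which one.
    SEND_INTERVAL = 5     # client sends a KeyUpdate[request] every 5 minutes
    WRITE_INTERVAL = 10   # peer writes (and flushes one coalesced
                          # KeyUpdate response) only every 10 minutes

    sent = known_read = 0
    for minute in range(1, 61):
        if minute % SEND_INTERVAL == 0:
            sent += 1                  # another KeyUpdate[req=true] goes out
        if minute % WRITE_INTERVAL == 0:
            known_read += 1            # one coalesced response arrives
            print(f"t={minute:2d}min  sent={sent}  known read={known_read}  "
                  f"unconfirmed={sent - known_read}")

After an hour the client has sent 12 KeyUpdates but can only be sure
that 6 were read, and the gap keeps growing for as long as that ratio
of intervals holds.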

Obviously if we had our choice, we'd prefer a 7- or 8-bit field
(options #2, #3, and #4) or a 64-bit field (as in PR #426/580), all of
which address your concern. But we're trying to get to consensus here.
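
For comparison, here's the same timeline under an explicit counter.
(The semantics here are my own shorthand for illustration, not the
actual wording of PR #426/580 or options #2-#4.) The responder's single
coalesced response reports how many of the sender's KeyUpdates it has
read so far, so nothing is left ambiguous:

    # Hypothetical explicit-counter variant: the coalesced response
    # carries the count of client KeyUpdates read so far.  (Idealized:
    # nothing is in flight at the moment the peer writes.)
    SEND_INTERVAL, WRITE_INTERVAL = 5, 10   # same intervals as above

    sent = known_read = 0
    for minute in range(1, 61):
        if minute % SEND_INTERVAL == 0:
            sent += 1
        if minute % WRITE_INTERVAL == 0:
            known_read = sent          # response: "read through update N"
            print(f"t={minute:2d}min  sent={sent}  known read={known_read}")

The unconfirmed count snaps back to zero at each response instead of
growing without bound, even when the other side writes infrequently.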

-Keith

On Fri, Sep 2, 2016 at 1:42 AM, Ilari Liusvaara
<ilariliusva...@welho.com> wrote:
> On Thu, Sep 01, 2016 at 02:11:13PM -0700, Keith Winstein wrote:
>> I think we have to oppose a change to KeyUpdate that adds P4 (bounded
>> write obligations) but not P3 (ability to learn that a KeyUpdate was
>> read by the other side). These are orthogonal and easily achievable with a
>> pretty small tweak. Here are four options that would work for us:
>
> Unfortunately, one can get into situations like this (saw this
> in testing):
>
> 1) Client sends KeyUpdate[req=true] (and continues blasting data)
> 2) Server receives the KeyUpdate[req=true]
> 3) Server queues a KeyUpdate[req=false], but it gets stuck.
> 4) Client sends KeyUpdate[req=true] (and continues blasting data)
> 5) Server receives the KeyUpdate[req=true]
> 6) Server quenches the KeyUpdate, since it has fresh keys.
> 7) Client sends KeyUpdate[req=true] (and continues blasting data)
> 8) Server receives the KeyUpdate[req=true]
> 9) Server quenches the KeyUpdate, since it has fresh keys.
> 10) Server sends something, the KeyUpdate gets unstuck.
> 11) The client receives the KeyUpdate[req=false]
>
>
> Considering that those KeyUpdates are triggered every ~5 min (by the
> volume trigger[1]), the effective RTT was something like ~10-15 min
> (despite sub-millisecond transport latencies). And the one server-sent
> KeyUpdate was in reaction to the first client KeyUpdate, not the last...
>
>
> [1] Set at 10,000,000 records (even for 0x1303, which can handle
> far far more).
>
>
> -Ilari

