On 05/24/2017 10:32 AM, Colm MacCárthaigh wrote:
>
>     > Another crazy idea would be to just say that servers MUST limit
>     > the use of a single binder to at most 100 times, with the usual
>     > case being just once, to allow for alternative designs that have
>     > weaker distributed consensus requirements (but still describe
>     > these current two methods as examples of ways to do so).
>
>     You actually need strong distributed consensus about all accepted
>     0-RTT here.
>
>
> This pattern doesn't need strong consensus. If you have 10 servers,
> you could give each 10 goes at the ticket, and let each exhaust its 10
> attempts without any coordination or consensus. You likely won't get
> to "exactly 100", but you will get to "at most 100 times". 
>

Or (up to) 100 servers and give each server just one crack at the
ticket, which is perhaps more plausible to implement sanely.
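
The local accounting for either variant is trivial -- a throwaway
Go-flavored sketch (every name below is invented, not a real
implementation):

package main

import (
    "fmt"
    "sync"
)

// localBinderLimiter does purely local accounting: each server allows a
// given binder at most localLimit uses and knows nothing about the other
// servers.  With 100 servers and localLimit = 1 (or 10 servers and
// localLimit = 10) the fleet-wide total is "at most 100" with no
// coordination at all.  (Expiry of old entries is ignored here.)
type localBinderLimiter struct {
    mu         sync.Mutex
    uses       map[string]int
    localLimit int
}

func newLocalBinderLimiter(localLimit int) *localBinderLimiter {
    return &localBinderLimiter{uses: make(map[string]int), localLimit: localLimit}
}

// tryAccept0RTT burns one of this server's local uses of the binder, if
// any remain.  The check-and-increment is atomic only within this server.
func (l *localBinderLimiter) tryAccept0RTT(binder string) bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    if l.uses[binder] >= l.localLimit {
        return false
    }
    l.uses[binder]++
    return true
}

func main() {
    lim := newLocalBinderLimiter(1)              // one crack per server
    fmt.Println(lim.tryAccept0RTT("binder-abc")) // true: accept 0-RTT
    fmt.Println(lim.tryAccept0RTT("binder-abc")) // false: fall back to 1-RTT
}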

> But the inner critical section would be inherently more complicated. 
> For the at most once case we need a critical section that performs
> an exclusive-read-and-delete atomically. It's a mutex lock or a clever
> construction of atomic instructions.  For "at most N" we now need to
> perform a decrement and write in the critical section, and it becomes
> more like a semaphore. 
>
> It's probably not a pattern that's worth the trade-offs. 
>
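
To make sure we're talking about the same two shapes, roughly (another
throwaway sketch; the in-process map just stands in for whatever
globally consistent store a real deployment would use, and the names
are mine):

package antireplay

import "sync"

// strikeRegister stands in for the shared store; tickets maps a binder
// to its remaining allowed uses and is populated when tickets are issued.
type strikeRegister struct {
    mu      sync.Mutex
    tickets map[string]int
}

// acceptOnce is the "at most once" shape: an exclusive read-and-delete
// performed atomically inside the critical section.
func (s *strikeRegister) acceptOnce(binder string) bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    if _, ok := s.tickets[binder]; !ok {
        return false
    }
    delete(s.tickets, binder)
    return true
}

// acceptUpToN is the "at most N" shape: a decrement-and-write inside the
// critical section, so it behaves more like a semaphore than a mutex.
func (s *strikeRegister) acceptUpToN(binder string) bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    n, ok := s.tickets[binder]
    if !ok || n <= 0 {
        return false
    }
    s.tickets[binder] = n - 1
    return true
}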

That particular decrement-based design is definitely not worth the
trade-off, but I doubt it's the only scheme that could meet the stated
requirement.  For example: something that attempts to access a global
single-use data structure, tolerates occasional failures of that
operation, and has a plausible story for bounding how often those
failures can happen (say, refusing to accept 0-RTT at all on that
server for 10 seconds once a threshold level of errors occurs).
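
A very hand-wavy sketch of that shape (assuming "failure tolerance"
means accepting the 0-RTT even when the store can't be checked; the
store interface and every name below are invented):

package antireplay

import (
    "sync"
    "time"
)

// singleUseStore is whatever global single-use data structure the
// deployment actually has; ConsumeOnce returns an error when the store
// can't be reached or doesn't answer in time.
type singleUseStore interface {
    ConsumeOnce(binder string) (bool, error)
}

type tolerantLimiter struct {
    store          singleUseStore
    mu             sync.Mutex
    recentErrors   int // should really be a decaying/windowed count
    errorThreshold int
    disabledUntil  time.Time
    cooldown       time.Duration // e.g. 10 * time.Second
}

func (t *tolerantLimiter) tryAccept0RTT(binder string) bool {
    t.mu.Lock()
    if time.Now().Before(t.disabledUntil) {
        t.mu.Unlock()
        return false // 0-RTT is switched off for the rest of the cooldown
    }
    t.mu.Unlock()

    ok, err := t.store.ConsumeOnce(binder)
    if err == nil {
        return ok
    }

    // Store failure: tolerate it (accept without the global check), but
    // trip the breaker once the error threshold is hit so the number of
    // unchecked acceptances stays bounded.
    t.mu.Lock()
    defer t.mu.Unlock()
    t.recentErrors++
    if t.recentErrors >= t.errorThreshold {
        t.disabledUntil = time.Now().Add(t.cooldown)
        t.recentErrors = 0
    }
    return true
}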

But, as indicated by the "crazy idea" prefix, I am not actually pushing
for this scheme.

-Ben
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
