On Sat, Oct 17, 2015 at 02:53:57PM -0700, Eric Rescorla wrote:

> > It also has a slightly better collision risk, though it's already down
> > quite low
> 
> Given that the TCP checksum has a false negative rate far higher than
> 2^{-56} and
> any TCP errors cause TLS handshake failures, this doesn't seem like much of
> an argument.

This argument is not complete.  The false negative rate of the TCP
checksum is not by itself sufficient to determine the observed error
rate.  One needs to combine it with the undetected error rate of the
underlying network to obtain the frequency of errors that percolate
up to the TLS layer.
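
To make that concrete, here is a back-of-the-envelope sketch in
Python.  The link-layer residual error rate and the ~2^{-16} figure
for errors that also slip past TCP's 16-bit checksum are illustrative
assumptions, not measured values.

# Back-of-the-envelope sketch: rate of corrupted segments reaching TLS.
# All input rates below are illustrative assumptions, not measurements.

# Fraction of segments corrupted in transit but not caught by
# link-layer CRCs (hypothetical ballpark figure).
p_link_residual = 1e-10

# Fraction of those corruptions that also slip past TCP's 16-bit
# checksum (roughly 2**-16 for random error patterns; structured
# error patterns can fare worse).
p_tcp_miss = 2.0 ** -16

# TLS only sees an error when both checks fail.
p_reaches_tls = p_link_residual * p_tcp_miss
print("corrupted segments reaching TLS: ~%.2e per segment" % p_reaches_tls)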

There are of course also errors in the memory subsystem, machine
crashes, and so on, all of which interrupt TLS connections.

Indeed 2^{-64} is a very low error probability; it is, I would
guess, substantially smaller than that of any of the other possible
error sources, and that is a good thing.
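
For a sense of scale (the per-connection failure rate for the other
sources below is a purely hypothetical figure, chosen only to show
orders of magnitude):

# How small is 2**-64 next to other ways a TLS connection can fail?
# The comparison rate is hypothetical, for illustration only.
p_collision = 2.0 ** -64      # ~5.4e-20
p_other     = 1e-6            # assumed combined rate of memory errors,
                              # crashes, network resets, etc.
print("2**-64 collision rate:   %.2e" % p_collision)
print("assumed other failures:  %.2e" % p_other)
print("ratio (other/collision): %.2e" % (p_other / p_collision))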

The question is not so much whether 48, 56 or 64 bits is the right
amount of protection against random false positives, though if 64
bits is way overkill and the original 48 is more than enough, we
could look more closely at that.  Rather, I think the question is
whether this work-around should be as simple as possible, or should
be a more feature-full new sub-protocol.  I'm in the keep it simple
camp (if we should do this at all).
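
As a rough way of looking at "more than enough", here is the expected
number of random false positives per year at each width, for an
assumed global handshake volume (the volume is a made-up figure for
illustration):

# Expected random false positives per year at each truncation width.
# The handshake volume is an assumed figure, not a measurement.
handshakes_per_year = 1e12    # assumed global volume

for bits in (48, 56, 64):
    p = 2.0 ** -bits
    print("%d bits: p = %.2e, expected false positives/year = %.2e"
          % (bits, p, handshakes_per_year * p))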

-- 
        Viktor.

