On Sat, Oct 17, 2015 at 3:05 PM, Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:

> On Sat, Oct 17, 2015 at 02:53:57PM -0700, Eric Rescorla wrote:
>
> > > It also has a slightly better collision risk, though it's already down
> > > quite low
> >
> > Given that the TCP checksum has a false negative rate far higher than
> > 2^{-56}, and any TCP errors cause TLS handshake failures, this doesn't
> > seem like much of an argument.
>
> This argument is not complete.  The false negative rate from TCP
> is not by itself sufficient to determine the observed error rate.
> One needs to combine that with the undetected error rate from
> underlying network to obtain the frequency of TCP errors that
> percolate up to the TLS layer.
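[Editorial note: the combination of rates described above can be sketched numerically. All figures below are illustrative assumptions for the sake of the arithmetic, not measurements from any network.]

```python
# Sketch of the argument above: the corruption rate seen at the TLS layer
# is the product of the underlying network's corruption rate and the TCP
# checksum's false-negative (miss) rate. Both inputs are assumed values.

link_corruption_rate = 1e-5        # assumption: 1 in 100,000 segments arrive corrupted
tcp_checksum_miss_rate = 1 / 16e6  # assumption: checksum misses 1 in 16 million bad segments

# Rate of undetected corruption percolating up past TCP to TLS:
tls_observed_error_rate = link_corruption_rate * tcp_checksum_miss_rate
print(f"{tls_observed_error_rate:.3e}")  # prints 6.250e-13
```

Under these assumed inputs, only about one segment in 1.6 trillion would carry corruption past the TCP checksum, which is why the link-level rate matters as much as the checksum's miss rate.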
>

A bit old but see:
http://www.ir.bbn.com/documents/articles/crc-sigcomm00.pdf

"After an analysis we conclude that the checksum will fail to detect
errors for roughly 1 in 16 million to 10 billion packets".

-Ekr

> The question is not so much whether 48, 56 or 64 bits is the right
> amount of protection against random false positives, though if 64
> bits is way overkill and the original 48 is more than enough, we
> could look more closely at that.  Rather, I think the question is
> whether this work-around should be as simple as possible, or should
> be a more feature-full new sub-protocol.  I'm in the keep it simple
> camp (if we should do this at all).
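[Editorial note: the 48/56/64-bit options above can be compared directly. A random n-bit check value matches a given value with probability 2^-n; the snippet below just tabulates those odds, and makes no claim about which size the work-around should use.]

```python
# False-positive probability for an n-bit random check value: a uniformly
# random n-bit tag equals any fixed target with probability 2^-n.
# This only tabulates the three sizes discussed; it argues for none of them.

for bits in (48, 56, 64):
    p = 2.0 ** -bits
    print(f"{bits}-bit: false-positive probability {p:.2e} (one in {2**bits:,})")
```

Even the smallest option, 48 bits, gives odds of about 3.55e-15 per comparison, which frames the question of whether the extra bits buy anything in practice.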
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls