On Sat, Oct 17, 2015 at 03:10:01PM -0700, Eric Rescorla wrote:

> > This argument is not complete.  The false negative rate from TCP
> > is not by itself sufficient to determine the observed error rate.
> > One needs to combine that with the undetected error rate from
> > underlying network to obtain the frequency of TCP errors that
> > percolate up to the TLS layer.
> 
> A bit old but see:
> http://www.ir.bbn.com/documents/articles/crc-sigcomm00.pdf
> 
> "After an analysis we conclude that the checksum will fail to detect
> errors for roughly 1 in 16 million to 10 billion packets".

That's all well and good, but my point is that this is a distraction.
Though the specific numbers depend greatly on the underlying layer-2
networks traversed by the TCP segment, let's accept the 1:10^10
estimate, in which case any sentinel whose false-positive probability
is better than ~2^{-40} is quite enough.  If so, send a shorter
sentinel.
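For concreteness, a back-of-the-envelope sketch of the arithmetic
(assuming the 1:10^10 figure above, and modeling an n-bit random
sentinel as colliding with probability 2^-n):

```python
import math

# Undetected-TCP-error rate accepted above: roughly 1 in 10^10 packets.
undetected_rate = 1e-10

# Express that rate as a power of two: ~2^-33, so even a 40-bit
# sentinel already has a collision probability well below the
# error rate the checksum lets through.
print(math.log2(undetected_rate))        # about -33.2

# False-positive probability for the candidate sentinel lengths.
for n in (48, 56, 64):
    print(n, 2.0 ** -n)
```

On these assumptions, 48 bits (~2^-48) is already several orders of
magnitude below the undetected-error floor, which is the point about
64 bits being overkill.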

> > The question is not so much whether 48, 56 or 64 bits is the right
> > amount of protection against random false positives, though if 64
> > bits is way overkill and the original 48 is more than enough, we
> > could look more closely at that.  Rather, I think the question is
> > whether this work-around should be as simple as possible, or should
> > be a more feature-full new sub-protocol.  I'm in the keep it simple
> > camp (if we should do this at all).

However, the question of simplicity still remains.  I would go
with at most a one-bit field for "TLS 1.2" vs. "TLS 1.3" in whatever
length sentinel is used.

-- 
        Viktor.

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
