On 22/01/2014, at 7:19 PM, Henning Brauer wrote:

> * Richard Procter <richard.n.proc...@gmail.com> [2014-01-22 06:44]:
>> This fundamentally weakens its usefulness, though: a correct
>> checksum now implies only that the payload likely matches
>> what the last NAT router happened to have in its memory
> 
> huh?
> we receive a packet with correct cksum -> NAT -> packet goes out with
> correct cksum.
> we receive a packet with broken cksum -> NAT -> we leave the cksum
> alone, i. e. leave it broken.

Christian said it better than me: routers may corrupt data
and regenerating the checksum will hide it.
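
To make that concrete, here's a toy sketch of my own (a plain
RFC 1071-style ones' complement sum, not the actual pf code): if
a router damages the payload and the sender's checksum is merely
forwarded, the receiver notices the mismatch; if the checksum is
regenerated over the damaged copy, the receiver sees a clean
match and the damage goes undetected.

  /*
   * Toy illustration only, nothing to do with pf internals:
   * a 16-bit ones' complement checksum in the style of RFC 1071.
   */
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  static uint16_t
  cksum(const uint8_t *buf, size_t len)
  {
          uint32_t sum = 0;

          while (len > 1) {
                  sum += (uint32_t)buf[0] << 8 | buf[1];
                  buf += 2;
                  len -= 2;
          }
          if (len == 1)
                  sum += (uint32_t)buf[0] << 8;
          while (sum >> 16)
                  sum = (sum & 0xffff) + (sum >> 16);
          return (uint16_t)~sum;
  }

  int
  main(void)
  {
          uint8_t payload[] = "end-to-end data";
          uint16_t sender_sum = cksum(payload, sizeof(payload));

          payload[3] ^= 0xff;     /* a router trashes a byte in transit */

          uint16_t receiver_sum = cksum(payload, sizeof(payload));

          /* Forwarded end-to-end checksum: the receiver sees the mismatch. */
          printf("forwarded:   %s\n", receiver_sum == sender_sum ?
              "match (corruption hidden)" : "mismatch (corruption detected)");

          /* Checksum regenerated by the last hop over its already-corrupted
           * copy: the receiver sees a clean match and learns nothing. */
          uint16_t regen_sum = cksum(payload, sizeof(payload));
          printf("regenerated: %s\n", receiver_sum == regen_sum ?
              "match (corruption hidden)" : "mismatch (corruption detected)");

          return 0;
  }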

That's more than a theoretical concern. The article I
referenced is a detailed study of real-world traces,
co-authored by a member of the Stanford distributed systems
group, which concludes: "Probably the strongest message of
this study is that the networking hardware is often trashing
the packets which are entrusted to it"[0].

More generally, TCP checksums provide an acceptable error
rate that is independent of the reliability of the
underlying network[*] by allowing us to verify its workings.
But it's no longer possible to verify the network's
operation if it may be regenerating TCP checksums, as these
can hide network faults. That's a fundamental change from
the scheme Cerf and Kahn emphasized in their design notes
for what became known as TCP:

"The remainder of the packet consists of text for delivery
to the destination and a trailing check sum used for
end-to-end software verification. The GATEWAY does /not/
modify the text and merely forwards the check sum along
without computing or recomputing it."[1]

> It doesn't seem you know what you are talking about. the
> cksum is dead simple, if we had bugs in claculating or
> verifying it, we really had a LOT of other problems.

I'm not saying the calculation is bad. I'm saying it's being
calculated from the wrong copy of the data and by the wrong
device. And it's not just me saying it: I'm quoting the guys 
who designed TCP. 

> There is no "undetected error rate", nothing really changes
> there.

I disagree. Every TCP stream carrying arbitrary data may
have undetected errors: checksums are shorter than the data
they cover, so they cannot detect every error the network
may make. The engineer's task is to make network errors
reliably negligible in practice.
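
A trivial instance of what a 16-bit checksum can never see,
again just a toy sketch of mine and not pf code: the ones'
complement sum doesn't change when 16-bit words are reordered,
so a swap of two aligned words inside the checksummed data is
invisible to it.

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Same toy RFC 1071-style sum, over aligned 16-bit words this time. */
  static uint16_t
  cksum16(const uint16_t *w, size_t nwords)
  {
          uint32_t sum = 0;

          while (nwords-- > 0)
                  sum += *w++;
          while (sum >> 16)
                  sum = (sum & 0xffff) + (sum >> 16);
          return (uint16_t)~sum;
  }

  int
  main(void)
  {
          uint16_t a[] = { 0xdead, 0xbeef, 0x1234 };
          uint16_t b[] = { 0x1234, 0xdead, 0xbeef }; /* same words, reordered */

          /* Both lines print the same value: the checksum can't tell
           * the two payloads apart. */
          printf("cksum(a) = 0x%04x\ncksum(b) = 0x%04x\n",
              cksum16(a, 3), cksum16(b, 3));
          return 0;
  }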

As network-regenerated checksums may hide any amount of
arbitrary data corruption, I believe it's correct to say
that the network error rate undetected by TCP is then
"unknown and unbounded".

best, 
Richard. 

[*] Under reasonable assumptions about the error modes most likely
in practice. And some applications require lower error rates 
than TCP checksums can provide.

[0]
http://conferences.sigcomm.org/sigcomm/2000/conf/paper/sigcomm2000-9-1.pdf

Jonathan Stone and Craig Partridge. 2000. When the CRC and
TCP checksum disagree.  In Proceedings of the conference on
Applications, Technologies, Architectures, and Protocols for
Computer Communication (SIGCOMM '00). ACM, New York, NY,
USA, 309-319.  DOI=10.1145/347059.347561
http://doi.acm.org/10.1145/347059.347561

[1] "A Protocol for Packet Network Intercommunication" 
V. Cerf, R. Khan, IEEE Trans on Comms, Vol Com-22, No 5 May
1974 Page 3 in original emphasis.
