You know, modern hard drives actually checksum written data, so you never
get corrupted data back. You might get NO data back (that is, an I/O
error), but that would make any operation on the file fail, including
burning the file to a CD. I've read this in a "howto" for Linux's
software RAID driver.
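In Delphi terms, that failure mode could look roughly like the sketch
below (TryReadWholeFile is just an illustrative name): a bad block surfaces
as an exception, never as silently wrong bytes handed back to the caller.

uses
  Classes, SysUtils;

function TryReadWholeFile(const FileName: String; Dest: TMemoryStream): Boolean;
begin
  Result := True;
  try
    Dest.LoadFromFile(FileName);        // any unreadable block raises here
  except
    on EStreamError do Result := False; // open or read failed outright
    on EInOutError  do Result := False; // low-level I/O error
  end;
end;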
It is actually for data integrity as well (more than for security, in my
opinion). When it comes to large file downloads, there can be corrupted
bytes, and those are more likely caused by HD errors than by network errors.
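Checking that on the receiving end is cheap. A minimal sketch, assuming the
FileMD5 helper from the ICS MD5 unit (MD5.pas, or OverbyteIcsMD5.pas in
newer releases); the file name and published hash are placeholders:

uses
  SysUtils, OverbyteIcsMD5;

function DownloadLooksIntact(const FileName, PublishedMD5: String): Boolean;
begin
  // FileMD5 reads the whole file and returns its digest as a hex string;
  // compare case-insensitively against the hash published with the download.
  Result := SameText(FileMD5(FileName), PublishedMD5);
end;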
> Conclusion: I think data corruption might be a problem in some cases.
> Notice how all Linux distributions include MD5 hashes for all downloads,
> so they can be checked on the receiving end?
This is not to detect data corruption caused by the transmission, but to
detect "man in the middle" attacks.
Francois PIETTE wrote:
>> (3) In the code running after a failed download I'm removing the last
>> portion of the received data, just in case it's corrupted. I noticed
>> this behavior in a freeware download manager I used to use some time
>> ago. But now I'm asking: is this really necessary?

HTTP runs on top of TCP, which already checksums every segment, so if you
received it, it is correct data.
--
Contribute to the SSL Effort. Visit http://www.overbyte.be/eng/ssl.html
--
[EMAIL PROTECTED]
http://www.overbyte.be
- Original Message -
From: "Cosmin Prund" <[EMAIL PROTECTED]>
To: "ICS support mailing"
Sent: Thursday, November 1
Hello.
I've built a "download manager" module for my application around
THttpCli and I'm making use of ContentRangeBegin to resume failed
downloads. It all works very nicely and smoothly, and all my downloads
resume fine. Unfortunately, my downloads only fail when I unplug the
network cable from my machine.
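For reference, the resume wiring might look roughly like this. It is only a
sketch, assuming current ICS unit names, the synchronous Get and the string
form of ContentRangeBegin; ResumeDownload and the file handling are
placeholders, not the poster's actual code:

uses
  Classes, SysUtils, OverbyteIcsHttpProt;

procedure ResumeDownload(const Url, LocalFile: String);
var
  HttpCli : THttpCli;
  Strm    : TFileStream;
begin
  HttpCli := THttpCli.Create(nil);
  try
    // Open the partial file and append from its current end; nothing is
    // trimmed first, per the discussion above.
    Strm := TFileStream.Create(LocalFile, fmOpenReadWrite or fmShareDenyWrite);
    try
      Strm.Seek(0, soFromEnd);
      HttpCli.URL               := Url;
      HttpCli.RcvdStream        := Strm;
      // Ask the server to start at the byte count already on disk.
      HttpCli.ContentRangeBegin := IntToStr(Strm.Size);
      HttpCli.Get;  // a cooperating server answers 206 Partial Content
    finally
      Strm.Free;
    end;
  finally
    HttpCli.Free;
  end;
end;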