I have been following this thread. 

I am working on rsync for an embedded application, but it has nothing to do with 
program loading. 

Donovan recently provided some formulas for figuring out the required checksum size 
relative to file size and acceptable failure rate.

In the formulas, he assumes that the block size is the square root of the file size.
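
For concreteness, a minimal sketch of that assumption in Python (this is just the 
assumption as stated, not rsync's actual heuristic, which as far as I know also 
rounds and clamps the result):

    import math

    def assumed_block_size(file_size: int) -> int:
        # Donovan's assumption: block size = sqrt(file size), so the
        # number of blocks (and hence checksums) is also sqrt(file size).
        return math.isqrt(file_size)

    # e.g. a 4 GiB file would get 64 KiB blocks, hence 64 Ki checksums
    print(assumed_block_size(4 * 2**30))  # 65536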

I have done some benchmarking of rsync, and for large files (e.g. 1 to 55 gigabytes), 
the block size is larger than the square root of the file size. This is based on the 
size of the xxxx.rsync_csums file produced when you use --write-batch=xxxx. I have 
done some calculations assuming that the checksum per block is 20 bytes (32 bits + 
128 bits) or 24 bytes (assuming an extra 32 bits of overhead data), and the number of 
checksums appears to be the square root of the file size divided by a number between 
2 and 3.
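
To make that concrete, here is roughly the back-calculation I did, in Python (the 
input numbers below are made up for illustration, and the 20- and 24-byte record 
sizes are my assumptions about the per-block layout, not something I have confirmed 
in the source):

    import math

    def implied_divisor(file_size: int, csums_file_size: int,
                        bytes_per_block: int) -> float:
        # Back out the observed checksum count from the .rsync_csums
        # file size, then compare it to sqrt(file size).
        num_checksums = csums_file_size / bytes_per_block
        return math.sqrt(file_size) / num_checksums

    # Hypothetical figures: a 16 GiB file whose xxxx.rsync_csums file
    # came out at 960 KiB, assuming 20 bytes per checksum record:
    print(implied_divisor(16 * 2**30, 960 * 2**10, 20))  # ~2.67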

This observation isn't central to what I am doing, so I haven't really tried to pin 
it down.

I thought that I should bring it to the attention of anyone following the thread.

wally