On Sat, Sep 14, 2013 at 04:13:41PM +0000, hru...@gmail.com wrote:
> Marc Espie <es...@nerim.net> wrote:
> 
> > On Sat, Sep 14, 2013 at 03:09:48PM +0000, hru...@gmail.com wrote:
> >
> > > A completely different thing is to conclude that two *arbitrary* pieces
> > > of data are the same only because they have the same hash. Arbitrary
> > > means here that the one was not a copy of the other. And this is what
> > > rsync seems to do, as far as I understand the wikipedia page.
> >
> > The probability of an electrical failure in your hard drive corrupting
> > the file, or of a bug in the software using that file, is much higher
> > than that happening.
> 
> This is a conjecture. Do you have a proof that the probability is so
> small? I find it difficult to accept. Is this conjecture used
> elsewhere?

Oh, for crying out loud.

There's a REPORT included with rsync that describes the algorithm.

Rsync uses 128-bit checksums to ensure files are not corrupted.  The weak
rolling checksums (two 16-bit sums packed into a 32-bit value) are just for
*identifying candidate blocks for transfer*.  The final check is *of course*
a full 128-bit checksum of the whole file.
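In case the mechanics aren't clear: the weak checksum is chosen so it can be
updated in O(1) as the window slides one byte, which is the whole trick.
A minimal sketch in C along the lines of the tech report (the names and the
struct are mine for illustration, not rsync's actual code):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of an rsync-style weak checksum: two 16-bit sums,
     * a (plain sum) and b (position-weighted sum), mod 2^16. */
    struct rolling {
        uint16_t a;   /* sum of bytes in the window */
        uint16_t b;   /* position-weighted sum */
        size_t   len; /* window length */
    };

    static void
    roll_init(struct rolling *r, const unsigned char *buf, size_t len)
    {
        r->a = 0; r->b = 0; r->len = len;
        for (size_t i = 0; i < len; i++) {
            r->a = (uint16_t)(r->a + buf[i]);
            r->b = (uint16_t)(r->b + (len - i) * buf[i]);
        }
    }

    /* Slide the window one byte: drop `out`, add `in`.
     * O(1) per step, so scanning every offset is cheap. */
    static void
    roll_slide(struct rolling *r, unsigned char out, unsigned char in)
    {
        r->a = (uint16_t)(r->a - out + in);
        r->b = (uint16_t)(r->b - r->len * out + r->a);
    }

    static uint32_t
    roll_digest(const struct rolling *r)
    {
        return ((uint32_t)r->b << 16) | r->a;
    }

A match on this 32-bit value only nominates a block; the 128-bit strong
checksum still has to agree before anything is reused.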

I consider 1/2^128 (about 3e-39) to be *vanishingly small*.  It's far more
likely for a CPU or memory bug to occur.  Cosmic ray radiation, which you
generally don't consider to be a big problem, is far more likely to affect
your memory and storage.
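If you want an actual number instead of a conjecture: even a birthday-style
bound over *every pair* of blocks, not just one pair, stays negligible.
A throwaway estimate (the block count is invented for illustration):

    #include <math.h>
    #include <stdio.h>

    /* Back-of-envelope only: birthday bound on any 128-bit collision
     * among n random blocks, p <= n*(n-1) / 2^129. */
    int
    main(void)
    {
        double n = 1e12;  /* hypothetical number of blocks ever compared */
        double p = n * (n - 1) * ldexp(1.0, -129);
        printf("P(any collision) <= %g\n", p);  /* about 1.5e-15 */
        return 0;
    }

Hardware error rates measured on real machines sit many orders of magnitude
above that.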

> I don't like rsync and similar tools!!!!

Just because you're irrational doesn't mean we have to cater to your 
irrational fears.
