Okay, here is the problem as I see it:
The algorithm is deterministic - you know exactly what it will do next and
what conditions are present at any given point in time.
The overall result is not. It is probabilistic, and the probabilities are
1:2^128 or something of that order, but still, there is [...]. But if you
had a heart pacemaker whose operation depended on appropriate updates to a
control data file, would you trust rsync to send that file update to the
pacemaker?
----- Original Message -----
From: Martin Pool <[EMAIL PROTECTED]>
To: Berend Tober <[EMAIL PROTECTED]>
Subject: [...]
Martin Pool [[EMAIL PROTECTED]] writes:
> To put it in simple language, the probability of a file transmission
> error being undetected by MD4 message digest is believed to be
> approximately one in one thousand million million million million
> million million.
I think that's one duodecillion.
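
For anyone who wants to verify the arithmetic, here is a quick sketch in
Python (the 2^128 figure corresponds to the 128-bit MD4 digest discussed
in this thread; the code itself is only an illustration):

    # A 128-bit digest has 2^128 possible values, so a random
    # corruption survives the check with probability about 1 in 2^128.
    p = 2 ** 128
    print(p)            # 340282366920938463463374607431768211456
    print(len(str(p)))  # 39 digits, so roughly 10^38.5
    # "one thousand million million million million million million"
    # is 10^3 * (10^6)^6 = 10^39 -- one duodecillion, the same order
    # of magnitude.
    print(10 ** 3 * (10 ** 6) ** 6 == 10 ** 39)  # True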
On 17 Apr 2002, Berend Tober <[EMAIL PROTECTED]> wrote:
> So while the software algorithms of ftp and cp are deterministic,
> there must be some quantifiable probability of failure
> nonetheless. The difference with rsync is that not only are the
> same effects of data corruption at work as with [...]
Not in the least. The only checksum that guarantees that two files are
identical is one from which the entire file can be regenerated in only a
single way, in other words, some form of compression. If you want to send
the whole file, that's fairly straightforward. Rsync is a way of
optimizing [...]
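
Martin's point about compression is the pigeonhole principle in disguise:
a checksum shorter than the file cannot distinguish all possible files. A
minimal sketch, using a toy 8-bit additive checksum (not anything rsync
actually uses), makes the collision explicit:

    # Toy checksum: sum of bytes mod 256. With only 256 possible
    # values and vastly more possible files, distinct files must
    # sometimes collide (the pigeonhole principle).
    def checksum8(data: bytes) -> int:
        return sum(data) % 256

    x = b"ab"  # 97 + 98 = 195
    y = b"ba"  # 98 + 97 = 195
    assert x != y and checksum8(x) == checksum8(y)
    # A 128-bit digest like MD4 has the same limitation in principle;
    # collisions just become astronomically unlikely, not impossible.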
On 17 Apr 2002 at 13:46, David Bolen wrote:
> Berend Tober [[EMAIL PROTECTED]] writes:
>
> > That was my point about comparing rsync to sending the entire file
> > using, say, ftp or cp. ...
>
> Except of course that rsync uses its own final checksum ...
>
> ...so one could argue it's
> actually [...]
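
The "final checksum" David refers to is rsync's whole-file check after
reconstruction. A minimal sketch of the same idea, assuming MD5 in place
of the MD4 rsync used at the time (Python's hashlib exposes MD5 portably)
and hypothetical file names:

    import hashlib

    def file_digest(path: str) -> str:
        # Hash in chunks so large files need not fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # Compare source and reconstructed copy after the transfer;
    # a mismatch means the copy is corrupt and should be resent.
    if file_digest("source.dat") != file_digest("copy.dat"):
        raise RuntimeError("whole-file checksum mismatch; retransfer")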
Berend Tober [[EMAIL PROTECTED]] writes:
> That was my point about comparing rsync to sending the entire file
> using, say, ftp or cp. That is, one might think that sending the
> entire file via ftp or cp will produce an exact file copy; however, the
> actual transmission of the data takes the form of [...]
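
The message is cut off here, but the argument being made is that plain
transfers lean on lower-level checks. Assuming the point concerns
per-packet transport checksums, here is a sketch of the 16-bit Internet
checksum (RFC 1071, the one TCP uses), which cannot even notice two
16-bit words being swapped:

    def internet_checksum(data: bytes) -> int:
        # RFC 1071: one's-complement sum of 16-bit words.
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold carry
        return ~total & 0xFFFF

    # Reordering 16-bit words leaves the sum unchanged, so this
    # whole class of corruption is invisible at the transport layer.
    p1 = b"\x12\x34\x56\x78"
    p2 = b"\x56\x78\x12\x34"
    assert internet_checksum(p1) == internet_checksum(p2)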