Paul Slootman wrote:
On Sat 14 Oct 2006, Fabrice Lorrain wrote:
Any progress on this bug ?
I'm afraid not...
I'll talk to the upstream maintainer to see what possibilities there are
for extending the protocol to handle this.
Thanks.
The way rsync handles sparse files is suboptimal. It leaves any
backup policy based on rsync open to a trivial DoS with something like
the following:
dd if=/dev/zero of=bigfake bs=1k count=1 seek=2000000000
rsync -e ssh -avS bigfake [EMAIL PROTECTED]:/tmp
At that point you wait for 2TB of useless zeros to be transferred
between the src-server and the backup_server... Annoying.
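A smaller-scale sketch of why this costs the sender almost nothing (hypothetical filename; assumes a filesystem with sparse-file support such as ext4 — `dd` with `seek=` writes one block far past the start, leaving a hole that occupies no disk blocks):

```shell
# Create a file with a ~1 GB hole followed by 1 KiB of real data.
dd if=/dev/zero of=fake_sparse bs=1k count=1 seek=1000000 2>/dev/null

# Apparent size vs. blocks actually allocated on disk:
ls -l fake_sparse   # reports an apparent size of ~1 GB
du -k fake_sparse   # reports only a few KB actually allocated
```

The gap between those two numbers is what the receiver ends up paying for: rsync reads the hole back as a stream of zero bytes and sends it over the wire.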
I understand...
I've been bitten by this feature twice already. Students botching some
seek/lseek maths while writing to files... We had several 100GB files
to transfer during the backup at night...
Using -z will speed things up quite a lot, as the zeroes compress well.
Yep, if I could apply this option only to sparse files. It will slow
down our backups quite a bit if we use it by default.
However, perhaps a better workaround in the meantime is to exclude
(student) files that are larger than a reasonable amount via the
--max-size option.
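A sketch of that workaround (hostnames and paths are placeholders; `--max-size` filters on the file's apparent size, so an accidental multi-TB sparse file is skipped even though it occupies almost no disk space):

```shell
# Skip any source file whose apparent size exceeds 10G; the huge
# accidental sparse file is simply never queued for transfer.
rsync -avS --max-size=10G /home/students/ backup_server:/srv/backup/students/
```

The obvious trade-off is that a legitimately large file is silently skipped too, so the threshold has to sit above the biggest real file you expect to back up.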
Or ask the student to clean up his/her mess. Thanks for the tip nonetheless.
@+,
Fab