On Fri, 2009-08-07 at 18:44 +0200, devz...@web.de wrote:
> > devz...@web.de wrote:
> > > so, instead of 500M I would transfer 100GB over the network.
> > > that's no option.
> >
> > I don't see how you came up with such numbers.
> > If files change completely then I don't see why
> > you would transfer more (or less) over the network.
> > The difference that I'm thinking of is that
> > by not using the rsync algorithm you're
> > substantially reducing the number of disk I/Os.
>
> let me explain: all files are HUGE datafiles and they are of constant size.
> they are harddisk images, and the contents are changed inside, i.e.
> specific blocks in the files are accessed and rewritten.
>
> so, the question is:
> is rsync's rolling checksum algorithm the perfect (i.e. fastest) algorithm
> to match changed blocks at fixed locations between source and destination
> files?
> I'm not sure, because I have no in-depth knowledge of the mathematical
> background of the rsync algorithm. I assume: no - but it's only a guess...
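For what it's worth: when the two files are the same size and data only ever
changes in place, the matching problem reduces to comparing block i on the
source with block i on the destination, so no rolling (byte-by-byte sliding)
checksum is needed at all. rsync's rolling checksum exists to find matching
blocks at arbitrary offsets, which matters when data is inserted or deleted;
with fixed-position changes that generality mostly costs CPU, although rsync's
delta transfer will still only send the changed blocks. A minimal sketch of the
fixed-offset idea in Python (the paths, the 128 KiB block size and the use of
MD5 are assumptions for illustration, not anything rsync does internally):

    import hashlib

    BLOCK_SIZE = 128 * 1024  # assumed block size; tune to the image's write pattern

    def changed_blocks(src_path, dst_path, block_size=BLOCK_SIZE):
        """Yield indices of fixed-offset blocks that differ between two
        equally sized files.  No rolling checksum: because the data only
        changes in place, block i of the source is only ever compared
        with block i of the destination."""
        with open(src_path, "rb") as src, open(dst_path, "rb") as dst:
            index = 0
            while True:
                a = src.read(block_size)
                b = dst.read(block_size)
                if not a and not b:
                    break
                # Digests are only needed if the two sides are on different
                # machines; for a local comparison, a != b would be enough.
                if hashlib.md5(a).digest() != hashlib.md5(b).digest():
                    yield index
                index += 1

    # Hypothetical usage:
    # for i in changed_blocks("/images/vm.img", "/backup/vm.img"):
    #     print("block", i, "differs")

Over a network you would compute the per-block digests on each side and
exchange only the digests, so that only the differing blocks are transferred,
which is essentially what the rsync algorithm also ends up doing once it has
settled on block boundaries.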
I really don't think it's a good idea to sync large data files that are in use
and modified frequently, e.g. an SQL database or a VMware image file. rsync
does NOT have any mechanism to keep such a frequently modified data file
consistent with the source while it is being copied, and this can leave the
destination file corrupted. If I'm wrong, please correct me. Thanks.

--
Daniel Li
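If the image does have to be copied while it is live, one crude way to at
least detect (not prevent) the problem is to checksum the source before and
after the transfer: if the file changed mid-copy, the destination may be a
mixture of old and new blocks. A rough sketch, with hypothetical paths and a
plain copy standing in for the actual transfer:

    import hashlib
    import shutil

    def file_digest(path, chunk=1024 * 1024):
        """Whole-file SHA-1, read in chunks so huge images don't fill RAM."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    SRC = "/images/vm.img"      # hypothetical live image
    DST = "/backup/vm.img"      # hypothetical destination

    before = file_digest(SRC)
    shutil.copyfile(SRC, DST)   # stand-in for the real transfer (rsync, etc.)
    after = file_digest(SRC)

    if before != after:
        print("source changed during the transfer; the copy may mix old and new blocks")

The cleaner answer is usually to quiesce or snapshot the image first and sync
the snapshot, so the source cannot change underneath the copy.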