Re: Rsyncing really large files

2005-02-28 Thread Lars Karlslund
Hi everyone, thank you for your replies. On Thu, 2005-02-24 at 17:59 +0100, Paul Slootman wrote: > It would certainly be possible to change the algorithm to not cache the > data (and thus only allow the current block to be compared), but I don't > think that idea has general enough interest

Re: Rsyncing really large files

2005-02-28 Thread Shachar Shemesh
Lars Karlslund wrote: Also as far as I could read, the default block size is 700 bytes? What kind of application would default to moving data around 700 bytes at a time internally in a file? I'm not criticizing rsync, merely questioning the functionality of this feature. I believe you may have m

Re: Rsyncing really large files

2005-02-28 Thread Lars Karlslund
On Mon, 2005-02-28 at 12:23 +0200, Shachar Shemesh wrote: > Also as far as I could read, the default block size is 700 bytes? What > kind of application would default to moving data around 700 bytes at a > time internally in a file? I'm not criticizing rsync, merely > questioning the funct

Re: Rsyncing really large files

2005-02-28 Thread Shachar Shemesh
Lars Karlslund wrote: Maybe I didn't express myself thoroughly enough :-) Or me. Yes, a block is a minimum storage unit, which is considered for transfer. In size, yes. Not in position. But it's a fact that the rsync algorithm as it is now checks to see if a block should have moved. And in that c
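
A minimal C sketch of the rolling weak checksum that makes this per-byte-offset matching affordable may help here; the names and the init/step/digest split are illustrative assumptions, not rsync's actual checksum code:

    #include <stddef.h>
    #include <stdint.h>

    /* Rolling weak checksum over a window of 'len' bytes.  Because it can
     * be slid one byte at a time in O(1), the sender can test every byte
     * offset of its file against the receiver's block checksum list. */
    typedef struct {
        uint32_t s1;   /* sum of the bytes in the window */
        uint32_t s2;   /* positionally weighted sum (sum of running s1) */
    } rolling_sum;

    /* Checksum of the initial window buf[0 .. len-1]. */
    static rolling_sum roll_init(const unsigned char *buf, size_t len)
    {
        rolling_sum r = {0, 0};
        for (size_t i = 0; i < len; i++) {
            r.s1 += buf[i];
            r.s2 += r.s1;
        }
        return r;
    }

    /* Slide the window one byte: drop 'out', append 'in'. */
    static void roll_step(rolling_sum *r, unsigned char out,
                          unsigned char in, size_t len)
    {
        r->s1 = r->s1 - out + in;
        r->s2 = r->s2 - (uint32_t)len * out + r->s1;
    }

    /* Pack the two halves into the 32-bit weak checksum. */
    static uint32_t roll_digest(const rolling_sum *r)
    {
        return (r->s1 & 0xffff) | (r->s2 << 16);
    }

Sliding the window costs only a couple of additions and subtractions, which is why checking for a block match at every byte position is feasible at all.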

Re: Rsyncing really large files

2005-02-28 Thread Shachar Shemesh
Shachar Shemesh wrote: No, because the rsync algorithm can detect single-byte moves of this 700-byte block. I will just mention that I opened the ultimate documentation for rsync (the source), and it says that the default block size is the rounded square root of the file's size. This means that
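
A small C sketch of that sizing rule, for illustration; treating the old 700-byte default as a lower bound is an assumption here, not something the message states:

    #include <math.h>
    #include <stdint.h>

    #define MIN_BLOCK_SIZE 700   /* historical fixed default, used as a floor */

    /* Default block length: the rounded square root of the file size. */
    static unsigned int default_block_size(int64_t file_len)
    {
        unsigned int blen = (unsigned int)(sqrt((double)file_len) + 0.5);
        return blen < MIN_BLOCK_SIZE ? MIN_BLOCK_SIZE : blen;
    }

Under this rule a 1 GB file gets blocks of roughly 32 KB rather than 700 bytes, so the number of blocks (and the size of the checksum list) grows only with the square root of the file size.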

[Bug 2395] problems copying from a dir that includes a symlink in the path

2005-02-28 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=2395 --- Additional Comments From [EMAIL PROTECTED] 2005-02-28 11:15 --- Your script is creating a symlink to a non-existent file, which can't be copied with -L. If you change every instance of "referent" to "file", the script will work fine

Re: Rsyncing really large files

2005-02-28 Thread Shachar Shemesh
Wayne Davison wrote: However, you should be sure to have measured what is causing the slowdown first to know how much that will help. If it is not memory that is swapping on the sender, it may be that the computing of the checksums is maxing out your CPU, and removing the caching of the remote che

re[2]: Rsyncing really large files

2005-02-28 Thread Kevin Day
Shachar, it does use a hash table.  rsync adds the two components of the rolling checksum together to come up with a 16-bit hash, and performs a table lookup.  The table contains an offset into the sorted rsync checksum table (which contains both weak and strong checksums), and does a line
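
A one-function C sketch of that 16-bit tag; the function name is invented for illustration and is not copied from rsync's source:

    #include <stdint.h>

    /* Fold the 32-bit rolling checksum (s1 in the low half, s2 in the
     * high half) into a 16-bit index for the lookup table. */
    static inline uint16_t weak_tag(uint32_t weak_sum)
    {
        return (uint16_t)((weak_sum & 0xffff) + (weak_sum >> 16));
    }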

Re: Rsyncing really large files

2005-02-28 Thread Wayne Davison
On Mon, Feb 28, 2005 at 08:33:52PM +0200, Shachar Shemesh wrote: > If so, we can probably make it much much (much much much) more > efficient by using a hash table instead. That's what "tag_table" is -- it's an array of 65536 pointers into the checksum data sorted by weak checksum. The code then
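
A rough C sketch of how such a 65536-entry table can be built over the sorted block checksums; the struct layout and function names are assumptions for illustration, not rsync's actual match.c:

    #include <stdint.h>
    #include <stdlib.h>

    #define TABLE_SIZE 65536

    struct target {
        uint16_t tag;    /* 16-bit tag folded from the weak checksum */
        uint32_t weak;   /* full 32-bit rolling checksum of the block */
        int32_t  index;  /* block number in the receiver's file */
        /* the strong checksum of the block would also live here */
    };

    /* tag_table[t] is the index of the first sorted entry with tag t,
     * or -1 if no block hashes to that tag. */
    static int32_t tag_table[TABLE_SIZE];

    static int cmp_tag(const void *a, const void *b)
    {
        const struct target *ta = a, *tb = b;
        return (int)ta->tag - (int)tb->tag;
    }

    static void build_tag_table(struct target *targets, int32_t n)
    {
        qsort(targets, (size_t)n, sizeof targets[0], cmp_tag);

        for (int32_t t = 0; t < TABLE_SIZE; t++)
            tag_table[t] = -1;
        for (int32_t i = n - 1; i >= 0; i--)
            tag_table[targets[i].tag] = i;
    }

At match time the sender computes the tag of the current rolling checksum, jumps straight to tag_table[tag], and scans the short run of entries sharing that tag, comparing the full weak checksum and only then the strong checksum.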

[Bug 2395] problems copying from a dir that includes a symlink in the path

2005-02-28 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=2395 --- Additional Comments From [EMAIL PROTECTED] 2005-02-28 13:01 --- (In reply to comment #6) I will now go crawl into a hole.

Limit the total bytes transferred?

2005-02-28 Thread Michael Best
Has anyone got a method for limiting the total number of bytes transferred with rsync? I was thinking of running with -n and then using the output to check how much would be transferred. I ask because a client had a broken filesystem that occasionally has 2T+ files on it (broken filesystem, so the

Re: Limit the total bytes transferred?

2005-02-28 Thread Wayne Davison
On Mon, Feb 28, 2005 at 05:24:36PM -0700, Michael Best wrote: > I ask because a client had a broken filesystem that occasionally has > 2T+ files on it (broken filesystem, so they weren't actually that big) > but we happily ran up a huge b/w bill with rsync. Rsync 2.6.4 has a new option, --max-size

rsync + ssh -o -p -g -l

2005-02-28 Thread michael mendoza
Hi, I need to move 20 GB of data from an old computer to a new server, and I need the permissions of user, group, other and the symlinks to be the same on the new server. I tried with rsync -avzpogl -e ssh archivoOrigen [EMAIL PROTECTED]:/dir2/ but when I look on the new server, the data doesn't have the sam

Re: rsync + ssh -o -p -g -l

2005-02-28 Thread Wayne Davison
On Mon, Feb 28, 2005 at 10:51:32PM -0600, michael mendoza wrote: > rsync -avzpogl -e ssh archivoOrigen [EMAIL PROTECTED]:/dir2/ > but when I look on the new server, the data doesn't have the same > permissions of owner and group as the old server. The problem is that rsync needs to be running as root in ord

Rsync 2.6.4pre2 released

2005-02-28 Thread Wayne Davison
I've released rsync 2.6.4pre2. It was a little sooner than expected, but I wanted to get the new keep-alive packets out for testing. Plus, one backward-compatibility bug had the potential to be rather annoying. The changes since 2.6.3 are here: http://rsync.samba.org/ftp/rsync/preview/rsync

Re: Rsyncing really large files

2005-02-28 Thread Craig Barratt
Lars Karlslund writes: > Also the numbers speak for themselves, as the --whole-file option is > *way* faster than the block-copy method on our setup. At the risk of jumping into the middle of this thread without remembering everything that was discussed... Remember that by default rsync writes a

Re: Rsyncing really large files

2005-02-28 Thread Shachar Shemesh
Kevin Day wrote: I would *strongly* recommend that you dig into the thesis a bit (just the section that describes the rsync algorithm itself). I tried a few weeks ago. I started to print it, and my printer ran out of ink :-). I will read it electronically eventually (I hope). Now, if you have hu