Interesting, I will try this.
What would be nice is an option to keep one file list on the nfs mount
and one locally, then compare the two and only copy over what has
changed. Even if I could just run something from cron every so often to
check the directory structure and update the file list... I have heard of
something called rsync++ that I think does something similar.
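Roughly what I have in mind for the cron idea is sketched below; the
state-file path and the use of ls/cmp are just placeholders I picked for
illustration, not something I have tested:

    #!/bin/sh
    # Sketch: snapshot the master tree's listing (names, sizes, mtimes),
    # compare it with the listing saved by the previous run, and only
    # call rsync when something has actually changed.
    SRC=/home/cvs/website/config        # nfs-mounted master copy
    DST=/website                        # local copy
    STATE=/var/db/website.filelist      # hypothetical state file

    ls -lR "$SRC" > "$STATE.new"

    if cmp -s "$STATE" "$STATE.new"; then
        rm "$STATE.new"                 # nothing changed, skip the copy
    else
        /usr/local/bin/rsync -az --delete --force "$SRC" "$DST" \
            && mv "$STATE.new" "$STATE"
    fi

That still walks the directory metadata over nfs on every run, but it avoids
touching file contents when nothing has changed, which seems to be what
drives the load.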
All we can really do right now, if the load gets too high, is to split the
load by having multiple rsync servers that constantly keep each other
updated, with the clients spread between the two.
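If we go the dedicated-server route, I am picturing something like the
rsyncd.conf below; the module name, paths, and connection limit are only
guesses on my part, not a working config:

    # /etc/rsyncd.conf on the dedicated rsync server (values are placeholders)
    uid = nobody
    gid = nobody
    use chroot = yes
    max connections = 5        # clients beyond this are refused and can retry
    log file = /var/log/rsyncd.log

    [website]
        path = /home/cvs/website
        comment = master copy of the website tree
        read only = yes

A second server could keep itself current from the first with the same client
command pointed at server1::website (server1 being whichever box holds the
master), and the webservers could then be split between the two.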
I will try your -W option, much appreciated. But as far as load goes, I was
unclear whether an rsync socket or nfs is faster and better load-average-wise.
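For the record, these are the two variants I am comparing, with your -W added
in the nfs case; "master" is just a placeholder name for a dedicated rsync
server, and the "website" module matches the sketch above:

    # current style: over the nfs mount, now copying whole files (-W), no checksum pass
    /usr/local/bin/rsync -azW --delete --force /home/cvs/website/config /website

    # alternative: pull from a dedicated rsync daemon instead of the nfs mount
    /usr/local/bin/rsync -az --delete --force master::website/config /website

My (possibly wrong) understanding is that with the daemon the delta work is
shared between both ends, while -W over nfs skips the checksums entirely but
copies every changed file in full; that trade-off is exactly what I am unsure
about load-wise.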
On Wed, 11 Apr 2001 [EMAIL PROTECTED] wrote:
> Date: Wed, 11 Apr 2001 16:30:34 -0500
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: rsync across nfs
>
> One thing you can do to decrease your load is to add the -W option. If
> you're reading it via nfs to do the checksumming, you have to read the
> entire file anyway, so you might as well just move the entire file,
> instead of wasting processor power and reading the entire file twice
> (or more, actually).
> The single-processor bsd machine would max out at one transfer at a
> time, probably. Using it as rsyncd, though, gives you the advantage of
> letting you use the "max connections" option, and having the individual
> machines retry until they succeed, thus controlling the load on the
> solaris machine. I am developing a similar solution for our system,
> where we have a single master copy of a set of tools, with identical
> copies all over the world that must be kept up to date.
>
> Tim Conway
> [EMAIL PROTECTED]
> 303.682.4917
> Philips Semiconductor - Colorado TC
> 1880 Industrial Circle
> Suite D
> Longmont, CO 80501
>
>
> [EMAIL PROTECTED]@[EMAIL PROTECTED] on 04/11/2001 03:08:28 PM
> Sent by: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]@SMTP
> cc:
> Subject: rsync across nfs
> Classification:
>
> I currently use rsync across an nfs mount.
> The nfs server is a sparc solaris machine,
> mounted by approximately 30 freebsd and 10 linux machines.
>
> When a typical rsync occurs to replicate data across all these
> machines, they all run:
>
> /usr/local/bin/rsync -az --delete --force /home/cvs/website/config /website
>
> where /home/cvs is an nfs mount and /website is just the local drive.
> The problem is that they all hit the solaris box at once, driving its load
> average as high as 75 for the 10-20 seconds that this takes.
>
> My question is: would I be better off taking a single-processor freebsd
> machine, running an rsync server on it (socket-type deal), and having all
> 40 or so machines connect that way... or would that be worse?
> I like having all my machines update at once; I am just looking for an
> efficient way to do it, where a) I could dedicate one box as just an
> rsync server, and b) a single-processor machine could actually handle
> that kind of load.
> My guess is I would probably have to stripe some drives together, as IO
> may be a problem with that many webservers connecting at once? Or will
> cpu be more of a factor?
>
> I have considered staggering the rsync times so that they don't all
> connect to the same server at once (this is all run from crontab, btw), but
> I like the way it currently is. Any suggestions would be much appreciated.
>
>
> --
> Dan
>
> +------------------------------------------------------+
> | BRAVENET WEB SERVICES                                 |
> | [EMAIL PROTECTED]                                     |
> | make installworld                                     |
> | ln -s /var/qmail/bin/sendmail /usr/sbin/sendmail      |
> | ln -s /var/qmail/bin/newaliases /usr/sbin/newaliases  |
> +______________________________________________________+
>