We went over this about 5 months ago... pushing data is way slower than
getting it... that is why this approach was taken.
But that is a good idea, as server load just gets hit too hard.
On Wed, 11 Apr 2001, Eric Whiting wrote:
> Date: Wed, 11 Apr 2001 21:08:51 -0600
> From: Eric Whiting <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: Re: rsync across nfs
>
> I have a similar setup and goal as Tim.
>
> As another approach to the problem, I've been pushing the syncs out to
> the destinations rather than having the destinations pull the data. I
> have 2G RAM on the source box (Solaris with a NetApp disk) and I push
> the data via rsync/ssh to the destinations in parallel with a simple
> perl script that forks a child per destination (and loops over a few
> top-level dirs).
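Eric's perl pusher isn't shown, but the fork-per-destination idea can be sketched in plain shell. The host names, paths, and the `push` stub below are placeholders; in the real job the stub would be the rsync/ssh invocation shown in the comment.

```shell
#!/bin/sh
# Sketch of the push-in-parallel approach: one child per destination.
# HOSTS, SRC and DEST are made-up placeholders for this example.
HOSTS="web01 web02 web03"
SRC=/home/cvs/website/config
DEST=/website

push() {
    # stand-in for the real transfer, e.g.:
    #   rsync -az --delete -e ssh "$SRC" "$1:$DEST"
    echo "pushing $SRC to $1:$DEST"
}

for h in $HOSTS; do
    push "$h" &    # fork one push per destination host
done
wait               # block until every child has finished
```

The `wait` at the end is what keeps the script honest: without it, cron would consider the job done while transfers were still in flight.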
>
> That doesn't really relate to Dan's question, does it? Oh well, I
> guess my opinion is that if you can rsync using rsh/ssh (push or pull)
> you will be much happier than if you have to use NFS.
>
>
> eric
>
> [EMAIL PROTECTED] wrote:
> >
> > One thing you can do to decrease your load is to add the -W option. If
> > you're reading it via NFS to do the checksumming, you have to read the
> > entire file anyway, so you might as well just move the entire file,
> > instead of wasting processor power and reading the entire file twice
> > (or more, actually).
> > The single-processor BSD machine would max out at one transfer at a
> > time, probably. Using it as rsyncd, though, gives you the advantage of
> > letting you use the "max connections" option and having the individual
> > machines retry until they succeed, thus controlling the load on the
> > Solaris machine. I am developing a similar solution for our system,
> > where we have a single master copy of a set of tools, with identical
> > copies all over the world that must be kept up to date.
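The throttle Tim describes might look something like the sketch below. The module name, connection limit, and paths are hypothetical, and the `try_sync` stub (which always fails here, so the loop is exercised) stands in for the real client-side rsync call shown in the comment.

```shell
#!/bin/sh
# Hypothetical rsyncd.conf module with a connection cap; clients past
# the limit are refused and simply retry until a slot opens up.
cat > /tmp/rsyncd.conf <<'EOF'
[website]
    path = /home/cvs/website
    read only = yes
    max connections = 5
    lock file = /tmp/rsyncd.lock
EOF

# Client-side retry loop. try_sync is a stub for the real call, e.g.:
#   rsync -az --delete rsync://server/website/config /website
try_sync() { false; }    # always "refused" here, so the loop retries
tries=0
until try_sync || [ "$tries" -ge 3 ]; do
    tries=$((tries + 1))
    sleep 1              # back off a little before the next attempt
done
```

In the real setup the retry loop lives in the cron job on each client; the daemon's "max connections" does the actual load limiting.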
> >
> > Tim Conway
> > [EMAIL PROTECTED]
> > 303.682.4917
> > Philips Semiconductor - Colorado TC
> > 1880 Industrial Circle
> > Suite D
> > Longmont, CO 80501
> >
> > [EMAIL PROTECTED]@[EMAIL PROTECTED] on 04/11/2001 03:08:28 PM
> > Sent by: [EMAIL PROTECTED]
> > To: [EMAIL PROTECTED]@SMTP
> > cc:
> > Subject: rsync across nfs
> > Classification:
> >
> > I currently use rsync across an nfs mount.
> > This nfs server is a sparc solaris machine
> > mounting to approx 30 freebsd and 10 linux machines.
> >
> > When a typical rsync occurs to replicate data across all these
> > machines, they all run:
> > /usr/local/bin/rsync -az --delete --force /home/cvs/website/config /website
> >
> > where /home/cvs is an nfs mount and /website is just the local drive.
> > Problem is they all hit the Solaris box at once, driving its load
> > average as high as 75 for the 10-20 seconds that this occurs.
> >
> > My question is: would I be better off taking a single-processor FreeBSD
> > machine, running an rsync server (socket-type deal), and getting all 40
> > machines or so to connect that way... or would that be worse?
> > I like having all my machines update at once... just looking for an
> > efficient way, so that a) I could even dedicate one box as just an
> > rsync server, and b) a single-processor machine could actually handle
> > that kind of load. My guess is I would probably have to stripe some
> > drives together, as I/O may be a problem with that many webservers
> > connecting at once. Or will CPU be more of a factor?
> >
> > I have considered changing the rsync times around so that they don't
> > all connect to the same server at once (this is done from crontab,
> > btw), but I like the way it currently is. Any suggestions would be
> > much appreciated.
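One low-tech way to get the effect Dan mentions without hand-editing 40 crontabs: leave every cron entry at the same minute, but have each client sleep a random amount first, so the connections land spread over a window. The 300-second window below is an arbitrary choice, and the real sleep/rsync lines are left commented so the sketch runs standalone.

```shell
#!/bin/bash
# Random "splay" before the pull: every client still fires from cron
# at the same minute, but arrives at the server at a random offset
# within a 5-minute window, smoothing the load spike.
splay=$(( RANDOM % 300 ))        # 0-299 seconds, different per machine
echo "sleeping ${splay}s before rsync"
# sleep "$splay"                 # uncomment for the real cron job
# /usr/local/bin/rsync -az --delete --force /home/cvs/website/config /website
```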
> >
> > --
> > Dan
> >
> > +------------------------------------------------------+
> > | BRAVENET WEB SERVICES |
> > | [EMAIL PROTECTED] |
> > | make installworld |
> > | ln -s /var/qmail/bin/sendmail /usr/sbin/sendmail |
> > | ln -s /var/qmail/bin/newaliases /usr/sbin/newaliases |
> > +______________________________________________________+
>