Martin> I'm dealing with a big and ugly filesystem that looks like this:
Martin> $ du -sk .
Martin> 1526500 .
Martin> $ find . -depth -print | wc -l
Martin> 152221
Welcome to the club! Is this filesystem local or NFS mounted? And
how are you sending the data to another filesystem? Also, which
version of rsync are you using?
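To check both, something like this should do (df -k is stock Solaris; I'm
assuming your rsync build understands --version, which it has for a while):

  $ df -k .           # an NFS mount shows up as server:/path in the Filesystem column
  $ rsync --version   # first line reports the version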
Martin> rsync seems to run into some 20M limit on this Slowaris 2.6
Martin> machine. CPU usage goes down to zero, 20M memory allocation,
Martin> no activity from rsync.
Can you do a 'truss -p' on all three sub-processes and let us know
what's happening? One thing to look for is the parent process looping in
waitid() all the time.
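Rough sketch, assuming the usual three rsync processes are still around
and visible in ps:

  $ ps -ef | grep '[r]sync'   # grab the PIDs of all three rsync processes
  $ truss -p <pid>            # one truss per PID; <pid> is a placeholder

A parent spinning in waitid() while the children sit idle is the pattern
to watch for.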
How is the swap space on your system? I'm on a Solaris 2.6 system and
I've seen rsync take up to 800MB of RAM building its file list and it
works just fine, but once the data starts transferring, it tends to
crap out at points.
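For what it's worth, 152,221 files shouldn't be anywhere near that: old
rsyncs need very roughly 100 bytes per file-list entry, so 152,221 x 100
is only about 15MB, which is suspiciously close to where you stall at
20MB. On Solaris 2.6 the stock tools will show you swap (the ulimit line
is just a guess at another place a ~20MB cap could come from):

  $ swap -s       # summary: allocated, reserved, used, available
  $ swap -l       # per-device swap listing
  $ ulimit -a     # in ksh: look for a per-process data/memory limit near 20MB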
Martin> This looks pretty much like the "out of memory" problem
It could be; go get the latest rsync and try that out instead.
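If you end up building it by hand, the usual configure/make dance works on
Solaris 2.6. The version number below is a placeholder; grab whichever
release is current from rsync.samba.org:

  $ gzip -dc rsync-x.y.z.tar.gz | tar xf -
  $ cd rsync-x.y.z
  $ ./configure && make
  # make install      (as root; installs under /usr/local by default)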
John
John Stoffel - Senior Unix Systems Administrator - Lucent Technologies
[EMAIL PROTECTED] - http://www.lucent.com - 978-952-7548