Hello everyone!

I'm dealing with a big and ugly filesystem that looks like this:

    $ du -sk .
    1526500 .
    $ find . -depth -print | wc -l
    152221

rsync seems to run into some 20M limit on this Slowaris 2.6 machine: CPU usage goes down to zero, 20M of memory stays allocated, and there is no further activity from rsync. This looks pretty much like the "out of memory" problem outlined in the FAQ.

Is there a generic workaround available for this problem? Maybe some kind of "blocking" within rsync? Dealing with batches of several hundred files would be absolutely fine for me.

Any hints welcome. Thanks in advance,

-martin
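P.S.: To illustrate the kind of batching I have in mind, a per-top-level-directory loop along these lines would be fine. This is only a sketch; "desthost" and the /src/path and /dest/path paths are placeholders, not real hosts or paths:

    #!/bin/sh
    # Run rsync once per top-level subdirectory, so that each
    # invocation only builds an in-memory file list for that subtree
    # instead of for all ~150,000 files at once.
    cd /src/path || exit 1
    for d in */ ; do
        [ -d "$d" ] || continue      # skip if no subdirectories match
        # -a: archive mode; -R (--relative): recreate "$d" under the
        # destination path rather than merging its contents into it.
        rsync -aR "$d" desthost:/dest/path/ || echo "failed: $d" >&2
    done
    # Files sitting directly in /src/path itself are not covered by
    # this loop and would still need one extra rsync pass.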
- Re: The "out of memory" problem with large numbe... Schmitt, Martin
- Re: The "out of memory" problem with large ... John Stoffel
- Re: The "out of memory" problem with la... Dave Dykstra
- RE: The "out of memory" problem with large ... Schmitt, Martin
- RE: The "out of memory" problem with large ... Schmitt, Martin
- Re: The "out of memory" problem with la... Dave Dykstra
- RE: The "out of memory" problem with large ... Schmitt, Martin
- RE: The "out of memory" problem with la... John Stoffel
- The "out of memory" problem with large numb... Lenny Foner
- Re: The "out of memory" problem with la... Dave Dykstra
- RE: The "out of memory" problem with large ... David Bolen