Hi everyone,
I recently read a thread about the problems people are having with file
systems that hold a large number of files. We have an 80 GB file system
with ~10 million files on it. Rsync runs out of memory on a machine with
512 MB of RAM while (I assume) reading in the list of files to send.
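A back-of-envelope check suggests that is about what you would expect:
the rsync README quotes roughly 100 bytes of memory per file for the
in-memory file list, so ~10,000,000 files * ~100 bytes works out to
around 1 GB of file-list data, roughly twice the RAM in the box.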
To avo
At 17:09 30/06/2002 -0700, you wrote:
>Olivier,
>
>> Well, the first comment: during my work, I wanted to verify that the
>> theoretical optimal block size sqrt(24*n/Q) given by Andrew Tridgell in
>> his PhD thesis was actually the right one, and when doing the tests on
>> randomly generated & modified
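For anyone who wants to experiment with that formula, here is a minimal
sketch (my own reading of it, not code from the thesis or from rsync):
if the signature costs about 24 bytes per block and each of the Q
differences forces roughly one block of length B to be resent, the total
overhead 24*n/B + Q*B is smallest where the two terms balance, which is
at B = sqrt(24*n/Q).

    #include <math.h>
    #include <stdio.h>

    /* Optimal block size per the sqrt(24*n/Q) formula quoted above.
     * n = file length in bytes, q = expected number of differences.
     * overhead(B) = 24*n/B + q*B, minimized at B = sqrt(24*n/q). */
    static double optimal_block_size(double n, double q)
    {
        return sqrt(24.0 * n / q);
    }

    int main(void)
    {
        /* e.g. a 10 MB file with 100 scattered changes */
        printf("B ~ %.0f bytes\n", optimal_block_size(10e6, 100));
        return 0;
    }

(Compile with -lm; for a 10 MB file with 100 changes this gives a block
size of about 1550 bytes.)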
I have compiled rsync on Mac OS X (not sure of the OS X version, but
uname -a says it's Darwin 5.5) but haven't tried running it as a daemon.
I suggest that you try to debug it further. setgroups() is called in
only one place, in clientserver.c, with the parameters
setgroups(0, NULL), and only if HAVE_SETGROUPS is defined.
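For reference, the pattern being described would look something like
this (a sketch of the shape, not the verbatim rsync source; the helper
name is mine):

    #include <grp.h>      /* setgroups() on Linux */
    #include <unistd.h>   /* setgroups() on some BSD-derived systems */
    #include <stdio.h>
    #include <stdlib.h>

    /* Drop all supplementary groups before serving clients, but only
     * on platforms where configure defined HAVE_SETGROUPS. */
    static void drop_supplementary_groups(void)
    {
    #ifdef HAVE_SETGROUPS
        if (setgroups(0, NULL) != 0) {
            perror("setgroups");
            exit(1);  /* refuse to run with leftover group privileges */
        }
    #endif
    }

    int main(void)
    {
        drop_supplementary_groups();
        return 0;
    }

One thing worth ruling out while debugging: setgroups() fails with
EPERM when the process is not running as root, so check which user the
daemon is actually started as.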