Wayne Davison wrote:

> The best way to do this is to use a starting dir, the --recursive
> option, and then exclude all subdirs:
>
> rsync -av --exclude='*/' host::module/path/ .

That works (thanks), but isn't exactly intuitive ;-)


> Since the arg-list created for file globbing is allocated memory extra
> to the file list that will be created for the transfer, it's certainly
> more efficient to use the aforementioned exclusion method.  The current
> limit of 1000 files seems rather arbitrary, but I'm not sure that it
> really needs to be made larger than this (given that there are ways to
> work around the limit).  You can feel free to disagree, if you like --
> let me know your reasons, if you do.

How is that more efficient? The glob code doesn't have to descend into any subdirs, and will only expand to the list of required files, rather than having to expand the full recursive list and then filter it with excludes.
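
To make that concrete, here's a minimal standalone sketch using POSIX glob(3) rather than rsync's internal globbing code ("path/*" is just a placeholder pattern):

#include <glob.h>
#include <stdio.h>

int main(void)
{
    glob_t g;
    /* A single-level pattern like "path/*" matches only the entries
     * directly inside "path"; glob(3) never descends into
     * subdirectories for it, so the result is already the exact
     * argument list, with nothing left to filter out afterwards. */
    if (glob("path/*", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; i++)
            printf("%s\n", g.gl_pathv[i]);
        globfree(&g);
    }
    return 0;
}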


If you are really worried about memory you could malloc, say, a 64-entry argv and realloc by powers of 2 - that would actually use *less* memory than the current mechanism in the default case of relatively small directories.
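
As a rough sketch of that growth strategy (illustrative names, not rsync's actual code or a proposed patch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Grow-on-demand argv: start at 64 slots, double with realloc. */
static char **args;
static int args_alloc;   /* slots allocated */
static int args_used;    /* slots filled */

static void argv_add(const char *arg)
{
    if (args_used >= args_alloc) {
        int n = args_alloc ? args_alloc * 2 : 64;
        char **tmp = realloc(args, n * sizeof *tmp);
        if (!tmp) {
            perror("realloc");
            exit(1);
        }
        args = tmp;
        args_alloc = n;
    }
    args[args_used++] = strdup(arg);
}

int main(void)
{
    char name[32];
    for (int i = 0; i < 5000; i++) {   /* well past the current 1000 cap */
        snprintf(name, sizeof name, "file%04d", i);
        argv_add(name);
    }
    printf("%d args in %d slots\n", args_used, args_alloc);
    return 0;
}

In the common case of a directory with fewer than 64 entries this never reallocates at all, so it uses less memory than a fixed array, and there is no hard cap for large directories.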

The root of my concern is that people with a large leaf directory wouldn't expect to have to specify --recursive and --exclude just to copy its contents, and the diagnostics returned when it fails are completely opaque - in this case I had to resort to a debugger to find the root cause. I can't see that it's reasonable to expect people to have to debug rsync to discover that their directory is too big for rsync to cope with.

The problem can be made to go away entirely with a relatively minor code change, which would remove one more cause of the "rsync: connection unexpectedly closed" errors that people hit so often. I'm also happy to submit a patch, so minimal work is involved for anyone else.

--
Alan Burlison