I can confirm this. I'm currently using rsync to back up several GB of dynamic files every
night. It works. It just prints a message to stderr for each unreadable file (those that
were moved or deleted between the file-list creation and the actual file copy).
I get the backup report by email every day and I
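In case it helps, the nightly job is nothing fancy; a stripped-down sketch of that
kind of cron wrapper could look roughly like this (the paths, the rsync options and
the mail address are only placeholders, not my exact setup):

#!/bin/sh
# nightly rsync backup: copy /src/ to /backup/, keep a log of everything
# rsync prints (stdout and the stderr warnings about files that vanished
# during the run), then mail that log as the daily report
LOG=/var/log/backup-$(date +%Y%m%d).log
rsync -a --delete /src/ /backup/ > "$LOG" 2>&1
mail -s "nightly backup report" admin@example.com < "$LOG"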
My own tests on this subject (as I face the same situation) were done with 600K files
in a single directory tree, with 6 subdirs of 100K files each (I expect I'll need
to handle more than 5M with our increasing production rates).
I found that even a simple 'find . -type f | wc -l' would take up t
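For anyone who wants to try the same thing, a throwaway script along these lines can
build a similar tree and time the plain enumeration (empty files with arbitrary names;
this is just an illustration, not my exact test setup):

#!/bin/sh
# build a test tree of 6 subdirs with 100K empty files each,
# then time a plain enumeration of the whole tree
for d in 1 2 3 4 5 6; do
    mkdir -p sub$d
    ( cd sub$d && seq 100000 | xargs touch )
done
time find . -type f | wc -l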
> I've noticed that -v can cause rsync to freeze.
>
> The tests I've made have shown that the more "v"s I use, the more
> problems happen.
>
> I'm now using this command without any problems:
> rsync -axW --progress --stats --delete --exclude "lost+found" /src/
> /target/
>
>
> On Tuesday 06 March 2001 13:09, you wrote:
> > Hi all,
> >
> > I'm new to the mailing list. I ran into trouble trying to sync servers
> > with millions of files ... does anyone have any previous experience on
> > this subject?
> >
> > In most cases, it ends with something like: "unexpected EOF in
> > read_timeout".
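PS: regarding the -v observation quoted above, a quick and dirty way to see the effect
for yourself is to time the same transfer at increasing verbosity levels. src/ and
target/ are placeholders, and each pass after the first has nothing left to copy, so
only the list-building and bookkeeping overhead is being compared:

# time the same rsync at increasing verbosity; the copy itself only
# happens on the first pass, the rest mostly measures the overhead
for v in "" -v -vv -vvv; do
    echo "verbosity: '$v'"
    time rsync -axW $v --stats --delete --exclude "lost+found" /src/ /target/
done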
Hi all,
I'm new to the mailing list. I ran into trouble trying to sync servers with
millions of files ... does anyone have any previous experience on this
subject?
In most cases, it ends with something like: "unexpected EOF in
read_timeout".
The servers I'm trying to sync have between 1 and 3 million files.