John,
Wayne's fixes solved the 2.4.6 hangs I had been seeing for a long time.
Actually, I'd been running 2.3.2 because the 2.4.6 hangs were killing
me... Now I'm slowly moving all my apps up to the 2.4.6 + Wayne's-patch
version of rsync. I hope his patch (or something similar) gets
included in the official release.
> Remi Laporte wrote:
> > Rsync 2.4.6's -v option is buggy and causes transfers to hang, so if
> > you have this problem, first try removing the -v.
On Thu, 7 Jun 2001, John E. Mayorga wrote:
> Can anyone else confirm this?
My recently posted anti-hang patch should hopefully fix this for you.
I've been getting this problem, as mentioned in a prior post:
@ERROR: max connections (16) reached - try again later
even after 1 connection, and after killing that connection. But it
appears to be a problem in Linux kernel 2.4.0; an upgrade to 2.4.5
clears it up.
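For context, that @ERROR comes from the daemon's own limit set in
rsyncd.conf; rsync tracks the connection count through the module's lock
file, which is why a stale count can linger after a kill. A minimal
illustrative fragment (module name and paths are made up, not from the
post):

```
# rsyncd.conf fragment -- illustrative values only
[backup]
    path = /srv/backup
    max connections = 16
    # rsync counts active connections via this lock file
    lock file = /var/run/rsyncd.lock
```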
--
If that stores the examined file name/status in the file INSTEAD
of in memory, this could work. If it memory maps that file, it
could still end up being a problem at a later date (this whole
thing could end up being scaled up to 50x the size if it gets
committed, and a solid backup would be crucial).
All,
Can anyone else confirm this? I've been getting aborts, and I just tried
taking out the "-v", but the stats are not always generated by the "--stats"
option, so I think it is still aborting.
Thanks,
John
Remi Laporte wrote:
> Hi all,
>
> Rsync 2.4.6's -v option is buggy and causes transfers to hang, so if
> you have this problem, first try removing the -v.
I got around this by using --temp-dir=/SomeLargeFreeDir
But again - it's getting around it, not fixing it.
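A sketch of that workaround (the destination host and paths are made up;
the option's full spelling is --temp-dir, short form -T, which puts
rsync's temporary files on a filesystem with more free space):

```
rsync -av --temp-dir=/SomeLargeFreeDir /source/ host:/dest/
```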
Dale
--- [EMAIL PROTECTED] wrote:
> I'm seeing this problem too, and the problem isn't the
> amount of data or number of files transferred. It's the
> number of files examined. It'll fail rsyncing two already
> identical enormous directory structures.
I had the same error; this is what fixed it for me:
/usr/bin/rsync -avc --stats --timeout=0 \
    Hostname:/source/ /TargetOnLocalHost
Note the --timeout=0, which disables rsync's I/O timeout. You can also
speed the process up by dropping the c from -avc; -c forces a full
checksum of every file, while plain -a only compares size and
modification time.
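For comparison, the faster variant without the checksum pass would look
like this (same hypothetical host and paths as in Dale's command above):

```
/usr/bin/rsync -av --stats --timeout=0 \
    Hostname:/source/ /TargetOnLocalHost
```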
Hope this helps
Dale
--- Wileczek_Mickaël <[EMAIL PROTECTED]> wrote:
I'm seeing this problem too, and the problem isn't the amount of data or number of
files transferred. It's the number of files examined. It'll fail rsyncing two
already identical enormous directory structures.
Tim Conway
[EMAIL PROTECTED]
303.682.4917
Philips Semiconductor - Colorado TC
1880
I am trying to rsync 2 GB file systems with lots of tiny files through a
slow link. Approximately 2 hours after the beginning, the rsync process
stops with the error 'unexpected EOF in read_timeout', leaving an
rsh (rsync) process on the distant machine. I've seen such a
problem several times on the
> I'm going to try to split things up into multiple rsync runs. The
> great difficulty with this is that names at the first level I could
> split on are changing, meaning I am going to have to manually handle
> this about 2 or 3 times a week. It should be an automated thing
> that just does the job.
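The per-directory splitting described above can be sketched as a small
shell loop, not the poster's actual script: one rsync invocation per
top-level directory keeps each run's examined-file list small. All
paths, the host name, and the timeout value are hypothetical; the third
argument lets you pass "echo" to preview the commands without
transferring anything.

```shell
#!/bin/sh
# sync_split SRC DEST [RUNNER]
# Runs one transfer per top-level directory under SRC, so no single
# rsync has to examine the whole tree at once.
sync_split() {
    # $3 defaults to rsync; pass "echo" to just print the commands
    runner=${3:-rsync}
    for d in "$1"/*/; do
        [ -d "$d" ] || continue        # the glob may match nothing
        name=$(basename "$d")
        "$runner" -a --timeout=600 "$d" "$2/$name/"
    done
}

# Preview the per-directory commands for a hypothetical tree:
sync_split /data/src remotehost:/data/dest echo
```

The trade-off is that files at the very top level of SRC are skipped and
per-run delta state is not shared, but each failure only aborts one
subdirectory instead of the whole transfer.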