>In the exchange I argued that proper use of ram as a buffer would have cut down backup time to minutes instead of days.
could you give an example where rsync is slowing things down so much due to ram constraints or inefficient ram use?
please mind that disk bandwidth and file copy bandwidth is no
was brought up before indeed:
> https://lists.samba.org/archive/rsync/2012-June/027680.html
>
> On 12/30/18 9:50 PM, devzero--- via rsync wrote:
> There have been addons to rsync in the past to do that but rsync really
> isn't the correct tool for the job.
why not correct tool ?
if rsync can greatly keep two large files in sync between source and destination
(using --inplace), why should it (generally speaking) not also be used to keep
two
But don't forget --inplace, otherwise snapshots would not be efficient
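What --inplace buys here, as a hedged sketch (hypothetical paths and host; --no-whole-file forces the delta algorithm even where rsync would default to copying whole files):

```shell
# Sync a large image so that only the changed blocks of the existing
# destination file are rewritten; a copy-on-write snapshot of the
# destination (ZFS/btrfs) then only has to keep those blocks.
rsync -a --inplace --no-whole-file /vm/disk.img backuphost:/backup/
```

Without --inplace, rsync builds a full temporary copy and renames it over the old file, so every snapshot would reference a completely new file.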
> Sent: Wednesday, 18 July 2018, 21:53
> From: "Kevin Korb via rsync"
> To: rsync@lists.samba.org
> Subject: Re: link-dest and batch-file
>
> If you are using ZFS then forget --link-dest. Just rsync to the same
> zfs
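The workflow being suggested can be sketched like this (pool and dataset names are made up):

```shell
# Sync into the same ZFS dataset every run, then snapshot it;
# the snapshots replace the --link-dest hardlink forest.
rsync -a --inplace --delete /data/ /tank/backup/data/
zfs snapshot tank/backup@$(date +%F)
```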
>The difference is not crazy. But the find itself takes so much time !
38m for a find across 2.8m files looks a little bit slow; i'm getting 14k lines/s when doing "find . | pv -l -a >/dev/null" in my btrfs volume located via iscsi on a synology storage (3.5" ordinary sata disks) - while
most likely, you are overloading your NAS with random disk IOPS. furthermore, iSCSI is an additional throttle here, making things worse.
your issue is probably centered around metadata reads/latency...
have a look at IO-wait on the server/nas side...
regards
roland
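One way to eyeball IO-wait on the Linux side, as a minimal sketch (reads the aggregate "cpu" line of /proc/stat twice; field 6 after the label is iowait, in clock ticks):

```shell
#!/bin/bash
# Sample /proc/stat one second apart and report the iowait share of
# the elapsed CPU time (Linux only).
read -r _ u1 n1 s1 i1 w1 rest < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 rest < /proc/stat
total=$(( (u2+n2+s2+i2+w2) - (u1+n1+s1+i1+w1) ))
wait=$(( w2 - w1 ))
echo "iowait: $wait of $total ticks"
```

tools like iostat or vmstat give the same numbers with less typing; this only shows where they come from.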
Sent: Wednesday
> That is what rsync says at the end of the run in case you missed the
> file names fly by in the other output. The names should be in the rest
> of the output from when rsync found the problem.
>
> On 11/15/2017 04:53 AM, devzero--- via rsync wrote:
> > Hi !
> >
> > I`
Hi !
I'm getting "rsync warning: some files vanished before they could be
transferred (code 24) at main.c(1518) [generator=3.0.9]" on one of my systems
i'm backing up with rsync, but rsync doesn't show WHICH files.
Does anybody have a clue under which circumstances rsync doesn't show these ?
What's the value of "i" when this happens and what are the system ulimit values
for the user running that?
Roland
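A workaround sketch: with -v enabled, rsync prints one "file has vanished: ..." line per affected file on stderr before the code-24 summary, so keeping stderr in a log lets you grep the names afterwards. The printf below just stands in for a real run (e.g. rsync -av /source/ /backup/ 2>rsync.err):

```shell
# Simulated stderr of a run that hit vanished files:
printf '%s\n' \
  'file has vanished: "/source/tmp/cache.tmp"' \
  'rsync warning: some files vanished before they could be transferred (code 24)' \
  > rsync.err
# The summary does not repeat the names, so pull them from the log:
grep '^file has vanished: ' rsync.err
```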
> Sent: Friday, 14 April 2017, 19:22
> From: "Boris Savelev via rsync"
> To: rsync@lists.samba.org
> Subject: rsync buffer overflow detected
>
> Hello!
>
> I use rsync
interesting.
apparently, there is some logic inside which checks for change during save.
i also found this one:
https://git.samba.org/rsync.git/?p=rsync.git;a=commit;h=e2bc4126691bdbc8ab78e6e56c72bf1d8bc51168
i'm curious why it doesn't handle my test-case then.
regards
roland
> Sent: Mon
>http://olstrans.sourceforge.net/release/OLS2000-rsync/OLS2000-rsync.html
But the filename appearing twice can happen under other circumstances; if you've seen this happen, it's almost certainly because the file changed during transfer. Rsync does no locking, which means that if you are modifying a file
for pre-3.0.9, which is still standard in centos7 with recent updates, --stats
shows neither the number of deleted nor of added files
On 17 December 2016 18:06:56 CET, Kevin Korb wrote:
>--stats has most of that information in it.
>
>On 12/17/2016 08:01 AM, devz...@web.de wrote:
>> is there a sc
is there a script which analyses rsync output with --itemize-changes ?
i.e. i would like to have extended information on the number of deleted files,
created directories, changed files
i know rsync 3.1.x is better with this, but it's still not in centos 5/6/7 and
i don't want to update tons of
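No such script turned up in the thread, but a minimal awk sketch over "rsync -ai" output shows the idea (the itemize strings below are hand-made samples; field widths vary slightly between rsync versions):

```shell
#!/bin/bash
# Summarize "rsync -ai --delete src/ dst/" output: in the %i string,
# column 1 is the update type (> transfer, c create, * message like
# "deleting") and column 2 is the file type (f file, d dir).
summarize() {
  awk '
    /^\*deleting/            { del++ }
    /^cd/                    { newdir++ }
    /^>f\+\+\+\+\+\+\+\+\+/  { newfile++; next }
    /^>f/                    { changed++ }
    END { printf "deleted=%d newdirs=%d newfiles=%d changed=%d\n",
                 del, newdir, newfile, changed }
  '
}
# Sample input standing in for a real run:
printf '%s\n' \
  'cd+++++++++ somedir/' \
  '>f+++++++++ somedir/new.txt' \
  '>f.st...... changed.txt' \
  '*deleting   gone.txt' | summarize
```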
try archivemount or squashfs
On 8 December 2016 11:43:07 CET, Simon Hobson wrote:
>Ed Peschko wrote:
>
>> As it stands right now, we use xz for our compression, so if rsync
>had
>> a similar option for xz that would probably be an improvement.
>
>Have xz as an option for what ?
>As others hav
does it crash reproducibly at the same file or is it random?
On 2 December 2016 07:50:20 CET, VigneshDhanraj G wrote:
>Any update on this issue.
>
>On Wed, Nov 30, 2016 at 6:29 PM, VigneshDhanraj G <
>vigneshdhanra...@gmail.com> wrote:
>
>> Hi Team,
>>
>> While Running rsync rsync://usernam
i'm using rsync for backup and, as rsync can detect if files have vanished
during transfer, i wonder how rsync can tell which files got modified during
transfer (i.e. which are not consistent on the destination side after transfer)
apparently, rsync can't show that information?
wouldn't that b
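Indeed rsync does not report this itself. A hedged workaround sketch (hypothetical paths): run a second, checksum-based dry run right after the transfer; any file it would transfer again differs in content on the destination, i.e. likely changed mid-transfer.

```shell
rsync -a /source/ /backup/
# -c compares contents, -i itemizes, -n only lists without copying:
rsync -acin /source/ /backup/
```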
Hello,
since we are using rsync for backing up millions of files in a virtual
environment, and most of the virtual machines run on SSD cached storage, i'd be
curious how that negatively impacts the lifetime of the SSDs when we do an rsync run
every night for backup
my question:
does rsync normal fi
hi,
is there a reason why error code 255 is not mentioned in the manpage
and wouldn't it make sense to add "255 Unexplained Error" there
for completeness ?
I'm writing a script which checks exit values and while testing it, i
got value 255, looked into the manpage and scratched my head wh
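A sketch of such an exit-code check (code texts paraphrased from rsync(1); 255 itself is usually the exit status of the remote shell, e.g. ssh, when the connection failed, which may be why the manpage omits it):

```shell
#!/bin/bash
# Map an rsync exit status to a short description.
explain_rsync_exit() {
  case "$1" in
    0)   echo "success" ;;
    23)  echo "partial transfer due to error" ;;
    24)  echo "partial transfer due to vanished source files" ;;
    255) echo "unexplained error (likely the remote shell failed)" ;;
    *)   echo "rsync error code $1 (see rsync(1))" ;;
  esac
}
explain_rsync_exit 24
```

typical use: `rsync -a src/ host:dst/; explain_rsync_exit $?`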
what does lsof tell? does rsync hang on a specific file?
i would wonder if this is an rsync problem. as you said, you killed all
processes,
so on the second run rsync knows nothing from before...
roland
On 18 October 2016 12:08:00 CEST, Bernd Hohmann wrote:
>On 18.10.2016 07:03, Kip Warn
is it still reproducible after a fresh boot? no stale nfs or fuse mounts or similar?
--
This message was sent from my Android mobile phone with WEB.DE Mail. On 06.08.2016, 00:52, Tom Horsley wrote:
I was working on a backup script today and doing lots
of runs with the --dr
yes, agreed.
and if the home directory contains ms word files, it's typically on a windows system - which is not the primary platform for using rsync, anyway.
On 27.07.2016, 13:56, Marcus Fonzarelli
>Over ssh/nfs
>rsync -nuvaz --delete /source/ r...@nfsserver.domain.co.uk:
i don't see nfs here, i see rsync syncing a local dir via ssh to a local dir on
the target system.
does nfsserver.domain.co.uk just export NFS shares to other systems or are
there other NFS shares being mounted there,
> You should not be using rsync's --checksum during routine backups.
you do know that excel can change a file's contents without changing the file's
timestamp - don't you? ;)
> Sent: Thursday, 09 April 2015, 18:37
> From: "Kevin Korb"
> To: rsync@lists.samba.org
> Subject: Re: rsync error:
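For the Excel case above, the trade-off in a nutshell (hypothetical paths; -c reads every file completely on both sides, which is why it is discouraged for routine backups):

```shell
rsync -a  /docs/ backup:/docs/   # quick check (size+mtime): skips such files
rsync -ac /docs/ backup:/docs/   # checksum: catches them, but reads everything
```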
>build and about 1.5GB is used during the actual transfer. The client has 16GB of RAM with a peak usage of 8.5GB.
1.5GB + 8.5GB of system memory, including buffers etc?
take a closer look at the rsync process with ps (as mentioned below)
also have a look at:
https://rsync.samba.org/
Hi Aron,
i hope it's ok for you if i bring this back on-list. Your issue and the possible way to fix it may be interesting for others too (that may include future searches etc)
so with 3.1.1 we are a step further
i don't really have a clue what's happening here but my ne
Hi,
rsync 3.0.9 is quite ancient, more than 3 years old. A lot of bugs have been fixed since then.
Is there a chance to update to the latest rsync version and retry with that ?
regards
Roland
Sent: Tuesday, 17 March 2015, 11:51
From: "Aron Rotteveel"
To: rsync@lists.sam
i'm sure that if you use ext3/4 as the destination filesystem (instead of hfs+) and also access it via netatalk the same way as you access the source, then all is fine again.
i think it's the transition from ext4->hfs+ and the behaviour of netatalk accessing and handling metadata on ext4 an
but this does not explain why the files on the destination are 0kb sized - with the background information you delivered, i assume they may already be 0kb on the source side - but Ram may have overlooked that because he did not look from the shell's point of view but from the OSX point of view via
what is the source and destination filesystem?
here is a report that rsync has some problems with HFS+ filesystems and
"resource forks": http://quesera.com/reynhout/misc/rsync+hfsmode/
but as you are using ubuntu and not osx i'm curious what the problem is, so i
think we need more information
you mean, rsync "silently" creates 0kb sized files and only a special type of
file shows this behaviour?
try increasing rsync verbosity with "-v", delete the 0kb files and retry. you
can send the output of rsync to this list if it's not too long if you don't get
a clue from that. mind that it
> You are missing the point of the checksum. It is a verification that
> the file was assembled on the target system correctly. The only
> post-transfer checksum that would make any sense locally would be to
> make sure that the disk stored the file correctly which would require
> a flushing of t
from a security perspective this is bad. think of a backup provider who wants
to make rsyncd modules available to the end users so they can push backups to
the server. do you think that such a server is secure if all users are allowed
to open up an ssh shell to secure their rsync transfer ?
ok, y
yes, i'd second that.
maybe you just try using metaflac to generate the appropriate list of files to
sync and then feed that list to rsync.
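A rough sketch of that idea (paths, the saved-sum comparison, and the list format are all made up for illustration):

```shell
#!/bin/bash
# Use metaflac's audio-data MD5 to decide which .flac files really
# changed, write their relative paths to a list, and hand that list
# to rsync via --files-from.
cd /music || exit 1
: > /tmp/changed.list
for f in */*.flac; do
  sum=$(metaflac --show-md5sum "$f")
  # compare $sum against a previously saved sum here (not shown)
  # and record the path only when it differs:
  echo "$f" >> /tmp/changed.list
done
rsync -a --files-from=/tmp/changed.list /music/ backup:/music/
```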
> Sent: Tuesday, 02 December 2014, 08:37
> From: "Fabian Cenedese"
> To: rsync@lists.samba.org
> Subject: Re: Comparing FLAC header before sync
you may have a look here:
http://superuser.com/questions/192766/resume-transfer-of-a-single-file-by-rsync
http://stackoverflow.com/questions/16572066/resuming-rsync-partial-p-partial-on-a-interrupted-transfer
if you use inplace or append, for safety reasons you could even run another
rsync "dif
Hi,
it seems that one already has been fixed in 3.1.0, see
https://bugzilla.samba.org/show_bug.cgi?id=9789
and
https://git.samba.org/?p=rsync.git;a=commit;h=2dc2070992c00ea6625031813f2b6c886ddc3ade
you are still using 2.6.9 ? that's rather old (~ 8yrs?) and may have bugs and
security issues
it`s probably not exactly what you want, but have a look at this:
https://bugzilla.samba.org/show_bug.cgi?id=7120#c3
regards
Roland
>List: rsync
>Subject:Adjusting transfer rate on the fly
>From: mr queue
>Date: 2014-07-20 0:43:12
>Message-ID: CACcQGfYPLTbyg-dfYzTKytxxK
Great to see that there is a native rsync now.
This is NOT a derived work of GPL'ed rsync but a re-implementation from scratch?
regards
Roland
List: rsync
Subject:Alternative rsync client for Windows
From: "Gilbert (Gang) Chen"
Date: 2014-04-08 16:41:36
Message-ID: CAPQ
regarding dynamic slowdown, you may also have a look at:
https://bugzilla.samba.org/show_bug.cgi?id=7120
regards
Roland
List: rsync
Subject:Re: RFC: slow-down option
From: Marian Marinov
Date: 2014-04-03 12:52:53
Message-ID: 533D59A5.4080503 () yuhu ! biz
[Download message
What do "They" recommend instead?
If it's all about copying and network bandwidth is not an issue, you can use
scp or whatever dumb tool which just shuffles the bits around "as is". rsync is
being used when you want to keep data in sync and when you want to save bandwidth
while handling that task. You
this is a linux kernel or hardware issue, please update (if not yet done) your system to the latest patches, especially the kernel package.
if that does not help, jump onto these bug reports or open a new one for your distro. (kernel.org bugtracker is typically not the best choice for normal
i'm sure this is not an rsync issue. i guess rsync is just triggering it.
https://groups.google.com/forum/#!msg/fa.linux.kernel/bxYvmkgvwGo/-gbIAVLz0zAJ
maybe clocksource=jiffies or nohz=off is worth a try to see if it makes a difference
regards
roland
you mean, when the
Hi Pavel,
could you try if --inplace --no-whole-file makes a difference?
normally, using rsync --inplace on the local host without a network in between
makes rsync switch to --whole-file, and i assume that for some reason your
rsync is rewriting the whole file inplace. and any rewritten block o
Hi Pavel,
maybe that's related to zfs compression ?
on a compressed zfs filesystem, zeroes are not written to disk.
# dd if=/dev/zero of=test.dat bs=1024k count=100
/zfspool # ls -la
total 8
drwxr-xr-x 3 root root 4 Feb 26 10:18 .
drwxr-xr-x 27 root root 4096 Mar 29 2013 ..
drwxr-
if you use ssh as transport, you can try
rsync -e 'ssh -oBindAddress='
man 5 ssh_config says:
BindAddress
Use the specified address on the local machine as the source
address of the connection. Only useful on systems with more than
one address.
>It seems that Microsoft knows how to change a file without altering
>the modification time.
yes, they do.
see https://bugzilla.samba.org/show_bug.cgi?id=1601
but it's not too difficult.
the issue is that you don't expect a program to actively change a file's
contents when just opening it for
or better, try pipe viewer. it seems less buggy (kill -SIGUSR1 $PIDOFTHROTTLE
doesn't work for me), has realtime progress and there is a homepage and
a maintainer ( http://www.ivarch.com/programs/pv.shtml )
linux-rqru:/tmp # cat /tmp/pv-wrapper
#!/bin/bash
# rate-limit stdin to 1000 bytes/s with pv, then feed it to the wrapped command
pv -L 1000 | "$@"
Adjust Transfer-Rate:
mhh - interesting question..
what about combining the power of throttle (
http://linux.die.net/man/1/throttle ) or similar tools (there are some more
like this) with rsync ?
via this hint: http://lists.samba.org/archive/rsync/2006-February/014623.html i
got a clue how to combine rsync and
Why put that extra effort into rsync, if you can chain things together ?
The power of unix is exactly that - it's not about using specialized tools, but
it's about combining them in innumerable ways, thus multiplying their
capabilities.
>Another good reason for a SSL-version of rsync: non-Unix
>Anyway, i tried not to give up.
>
>And found
>
>https://rsync.samba.org/ftp/rsync/rsync-patches-3.0.9.tar.gz
>
>In there, i found copy-devices.diff, which could be applied successfully :-)
>A write-devices.diff is missing :(
Mh, apparently rsync-patches-$release.tar.gz and rsync-patches.git are n
Hello,
i have found that major distros (especially opensuse) ship their rsync
packages with a lot of patches which i don't find in the official rsync-patches
git.
Maybe there is a reason for that, or i missed something or looked in the wrong
place, but for convenience/transparency i have compiled a list o
Hello,
if you have a backup server with rsync running in daemon mode - is there a way
for a client to obtain information about free disk space via rsync ?
I searched through all the docs, but could not find anything about it.
if there is no way, i guess implementing it would need the rsync proto
> The only place that an SSL would make some sense, is if you are going to do it to/from an rsync daemon
yes, exactly.
> but then how would that be "better" than a ssh-only account with keys/etc. only allowing the rsync to execute?
I think that's far more secure by design, because you won't all
Hi,
i'm wondering - can't THIS one
http://gitweb.samba.org/?p=rsync-patches.git;a=blob;f=openssl-support.diff
be completely replaced with THIS one ?
http://dozzie.jarowit.net/trac/wiki/RsyncSSL
http://dozzie.jarowit.net/git?p=rsync-ssl.git;a=tree
Isn't RsyncSSL (wrap rsync with stunnel via stdi
> I really don't think it's a good idea to sync large data files in use,
> which is modified frequently, e.g. SQL database, VMware image file.
>
> As rsync do NOT have the algorithm to keep those frequently modified
> data file sync with the source file. And this will course data file
> corrupted.
> devz...@web.de wrote:
> > so, instead of 500M i would transfer 100GB over the network.
> > that`s no option.
>
> I don't see how you came up with such numbers.
> If files change completely then I don't see why
> you would transfer more (or less) over the network.
> The difference that I'm thinki
it's even worse:
> Number of files: 44
> Number of files transferred: 1
> Total file size: 59 bytes
> Total transferred file size: 27793 bytes
this is wrong. that's the size of the file which failed to transfer. so it
should not be added to the total transferred file size, should it ?
>
hello,
with --stats, shouldn't we differentiate between "number of files transferred"
and "number of files failed" ?
the problem is that i have files which ALWAYS fail on transfer, and checking
for "number of files failed" <= 2 would be the best way for me to check if the
overall transfer was ok.
if
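Since --stats still has no failed-files counter, a sketch of an approximation: per-file errors go to stderr as "rsync: ..." lines and the run exits 23, so counting those lines gives the check described above.

```shell
#!/bin/bash
# Accept a run as "ok" when it succeeded outright, or failed with
# code 23 (partial transfer) and at most 2 per-file error lines.
transfer_ok() {            # usage: transfer_ok <exit-code> <stderr-log>
  local rc=$1 failed
  failed=$(grep -c '^rsync: ' "$2")
  [ "$rc" -eq 0 ] || { [ "$rc" -eq 23 ] && [ "$failed" -le 2 ]; }
}
# e.g.:  rsync -a /src/ /dst/ 2>err.log; transfer_ok $? err.log
```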
Hello,
i just came across the sparse-block patch.
i'm using rsync to store vmware vmdk virtual disks to a zfs filesystem.
vmdk files have large portions of zeroed data and when thin provisioned (not
being used yet), they may even be sparse.
on the target, after writing to zfs the zeroes are al
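The stock option that gets close to the patch's goal, as a sketch (hypothetical paths and file name; note that -S/--sparse and --inplace do not combine well in older rsync versions):

```shell
# -S/--sparse writes runs of zeroes as holes on the destination,
# so thin-provisioned vmdk zeroes need not consume space there.
rsync -aS /vmfs/volumes/ds1/vm-flat.vmdk /tank/backup/
du -h --apparent-size /tank/backup/vm-flat.vmdk  # logical size
du -h /tank/backup/vm-flat.vmdk                  # blocks actually allocated
```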
so, instead of 500M i would transfer 100GB over the network.
that's no option.
besides that, for transferring complete files i know faster methods than rsync.
one more question:
how safe is transferring a 100gb file, i.e. as rsync is using checksums
internally to compare the contents of two fil
Hello,
i'm using rsync to sync large virtual machine files from one esx server to
another.
rsync is running inside the so-called "esx console" which is basically a
specially crafted linux vm with some restrictions.
the speed is "reasonable", but i guess it's not the optimum - at least i don'
hello,
i'm trying to use rsync via inetd and having problems.
i'm getting
2009/06/15 18:25:29 [41082] name lookup failed for : ai_family not supported
2009/06/15 18:25:29 [41082] connect from UNKNOWN ()
when trying to write to the rsync daemon via inetd.
reading works fine.
inetd.conf looks like
>wondering if the only option to have rsync 3 running is to have glibc 2.4+?
who is telling this?
sure, you can't run rsync on a system with an old glibc if it was compiled on a
system with a newer glibc - but you can compile it against the old glibc
regards
roland
> -Original Message-
> From:
for now there is no caching - anyway - how should checksums be cached?
if mtime/size is no reliable method for detecting file changes and checksumming
is the only method - then to detect if you need to update the cache you need
to ... checksum, and thus a checksum cache is quite nonsense, imho.
> I s
>could this copy correctly opened files?
it's not a question of whether it's open - it's a question of whether you get a consistent copy.
with this, there is nothing which makes sure that the file doesn't change
during transfer - so if that happens, on the target side you have a file
different from the source.
oh, this is an interesting patch - thanks for giving the pointer.
i have tried it and it looks interesting, but somewhat incomplete.
i can transfer remote files to a local dir and they are being compressed on the
local side, but (quite logically) this breaks size/content-checking.
being also ment
sounds interesting - are you speaking about a special rsync version or about
this helper script:
http://marc.info/?l=rsync&m=115822570129821&w=2
?
> -Original Message-
> From: "Stephen Zemlicka" <[EMAIL PROTECTED]>
> Sent: 16.09.07 19:43:54
> To: "'roland'" <[EMAIL PROTECTED]
> Note that back-to-back rsyncs make the window of opportunity much
> much smaller for things to change during transit.
yes, but it still leaves room for corrupted transfers nobody would probably
know about !?
> -Original Message-
> From: <[EMAIL PROTECTED]>
> Sent: 16.09.07 1
> Handling of concurrent changes to source files is one of rsync's
> weaknesses.
too bad, but good to know :)
>The rsync sender naively reads each source file from
> beginning to end and sends what it sees; it doesn't detect if source
> files change while being transferred.
that's what i fea
hello matt,
thank you for your reply.
as i see, the method you describe is just "theoretical", because it won't work
due to a buffering issue.
furthermore, it still needs ssh or maybe another remote shell.
i'd like to leave out ssh or probably any remote shell entirely because
encryption is sl