I'm copying files using --link-dest to avoid duplication. I'm also
using a de-duplicator (rmlint) to further reduce duplication. For
files that are duplicates, I've set rmlint to use the timestamp of the
oldest file.
This ends up with starting conditions where the source of a copy might
have been…
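For reference, the basic --link-dest pattern looks roughly like this, with
hypothetical paths; unchanged files become hard links into the previous
snapshot rather than fresh copies:

    rsync -a --link-dest=/backups/2015-07-13 /data/ /backups/2015-07-14/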
On Tue, 14 Jul 2015 08:59:25 +0200, Paul Slootman wrote:
> btrfs has support for this: you make a backup, then create a btrfs
> snapshot of the filesystem (or directory), then the next time you make a
> new backup with rsync, use --inplace so that just changed parts of the
> file are written to the…
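A minimal sketch of that cycle, assuming /backup/current is a btrfs
subvolume (paths hypothetical); only the blocks rsync rewrites in place
diverge from the read-only snapshot, the rest stay shared:

    rsync -a --inplace /data/ /backup/current/
    btrfs subvolume snapshot -r /backup/current /backup/snap-2015-07-14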
On Mon, 13 Jul 2015 17:38:35 -0400, Selva Nair wrote:
> As with any dedup solution, performance does take a hit and it's often
> not worth it unless you have a lot of duplication in the data.
This is so only in some volumes in our case, but it appears that zfs
permits this to be enabled/disabled…
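For example, ZFS takes the setting per dataset (pool/dataset names
hypothetical):

    zfs set dedup=on  tank/backups    # only where duplication pays off
    zfs set dedup=off tank/scratch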
On Mon, 13 Jul 2015 15:40:51 +0100, Simon Hobson wrote:
> The thing here is that you are into "backup" tools rather than the
> general purpose tool that rsync is intended to be.
Yes, that is true. Rsync serves so well as a core component of backup
that I can be blind about "something other than rsync"…
On Mon, 13 Jul 2015 02:19:23 +0000, Andrew Gideon wrote:
> Look at tools like inotifywait, auditd, or kfsmd to see what's easily
> available to you and what best fits your needs.
>
> [Though I'd also be surprised if nobody has fed audit information into
> rsync before; y…
On Thu, 02 Jul 2015 20:57:06 +1200, Mark wrote:
> You could use find to build a filter to use with rsync, then update the
> filter every few days if it takes too long to create.
If you're going to do something of that sort, you might want instead to
consider truly tracking changes. This catches…
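A rough sketch of that idea, feeding kernel change notifications into an
rsync file list; paths are hypothetical and the list handling is
deliberately simplistic:

    inotifywait -m -r -e close_write,create,delete --format '%w%f' /data |
        sed 's|^/data/||' >> /tmp/changed.list &
    # ...later, ship only what changed:
    rsync -a --files-from=/tmp/changed.list /data/ backuphost:/backup/data/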
On Thu, 11 Jul 2013 15:46:06 +0000, Andrew Gideon wrote:
> rsync: recv_generator: mkdir
>
> "/backup/host/vol/snapshot.2013.07.11.0.in_progress/live/lms/trylesson/toefl"
> failed: No space left on device (28)
> *** Skipping any contents from this failed directory ***
Hello:
[I apologize if this is a repeat. I had to rebuild my posting profile,
and I think I didn't do it correctly before I sent a previous version
of this.]
We use rsync as a part of a home-grown backup solution. In the specific
case at hand, we're using rsync to copy volumes off-site. The…
On Thu, 22 Mar 2012 17:48:44 +0100, Paul Slootman wrote:
> --fuzzy aside, I'm a great believer of logrotate's "dateext" option.
So am I, and not just for backups. It's easier to find the log file one
needs with datestamps.
Unfortunately, the machine that isn't using it isn't mine to administer…
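For reference, a minimal dateext stanza (log path hypothetical):

    /var/log/myapp/*.log {
        daily
        rotate 30
        dateext
        compress
    }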
On Thu, 22 Mar 2012 14:23:13 +, Andrew Gideon wrote:
> Assuming my expectation is correct, and that --fuzzy should have an
> effect in this case, I'm wondering how best to test what's occurring.
> I've tried using --itemize-changes in a --dry-run, but all it tells me…
I've identified a situation where the combination of --fuzzy --fuzzy
(yes: two of them) and --link-dest is not behaving as I'd expect. I'm
first wondering if my expectation is wrong. Assuming that it is not,
then I'm wondering how best to figure out the problem.
The double use of --fuzzy is…
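For context, the invocation in question looks roughly like this (paths
hypothetical); with a sufficiently recent rsync, giving --fuzzy twice is
meant to extend the basis-file search into the --link-dest directories:

    rsync -a --fuzzy --fuzzy --link-dest=/backups/prev /data/ /backups/next/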
On Wed, 30 Jun 2010 01:43:02 +0000, Andrew Gideon wrote:
> I've thought of two solutions: (1) deliberately breaking linking (and
> therefore wasting disk space) or (2) using a different file system.
>
> This is running on CentOS 5, so xfs was there to be tried. I've had…
On Fri, 21 Oct 2011 19:14:00 -0700, Ido Magal wrote:
>>error while loading shared libraries: libperl.so.5.10: cannot open
>>shared object file: No such file or directory
This doesn't appear to be a complaint about something the Perl script is
doing, but about Perl itself not working.
- Andrew
On Sat, 22 Oct 2011 12:07:33 -0400, Kevin Korb wrote:
> If that is the only thing that is different you might be using rsync
> incorrectly.
And that's why you left it being reported? Interesting idea. Thanks for
explaining.
- Andrew
On Fri, 21 Oct 2011 12:10:09 -0400, Kevin Korb wrote:
> If you want something you can run after the fact here is a tool I wrote
> a while back that does a sort of diff across 2 --link-dest based
> backups:
> http://sanitarium.net/unix_stuff/rspaghetti_backup/diff_backup.pl.txt It
> will also tell
I'm trying to understand the point of the --checksum-seed option. As I
understand it from a little reading, checksums are not cached over
executions of rsync. So...what is the point of fixing the seed?
Is this in support of patches which *do* support caching of checksums?
I've read about caching…
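For what it's worth, the option itself is just (seed value arbitrary):

    rsync -a --checksum --checksum-seed=32761 src/ host:/dst/

As I understand it, without a fixed seed rsync salts its checksums
differently on each run, so externally cached checksums could never match;
fixing the seed makes them repeatable, which is what the caching patches
would rely on.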
On Mon, 23 May 2011 19:45:57 +0200, AZ 9901 wrote:
> Well, when using -A (--acls), same (unchanged) files between 2 rsync
> runs are not linked together. Removing -A (--acls) makes things fine,
> files are hard linked together.
Where you write "unchanged", do you mean completely unchanged or do you…
On Sun, 08 May 2011 18:21:23 +0200, AZ 9901 wrote:
[...]
> So why does Rsync not hard link them?
If I understand what you're asking correctly, you've two files that are
identical but for the ACLs which are different. You're asking why these
two files aren't hard-linked?
The answer is that the…
On Tue, 29 Mar 2011 08:53:22 -0400, Matt McCutchen wrote:
>> So it is "du" that is fooling me? Very interesting
>
> Right, "du" counts a multiply linked file only the first time it is
> seen.
It's actually fairly nice once you get used to it. I use --link-dest for
backups too, and this lets me…
We do backups using rsync --link-dest. On one of our volumes, we just
hit a limit in ext3 which generated the error:
rsync: link "..." => ... failed: Too many links (31)
This appears to be related to a limit in the number of directory entries
to which an inode may be connected. In other words…
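For anyone hitting the same wall: ext3 caps an inode at 32000 links, and
each subdirectory holds a '..' link to its parent, so a directory tops out
just shy of 32000 subdirectories. The current count is easy to check:

    stat -c %h /backup/host/vol    # hard-link count of the directory inode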
On Sat, 29 May 2010 11:34:56 -0700, Wayne Davison wrote:
> If anyone has a suggested auth method, let me know.
Lifting the key-pair solution used by openssh?
- Andrew
On Thu, 20 May 2010 23:38:24 +0000, Andrew Gideon wrote:
> Copying this volume takes hours...far more than other volumes of similar
> size. I blame the much larger amount of directory traversal (and
> comparisons between source and destination) that are occurring.
BTW, running a trac…
Using rsync --link-dest, I end up with a file system that has a
relatively large number of directory entries but relatively small number
of inodes. Copying this volume takes hours...far more than other volumes
of similar size. I blame the much larger amount of directory traversal
(and comparisons between source and destination) that are occurring…
I do backups using rsync, and - every so often - a file takes far longer
than it normally does. These are large data files which typically change
only a little over time.
I'm guessing that these large transfers are caused by occasional changes
that "break" (ie. yield poor performance) in the
On Sat, 10 Oct 2009 20:01:58 +0200, Matthias Schniedermeyer wrote:
> let xargs fill up the rest of
> the commandline
I'd never noticed that -I implies -L 1. That's the key, as it forces one
command per input rather than batching of the input.
Thanks for helping to clear that up.
- Andrew
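A quick illustration of the difference, with echo standing in for the real
command (GNU xargs):

    printf '%s\n' a b c | xargs echo           # one batched call: a b c
    printf '%s\n' a b c | xargs -I{} echo {}   # -I implies -L1: three calls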
On Sat, 10 Oct 2009 15:54:25 +0200, Matthias Schniedermeyer wrote:
> It makes a tremendous difference if you have to fork/exec one program
> per file for, say, 100,000 files. Or (-t here) about 10 instances doing
> 10,000 files.
I'm afraid I'm still too obtuse (or perhaps just coffee-deprived) to…
On Sat, 10 Oct 2009 00:22:11 -0400, Sam wrote:
> As far as I know it's still there
That's what I thought. So what is the point behind --target-dir?
Sorry for the puzzlement...
Andrew
On Mon, 05 Oct 2009 12:47:54 -0400, Sanjeev Sharma wrote:
> They added the option to get cp & mv working well with xargs
What happened to the -I option to xargs? This permits one to do the
replacement anywhere on the command line being repeated.
- Andrew
On Sun, 27 Sep 2009 17:14:42 +0000, Andrew Gideon wrote:
> I was thinking that an alternative to links, which do nothing to
> preserve space when small file changes have been made, would be using
> LVM snapshots. Instead of creating a new directory for a new backup,
> and specifying…
On Sun, 27 Sep 2009 17:14:42 +0000, Andrew Gideon wrote:
> Where this "fails" is for large files that have received small changes.
> The directory containing my main IMAP account, for example, typically
> generates between 1 and 2 G of daily backup data as I file messages…
I currently do incremental backups using --link-dest. Unchanged files
are hard links to the previous snapshot; changed files are new copies.
Where this "fails" is for large files that have received small changes.
The directory containing my main IMAP account, for example, typically
generates between 1 and 2 G of daily backup data as I file messages…
On Tue, 15 Sep 2009 22:01:04 +0000, Andrew Gideon wrote:
> It can also potentially be extended in other directions. For one crazy
> example, the utility (or some other utility that modifies the first
> utility's configuration) could listen on a port for messages from -
> pres…
On Tue, 15 Sep 2009 03:04:46 -0400, Matt McCutchen wrote:
> One thing you can do is
> temporarily attach strace.
I find lsof very informative with respect to rsync's status.
- Andrew
On Tue, 15 Sep 2009 12:11:03 -0400, Eric S. Johansson wrote:
> run rsync till a given time deadline, killing off the original program
> instance and then restart with a new bandwidth limit. I would probably
> use a small program invoking rsync and then sending a signal when "it's
> Time" then sta
> I would think priority queuing is
> better than shaping in this case.
I'm afraid I'm not following you here. As I've learned it, priority
queuing is one of several tools available to achieve shaping.
No?
[...]
>
> If there is one or more bottleneck link in the network (places where
> traffic…
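A minimal sketch of the deadline-plus-new-limit idea using coreutils
timeout (numbers hypothetical; --partial keeps the interrupted transfer
resumable):

    timeout 1h rsync -a --partial --bwlimit=5000 /data/ host:/backup/
    rsync -a --partial --bwlimit=500 /data/ host:/backup/   # resume, throttled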
On Mon, 14 Sep 2009 13:09:41 -0400, Eric S. Johansson wrote:
> On 9/14/2009 9:25 AM, Andrew Gideon wrote:
>
>> So control is most effective at the sending rsync, which suggests that
>> bwlimit is a good approach. But the most information is available at
>> the receiving…
On Mon, 14 Sep 2009 14:45:02 +1200, Nathan Ward wrote:
> Unless you do it properly, and do your QoS on routers in the middle.
This is true. But there are considerations. I became curious about
this, so I did some reading to refresh my memory.
First, keep in mind that we're talking about controlling…
On Sun, 13 Sep 2009 22:22:34 -0400, Eric S. Johansson wrote:
> It is all within one tool and there's no way you can hurt or damage
> anyone else through its use.
It is also within one instance of the tool. What if two of your remote
users rsync at the same time? Twenty? What if someone has a…
On Sun, 13 Sep 2009 21:20:01 -0400, Matt McCutchen wrote:
> How about the suggestions you were given on the rsnapshot list?
Assuming that you're using Linux somewhere in the mix, its ability to put
different network traffic into different pools for purposes of rate
management is (1) admittedly…
On Thu, 03 Sep 2009 16:23:24 -0400, Matt McCutchen wrote:
> but the non-atomicity of read(2) calls was not considered
If a frozen snapshot is constructed, then I don't see how read()'s
inatomicity (if that's a word {8^) would matter.
However, I see a related issue.
My experience with DB engines…
On Mon, 31 Aug 2009 13:37:16 -0700, Wayne Davison wrote:
>> Does anyone know a trick so that the server only answers if the client
>> uses compression?
>
> This is not currently possible.
What if rsync-path is set to a little script that only accepts the
connection (and exec()s the real rsync binary)…
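Something like this hypothetical wrapper, installed as the remote
rsync-path; the test is crude (it only looks for a 'z' in the option
string, whose exact --server format varies by version), so treat it as a
sketch:

    #!/bin/sh
    case "$*" in
      *z*) exec /usr/bin/rsync "$@" ;;
      *)   echo "compression (-z) required" >&2; exit 1 ;;
    esac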
On Fri, 28 Aug 2009 10:51:31 +0530, Jignesh Shah wrote:
> Could you please
> let me know if there is any way to get rid of this error message in
> rsync-3.0.6?
Rsync cannot do this [as far as I know], but there are other tools. For
example, if you use LVM for managing your volumes (and you should…
On Thu, 27 Aug 2009 16:30:55 +1200, Nathan Ward wrote:
> --rsync-path="sudo rsync"
Another way to achieve something similar would be to have PermitRootLogin
set to without-password, and then set up a key pair for remote login. In
authorized_keys2, the remote access for this key pair can be limited…
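One example of that restriction, using the rrsync helper shipped in
rsync's support/ directory to pin the key to read-only access under a
single path (key and path hypothetical):

    command="/usr/bin/rrsync -ro /export/data",no-pty,no-port-forwarding ssh-rsa AAAA... backup-key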
On Thu, 16 Jul 2009 08:47:17 -0700, Laurent Luce wrote:
> I am looking into redundancy now. Does anyone use a similar setup and
> has redundancy. I am looking for some advices.
I'm not clear to what redundancy you're referring. Files common to
multiple clients? Files unchanged from copy to copy…
On Fri, 17 Jul 2009 09:07:15 -0300, Jon Watson wrote:
> We've been using a backup script which uses rsync for months now on a
> Xen server without issue. A few weeks ago we had some work done on the
> server which included upgrading the kernel. We are now running
> 2.6.18-128.1.10.el5xen and now t…
On Sun, 21 Jun 2009 21:46:31 -0400, Matt McCutchen wrote:
>> 3. At t2, f0 to f3 are deleted from location B, and we don’t ever want
>> the deleted files to be copied again from location A
>>
What if these files are subsequently modified on location A? Should the
new versions of the files be copied…
On Tue, 26 May 2009 11:02:53 +0200, Matthias Schniedermeyer wrote:
> The important thing is that all
> the data is from the same point in time.
That's what I was thinking, and there are numerous tools which support
this at the file system/volume level.
One consideration, though, is that snapshots…
On Sun, 24 May 2009 00:45:09 +0200, Matthias Schniedermeyer wrote:
> On the other hand the quiescent and device/filesystem snapshotting
> results in a rsyncable copy.
Another possibility is to have the files on a volume or file system that
supports snapshots. That won't guarantee "quiescent", but…
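A sketch of that approach with LVM (volume names hypothetical; as noted,
the snapshot is crash-consistent rather than quiescent):

    lvcreate -s -n data_snap -L 2G /dev/vg0/data
    mount -o ro /dev/vg0/data_snap /mnt/snap
    rsync -a /mnt/snap/ backuphost:/backups/data/
    umount /mnt/snap && lvremove -f /dev/vg0/data_snap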
I use rsync --link-dest as the basis of a backup solution. It works
extremely well.
But one thing I've noticed is that it can spend hours in a loop of
stat(), getxattr(), link(), repeat. Is there some way to optimize this?
It's not a big deal, but I'm just wondering.
I did consider doing a…
On Wed, 29 Apr 2009 10:58:21 +0100, Jeremy Sanders wrote:
> I've tried switching to rsh, but that doesn't help a great deal. I get
> close to maximum gigabit speeds in simple data copy tests however.
It may be clear to others, but I'm missing what you mean by this. I
gather that rsh yielded poor…
I want to use rsync under the control (ie. initiated from) the client
side, but with the bandwidth controlled by the server side. I can force
the bwlimit option on the command line executed on the server, but will
this make a difference given that the files are being sent from client to
server?
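One hypothetical way to impose it from the server side over ssh: a forced
command on the backup key that re-execs rsync with a server-chosen limit
bolted on. The word-splitting of SSH_ORIGINAL_COMMAND here is naive, so
this is a sketch, not production code:

    #!/bin/sh
    set -- $SSH_ORIGINAL_COMMAND
    [ "$1" = rsync ] || exit 1
    shift
    exec /usr/bin/rsync --bwlimit=2000 "$@"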
On Sun, 02 Nov 2008 16:10:23 -0500, Matt McCutchen wrote:
> Fixing this in a way that works with all combinations of mask-requiring
> and non-mask-requiring systems will take some care. We discussed
> similar issues a while ago:
>
> http://lists.samba.org/archive/rsync/2006-October/016400.html
>
On Sun, 02 Nov 2008 16:10:23 -0500, Matt McCutchen wrote:
> Fixing this in a way that works with all combinations of mask-requiring
> and non-mask-requiring systems will take some care.
Any thoughts on this? The code has changed significantly from when I did
my futzing around in 2.6.2, so - even…
On Sun, 02 Nov 2008 16:10:23 -0500, Matt McCutchen wrote:
[...]
> I
> guess one could still make the argument that the ACLs should be copied
> exactly.
That would be my assertion. Regardless of the reason for the mask being
present - added by the user or required by the file system - the default…
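For comparison, the userland ACL tools copy the full set - mask entry
included - verbatim:

    getfacl /src/file | setfacl --set-file=- /dst/file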
On Sun, 02 Nov 2008 15:33:05 -0500, Matt McCutchen wrote:
> You need to pass -A to preserve ACLs. -X does not process "system.*"
> extended attributes.
Sorry. I actually [think I] know that, but copied the wrong test.
As you'll see below, -A yields the same results:
[EMAIL PROTECTED]
I've been using a 2.6.2 that I modified myself to get ACLs as I like.
I'm trying now to get back into the public version of rsync, but am
finding difficulties.
This one seems pretty basic. It's on a CentOS 4.5 machine with rsync rpm
rsync-3.0.4-1.el4.rf and kernel 2.6.9-55.0.2.plus.c4. After…
On Sat, 16 Dec 2006 09:04:22 -0500, Matt McCutchen wrote:
> You might want to make it clear that *preserving ACLs* when sending files
> to an older version of the ACL patch is not supported.
Be aware that this is going to have a large impact on some sites (ie.
mine). We use rsync as the underlying…
On Wed, 06 Dec 2006 08:47:42 +0800, woo robbin wrote:
> How can I merge the two increment backups into one directory,say
> /backupdir/increment ?
I suggest you look into exploiting the --link-dest option in your backups.
Using this, each backup has the performance of an incremental (both in
terms…
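A bare-bones example of the pattern (dates and paths hypothetical): each
run hard-links unchanged files against the previous day's tree, so every
directory under /backupdir presents itself as a full snapshot:

    rsync -a --link-dest=/backupdir/2006-12-05 /data/ /backupdir/2006-12-06/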
On Sat, 04 Nov 2006 19:48:22 -0300, Manuel Kissoyan wrote:
> im trying to work out my backup with a monthly retention, wondering how i
> could keep a backup that is 30 days old all the time
You'll keep yourself confused as long as you word the problem that way.
You [probably!] don't always want…
On Mon, 02 Oct 2006 23:53:11 -0700, Wayne Davison wrote:
> FYI, I ran a test on a file with the ACLs you mentioned, and it worked
> fine copying it from Solaris to Linux.
I'm a little uncomfortable putting an unreleased version into production,
which is where this is going (assuming all is well).
On Mon, 02 Oct 2006 22:14:06 -0400, Matt McCutchen wrote:
> Maybe I didn't make this clear: On Linux, if a file's ACL contains an
> ACL_MASK entry, the file's group permission bits (S_IRWXG) are linked to
> that entry instead of the ACL_GROUP_OBJ entry. Statting shows the
> ACL_MASK entry, and ch…
On Mon, 02 Oct 2006 20:56:34 -0400, Matt McCutchen wrote:
> Rsync expects that stat(2) on a file whose ACL contains a mask entry will
> return the mask entry as the S_IRWXG mode bits. Perhaps Solaris returns
> the group-owner entry no matter what; that would explain the trouble.
> Would you please…
I've found an error: ACLs are not properly preserved when a file is moved
from Solaris to a 2.6 Linux (I'm testing using CentOS 4 update 3 plus
updates). This is using 2.6.8 built with the acl patch on both platforms.
The file on the source Solaris machine:
[truffle:/opt]# getfacl /xxx/x
# file: …
On Thu, 10 Aug 2006 08:53:21 -0400, Matt McCutchen wrote:
> If you want ACLs, apply the patch and pass
> --enable-acl-support when you configure. That gives you observance of
> default ACLs when -p is disabled and an option -A, --acls to preserve
> ACLs. The man page is patched to document -A and…
On Thu, 03 Aug 2006 18:18:11 -0700, Wayne Davison wrote:
> Did you check the ACL patch in CVS?
No; I didn't know that there was one with the ACL differences checking
added. I'll take a look at it.
>
>> What else - besides --link-dest use - should I do to test this patch
>> before I post it?
>
On Thu, 03 Aug 2006 18:19:53 -0700, Wayne Davison wrote:
>> First of all, the patch places sysacls.[hc] one directory too high.
>> These need to be in the libs directory, lest 'make' will fail.
>
> This means that you didn't use a -p option to patch -- you should use
> either -p1 (modern patches)…
A while ago (2.6.2), I built and posted a patch which caused rsync to "do
the right thing" where --link-dest was being used and where files had been
changed only in their ACLs. I've recreated this for 2.6.8 (there were
some small differences).
I've tested this using --link-dest copying from Linux…
On Tue, 02 May 2006 22:51:37 -0700, Wayne Davison wrote:
> On Mon, May 01, 2006 at 10:58:01PM -0400, Matt McCutchen wrote:
>> At some point in the future, I will get back to improving the ACL
>> support.
>
> In the meantime, the patch in CVS has been improved significantly, and
> needs testing to…
Marc Perkel wrote:
> Seems to me it should warn but continue to copy the files anyway ignoring
> the ACLs.
Not necessarily. Failing to copy the ACLs could result in an insufficiently
protected file. In that case, better to not copy w/o the ACLs.
And that's just an example off the top of my head…
dtra wrote:
> but rsync still takes up more resources than we want it to
> it takes up to 95% (fluctuating) cpu load and a fair bit of memory too
>
> the cron job uses nice -19 rsync
> but that doesn't seem to do anything; is there any way to make it use
> like 5% cpu or something?
If the other u…
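When lowering CPU priority alone doesn't help, combining idle-class I/O
scheduling with a bandwidth cap is a common mitigation (numbers
hypothetical; nice only deprioritizes, it doesn't cap usage on an idle
box):

    ionice -c3 nice -n 19 rsync -a --bwlimit=2500 /src/ /dst/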
We currently do backup using rsync amongst Linux and Solaris machines.
Modulo an ACL issue that we had to patch, this is working extremely well.
But I want to add our OSX machines to the mix. This is, unfortunately,
leaving me confused about rsyncx.
I can do the normal thing from OSX using rsync…
Paul Slootman wrote:
> There's a difference between giving a 5xx response during SMTP, and
> first accepting a message and then later bouncing it to the (supposed)
> envelope sender. I believe spamcop is protesting the latter, not the
> first. I agree with them. 20% of the junk I get are bogus bounces…
Lewis Franklin wrote:
> This works well as two separate processes. However, having read the
> documentation it seems that I should be able to run the ssh commands
> "inline" using the -e flag. However, I have not been able to
> successfully sync using this method.
[...]
>
> rsync -azve "ssh -l s…
Wayne Davison wrote:
> There is a patch in the "patches" dir called delete-sent-files.diff that
> probably does what you want. It deletes any files that got successfully
> transferred, but does not delete files that were already up-to-date, nor
> does it delete things like directories, symlinks, …
I'm using the -A patch on v2.6.2, and I'm doing the usual "incremental
backup using links" thing. The destination is a machine running Fedora
(both 2 and 3), and the sources are machines running various Linuxes and
Solaris.
During my initial testing, I found a lot of diskspace being wasted. I…
On Fri, 2005-04-01 at 17:22 -0800, Wayne Davison wrote:
>
> No, rsync doesn't yet handle ACLs. I believe that the rawhide version
> of rsync (for redhat) is going to be patched to work with extended
> attributes, so hopefully we'll get something integrated into rsync
> before too long (I haven't…
Wayne Davison wrote:
> Earlier in the development cycle, I noticed that rsync was not updating
> a file that differed in attributes when using --compare-dest, so I
> decided to fix that for 2.6.4.
Does this also fix the problem I reported in:
http://lists.samba.org/archive/rsync/2005-February
Eli wrote:
> Andrew wrote:
>> Is there some philosophical or practical reason why rsync
>> cannot use some persistent external database to map remote
>> inodes to local inodes?
>
> No idea if this is done or not, but couldn't inodes be recycled if a file
> is
> deleted and the inode marked free?
[EMAIL PROTECTED] wrote:
> I'll leave this open for now as a suggestion for a more extensive rename
> detector.
Is there some philosophical or practical reason why rsync cannot use some
persistent external database to map remote inodes to local inodes? Having
that information persist would make…
This is less a question about rsync and more about how rsync can be used as
a backup solution in a particular case. If one of the tools built over
rsync for this purpose solves this, I'm eager to hear how. Otherwise,
suggestions are welcome.
Server-initiated backups are easy. The rsync process…
Wayne Davison wrote:
> This hasn't been fixed yet, so I'd like to see your changes so I can
> incorporate them into the patches/acls.diff for 2.6.4. Thanks!
Okay. I was hoping that someone else had done this (better than
I). Please keep in mind my caveats.
- Andrew
To acls.c I just added…