Windows backups?

2021-12-27 Thread Dan Stromberg via rsync
Can rsync back up an NTFS using a Windows 10 kernel? So far I've had good luck backing up NTFS filesystems on a dual boot system when booted into Linux, but not when booted into Windows. I've been bitten in the past by /usr/bin/find (for example) having problems with Windows junctions over sshfs. The

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-13 Thread Dan Stromberg via rsync
's files and then the suggested re-link command would decide > it could join them together. You'd probably then need to keep going and > re-link day5's pictures (since it was probably linking to the old day4's > pictures). > > ..wayne.. > I totally get why some f

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-13 Thread Wayne Davison via rsync
I should also mention that there are totally valid reasons why the dir might be huge on day4. For instance, if someone changed the mode on the files from 664 to 644 then the files cannot be hard-linked together even if the file's data is unchanged. The same goes for differences in preserved xattrs,

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-13 Thread Wayne Davison via rsync
You could rsync the current day4 dir to a day4.new dir, and list all the prior days as --link-dest options. Make sure that you're using the same xatt/acl options as your official backup command (the options may or may not be present) so that you are preserving the same level of info as the backup.

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-11 Thread Chris Green via rsync
Guillaume Outters via rsync wrote: > On 2020-12-11 12:53, Chris Green wrote : > > > […] wrote a trivial[ish] script that copied > > all the backups to a new destination sequentially (using --link-dest) > > and then removed the original tree, having checked the new backu

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-11 Thread Guillaume Outters via rsync
On 2020-12-11 12:53, Chris Green wrote : […] wrote a trivial[ish] script that copied all the backups to a new destination sequentially (using --link-dest) and then removed the original tree, having checked the new backups were OK of course. With the same cause as yours, I once worked out

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-11 Thread Chris Green via rsync
de number. There is no other relation > between those directory entries. > > So you will have to incrementally process each next day against the > previous day. > Yes, that's what I have done, wrote a trivial[ish] script that copied all the backups to a new destination sequenti

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-11 Thread Paul Slootman via rsync
On Thu 10 Dec 2020, Chris Green via rsync wrote: > > Occasionally, because I've moved things around or because I've done > something else that breaks things, the hard links aren't created as > they should be and I get a very space consuming backup increment. > > Is there any easy way that one can

Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-10 Thread Dan Stromberg via rsync
Hi. Is it possible that, if day4 is consuming too much space, that day3 was an incomplete backup? The rsync wrapper I wrote goes to a little trouble to make sure that incomplete backups aren't allowed. It's called Backup.rsync, and can be found at: https://stromberg.dnsalias.org

Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-10 Thread Chris Green via rsync
I run a simple self written incremental backup system using rsync's --link-dest option. Occasionally, because I've moved things around or because I've done something else that breaks things, the hard links aren't created as they should be and I get a very space consuming backup increment. Is ther

Re: A strange problem with my daily backups performed via rsync

2020-11-02 Thread Wayne Davison via rsync
On Mon, Nov 2, 2020 at 3:03 AM Manish Jain wrote: > rsync -av --delete src dst # but protect dir dst/XYZ from deletion > > I tried "--filter 'protect dst/XYZ'" but that does not prevent the > directory dst/XYZ from being deleted. > The "dst" dir isn't in the transfer, so it can't appear in a fil

Re: A strange problem with my daily backups performed via rsync

2020-11-02 Thread Manish Jain via rsync
e situation with my daily backups performed via rsync. I primarily use Manjaro KDE Linux (LTS kernel), but also have FreeBSD and Windows 10 bare-metal installations. I have an all-OS-writable ext2 partition /dev/sda2 mounted at /mnt/wall My USB backup device is a Sony SSD mounted at /mnt/sony This is

Re: A strange problem with my daily backups performed via rsync

2020-11-02 Thread Manish Jain via rsync
it to your attention. On 11/2/20 3:30 AM, Manish Jain via rsync wrote: Hi, I am facing a strange situation with my daily backups performed via rsync. I primarily use Manjaro KDE Linux (LTS kernel), but also have FreeBSD and Windows 10 bare-metal installations. I have an all-OS-writable

Re: A strange problem with my daily backups performed via rsync

2020-11-02 Thread Kevin Korb via rsync
your filesystem and you may as well have rsync bring it to your attention. On 11/2/20 3:30 AM, Manish Jain via rsync wrote: > > Hi, > > I am facing a strange situation with my daily backups performed via > rsync. I primarily use Manjaro KDE Linux (LTS kernel), but also have > Free

A strange problem with my daily backups performed via rsync

2020-11-02 Thread Manish Jain via rsync
Hi, I am facing a strange situation with my daily backups performed via rsync. I primarily use Manjaro KDE Linux (LTS kernel), but also have FreeBSD and Windows 10 bare-metal installations. I have an all-OS-writable ext2 partition /dev/sda2 mounted at /mnt/wall My USB backup device is a

[Bug 14529] Please add option to save metadata to single file to speed up backups

2020-10-11 Thread just subscribed for rsync-qa from bugzilla via rsync
https://bugzilla.samba.org/show_bug.cgi?id=14529 --- Comment #1 from Andras Korn --- It's completely fine if using this "database" in writable modules implies or requires `max connections = 1` to avoid concurrency/locking issues. -- You are receiving this mail because: You are the QA Contact fo

[Bug 14529] New: Please add option to save metadata to single file to speed up backups

2020-10-11 Thread just subscribed for rsync-qa from bugzilla via rsync
https://bugzilla.samba.org/show_bug.cgi?id=14529 Bug ID: 14529 Summary: Please add option to save metadata to single file to speed up backups Product: rsync Version: 3.2.0 Hardware: All OS: All

Re: rsync script for snapshot backups

2016-06-21 Thread Petros Angelatos
On 19 June 2016 at 10:27, Simon Hobson wrote: > Dennis Steinkamp wrote: > >> i tried to create a simple rsync script that should create daily backups >> from a ZFS storage and put them into a timestamp folder. >> After creating the initial full backup, the follo

Re: rsync script for snapshot backups

2016-06-21 Thread Dennis Steinkamp
Am 20.06.2016 um 22:01 schrieb Larry Irwin (gmail): The scripts I use analyze the rsync log after it completes and then sftp's a summary to the root of the just completed rsync. If no summary is found or the summary is that it failed, the folder rotation for that set is skipped and that folder

Re: rsync script for snapshot backups

2016-06-20 Thread Larry Irwin (gmail)
On 06/19/2016 01:27 PM, Simon Hobson wrote: Dennis Steinkamp wrote: i tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. After creating the initial f

Re: rsync script for snapshot backups

2016-06-19 Thread Joe
tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. After creating the initial full backup, the following backups should only contain "new data" and the rest will be referenced via hardlinks (--link-dest) This

Re: rsync script for snapshot backups

2016-06-19 Thread Dennis Steinkamp
Am 19.06.2016 um 19:27 schrieb Simon Hobson: Dennis Steinkamp wrote: i tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. After creating the initial full backup, the following backups should only contain "new

Re: rsync script for snapshot backups

2016-06-19 Thread Simon Hobson
Dennis Steinkamp wrote: > i tried to create a simple rsync script that should create daily backups from > a ZFS storage and put them into a timestamp folder. > After creating the initial full backup, the following backups should only > contain "new data" and the rest

rsync script for snapshot backups

2016-06-19 Thread Dennis Steinkamp
Hey guys, i tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. After creating the initial full backup, the following backups should only contain "new data" and the rest will be referenced via hardlinks (

Re: Verifying backups

2016-03-07 Thread Kevin Korb
FWIW, the one time I had corruption in my backups the problem was a bad DIMM randomly flipping bits. I now insist on ECC RAM. On 03/07/2016 03:51 PM, Henri Shustak wrote: > Just chiming in slightly off topic. > > As a first step if you are

Re: Verifying backups

2016-03-07 Thread Henri Shustak
somehow. Check the drive media for bad blocks, check that all the cables are working well. Ensure the mother board of the system is in good working order etc. As a second step if you are going to be performing backups (with a file system based tool such as rsync) to any kind of file system in

Re: Verifying backups

2015-10-01 Thread Kevin Korb
-link-dest there isn't much advantage to >> using rsync for backups. The only thing you get beyond cp -au is >> --delete. > > I just now remembered the (forehead slap) bloody obvious reason I > decided to use rsync to make and maintain my backup drive(s). > > Yes

Re: Verifying backups

2015-10-01 Thread Ronald F. Guilmette
In message <560ce706@sanitarium.net>, Kevin Korb wrote: >Yes, when it comes to local copies cp is significantly faster than >rsync. Without --link-dest there isn't much advantage to using rsync >for backups. The only thing you get beyond cp -au is --delete. I jus

Re: Verifying backups

2015-10-01 Thread Ronald F. Guilmette
In message <560cca51.5cwvptqce1nflu+u%per...@pluto.rain.com>, per...@pluto.rain.com (Perry Hutchison) wrote: >Just because rsync is an awesome hammer, it does not necessarily follow >that every problem involving backups closely resembles a nail :) An excellent and very apropos po

Re: Verifying backups

2015-10-01 Thread Kevin Korb
Yes, when it comes to local copies cp is significantly faster than rsync. Without --link-dest there isn't much advantage to using rsync for backups. The only thing you get beyond cp -au is --delete. Also, when it comes to static data like

Re: Verifying backups

2015-09-30 Thread Perry Hutchison
l, and just repeatedly > invoke the cmp command on all of the regular files found therein. Just because rsync is an awesome hammer, it does not necessarily follow that every problem involving backups closely resembles a nail :) Since your source and backup are both local, I suspect using rsyn

Re: Verifying backups

2015-09-30 Thread Ronald F. Guilmette
In message <560c7e98.3090...@sanitarium.net>, Kevin Korb wrote: >Remove the -n and look at the results. You would be copying the one >dir into the two dir instead of copying the contents of the one dir >into the two dir. AHHH! OK. Yes. My bad. I keep on forgetting how critical to

Re: Verifying backups

2015-09-30 Thread Ronald F. Guilmette
In message <560c79ff.5010...@sanitarium.net>, Kevin Korb wrote: >Because you are making two/one. Change to: >rsync -n -v --itemize-changes --checksum -a one/ two/ OK, I tried it with your suggested command line, and yes, that produces rather more substantially useful results. However... Perh

Re: Verifying backups

2015-09-30 Thread Kevin Korb
Remove the -n and look at the results. You would be copying the one dir into the two dir instead of copying the contents of the one dir into the two dir. On 09/30/2015 08:28 PM, Ronald F. Guilmette wrote: > > In message <560c79ff.5010...@sanitarium.

Re: Verifying backups

2015-09-30 Thread Kevin Korb
Because you are making two/one. Change to: rsync -n -v --itemize-changes --checksum -a one/ two/ On 09/30/2015 07:22 PM, Ronald F. Guilmette wrote: > rsync -n -v --itemize-changes -checksum -a one two

Re: Verifying backups

2015-09-30 Thread Ronald F. Guilmette
In message <560c660f.5000...@sanitarium.net>, Kevin Korb wrote: >Just add --itemize-changes and --checksum to what you were doing >before and know that it will take a long time. I'm still not getting to where I need to be. Maybe you can explain what has gone wrong in this very simple example:

Re: Verifying backups

2015-09-30 Thread Kevin Korb
Just add --itemize-changes and --checksum to what you were doing before and know that it will take a long time. On 09/30/2015 06:42 PM, Ronald F. Guilmette wrote: > > Kevin Korb , > > I thank you greatly for your attempts to educate me, however as w

Re: Verifying backups

2015-09-30 Thread Ronald F. Guilmette
Kevin Korb , I thank you greatly for your attempts to educate me, however as we get deeper into discussing more and more different rsync options, I feel that I am actually just getting more confused and frustrated. I've been sitting here, trying all sorts of different combinations and permutatio

Re: Verifying backups

2015-09-30 Thread Kevin Korb
eeBSD system. >> Second, you should look into using either ZFS subvolume snapshots >> or rsync --link-dest to maintain multiple backups. > > Thank you, but I have no real interest in switching to ZFS just > now. > >> Now, for your actual question... Add --itemize-

Re: Verifying backups

2015-09-30 Thread Ronald F. Guilmette
eserve file-flags (aka chflags) --force-change affect user/system immutable files/dirs >Second, you should look into using either ZFS subvolume snapshots or >rsync --link-dest to maintain multiple backups. Thank you, but I have no real interest in switching to ZFS just now.

Re: Verifying backups

2015-09-30 Thread Kevin Korb
First off, --fileflags --force-change are not in my man rsync so I don't know what those are. Second, you should look into using either ZFS subvolume snapshots or rsync --link-dest to maintain multiple backups. Now, for your actual question..

Verifying backups

2015-09-30 Thread Ronald F. Guilmette
For some time now I've been using rsync on FreeBSD to make my system backups. Recently, I accidentally rm'd some files from one directory and I had to go and fetch copies off of my backup drive. After I had done so, I found that about 1/5 of them were corrupted. (They were all .jpg

Recycling and keeping backups - Tower of Hanoi management of backups using rsync

2014-09-15 Thread Robert Bell
Thanks to Kevin and Paul for responses. We use a modified Tower of Hanoi scheme (on top of rsync and --link-dest and recycling) for deciding which backups to keep. Here is a sample of our holdings for one area: home.2024.seq.0 set 0 home.20130512.seq.512 set 10

Re: Changing path in old backups - WAS: Re: Changing permissions on existing backup source

2014-01-04 Thread Kevin Korb
easiest solution is to just do the same chmod on the most >> recent backup before running the next backup. > > Hi Kevin, > > This worked perfectly, and my shiny new server is now live, so > thanks very much. > > I have a related question though... > > I want to con

Changing path in old backups - WAS: Re: Changing permissions on existing backup source

2014-01-04 Thread Charles Marcus
continue using my old backups, but the new server has a new hostname too. So, similar to your suggestion above... Currently, the backups snapshot dir hierarchy is: /path/to/mnt/snapshots/hourly.#/hostname/ Can I mv the hostname to the new hostname before I start the backups on the new server

Re: Rsync backups up files that have not changed

2013-11-10 Thread Kevin Korb
You aren't using -a or -t so rsync isn't copying timestamps so it has no idea what has changed and what hasn't. On 11/10/13 22:39, Felipe Alvarez wrote: > rsync -b --suffix=.felipe /tmp/file1 /tmp/file2 remote_host:/tmp/ > > If file1 has not changed,

Rsync backups up files that have not changed

2013-11-10 Thread Felipe Alvarez
rsync -b --suffix=.felipe /tmp/file1 /tmp/file2 remote_host:/tmp/ If file1 has not changed, I still get a /tmp/file1.felipe file which is 100% exactly the same as /tmp/file1 Can this be prevented, while still backing up files that have changed? Cheers, Felipe -- Please use reply-all for most re

Space Machine - an rsync helper script for easier offsite backups

2013-04-09 Thread Czeko, Elmar
Dear rsync community, As Time Machine doesn't support offsite backups, I started developing an rsync helper script for synchronizations between Macs and network attached storage (NAS) devices. After running and refining my script for some time now, it works very well for my purposes. L

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-02-01 Thread Karl O. Pinc
Here's my rsync based backup system. http://wikisend.com/download/377440/rsync_backup-0.26.tar.gz It's an rsync based backup system utilizing hard links to reduce storage requirements. It supports both push and pull. It uses public keys with ssh for the transport. It works and I've used it for

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-02-01 Thread Joe
Thanks. I'm taking a look at it. May take awhile to see if it fits my needs. It sounds promising. Joe On 01/31/2013 04:36 PM, Henri Shustak wrote: > You may be interested in having a look at LBackup , > an open source (released under the GNU GPL) backup system. > > Ess

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-31 Thread Henri Shustak
You may be interested in having a look at LBackup, an open source (released under the GNU GPL) backup system. Essentially, LBackup is a wrapper for rsync. If you are working on your own script, feel free to look at how LBackup works (primarily written in bash at present)

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-23 Thread Ashley M. Kirchner
of failed syncs? Absolutely. On Wed, Jan 23, 2013 at 8:25 AM, Kevin Korb wrote: > I handle this by actually backing up to "backupname.incomplete". Once > the backup is complete I then rename it to > "backupname.yyyy-mm

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-23 Thread Kevin Korb
I handle this by actually backing up to "backupname.incomplete". Once the backup is complete I then rename it to "backupname.yyyy-mm-dd.HH-MM-SS". That way all of the backups with date+time stamps in the name are completed bac

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-23 Thread Karl O. Pinc
On 01/23/2013 02:15:06 AM, Voelker, Bernhard wrote: > Kevin Korb wrote: > > On 01/22/13 18:12, Kevin Korb wrote: > > > That is the old way that pre-dates --link-dest. Instead of cp -al > > > > daily.02 daily.01 you can do a mkdir daily.01 then an rsync ... > > > --link-dest=../daily.02 daily.

RE: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-23 Thread Voelker, Bernhard
Kevin Korb wrote: > On 01/22/13 18:12, Kevin Korb wrote: > > That is the old way that pre-dates --link-dest. Instead of cp -al > > daily.02 daily.01 you can do a mkdir daily.01 then an rsync ... > > --link-dest=../daily.02 daily.01 > > > > Rsync then doesn't need any --delete and you don't both

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Ashley M. Kirchner
gle 1 terabyte drive. > > > >> A > > > > > > > >> On Tue, Jan 22, 2013 at 3:31 PM, François <daithe...@free.fr> wrote: > > > >> Hi Joe, > > > >> If you want to understand hard-links, just take a look at >

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Kevin Korb
weeks worth for >> each one. They're all going to a single 1 terabyte drive. > >> A > > > >> On Tue, Jan 22, 2013 at 3:31 PM, François <daithe...@free.fr> wrote: > >> Hi Joe, > >> If you want to understand hard-links, ju

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Kevin Korb
> > > On Tue, Jan 22, 2013 at 3:31 PM, François <daithe...@free.fr> wrote: > > Hi Joe, > > If you want to understand hard-links, just take a look at Wikipedia > : http://en.wikipedia.org/wiki/Hard_link#Example > > I think it's pretty easy t

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Ashley M. Kirchner
Jan 22, 2013 at 3:31 PM, François wrote: > Hi Joe, > > If you want to understand hard-links, just take a look at Wikipedia : > http://en.wikipedia.org/wiki/Hard_link#Example > > I think it's pretty easy to understand. > > To understand how hard-links (and rsync) can help

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Joe
rd-links (and rsync) can help you make strong incremental > backups, head over > http://blog.interlinked.org/tutorials/rsync_time_machine.html > > Cheers, >

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Joe
2:31, Joe wrote: > > There have been a lot of posts on the list lately about issues with > > hard links. It has been very interesting, but I don't understand > > it very thoroughly. I haven't used hard links for anything yet. > > I've used symlinks - not for bac

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread François
Hi Joe, If you want to understand hard-links, just take a look at Wikipedia : http://en.wikipedia.org/wiki/Hard_link#Example I think it's pretty easy to understand. To understand how hard-links (and rsync) can help you make strong incremental backups, head over http://blog.interlinke

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Joe
le. > Hard links may not normally refer to directories and may not span file > systems. > > Assuming you do many backups and many of the files do not change, > hard links are your friend. > > Backing up soft links: > Do you back up the link or what the link points to? >

Re: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-22 Thread Kevin Korb
; hard links. It has been very interesting, but I don't understand > it very thoroughly. I haven't used hard links for anything yet. > I've used symlinks - not for backups, of course - and have seen > them get broken or deleted in backups. > > Is there a tutorial

RE: Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-21 Thread Tony Abernethy
backups and many of the files do not change, hard links are your friend. Backing up soft links: Do you back up the link or what the link points to? (Even that simple thing has interesting ways to get complicated.) -Original Message- From: rsync-boun...@lists.samba.org [mailto:rsync-boun
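
Tony's distinction between the two link types can be seen directly with coreutils; everything below runs in a scratch directory:

```shell
#!/bin/sh
# A hard link is a second directory entry for the same inode; a symlink
# is a small file that stores a path.
set -e
work=$(mktemp -d)
echo hello > "$work/orig"
ln "$work/orig" "$work/hard"    # same inode, same data
ln -s orig "$work/soft"         # stores the relative path "orig"

rm "$work/orig"                 # remove the original name
cat "$work/hard"                # data survives via the hard link
[ ! -e "$work/soft" ] && echo "symlink now dangles"
```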

Is there a howto/tutorial on backups/rsync that covers the use of hard and soft links?

2013-01-21 Thread Joe
There have been a lot of posts on the list lately about issues with hard links. It has been very interesting, but I don't understand it very thoroughly. I haven't used hard links for anything yet. I've used symlinks - not for backups, of course - and have seen them get broke

Re: Use rsync's checksums to deduplicate across backups

2011-11-20 Thread Dan Stromberg
On Sun, Nov 6, 2011 at 2:29 PM, Dan Stromberg wrote: > > http://stromberg.dnsalias.org/~strombrg/backshift/documentation/comparison/index.html > I've updated the above URL to include a comparison against Lessfs and git wrappers. The table has also become easier to navigate recently, due to usin

Re: Use rsync's checksums to deduplicate across backups

2011-11-19 Thread Dan Stromberg
On Sat, Nov 19, 2011 at 5:43 AM, Andrea Gelmini wrote: > 2011/11/3 Alex Waite : > >Recently I learned that rsync does a checksum of every file > > transferred. I thought it might be interesting to record the path and > > checksum of each file in a table. On future bac

Re: Use rsync's checksums to deduplicate across backups

2011-11-19 Thread Andrea Gelmini
2011/11/3 Alex Waite : >    Recently I learned that rsync does a checksum of every file > transferred.  I thought it might be interesting to record the path and > checksum of each file in a table.  On future backups, the checksum of I guess you can be interested in these projects:

Re: Use rsync's checksums to deduplicate across backups

2011-11-06 Thread Dan Stromberg
tware), and it does hardlink across all backups, but believe it > does its own checksum on top of what rsync already does. I imagine > this would make performance noticeably worse than what I currently > have, though I could be wrong. > An additional checksum (digest) shouldn't cha

Re: Use rsync's checksums to deduplicate across backups

2011-11-06 Thread Cameron Simpson
On 04Nov2011 10:27, Chris Dunlop wrote: | On Thu, Nov 03, 2011 at 09:34:53AM -0500, Alex Waite wrote: | >> Not a direct answer, but this may do what you want: | >> | >>  http://gitweb.samba.org/?p=rsync-patches.git;a=blob;f=link-by-hash.diff | >> | >>  This patch adds the --link-by-hash=DIR option

Re: Use rsync's checksums to deduplicate across backups

2011-11-03 Thread Chris Dunlop
On Thu, Nov 03, 2011 at 09:34:53AM -0500, Alex Waite wrote: >> Not a direct answer, but this may do what you want: >> >>  http://gitweb.samba.org/?p=rsync-patches.git;a=blob;f=link-by-hash.diff >> >>  This patch adds the --link-by-hash=DIR option, which hard >> links received >>  files in a link fa

Re: Use rsync's checksums to deduplicate across backups

2011-11-03 Thread Carlos Carvalho
Alex Waite (alexq...@gmail.com) wrote on 2 November 2011 20:09: >Recently I learned that rsync does a checksum of every file >transferred. I thought it might be interesting to record the path and >checksum of each file in a table. On future backups, the checksum of >a file

Re: Use rsync's checksums to deduplicate across backups

2011-11-03 Thread Alex Waite
> Not a direct answer, but this may do what you want: > >  http://gitweb.samba.org/?p=rsync-patches.git;a=blob;f=link-by-hash.diff > >  This patch adds the --link-by-hash=DIR option, which hard links received >  files in a link farm arranged by MD4 file hash.  The result is that the > system >  wi

Re: Use rsync's checksums to deduplicate across backups

2011-11-03 Thread Alex Waite
> > Check out http://backuppc.sourceforge.net/, it's a perl-based backup tool, > using rsync and doing exactly what you ask for. > I have looked at BackupPC before (and it is a nice piece of software), and it does hardlink across all backups, but I believe it does its own checks

Re: Use rsync's checksums to deduplicate across backups

2011-11-03 Thread Johannes Totz
file in a table. On future backups, the checksum of > a file being backed up could be looked up in the table. If there's a > matching checksum, a hard link will be created to the match instead of > storing a new copy. This means that the use of hard link won't be > limited to j

Re: Use rsync's checksums to deduplicate across backups

2011-11-02 Thread Chris Dunlop
it together. > Remote machines are backed up regularly using hardlinks across each > snapshot to reduce disk usage. > Recently I learned that rsync does a checksum of every file > transferred. I thought it might be interesting to record the path and > checksum of each file in

Use rsync's checksums to deduplicate across backups

2011-11-02 Thread Alex Waite
using hardlinks across each snapshot to reduce disk usage. Recently I learned that rsync does a checksum of every file transferred. I thought it might be interesting to record the path and checksum of each file in a table. On future backups, the checksum of a file being backed up could be
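
Alex's table-of-checksums idea can be sketched as a tiny "link farm": each unique content is stored once under its hash, and every backup path becomes a hard link to that single copy. The `store` function and directory names are invented for illustration; this is the same shape as the --link-by-hash patch discussed elsewhere in the thread.

```shell
#!/bin/sh
# Store each unique content once (named by hash); later files with the
# same hash become hard links to the stored copy.
set -e
work=$(mktemp -d)
farm="$work/farm"; mkdir -p "$farm" "$work/backup"

store() {    # store FILE at DEST, deduplicating by content hash
    hash=$(sha256sum "$1" | awk '{print $1}')
    [ -e "$farm/$hash" ] || cp "$1" "$farm/$hash"
    ln "$farm/$hash" "$2"
}

echo "same bytes" > "$work/a.txt"
echo "same bytes" > "$work/b.txt"
store "$work/a.txt" "$work/backup/a.txt"
store "$work/b.txt" "$work/backup/b.txt"

# farm copy + two backup entries share one inode: link count 3
links=$(stat -c %h "$work/backup/a.txt" 2>/dev/null || stat -f %l "$work/backup/a.txt")
echo "link count: $links"
```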

[Bug 8529] New: Extend --batch to a local cache for backups

2011-10-14 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=8529 Summary: Extend --batch to a local cache for backups Product: rsync Version: 2.6.9 Platform: All OS/Version: All Status: NEW Severity: enhancement Priority: P5

issues with batch mode for incremental backups

2010-04-13 Thread Andrew Pimlott
Hi! I use rsync batch mode for incremental backups. That is, I create an on-line backup with rsync, and use the --write-batch flag to additionally generate my delta, which I send off-site. To restore, I download a full backup and apply the deltas with --read-batch. This is quite a lovely setup

DO NOT REPLY [Bug 6996] syncing backups - autodetect older variants already existing on receiver

2009-12-21 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=6996 way...@samba.org changed: Status: NEW → RESOLVED; Resolution:

DO NOT REPLY [Bug 6996] New: syncing backups - autodetect older variants already existing on receiver

2009-12-17 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=6996 Summary: syncing backups - autodetect older variants already existing on receiver Product: rsync Version: 3.1.0 Platform: Other OS/Version: Linux Status: NEW

Re: Backups & Directory Timestamps Not Preserved

2009-08-04 Thread Michal Soltys
Backup options are concerned with files from what I can see after some tests. An extra directory (--backup-dir) is an addition to keep things more tidy. I guess the idea was to have some sort of protection against careless rsync invocation, etc. - not as a solution for incremental backups. I don'

Backups & Directory Timestamps Not Preserved

2009-08-02 Thread Robert Boucher
Hello, I've been testing out using rsync for nightly incremental backups through the '--backup' & '--backup-dir' options. So far, I've noticed two issues. 1) First, if an empty directory is removed from the source, rsync will remove it from the destination b

Re: Help creating incremental backups using --backup-dir.

2009-04-16 Thread Eric Bravick
- I use this method extensively and haven't had any issues with it. I currently back up over 100 MACs with various forms of NAS mounts, sparse files, ditto, asr, and rsync. I might have a couple failures a year running nightly backups, and honestly even when I've unmounted a sparse f

Re: Help creating incremental backups using --backup-dir.

2009-04-15 Thread henri
Hi David, I am also interested to know if anyone has found a file system which will store Mac OS X meta data. In the mean time, I would suggest that you back up to another Mac OS X machine with a pull backup strategy. On 10/04/2009, at 11:05 AM, David Miller wrote: Ok, I figured out th

Re: Help creating incremental backups using --backup-dir.

2009-04-14 Thread henri
. The following link contains documentation regarding pulling and pushing backups with LBackup : http://connect.homeunix.com/lbackup/network_backup The way LBackup handles push backups is via disk images (virtual file systems). This means that before the backup starts the virtual filesystem

Re: Help creating incremental backups using --backup-dir.

2009-04-09 Thread David Miller
Ok, I figured out the problem. I had to put in the full path for the --backup-dir option. However, I have run into another problem that makes doing this just about useless. If I rsync to an HFS+ volume it works correctly. If I rsync to a Samba share it gives me errors and puts files it th

Help creating incremental backups using --backup-dir.

2009-04-09 Thread David Miller
Normally I would use the --link-dest option to do this but I can't since I'm rsyncing from a Mac to a Samba share on a Linux box and hard links don't work. What I want to do is create a 10 day rotating incremental backup. I used the first script example on the rsync examples page as a templ
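
The rotating --backup-dir approach the poster is working from can be sketched as below; this is a hedged sketch rather than the poster's actual script, and all paths, the retention length, and the date-based naming are hypothetical. Note that --backup-dir wants a full path, which the follow-up message discovered the hard way.

```shell
#!/bin/sh
# Mirror SRC into DEST; files that change or disappear are moved into a
# dated increment directory instead of being lost.
SRC=/Users/me/                    # hypothetical source
DEST=/mnt/share/current/          # hypothetical mirror on the Samba share
INC=/mnt/share/increments         # must be an absolute path
rsync -a --delete \
    --backup --backup-dir="$INC/$(date +%Y-%m-%d)" \
    "$SRC" "$DEST"
# Keep a 10-day rotation by pruning older increment directories.
find "$INC" -mindepth 1 -maxdepth 1 -type d -mtime +10 -exec rm -rf {} +
```

Unlike --link-dest, this needs no hard-link support on the destination, which is why it suits a Samba share.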

Re: how to crypt hard linked backups?

2009-02-03 Thread Cyril
Hello, I gave rsyncrypto ( http://sourceforge.net/projects/rsyncrypto ) a try and it seems good. I currently use encfs to mount an encrypted folder to which rsync uploads through ssh. I recently found Areca Backup "Win32, Linux" ( http://sourceforge.net/projects/areca/ ) with a Java UI, that does in-b

how to crypt hard linked backups?

2009-02-03 Thread Michael Renner
Moin, I wrote a backup script that uses rsync and its hardlink features. Now I want to add a new feature to my script: encrypting the backed-up files. But I wonder how this can be done? Because rsync checks whether the source file is different from the target file. But the target file will be always

Re: OS/X Leopard Server and rsync backups

2008-06-20 Thread Michal Soltys
Few remarks: Greg Shenaut wrote: --delete-excluded: delete files no longer present & any excluded files Yes. You can achieve the same with 'H' or 'R' instead of '-', without having to specify --delete-excluded. 'H' is sender-only exclude, 'R' is receiver-only include. In your case:
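
The suggested per-side rules can be sketched as a filter-file fragment (the pattern below is hypothetical; the rule letters are the standard rsync filter prefixes):

```
# A plain '-' exclude hides the pattern on both sides, so removing it on
# the receiver additionally requires --delete-excluded. This pair has the
# same net effect for one pattern:
H /private/var/vm/   # hide on the sender: never transferred
R /private/var/vm/   # "risk" on the receiver: eligible for --delete
```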

OS/X Leopard Server and rsync backups

2008-06-20 Thread Greg Shenaut
er and let me know of any looming problems I haven't seen. I have one other question: there are several different kinds of databases in /private/var, including openldap, squirrelmail, and so on. It seems to me that one way to handle the problem of getting valid backups of a variety of different

Re: large backups taking longer with 3.0.2

2008-05-10 Thread Wayne Davison
On Fri, May 09, 2008 at 09:06:02AM -0400, Robert DuToit wrote: > I haven't compiled 3.0.3 pre1 yet but have been seeing considerably longer > backup times on OSX 5.2, using 3.0.2 over 3.0.1. There is nothing in the changes for 3.0.2 that would affect rsync's speed. Perhaps the patches you applied diffe

large backups taking longer with 3.0.2

2008-05-09 Thread Robert DuToit
Hi All, I haven't compiled 3.0.3 pre1 yet but have been seeing considerably longer backup times on OSX 5.2, using 3.0.2 over 3.0.1. Has anyone else noticed this? The progress doesn't seem to show any problems or differences from 3.0.1 either. Backing up a 100+GB internal drive, with 1 milli

Re: using rsync with scripts (cronjobs) and automated backups

2008-04-23 Thread lewis Eklund butler
"/backup/" --exclude-from=/var/.rexcludes \ --link-dest="${BAKLOC}.daily.1" / ${BAKLOC}.daily.0 /usr/bin/touch ${BAKLOC}.sday.0/.rsync_bak_complete echo "...done." This will create a daily backup for 7 days, a weekly backup for 4 weeks, and a monthly backup for 6

Re: using rsync with scripts (cronjobs) and automated backups

2008-04-22 Thread Hans-Juergen Beie
Peter Heiss wrote on 22.04.2008 at 14:59: Hello all, I am wondering if it would be possible to write a script or a cronjob in Linux using rsync to run an automated backup of a server, or several servers if possible. I am very new with writing scripts and such, so any help or suggestions with

using rsync with scripts (cronjobs) and automated backups

2008-04-22 Thread Peter Heiss
ahead of time for the help!!! - Computers are like air conditioners. They both don't work if you open windows. -- View this message in context: http://www.nabble.com/using-rsync-with-scripts-%28cronjobs%29-and-automated-backups-tp16824035p16824035.html Sent from the Samba - rsync mailing

Incremental backups too large

2008-03-05 Thread Robert DuToit
Hi All, Continued good results with rsync 3.0, but I have been noticing that incremental no-change backups of my Home folder (15 GB, ~50,000 files) have been using up on average about 500 MB of disk space. Thinking back a ways to rsync 3.0pre7, or earlier, each incremental took up very
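
Whether an incremental really shares unchanged files with the previous snapshot can be checked by comparing inode numbers across the two trees. A self-contained sketch of that check (hypothetical /tmp paths; GNU stat assumed):

```shell
#!/bin/sh
demo=/tmp/hardlink_demo
rm -rf "$demo" && mkdir -p "$demo/day1" "$demo/day2"
echo "unchanged data" > "$demo/day1/file"
# This is what --link-dest arranges for an unchanged file: a hard link,
# so the data occupies disk space only once across snapshots.
ln "$demo/day1/file" "$demo/day2/file"
# Same inode number in both snapshots means shared storage.
ino1=$(stat -c %i "$demo/day1/file")
ino2=$(stat -c %i "$demo/day2/file")
if [ "$ino1" = "$ino2" ]; then echo "shared"; else echo "duplicated"; fi
```

Running `du -sh day1 day2` on real snapshots gives the same answer in aggregate, since du counts each inode only once.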

Re: preserving Mac OS X metadata in rsync backups and restores

2008-01-21 Thread Robert DuToit
sults here for OSX and can't thank you all enough for putting rsync 3 together. I did some incremental speed tests with my 16GB Home folder from my old powerbook G4: from do shell script; destinations all to external firewire drive on OS 10.5.1 with patches: initial backup 54 minut
