poke at the overall logic and ended up pondering the same fix it sounds
like you’re settling on, but I wasn’t confident of my analysis so I went for
the shortest patch that I was sure impacted only my use case.
Thor
From: Wayne Davison
Sent: Tuesday, September 20, 2022 1:11 AM
To: Thor Simon
When running in daemon mode with a module rooted at "/", it is not possible to
"escape" the module.
Not by prefixing a link target with "../../../../../../..".
Not by prefixing a link target with "/" nor "".
So it seems to me that path sanitization is not useful in this case. And it breaks
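A minimal sketch of the kind of rsyncd.conf module under discussion, with a hypothetical module name:

  [root]
      path = /
      read only = yes

With path = /, every destination path already resolves inside the module, so a leading "/" or a pile of "../" in a link target cannot point outside it.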
> I am looking for a solution to display overall rsync progress on an LCD display
> as a bargraph.
> I have found 2 parameters:
>
> --progress
> This option tells rsync to print information showing the
> progress of the transfer. This gives a bored user something to watch.
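A minimal sketch of one way to get whole-transfer progress, assuming rsync 3.1.0 or later and hypothetical paths:

  rsync -a --info=progress2 /source/ /destination/

--info=progress2 prints a single line with the overall percentage, bytes transferred and rate, which is easier to turn into a bar graph than the default per-file output.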
Ed Peschko wrote:
> As it stands right now, we use xz for our compression, so if rsync had
> a similar option for xz that would probably be an improvement.
Have xz as an option for what ?
As others have already pointed out, rsync works with files on filesystems - it
does not work with files emb
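As an aside, a hedged sketch: rsync 3.2.x lets you choose the on-the-wire compression with --compress-choice (zstd, lz4, zlib), though xz is not one of the choices; the host and paths here are hypothetical:

  rsync -a --compress --compress-choice=zstd /src/ user@remote:/dst/

That only compresses data in transit; the files arrive uncompressed, which is a different thing from storing them as .xz.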
Fabian Cenedese wrote:
> I'm having problems with the user rights. The backup works
> ok but after restoring some files I can't access them.
That's to be expected.
NTFS has a very rich permissions system and rsync won't be capturing that.
While it's a PITA, your best bet is resetting permissions
Fabian Cenedese wrote:
>> rsync: write failed on
>> "/Volumes/durack1ml_bak/160405_1234/Backups.backupdb/durack1ml/2016-02-10-091749/durack1ml_hdd/Applications/Adobe
>> Media Encoder CC 2015/Adobe Media Encoder CC
>> 2015.app/Contents/Resources/pdfl/CMaps/ETen-B5-UCS2": Result too large (34)
Dennis Steinkamp wrote:
> i tried to create a simple rsync script that should create daily backups from
> a ZFS storage and put them into a timestamp folder.
> After creating the initial full backup, the following backups should only
> contain "new data" and the rest will be referenced via har
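A minimal sketch of that hard-link scheme, with hypothetical paths and dates:

  rsync -a --link-dest=/backups/2016-06-01/ /tank/data/ /backups/2016-06-02/

Files unchanged since the previous run are hard-linked to the 2016-06-01 copy, so only new or changed data takes extra space.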
On 9 Jun 2016, at 11:35, Arnaud Aujon Chevallier wrote:
> I ran some more tests and they show that the lstat calls are only responsible
> for 3.7 % of the total time.
>
> So we could avoid about a third of them (the error numbers), which would be
> about 1%, not very interesting :)
> % time
per...@pluto.rain.com (Perry Hutchison) wrote:
> Best choice for magtape is probably something like tar, cpio, or pax
> (for a file-oriented backup), or the appropriate variant of dump(8)
> (to back up an entire filesystem -- but not all FS formats have a
> dump/restore suite available).
I wouldn
"McDowell, Blake" wrote:
> The storage is just a regular HDD in a Mac Pro tower.
Ah, is this the version of rsync that comes with OS X ? Are these HFS+
filesystems ?
I vaguely recall that the OS X version is "hacked" to handle the file semantics
of HFS+ filesystems. Hopefully someone else ac
Fabian Cenedese wrote:
> This script is bash and also uses the "remote shell hacks" using SSH.
> As I want to run it also from Windows I'm looking for a rsync solution.
Assuming you have control of the server, can you do a bit of semaphore ?
Eg, do your backup with rsync, then when it's complet
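A minimal sketch of that kind of semaphore, assuming a push to an rsync daemon and hypothetical module/paths:

  rsync -a /data/ backupserver::backups/data/ \
    && touch /tmp/backup-complete \
    && rsync /tmp/backup-complete backupserver::backups/data/

Whatever watches the server side only acts once the backup-complete file appears, and that file is only sent after the main transfer has succeeded.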
Fabian Cenedese wrote:
> Are there pure rsync ways to solve these two problems?
Short answer - no I don't think there is.
My feeling is that rsync (at least, rsync on its own) isn't the right tool for
the job.
One thing I would comment on though is that, IMO, making backup policies under
the
dbonde+forum+rsync.lists.samba@gmail.com wrote:
> NAS <--ethernet--> computer <--FW800--> external disk
>
> The NAS can handle external USB drives and the external disk has USB2 so I
> could set it up like this
>
> external disk <--USB2--> NAS <--ethernet--> computer
>
> but I assumed it m
Robert DuToit wrote:
> Mike Bombich has a good piece on benchmarks for various source/destination
> scenarios with rsync.
>
> https://bombich.com/kb/ccc3/how-long-should-clone-or-backup-take
I hadn't seen that link, thanks.
There's an interesting anomaly in the first chart. Not surprisingly,
Simon Hobson wrote:
>> The other option is
>>
>> HD <--FW800--> Computer <--USB2 or Ethernet 1000Mbit --> NAS
>
> If you use a network connection then you've still got that network layer.
Just thinking a bit more about that ...
Is your normal setup
dbonde+forum+rsync.lists.samba@gmail.com wrote:
> Thank you. I will try your suggestions. First I will connect the NAS
Ah, you didn't mention NAS ! How is it connected to the computer hosting "A" ?
If via network then you've added *another* layer.
> directly to the computer (Do you recommen
rs.
As a quick test, I've just created a 100M sparse image, here's the contents
before I've added any files :
> $ ls -lRh a.sparsebundle/
> total 16
> -rw-r--r-- 1 simon staff 496B 25 Jan 14:36 Info.bckup
> -rw-r--r-- 1 simon staff 496B 25 Jan 14:36 Info.plist
>
Michael Havens wrote:
> why does deleting a file move it to .Trash but not make the space available
> for reuse?
Because, as already said, as far as the filesystem is concerned it's still a
file taking up space.
Longer answer: I guess you are using a "desktop" interface, click the file, and
Dear all,
Can someone let me know if it is possible to setup an Rsync server with domain
authenticated users e.g. Active Directory users rather than having
username:passwords in plain text?
Thanks,
Simon
The University of Dundee is a registered Scottish Charity, No: SC015096
some_dir_mounted_via_nfs
>
>regards
>roland
>
>
>> Sent: Tuesday, 21 July 2015 at 10:48
>> From: "Simon Wong (Staff)"
>> To: "rsync@lists.samba.org"
>> Subject: Rsync differences using NFS & SMB
>>
>> Hi,
>>
>> I
Hi,
I’m having difficulties trying to understand the performance differences
between NFS and SMB. I have used rsync (OS X) over SMB (mounted network
storage) and rsync (OS X) over SSH (NFS-mounted storage).
From my tests, rsync over SMB builds a file list each time, comparing
modified source
Cal Sawyer wrote:
2 lines with a whole load of quoted text.
Please bottom post, and when replying to a digest message - as a very minimum
reset the subject correctly and trim *ALL* unnecessary text.
> This sounds like a job for Relax and Recover:
>
> http://relax-and-recover.org/
Not one I'
Thierry Granier wrote:
> the "backup" is created on the source machine
> i don't see how to get this backup on the destination machine and how to boot
> on this machine (for this backup)
By specifying "user@address:path" you are telling rsync to copy the files to a
remote machine - that's how
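A minimal sketch of that form, using a documentation address and hypothetical paths:

  rsync -a /home/ root@192.0.2.10:/backups/machine-a/home/

Run this on machine A; rsync goes over ssh to the address given, so the copy ends up on the remote disk rather than in a local backup directory.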
Thierry Granier wrote:
> i have a machine A with 2 disks, 1 and 2, running Debian Jessie
> on 1 is the system and the boot and the swap
> on 2 are different partitions like /home, /opt, etc.
>
> i have a machine B with 1 disk running kali-linux and 100G free
>
> Can i clone disk 1 of machine A o
Andrew Gideon wrote:
>> btrfs has support for this: you make a backup, then create a btrfs
>> snapshot of the filesystem (or directory), then the next time you make a
>> new backup with rsync, use --inplace so that just changed parts of the
>> file are written to the same blocks and btrfs will ta
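A minimal sketch of that flow, with hypothetical paths (the destination has to be a btrfs subvolume):

  rsync -a --inplace --no-whole-file /src/ /backups/current/
  btrfs subvolume snapshot -r /backups/current /backups/snap-2016-06-02

--inplace rewrites only the changed blocks inside the existing destination files, and the read-only snapshot keeps the previous state via copy-on-write instead of a whole extra copy.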
Ken Chase wrote:
> And what's performance like? I've heard lots of COW systems performance
> drops through the floor when there's many snapshots.
For BTRFS I'd suspect the performance penalty to be fairly small. Snapshots can
be done in different ways, and the way BTRFS and (I think) ZFS do it
Andrew Gideon wrote:
> However, you've made me a little
> apprehensive about storebackup. I like the lack of a need for a "restore
> tool". This permits all the standard UNIX tools to be applied to
> whatever I might want to do over the backup, which is often *very*
> convenient.
Well if y
Andrew Gideon wrote:
> These both bring me to the idea of using some file system auditing
> mechanism to drive - perhaps with an --include-from or --files-from -
> what rsync moves.
>
> Where I get stuck is that I cannot envision how I can provide rsync with
> a limited list of files to move
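A minimal sketch of feeding rsync such a list, with hypothetical paths; the list contains paths relative to the source argument:

  rsync -a --files-from=/var/tmp/changed.list /srv/data/ backuphost:/backups/data/

Only the listed files (plus the directories needed to hold them) are examined and sent, so the full-tree scan is skipped.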
Ken Chase wrote:
> You have NO IDEA how long it takes to scan 100M files
> on a 7200 rpm disk.
Actually I do have some idea !
> Additionally, I don't know if linux (or freebsd or any unix) can be told to
> cache
> metadata more aggressively than data
That had gone through my mind - how much RA
> The goal was not to reduce storage, it was to reduce work. A full
> rsync takes more than the whole night, and the destination server is
> almost unusable for anything else when it is doing its rsyncs. I
> am sorry if this was unclear. I just want to give rsync a hint that
> comparing files an
john espiro wrote:
> The remote location is rather remote, so that wouldn't work in this
> particular case.
I don't follow, weren't you planning to take the USB device to the remote
location to copy some of the files ? If you are doing that, then it doesn't
really matter if you have extra fil
john espiro wrote:
> I have a local directory that I am trying to sync with a remote directory.
> That's fine, but there's a lot of data that is out of sync so I decided to
> make a local copy of the difference to then bring to the remote location.
>
>
> So I ran a dry-run between the two to
Kevin Korb wrote:
> No, rsync would store the files with the numeric UID and GID as is on
> the source without regard to the existence or non-existence of a
> matching account. This could mean that ls would show numbers in those
> columns, or incorrect names in those columns, or (perhaps by
> co
Michael wrote:
> I would like to know how rsync would manage the situation where the UID and
> GID(s), of a file being copied to a Remote system, have been reserved on the
> Remote system. Does rsync search for and allocate the next available IDs?
AFAIK, if you don't specify the "numeric-ids" op
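For reference, a hedged sketch of the two behaviours, with hypothetical paths:

  rsync -a /srv/data/ root@remote:/srv/data/                 # default: map users/groups by name where a match exists
  rsync -a --numeric-ids /srv/data/ root@remote:/srv/data/   # keep the raw UID/GID numbers from the source

In neither case does rsync create accounts or allocate new IDs; a name with no match on the receiver simply falls back to the sender's numeric value.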
As an aside to this, part of the problem I've been having is the transfer
timing out/getting interrupted during a particular large file (1G, new file,
2-3 hours if it works).
So I've been experimenting with --partial and --partial-dir=.rsync-partial
which weren't working. It appears to work at
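For anyone following along, a minimal sketch of the combination being tested, with hypothetical paths and timeout:

  rsync -a --partial --partial-dir=.rsync-partial --timeout=600 /src/bigfile.iso user@remote:/dst/

--partial-dir implies --partial anyway; an interrupted transfer leaves the partial file under .rsync-partial on the receiving side and the next run picks it up as a basis.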
Lorenz Weber wrote:
> rsync -avH ${all_gubbins} / user@remote.machine:/dest/ && ssh
> user@remote.machine touch /etc/donefile
No SSH access between them, only rsync. Besides, it would add the overhead of
managing ssh access (users and keys) as well as Rsync.
Michael Johnson - MJ wrote:
> rsync -av /src/ /dst/ && touch /dst/done
Ah, knew I'd miss some detail.
All the syncs are pushed to the backup server.
But that does give me an idea. I guess I could do that on the source, then sync
the flag file over.
rsync -avH ${other_gubbins} / user@rem
As part of my backup system, I use Rsync to keep a copy of each server on one
central backup server. This backup server then uses StoreBackup to keep
multiple iterations of each clone directory.
So that the StoreBackup archives don't keep adding "redundant" and misleading
backups, I update a fla
side.
Please note that -xx does exactly what it says, but -x does not!
What did I miss?
Thanks,
Simon
>
> On 09/06/11 18:27, Simon Matter wrote:
>>>> https://bugzilla.samba.org/show
is the latest stable.
Anyone care to check this out on his own box?
Thanks,
Simon
m :)
Thanks,
Simon
thanks for CC'ing me.
Thanks,
Simon
> On modern hardware I see 1000's of files per second when scanning for
> changed files.
>
> On Jun 6, 2011, at 12:39 PM, Steven Levine wrote:
> > In ...e.de, on 06/06/11 at 12:04 PM,
Hi Paul,
Thank you for your reply!
Hm... I'm using 3.0.3 at the Dest-Server, but now I saw that the Source-Server
has 2.6.9.
Do I have to enable incremental recursion, and from which version on is it
supported?
Cliff Simon
> -Original Message-
> From: Paul Sl
--exclude=/some/pathes/ --rsh=/usr/bin/ssh
--link-dest=/dest.path/daily.1/ root@192.x.x.x:/path.to.backup/
Do you have any ideas on how to reduce the backup time?
Btw: the bwlimit should not be the problem, because generating the file list is
what takes most of the time.
Thank you very much!
Cliff Simon
You could just cat it?
On 28 Aug 2009, at 03:57, Mag Gam wrote:
Is it possible to stream the content of a file using rsync to stdout
instead of placing it into a file?
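A minimal sketch of the "just cat it" idea, assuming ssh access to the remote host and hypothetical paths:

  ssh user@remote.host cat /path/to/file | some_consumer

rsync wants a file or directory as its destination, so streaming content to stdout is really a job for the remote shell.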
Try rsync -av *.txt u...@remote.machine:/path/to/where/you/want/it/to/go/
(assuming you only want to rsync the txt files from the current
working directory on the A side - else put the full path in with a
trailing slash).
Quoting e-letter :
Readers,
I have tried the following command:
rsy
Yeah - this is from a Mac OS X server to a Linux box. It just sees
the / and then stops as it expects a directory and sees a file.
On 23 Jul 2009, at 21:46, Paul Slootman wrote:
On Thu 23 Jul 2009, monkeymoped wrote:
Hi there - I am trying to do a site to site nightly rsync between
two bo
Matt McCutchen wrote:
On Mon, 2009-03-02 at 15:46 +, Simon Brown wrote:
My scenario: I am trying to rsync from an external HFS+ (USB2) to a
FAT32 external NAS drive, using rsync version 2.6.9 protocol version 29
(as supplied with Mac OS X Tiger 10.4.11). The data being synced is MP3
Hi. This is my first post to this list. I have searched the archives but
cannot find anything that touches on this particular issue.
My scenario: I am trying to rsync from an external HFS+ (USB2) to a
FAT32 external NAS drive, using rsync version 2.6.9 protocol version 29
(as supplied with Ma
rsync version 2.6.0 protocol version 27
what am I doing wrong?
thanks
Simon
Just in case anyone is interested, I've just got confirmation from Oracle
that it is valid to use OCFS without O_DIRECT for reads, it's only writes
that are an issue.
So it is perfectly valid to use rsync to clone an OCFS Oracle archive log
area to another server on standard filesyste
Guys, posted this last week and had no response so far. Just posting
again in case anyone missed it. I really could do with knowing as it's
delaying the rollout of a new project I'm working on.
Thanks, Simon
Hi
We currently use rsync for various jobs at our company. We are now
t for this purpose?
Thanks
Simon
version 2.6.5 on Darwin Kernel Version 8.1.0 (OS X Tiger)
Thanks,
Simon.
===
rm -rf "/Volumes/Rotating backup/simonallfrey.6"
mv -f "/Volumes/Rotating backup/simonallfrey.5" "/Volumes/Rotating
backup/s
ld be deleted, so:
* (after incremental) update secondary to master with --delete and
--files-from based on secondary accumulation
Thanks for any help,
Simon
Since the hosts are production mail servers, I can't easily test different
libc6 versions, so I hope the stack trace is sufficient to locate the error.
I'm running debian sid and debian sarge... libc6 2.3.2.ds1-12.
--
Baptiste SIMON
aka BeTa
GNU/Linux & Unix systems administrator
--
Baptiste SIMON
aka BeTa
GNU/Linux & Unix / IPv6 systems administrator
http://www.e-glop.net/
two sources, but it passes with only one :c/
any idea or solution (just for "production" time, I'm doing my rsync at
two different times) ?
--
Baptiste SIMON
aka BeTa
GNU/Linux & Unix / IPv6 systems administrator
http://www.e-glop.net/
Dear all,
I would like to propose rsync to our customer.
Do you have any reference cases from other companies that
are using rsync on their production servers?
regards,
Simon
nho@ozzy:~$ rsync --version
rsync version 2.4.6 protocol version 24
Written by Andrew Tridgell and Paul Mackerras
nho@ozzy:~$
Thanks for a fantastic piece of software,
Imre Simon
mnt/ad3s1e
So the addition of the extra '-v' is what kills it for me,
though this works in 2.3.2.
I hope this is useful info for the developers.
Thanks to Dave for -W and to Eric for suggesting I test
with 2.3.2.
regards
Simon Lai
auses
---
make_file(4,protected/home/simon/ccc/ccc-sql/lcrt.sql)
make_file(4,protected/home/simon/ccc/ccc-sql/schedule.sql)
make_file(4,protected/home/simon/ccc/ccc-sql/lcr.sql)
make_file(4,protected/home/simon/ccc/ccc-sql/groups.sql)
make_file(4,protected/home/simon/ccc/db_acc