> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
On Mon 29 Apr 2013, Kevin Korb wrote:
>
> Simply put, if you have something that is screwing with your file data
> without touching your time stamps then you have been infected with a
> rootkit. You should be thanking rsync for not backing up your
As he's running QNX, I don't expect he's infected.
>>>>>>> rsync -a -c -v --delete --timeout=250
>>>>>>> --contimeout=30 --temp-dir= srcfile dstfile
>>>>>>> rsync is running in daemon mode. OS - QNX, rsync version - 3.0.9
>>>>>>>
>>>>>>
>>>>> On trying to sync same files again sync happened properly.
>>>>>
>>>>> Does anybody have similar kind of issues?
>>>>>
>>>>> Please help me in this regard.
>>>>>
>>>>>
>>>>>
In-use databases can't be copied consistently with rsync or any other
file-based tool. The database must be inactive to be copied. You didn't
mention which kind of database you are using, but most have a function to
freeze the database for backups. I don't know if QNX h
I am using rsync in my project for copying database binary files over the
network from active module to standby module.
Once in a while, files are found to be corrupt on the standby module when
copying is done the first time. When I checked the active module
at that time, the files were proper.
rsync is comple
ing it for reading/viewing.
regards
roland
>List: rsync
>Subject: Re: file corruption
>From: joop g
>Date: 2013-03-09 10:33:54
>Message-ID: 3286855.GThHFMr6je () n2k6
>
>You said, the diff concerned just one byte, right?
No, they are just some random files.
It's really weird that rsync over ssh seems to catch the corruption,
with warning messages like "failed verification -- update discarded
(will try again)". However, rsync to a remote daemon just corrupts files
without any warning. Is it possible that I need to specify
You said, the diff concerned just one byte, right?
Were the corrupted files all Microsoft Office files? I have seen this behaviour
once, and then it turned out to be the originals that had been changed in the
meantime. It seems that Microsoft knows how to change a file without altering
the modification time.
Ouch. That sounds scary.
The first time I had such a problem I discovered it while trying to
burn a 600+MB avi file to a CDR. I burned a CDR and the md5sum of the
avi file on the disc didn't match. So I burned another one and it
didn't match either
> Date: Fri, 08 Mar 2013 22:26:24 -0500
> From: Kevin Korb
> If it were me, based on my previous experience, I would shut down both
> systems and run memtest86+ or "Windows Memory Diagnostics" on both
> systems. Make sure to enable the extended tests. Let them run
> overnight.
To my surprise, a few files were corrupted (only 3 out of 17K files).
"cmp -l" shows a single-byte difference between the original and the
rsync'd files. However, I didn't see any error message on the NAS side or
in the rsyncd logs. How is that possible? Doesn't rsync always do a
checksum verification after copying the files? I have been using rsync
for years for local backup. The feeling of silent file corruption is
scary. Could someone point me in the right direction? I really want to
get to the bottom of this. Much appreciated.

Regards, Xiaolong
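Rsync checks the transferred data against the sender's checksums, but it does not re-read destination files it already considers up to date. One hedged, coreutils-only way to double-check a finished copy is an MD5 manifest (the paths below are invented for illustration, and `cp` stands in for the rsync transfer):

```shell
# Build an MD5 manifest on the source, then verify it against the copy.
set -e
src=$(mktemp -d); dst=$(mktemp -d); manifest=$(mktemp)
printf 'hello\n' > "$src/a.txt"
printf 'world\n' > "$src/b.txt"
( cd "$src" && find . -type f -exec md5sum {} + ) > "$manifest"
cp -r "$src/." "$dst/"                           # stand-in for the rsync transfer
( cd "$dst" && md5sum -c --quiet "$manifest" )   # exits non-zero on any mismatch
echo "copy verified"
```

In the NAS case above, the verify step would run on the NAS side after the transfer completes.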
rsync definitely works on SANs with Solaris 9/10. I've used it that
way very intensively, both remotely to and from the SAN via ssh and
also local disks to SAN and SAN to local disks.
Are you sure something else isn't modifying your files in the meantime?
Run something like:
find /path/to/files
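The suggested find command is cut off above; one plausible completion (an assumption, not the original poster's exact command) lists files whose contents changed recently, to rule out a concurrent writer:

```shell
# List files modified in the last 60 minutes (GNU find/touch).
set -e
d=$(mktemp -d)
touch "$d/old.dat"
touch -d '2 hours ago' "$d/old.dat"    # backdate one file
touch "$d/fresh.dat"
find "$d" -type f -mmin -60            # prints only .../fresh.dat
```

Running this before and after the rsync would show whether anything touched the tree during the transfer window.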
I have run into a problem using 'rsync' to copy files from local disk
to a SAN mounted LUN / file-system.
The 'rsync' seems to run fine and reports no errors, but some
files are corrupted (checksums don't match the originals,
and file data is changed).
So far, I have found this problem on bot
esent). This is the method I like the best, but the code
>is in an early state that needs more work.
IIUC, all of those patches are designed to maintain a checksum cache and
update it when the mtime changes. I don't see an obvious way to use any
of them to detect file corruption (def
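This is not one of the actual patches, but the cache idea can be sketched with coreutils: record mtime plus checksum once, then flag any file whose checksum changed while its mtime did not, which is exactly the silent-corruption signature being discussed (paths and the corruption step are invented):

```shell
set -e
d=$(mktemp -d); cache="$d/cache.txt"
printf 'data\n' > "$d/f"
# Record the file's mtime and MD5 in the cache.
printf '%s %s\n' "$(stat -c %Y "$d/f")" "$(md5sum < "$d/f" | cut -d' ' -f1)" > "$cache"
# Simulate silent corruption: change the bytes, then restore the old mtime.
old_mtime=$(stat -c %Y "$d/f")
printf 'dXta\n' > "$d/f"
touch -d "@$old_mtime" "$d/f"
# Detection: same mtime as cached, different checksum => corruption.
read -r cached_mtime cached_md5 < "$cache"
if [ "$(stat -c %Y "$d/f")" = "$cached_mtime" ] && \
   [ "$(md5sum < "$d/f" | cut -d' ' -f1)" != "$cached_md5" ]; then
    echo "silent corruption detected in $d/f"
fi
```

A cache that is only updated when the mtime changes cannot make this distinction on its own, which is the point raised above.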
Thank you Wayne for pointing me to these patches. I've never noticed
them before.
On 24.05.2009 at 23:53, Wayne Davison wrote:
On Sun, May 24, 2009 at 06:32:40PM +0200, Christian Hecht wrote:
Such a tool I plan to write for Mac OS X. The first time it should
store checksums and mod times for all files to verify.
On Sun, 2009-05-24 at 22:13 -0400, Matt McCutchen wrote:
> On Mon, 2009-05-25 at 10:09 +0800, Daniel.Li wrote:
> > On Mon, 2009-05-25 at 09:58 +0800, Daniel.Li wrote:
> > > What if a video editor?
> > >
> > > Lots of work with video files, which is very large, about 500MB per
> > > file. Editor on
On Mon, 2009-05-25 at 10:09 +0800, Daniel.Li wrote:
> On Mon, 2009-05-25 at 09:58 +0800, Daniel.Li wrote:
> > What if a video editor?
> >
> > Lots of work with video files, which is very large, about 500MB per
> > file. Editor only delete or rearrange frames in that file.
> >
> > And then it will
On Mon, 2009-05-25 at 09:58 +0800, Daniel.Li wrote:
> On Sun, 2009-05-24 at 16:04 +0200, Mac User FR wrote:
> > Hard-linking an
> > unchanged dir takes very little space.
>
> What if a video editor?
>
> Lots of work with video files, which is very large, about 500MB per
> file. Editor only delete
On Sun, 2009-05-24 at 16:04 +0200, Mac User FR wrote:
> Hard-linking an
> unchanged dir takes very little space.
What about a video editor?
Lots of work with video files, which are very large, about 500MB per
file. The editor only deletes or rearranges frames in that file.
And then it will back up 500MB
On Sun, May 24, 2009 at 06:32:40PM +0200, Christian Hecht wrote:
> Such a tool i plan to write for Mac OS X. The first time it should
> store checksums and mod times for all files to verify.
There are various patches in the "patches" dir that deal with cached
checksums in different ways:
- check
On 24.05.2009 at 18:01, Daniel Carrera wrote:
Jamie Lokier wrote:
Daniel Carrera wrote:
But there is no way to distinguish between file corruption and a
legitimate change. All you can do is keep old backups for a few
days or weeks and hope that you detect the file corruption before
the
Daniel Carrera wrote:
> Jamie Lokier wrote:
> >Daniel Carrera wrote:
> >>But there is no way to distinguish between file corruption and a
> >>legitimate change. All you can do is keep old backups for a few days or
> >>weeks and hope that you detect t
Jamie Lokier wrote:
Daniel Carrera wrote:
But there is no way to distinguish between file corruption and a
legitimate change. All you can do is keep old backups for a few days or
weeks and hope that you detect the file corruption before the backup
rotation deletes all the good copies.
I
Daniel Carrera wrote:
> But there is no way to distinguish between file corruption and a
> legitimate change. All you can do is keep old backups for a few days or
> weeks and hope that you detect the file corruption before the backup
> rotation deletes all the good copies.
I
But there is no way to distinguish between file corruption and a
legitimate change. All you can do is keep old backups for a few days or
weeks and hope that you detect the file corruption before the backup
rotation deletes all the good copies.
Christian Hecht wrote:
This can minimize the risk, but if you don't actually need the corrupted
file, you can't detect that it is corrupted.
The corrupted file will be copied to a newer backup folder.
If you delete the old backups due to rotation, at some point the backup
is worthless because it only contains the corrupted copy.
A simple way to prevent this is to store the backups with a rotating
system and hard-linked files.
You can do it with rsync's --link-dest=DIR option and a post-exec script
that moves the backup dir to something like backup-200905241335.
In this way, if the file got corrupted it won't be hard-linked.
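A coreutils-only sketch of the mechanism --link-dest relies on (rsync does the linking itself; `ln` here just demonstrates why unchanged files cost no extra space and why a changed file leaves the old copy intact; directory names are invented):

```shell
set -e
d=$(mktemp -d)
mkdir "$d/backup-1" "$d/backup-2"
printf 'stable\n' > "$d/backup-1/file"
# Unchanged file: the new backup gets a hard link, not a second copy.
ln "$d/backup-1/file" "$d/backup-2/file"
stat -c %h "$d/backup-2/file"          # link count is 2: one inode, two names
# Changed (or corrupted) file: written as a NEW file in backup-2,
# leaving the good copy in backup-1 untouched.
rm "$d/backup-2/file"
printf 'corrupt\n' > "$d/backup-2/file"
cat "$d/backup-1/file"                 # still prints "stable"
```

This is why a corruption that arrives in one rotation does not retroactively damage the older snapshots.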
This will save you if your file is corrupted on the backup side. Rsync
will then copy it again from the source because the checksum is
different, and everything is okay.
But imagine the file is corrupted on the source side. Then rsync will
copy the corrupted file again, and if you delete olde
Matthias Schniedermeyer wrote:
Exactly.
But you can (periodically) add "-c"; then rsync will checksum the whole
content of all files.
Thanks. I'll add a -c for the Saturday backup.
But IF you have (or suspect) that type of corruption, you have an
even greater problem: your hardware is
On 22.05.2009 13:43, Daniel Carrera wrote:
> Hello,
>
> Suppose that every day cron runs this:
>
> rsync -a --times --delete $HOME /my/backups/dir/latest
>
>
> In general, rsync will only update a file if it has been modified. Now,
> imagine that one of the files becomes corrupted in the backup d
Hello Daniel
The default check is the time of last modification and the size, so if
your corruption also leaves the file size the same, the file will not be
included for update.
You can use the --checksum option, which also checks the file contents,
but this will significantly increase your disk I/O.
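This can be demonstrated without rsync: a corrupted copy with the same size and a restored mtime passes the size-plus-mtime comparison that rsync's default quick check uses, and only a content comparison catches it (a sketch with coreutils; file names are invented):

```shell
set -e
d=$(mktemp -d)
printf 'AAAA\n' > "$d/src"
cp -p "$d/src" "$d/dst"
# Corrupt the copy in place: same length, then restore the source's mtime.
printf 'AABA\n' > "$d/dst"
touch -r "$d/src" "$d/dst"
# Default-style quick check: size and mtime match, so the file looks clean.
[ "$(stat -c '%s %Y' "$d/src")" = "$(stat -c '%s %Y' "$d/dst")" ] \
    && echo "quick check: looks identical"
# --checksum-style check: the contents differ.
cmp -s "$d/src" "$d/dst" || echo "content check: corruption found"
```

The trade-off is exactly the one noted above: the content comparison has to read every byte on both sides.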
Ok, thanks. Do you have any idea if a corruption that leaves the file
size intact is common or rare? What I could do is add the --checksum
option only once a week:
if [ `date +%a` = "Sat" ]; then
OPT='-a --numeric-ids --delete --times --checksum'
else
OPT='-a --numeric-ids --delete --times'
fi
On Fri 22 May 2009, Daniel Carrera wrote:
>
> In general, rsync will only update a file if it has been modified. Now,
> imagine that one of the files becomes corrupted in the backup directory,
> but the timestamp hasn't changed. Will rsync detect this?
Not in the usual case.
You may want to
Hello,
Suppose that every day cron runs this:
rsync -a --times --delete $HOME /my/backups/dir/latest
In general, rsync will only update a file if it has been modified. Now,
imagine that one of the files becomes corrupted in the backup directory,
but the timestamp hasn't changed. Will rsync
eed the backup (while in the process of destroying the backup you need ;)
-----Original Message-----
From: rsync-bounces+tony=[EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of darrin hodges
Sent: Friday, May 12, 2006 12:06 AM
To: rsync@lists.samba
Hi,
We are using Rsync version 2.6.8 protocol version 29 on a WinNT box to
back up a Linux (RedHat 9.0) box (same version of rsync), and every night
a different file on the NT server is reported as being corrupt; there are
no errors in the rsync logs on either side. The NT event log records:
Event Type:
Hi,
I encountered a weird file corruption problem with rsync.
I have a perl script that generates and writes a data file to disk, then
rsyncs the file to a remote machine. A perl script running on the remote
machine periodically reads in the data file.
However, occasionally the remote script
> but even this would not do anything to avoid sending a partially-written file
> that was not yet
> complete (if the data remained unchanged while rsync was reading it).
Good point.
That was what I was thinking. A post-transfer stat() (md5, crc, etc.)
on the source file to help those who are
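A sketch of that idea (paths are invented and `cp` stands in for the transfer): checksum the source before and after the copy, and resend if the file changed underneath:

```shell
set -e
d=$(mktemp -d)
printf 'redo log data\n' > "$d/src"
before=$(md5sum < "$d/src" | cut -d' ' -f1)
cp "$d/src" "$d/copy"                  # stand-in for the rsync transfer
after=$(md5sum < "$d/src" | cut -d' ' -f1)
if [ "$before" = "$after" ]; then
    echo "source stayed stable during the transfer"
else
    echo "source changed mid-transfer: resend the file"
fi
```

This only detects a writer that was active during the transfer; it cannot tell whether the file was logically complete, which is the Oracle-side question raised in this thread.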
On Mon, Sep 12, 2005 at 09:36:27AM -0600, Kevin Stussman wrote:
> if rsync does do a checksum after the file has been transferred, but
> the original file has been changed during transfer, wouldn't the final
> checksum then fail?
No, as you suspected, the checksum is for the data that rsync read f
We have DB consultants here that must have missed this piece
of info re: "inserted after the online redo log is successfully
archived". I'm not an Oracle expert; all I know is that rsync
sees a file that needs transferring (completed or not).
The bit about rsync behavior was just a suggestion for a
Stefan Nehlsen wrote:
On Mon, Sep 12, 2005 at 09:36:27AM -0600, Kevin Stussman wrote:
rsync will have a second try if this happens and I think it will warn.
This seems like a waste of resources to me. Why not query V$ARCHIVE_LOG?
From the manual:
This view displays archived log information
On Mon 12 Sep 2005, Stefan Nehlsen wrote:
> >
> > I'm going to have a look at that now :-/
>
> FIRST: I do not know if the corruption where really caused by rsync!
>
> I had made a copy of the corrupt tree and use it know to find out
> what kind of corruption ocurred.
>
> I knew that there wher
On Mon, 12 Sep 2005 08:41:17 -0700, Stefan Nehlsen <[EMAIL PROTECTED]> wrote:
-3126060 093f b647 71af 8d62 1159 fbd0 3e30 e36b
+3126060 093f b647 71af 9d62 1159 fbd0 3e30 e36b
Is it always the same bit position (0x1000 here)?
If so, might be faulty RAM...
You probably already know about memtest86+.
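`cmp -l` prints the offset and the two differing byte values in octal; XORing them shows whether a single bit flipped, as in the 8d62/9d62 dump above (file names here are invented, and the files are reduced to the one differing byte):

```shell
set -e
d=$(mktemp -d)
printf '\215' > "$d/good"     # byte 0x8d
printf '\235' > "$d/bad"      # byte 0x9d
# Each output line of `cmp -l` is: offset octal_byte_1 octal_byte_2
cmp -l "$d/good" "$d/bad" | while read -r off a b; do
    x=$(( 0$a ^ 0$b ))                 # leading 0 makes the shell parse octal
    printf 'offset %s: xor=0x%02x\n' "$off" "$x"   # 0x10 here: one flipped bit
done
```

An XOR with exactly one bit set, recurring at the same bit position across files, is the classic signature of a bad RAM cell.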
On Mon, Sep 12, 2005 at 09:36:27AM -0600, Kevin Stussman wrote:
> Thanks for the explanation.
>
> Since I have no way to know if Oracle has completed the file, I solved
> the problem by making the transfer process smarter and only deleting the
> file once it has been successfully loaded into the secondary.
On Mon, Sep 12, 2005 at 03:04:50PM +0200, Stefan Nehlsen wrote:
>
> I'm going to have a look at that now :-/
FIRST: I do not know if the corruption was really caused by rsync!
I had made a copy of the corrupt tree and am using it now to find out
what kind of corruption occurred.
I knew that there
Thanks for the explanation.
Since I have no way to know if Oracle has completed the file, I solved
the problem by making the transfer process smarter and only deleting the
file once it has been successfully loaded into the secondary. This way
if the file was corrupted (i.e. sent before completed)
On Fri, Sep 09, 2005 at 12:06:50PM -0600, Kevin Stussman wrote:
> We are using rsync to transfer Oracle redo logs from one system to
> another over a WAN/VPN. The problem we are having is that 1 out of about
> 500 or so files sent is corrupted. The receiving Oracle server produces
> a message like
On Fri, Sep 09, 2005 at 12:06:50PM -0600, Kevin Stussman wrote:
> - Is there way to ensure that rsync checks the integrity of the
> transferred file when it is complete?
Rsync validates that all the data it writes out matches the checksum of
the data on the sending side. The only known way to cr
We are using rsync to transfer Oracle redo logs from one system to
another over a WAN/VPN. The problem we are having is that 1 out of about
500 or so files sent is corrupted. The receiving Oracle server produces
a message like this:
---
Specify log: {=suggested | filename | AUTO | CANCEL}
ORA-0028
When using the -z and -B65536 options together, sometimes there is file
corruption. Client and server are both compiled against zlib 1.1.4, so
the gzip corruption shouldn't be there, right?
With -z and -B32768 there is an error, but it is detected in a different
way.
With -B65536 and wi