https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #28 from Dave Gordon ---
(In reply to Carson Gaspar from comment #27)
Hmm? If you're referring to line 810 of io.c, which is the only write(2) call I
can see in perform_io(), in the current HEAD it looks like this:
810 if ((n = write
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #27 from Carson Gaspar ---
(In reply to Dave Gordon from comment #23)
Reading this, I took a look at the rsync sources, and, indeed, rsync has a bug.
perform_io() does not correctly check the return code from write().
safe_write() does
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #26 from Rui DeSousa ---
(In reply to Ben RUBSON from comment #25)
That is awesome. Thank you for all of your efforts!
--
You are receiving this mail because:
You are the QA Contact for the bug.
--
Please use reply-all for most replies
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #25 from Ben RUBSON ---
And here is the patch applied to FreeBSD stable/10:
https://reviews.freebsd.org/rS326428
And to FreeBSD stable/11:
https://reviews.freebsd.org/rS326427
Will then be in FreeBSD 11.3.
Note that this patch depen
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #24 from Ben RUBSON ---
ZFS only shares between files with dedup on.
So the first rsync / diff succeeded, the second gave the same result as before:
# rsync -a --inplace $tmpfs/f1 $f/f3 ; echo $? ; diff $tmpfs/f1 $f/f3
0
Files /mnt/f1 and /test/f3
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #23 from Dave Gordon ---
Looks like this ZFS problem could be a FreeBSD-specific issue; one of the
commits mentioned in this FreeNAS bug report has the subject
zfs_write: fix problem with writes appearing to succeed when over quota
S
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #22 from Dave Gordon ---
(In reply to Ben RUBSON from comment #19)
Just to be totally certain about what ZFS may or may not share between files,
could you try this variant of your testcase:
# zfs destroy $z
# zfs create $z
# zfs set c
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #21 from Dave Gordon ---
Created attachment 14019
--> https://bugzilla.samba.org/attachment.cgi?id=14019&action=edit
Test patch to see whether fdatasync() or fsync() detects a write failure
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #20 from Dave Gordon ---
So, looking like a ZFS issue but triggered in some way by the specific
behaviour of rsync (e.g. writing a certain block size/pattern causes the quota
error to be lost). The truss listing from a failing case shou
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #19 from Ben RUBSON ---
I managed to reproduce the issue on 11.0-RELEASE-p16.
Below is a simple test case, without compression, without deduplication.
Note that the issue is reproducible with quota, but not with userquota.
# f=/test
# z=zroo
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #18 from Rui DeSousa ---
I also wrote a little util; I get the correct error at the write call.
[postgres@hades ~]$ cat 0001005E0017 | ./fwrite/fwrite
arch/0001005E0017
fwrite: write: Disc quota exceeded
[po
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #17 from Rui DeSousa ---
(In reply to Dave Gordon from comment #14)
Here's the output you requested. ZFS would not share the same block even for
identical data, as I don't have dedup enabled.
[postgres@hades ~]$ ls arch/
dbc1
[postgres@had
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #16 from Rui DeSousa ---
(In reply to Ben RUBSON from comment #15)
I just set the quota property.
NAME                      PROPERTY  VALUE  SOURCE
hydra/home/postgres/arch  quota     1G     local
hydra/home/postgres/a
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #15 from Ben RUBSON ---
Rui, just to be sure, which type of ZFS quota are you using?
quota? refquota? userquota?
h is compressed to
19M on disk. ZFS uncompresses it on the fly and delivers 64M of data to the
first rsync. rsync sequentially writes 64M, checking the success of each write.
The last write should end at an offset of 64M, then the destination file is
closed (and the return from that is checked). Th
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #13 from Rui DeSousa ---
(In reply to Rui DeSousa from comment #12)
Running truss on the --sparse option does show the error being returned
during the write call.
[postgres@hades ~]$ truss -f -o sparse.log rsync -av --sparse
0
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #12 from Rui DeSousa ---
(In reply to Dave Gordon from comment #10)
The sparse option errors out :).
[postgres@hades ~]$ rsync -av --sparse 0001005E0017
arch/0001005E0017
sending incremental file list
0001
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #11 from Rui DeSousa ---
(In reply to Dave Gordon from comment #7)
This is the result of a hard link on the temp file where the rename failed.
root@hades:~postgres/arch # ls -lh rsync.temp ; du -h rsync.temp
-rw--- 1 postgres pos
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #10 from Dave Gordon ---
BTW, have you tried *either* --sparse *or* --preallocate (but not both
together, please, as that will trigger bug 13320 -
https://bugzilla.samba.org/show_bug.cgi?id=13320)
Do you get the same problem (i.e. fi
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #9 from Dave Gordon ---
(In reply to Rui DeSousa from comment #6)
In your example:
$ rsync -av --inplace 0001005E0017 arch/0001005E0017
sending incremental file list
0001005E0017
sent 67,125,370 byt
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #8 from Ben RUBSON ---
(In reply to Dave Gordon from comment #3)
> ZFS probably notices the quota problem somewhere between (b) and (c), drops
> the excess data, and returns EDQUOT to the close(2) call.
(In reply to Dave Gordon from c
On Mon, Mar 5, 2018 at 3:09 PM, just subscribed for rsync-qa from
bugzilla via rsync wrote:
> https://bugzilla.samba.org/show_bug.cgi?id=13317
>
> --- Comment #6 from Rui DeSousa ---
> (In reply to Rui DeSousa from comment #5)
>
> It looks like no error is returned and result is a sparse file. I
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #7 from Dave Gordon ---
(In reply to Rui DeSousa from comment #5)
That was a run where the rename failed. Do you know whether the temporary file
was truncated or corrupted in that scenario?
[HINT: one can stop the rsync with a signal,
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #6 from Rui DeSousa ---
(In reply to Rui DeSousa from comment #5)
It looks like no error is returned and the result is a sparse file. I think a
sync() would be required; otherwise the file is truncated on close to meet the
quota.
[postgr
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #5 from Rui DeSousa ---
(In reply to Dave Gordon from comment #4)
Hi Dave,
I'm not seeing any errors on the write calls. Would an fsync() be required to
force the error?
[postgres@hades ~]$ grep ERR rsync.test.log
52419: lstat("/usr
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #4 from Dave Gordon ---
To see whether rsync is getting any errors reported by any system calls it
makes, one could run it under strace(1) on Linux, or dtrace on Solaris.
Presumably FreeBSD has at least one of these, or something simila
UOT to the close(2) call.
The success of the rename depends only on whether there is sufficient free
space at that instant; any previous failure in writing the file won't affect
the rename directly. Rename may not always need any extra space anyway.
BTW: apropos your comment about "cat"
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #2 from Rui DeSousa ---
(In reply to Kevin Korb from comment #1)
I'm saying that in some cases the rename does not fail; and in those cases it
returns success despite there not being enough space to store the original
file. It looks
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #1 from Kevin Korb ---
Are you saying that it is exiting with an exit code of 0 after outputting that
error, or that sometimes in the same condition it shows no error and exits with
code 0? Either way it would probably be helpful to see an o
https://bugzilla.samba.org/show_bug.cgi?id=13317
Bug ID: 13317
Summary: rsync returns success when target filesystem is full
Product: rsync
Version: 3.1.2
Hardware: x64
OS: FreeBSD
Status: NEW
Severity
Wayne Davison <way...@samba.org> wrote:
On Wed, Apr 20, 2011 at 12:44 PM, Alistair Dsouza
<alista...@gmail.com> wrote:
I came across an issue where it seems that the rsync call
returned with a success but the files that it pulled are
not moved
or file system errors in general.
Regards,
Alistair
On Sat, Apr 23, 2011 at 9:06 PM, Wayne Davison wrote:
> On Wed, Apr 20, 2011 at 12:44 PM, Alistair Dsouza wrote:
>
>> I came across an issue where it seems that the rsync call returned with a
>> success but the files that
Yes, we are not trying to directly access the data via calls outside the
file system so the VFS should have handled it correctly.
Our logs don't show any type of errors related to SD card access or file
system access in general.
Thanks,
Alistair
On Sat, Apr 23, 2011 at 9:04 AM, Cameron Simpson
On Wed, Apr 20, 2011 at 12:44 PM, Alistair Dsouza wrote:
> I came across an issue where it seems that the rsync call returned with a
> success but the files that it pulled are not moved immediately to its final
> destination.
>
I think it more likely that you had 2 instances o
On 20Apr2011 19:29, Tony Abernethy wrote:
| OK, I'll bite.
| With all file system designs, there is a tradeoff between speed and safety.
| This tradeoff occurs at all levels where there might be something that
| buffers information.
| Writing into the middle of a structure can be incredibly slow if
rsync-boun...@lists.samba.org] On
Behalf Of Henri Shustak
Sent: Wednesday, April 20, 2011 6:17 PM
To: rsync
Subject: Re: files not moved immediately to final destination from temp
location after rsync returns with success
> I am using rsync version 3.0.7 on an arm linux based embedded dev
it seems that the rsync call returned with a
> success but the files that it pulled are not moved immediately to its final
> destination.
You could try issuing the 'sync' command? However, I do not believe that
this should be required.
Perhaps someone else on th
nations all reside on the SD card.
>
> I came across an issue where it seems that the rsync call returned with a
> success but the files that it pulled are not
> moved immediately to its final destination. The issue points either to
> rsync not moving data immediately to its fina
returned with a
success but the files that it pulled are not
moved immediately to its final destination. The issue points either to rsync
not moving data immediately to its final location
of some delay in the virtual file system. However a read system call would
flush the block buffer via the VFS
> I have a bash script for rsync that should transfer all my files from one
> drive to the other.
>
> I would like to know how I can make the script send me an email after the
> script is run and be able to know if it was a success or not and also if
> possible to include
iler from one
> drive to the other.
>
> I would like to know how I can make the script send me an email after the
> script is run and be able to know if it was a success or not and also if
> possible to include the log file.
>
> This is my script:
>
> #!/bin/bash
>
>
Hi,
On Wed, 30 Dec 2009, Sébastien Arpin wrote:
I have a bash script for rsync that should transfer all my files from one
drive to the other.
I would like to know how I can make the script send me an email after
the script is run and be able to know if it was a success or not and
also if
Hi,
I have a bash script for rsync that should transfer all my files from one drive
to the other.
I would like to know how I can make the script send me an email after the
script is run and be able to know if it was a success or not and also if
possible to include the log file
I have also been eager to test bbouncer
The latest source passes with flying colours!
but make check finds some problems with xattrs if not run by sudo
I have xattr in /usr/local/bin/ from the source found at:
http://dev.bignerdranch.com/public/bnr/eXttra.zip
What I did:
cd /usr/local/Source
r
Found a space after + /nflmg/scripts/regional/misc_loaders/ which caused the
subdirectory to be missed. Thanks a bunch for your example. That illustrated
the issue well. Once I got rid of the space and saw that your example did
work, it made it much easier to understand how the rules build. The key
On Fri, 7 Mar 2003 04:32, Max Bowsher wrote:
> bob parker wrote:
> > On Fri, 7 Mar 2003 04:10, Max Bowsher wrote:
> >> bob parker wrote:
> >>> On Fri, 7 Mar 2003 03:47, Max Bowsher wrote:
> Curious. I wonder why rsync thinks the file is up to date.
>
> Maybe the -I or -c options woul
after following my earlier message about rsync and inetd not binding properly, I
have rsync working! Many thanks to the suggestions brought forward on this
mailing list.
hope holidays are happy!
Robb Benedict
All Things Computers
--
To unsubscribe or change options: http://lists.samba.org/mailman/listi
To the rsync maintainers:
When rsync 2.5.5 is pulling files and the target disk runs out of space,
this is what the tail end of the message stream looks like (w/--verbose):
write failed on games/ghostmaster/ectsdemo2002.zip : Success
rsync error: error in file IO (code 11) at receiver.c(243