On Wed, Feb 27, 2008 at 02:07:06PM -0700, Rob Bosch wrote:
> If there is 100% match shouldn't it just leave the file as is even if
> the -I option is selected?
The -I option tells rsync to transfer all the files (ignoring any
quick-check time/size matches), so it is expected that it will update
them.
Wayne, thanks for your help on this issue. It turned out to be a user error
(me) since the client was the pre5 client instead of the pre10. I reran the
test with the pre10 client as you suggested and here are the results. The
only odd thing I noticed is that even though all the data matched, the [...]
Let me know of any additional info or tests you need me to run. I'll help
any way I can. Thanks.
rob
--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
On Mon, Feb 25, 2008 at 09:28:18PM -0700, Rob Bosch wrote:
> I reran this test with the --no-whole-file option and received the exact
same results. Any idea on why so much data is being sent when the files
> are exactly the same on both sides?
Yeah, I hadn't noticed that your transfer had already [...]
> A local transfer needs --no-whole-file if you want it to use the rsync
> algorithm (which uses more disk I/O, so it's not the default).
>
>..wayne..
Here are the stats on a small file...exactly as expected with everything
getting matched. Is this an issue with how much of the file rsync can look at [...]
I ran rsync on the 59GB file again without preallocate on XFS. It created
only 383 extents...very low fragmentation.
Rob
> A local transfer needs --no-whole-file if you want it to use the rsync
> algorithm (which uses more disk I/O, so it's not the default).
>
>..wayne..
I reran this test with the --no-whole-file option and received the exact
same results. Any idea on why so much data is being sent when the files
are exactly the same on both sides?
Rob Bosch wrote:
> The patch truncates the file with ftruncate if a transfer fails in
> receiver.c. This should avoid the problem you mention.
I was thinking of a user-abort (Control-C) or crash, but this is good.
> Even if this didn't
> occur, the file would exist on the FS with the predefined size.
The patch truncates the file with ftruncate if a transfer fails in
receiver.c. This should avoid the problem you mention. Even if this didn't
occur, the file would exist on the FS with the predefined size. It would be
in the allocation table and exist on the disk (you can see it under Windows
Explorer).
Rob Bosch wrote:
> > Was that simply due to writing too-small block to NTFS? In other
> > words, would increasing the size of write() calls have fixed it
> > instead, without leaving allocated but unused disk space in the case
> > of a user-abort with --partial, --partial-dir or --inplace?
>
> It could have been a function of [...]
> Was that simply due to writing too-small block to NTFS? In other
> words, would increasing the size of write() calls have fixed it
> instead, without leaving allocated but unused disk space in the case
> of a user-abort with --partial, --partial-dir or --inplace?
It could have been a function of [...]
Rob Bosch wrote:
> > Though, did I get the right impression that NTFS generates lots of
> > extents for small writes even when nothing else is running?
>
> The fragmentation on NTFS was a problem even when nothing else was running
> on the server. The preallocation patch made all the difference on NTFS
> and cygwin.
> Though, did I get the right impression that NTFS generates lots of
> extents for small writes even when nothing else is running?
The fragmentation on NTFS was a problem even when nothing else was running
on the server. The preallocation patch made all the difference on NTFS and
cygwin. In that [...]
Rob Bosch wrote:
> > Any idea why Glibc's posix_fallocate makes any difference?
> >
> > Doesn't it simply write a lot of zeros? In that case, why doesn't
> > rsync writing lots of data sequentially result in the same number of
> > extents?
>
> The destination server had a lot of other processes running at the same
> time.
> A local transfer needs --no-whole-file if you want it to use the rsync
> algorithm (which uses more disk I/O, so it's not the default).
The transfers occurred across a local network but were on separate machines.
Rob
> Any idea why Glibc's posix_fallocate makes any difference?
>
> Doesn't it simply write a lot of zeros? In that case, why doesn't
> rsync writing lots of data sequentially result in the same number of
> extents?
The destination server had a lot of other processes running at the same
time. I suspect [...]
On Mon, Feb 25, 2008 at 08:48:22AM -0700, Rob Bosch wrote:
> The odd thing is that a huge amount of the file was resent again even
> though the files are identical at the source and destination.
A local transfer needs --no-whole-file if you want it to use the rsync
algorithm (which uses more disk I/O, so it's not the default).
Rob Bosch wrote:
> Destination file on XFS
> - ftruncate, 59GB file, Execution time 52776 secs, 1235 extents
> - posix_fallocate, 59GB file, Execution time 53919 secs, 11 extents
Any idea why Glibc's posix_fallocate makes any difference?
Doesn't it simply write a lot of zeros? In that case, why doesn't rsync
writing lots of data sequentially result in the same number of extents?
>For a meaningful test, you should actually write 77GB of data into a new
>file and an ftruncated file and see if there's any difference in the
>resulting fragmentation.
>
>In your patch, you should use fallocate in place of ftruncate. If your
>glibc is like mine and doesn't provide direct access to fallocate,
>you'll have to use syscall and __NR_fallocate.
On Sat, 2008-02-23 at 16:43 -0700, Rob Bosch wrote:
>In your patch, you should use fallocate in place of ftruncate. If your
>glibc is like mine and doesn't provide direct access to fallocate,
>you'll have to use syscall and __NR_fallocate .
I'll run a test with both ftruncate and fallocate using [...]
On Sat, 2008-02-23 at 16:43 -0700, Rob Bosch wrote:
> Matt's patch worked great for cygwin (preallocate.diff). The same approach
> is not working well under CentOS since it writes out all those 0's for the
> files using the posix_fallocate function. It seems to me that under Linux
> the ftruncate [...]