Re: Fragmentation on XFS

2008-02-27 Thread Wayne Davison
On Wed, Feb 27, 2008 at 02:07:06PM -0700, Rob Bosch wrote: > If there is 100% match shouldn't it just leave the file as is even if > the -I option is selected? The -I option tells rsync to transfer all the files (ignoring any quick-check time/size matches), so it is expected that it will update th

RE: Fragmentation on XFS

2008-02-27 Thread Rob Bosch
Wayne, thanks for your help on this issue. It turned out to be a user error (me) since the client was the pre5 client instead of the pre10. I reran the test with the pre10 client as you suggested and here are the results. The only odd thing I noticed is that even though all the data matched, the

Re: Fragmentation on XFS

2008-02-27 Thread Rob Bosch
Let me know of any additional info or tests you need me to run. I'll help any way I can. Thanks. Rob -- To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: Fragmentation on XFS

2008-02-27 Thread Wayne Davison
On Mon, Feb 25, 2008 at 09:28:18PM -0700, Rob Bosch wrote: > I reran this test with the --no-whole-file option and received the exact > same results. Any idea on why so much data is being sent when the files > are exactly the same on both sides? Yeah, I hadn't noticed that your transfer had alr

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
> A local transfer needs --no-whole-file if you want it to use the rsync > algorithm (which uses more disk I/O, so it's not the default). > >..wayne.. Here are the stats on a small file...exactly as expected with everything getting matched. Is this an issue with how much of the file rsync can loo

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
I ran rsync on the 59GB file again without preallocate on XFS. It created only 383 extents...very low fragmentation. Rob

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
> A local transfer needs --no-whole-file if you want it to use the rsync > algorithm (which uses more disk I/O, so it's not the default). > >..wayne.. I reran this test with the --no-whole-file option and received the exact same results. Any idea on why so much data is being sent when the files

Re: Fragmentation on XFS

2008-02-25 Thread Jamie Lokier
Rob Bosch wrote: > The patch truncates the file with ftruncate if a transfer fails in > receiver.c. This should avoid the problem you mention. I was thinking of a user-abort (Control-C) or crash, but this is good. > Even if this didn't > occur, the file would exist on the FS with the predefined

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
The patch truncates the file with ftruncate if a transfer fails in receiver.c. This should avoid the problem you mention. Even if this didn't occur, the file would exist on the FS with the predefined size. It would be in the allocation table and exist on the disk (you can see it under Windows ex

Re: Fragmentation on XFS

2008-02-25 Thread Jamie Lokier
Rob Bosch wrote: > > Was that simply due to writing too-small block to NTFS? In other > > words, would increasing the size of write() calls have fixed it > > instead, without leaving allocated but unused disk space in the case > > of a user-abort with --partial, --partial-dir or --inplace? > > It

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
> Was that simply due to writing too-small block to NTFS? In other > words, would increasing the size of write() calls have fixed it > instead, without leaving allocated but unused disk space in the case > of a user-abort with --partial, --partial-dir or --inplace? It could have been a function o

Re: Fragmentation on XFS

2008-02-25 Thread Jamie Lokier
Rob Bosch wrote: > > Though, did I get the right impression that NTFS generates lots of > > extents for small writes even when nothing else is running? > > The fragmentation on NTFS was a problem even when nothing else was running > on the server. The preallocation patch made all the difference o

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
> Though, did I get the right impression that NTFS generates lots of > extents for small writes even when nothing else is running? The fragmentation on NTFS was a problem even when nothing else was running on the server. The preallocation patch made all the difference on NTFS and cygwin. In that

Re: Fragmentation on XFS

2008-02-25 Thread Jamie Lokier
Rob Bosch wrote: > > Any idea why Glibc's posix_fallocate makes any difference? > > > > Doesn't it simply write a lot of zeros? In that case, why doesn't > > rsync writing lots of data sequentially result in the same number of > > extents? > > The destination server had a lot of other processes r

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
> A local transfer needs --no-whole-file if you want it to use the rsync > algorithm (which uses more disk I/O, so it's not the default). The transfers occurred across a local network but were on separate machines. Rob

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
> Any idea why Glibc's posix_fallocate makes any difference? > > Doesn't it simply write a lot of zeros? In that case, why doesn't > rsync writing lots of data sequentially result in the same number of > extents? The destination server had a lot of other processes running at the same time. I sus

Re: Fragmentation on XFS

2008-02-25 Thread Wayne Davison
On Mon, Feb 25, 2008 at 08:48:22AM -0700, Rob Bosch wrote: > The odd thing is that a huge amount of the file was resent again even > though the files are identical at the source and destination. A local transfer needs --no-whole-file if you want it to use the rsync algorithm (which uses more disk

Re: Fragmentation on XFS

2008-02-25 Thread Jamie Lokier
Rob Bosch wrote: > Destination file on XFS > - ftruncate, 59GB file, Execution time 52776 secs, 1235 extents > - posix_fallocate, 59GB file, Execution time 53919 secs, 11 extents Any idea why Glibc's posix_fallocate makes any difference? Doesn't it simply write a lot of zeros? In th

RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
>For a meaningful test, you should actually write 77GB of data into a new >file and an ftruncated file and see if there's any difference in the >resulting fragmentation. > >In your patch, you should use fallocate in place of ftruncate. If your >glibc is like mine and doesn't provide direct access

RE: Fragmentation on XFS

2008-02-23 Thread Rob Bosch
On Sat, 2008-02-23 at 16:43 -0700, Rob Bosch wrote: >In your patch, you should use fallocate in place of ftruncate. If your >glibc is like mine and doesn't provide direct access to fallocate, >you'll have to use syscall and __NR_fallocate . I'll run a test with both ftruncate and fallocate using

Re: Fragmentation on XFS

2008-02-23 Thread Matt McCutchen
On Sat, 2008-02-23 at 16:43 -0700, Rob Bosch wrote: > Matt's patch worked great for cygwin (preallocate.diff). The same approach > is not working well under CentOS since it writes out all those 0's for the > files using the posix_fallocate function. It seems to me that under Linux > the ftruncate

Fragmentation on XFS

2008-02-23 Thread Rob Bosch
For some reason my previous emails on this topic didn't go out, which I only realized when my post hit the rsync list; if this is a duplicate, ignore it. I've been doing some testing on my CentOS 5 system running a storage array with XFS. There are huge amounts of fragmentation on ve