Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-17 Thread Thomas Nau
Thanks for all the answers (more inline). On 01/18/2013 02:42 AM, Richard Elling wrote: > On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn > wrote: > >> On Wed, 16 Jan 2013, Thomas Nau wrote: >> >>> Dear all >>> I've a question concerning possible performance tunin

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Bob Friesenhahn
On Thu, 17 Jan 2013, Bob Friesenhahn wrote: For NFS you should disable atime on the NFS client mounts. This advice was wrong. It needs to be done on the server side. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,
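A minimal sketch of the server-side change Bob describes, assuming a hypothetical dataset name tank/export:

    zfs set atime=off tank/export    # disable access-time updates on the served dataset
    zfs get atime tank/export        # confirm the new setting

Child datasets inherit the property unless they override it, so setting it at the top of the exported tree is usually enough.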

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Bob Friesenhahn
On Thu, 17 Jan 2013, Peter Wood wrote: Great points Jim. I have requested more information about how the gallery share is being used, and any temporary data will be moved out of there. About atime, it is set to "on" right now and I've considered turning it off, but I wasn't sure if this will affect in

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-17 Thread Richard Elling
On Jan 17, 2013, at 8:35 AM, Jim Klimov wrote: > On 2013-01-17 16:04, Bob Friesenhahn wrote: >> If almost all of the I/Os are 4K, maybe your ZVOLs should use a >> volblocksize of 4K? This seems like the most obvious improvement. > >> Matching the volume block size to what the clients are actua
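A minimal sketch of the change being discussed, assuming a hypothetical zvol name tank/xen/vm01; note that volblocksize can only be set at creation time, so an existing 8k zvol would have to be recreated and its data copied over:

    zfs create -V 100G -o volblocksize=4k tank/xen/vm01   # new zvol using 4K blocks
    zfs get volblocksize tank/xen/vm01                    # verify the setting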

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-17 Thread Richard Elling
On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn wrote: > On Wed, 16 Jan 2013, Thomas Nau wrote: > >> Dear all >> I've a question concerning possible performance tuning for both iSCSI access >> and replicating a ZVOL through zfs send/receive. We export ZVOLs with the >> default volblocksize of 8k t
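For context, a hedged sketch of how a zvol is typically exported over iSCSI with COMSTAR on an illumos/Solaris host; the names (tank/xen/vm01) are hypothetical and the GUID is only a placeholder for the value printed by create-lu:

    zfs create -V 200G -o volblocksize=4k tank/xen/vm01
    stmfadm create-lu /dev/zvol/rdsk/tank/xen/vm01   # prints the LU GUID
    stmfadm add-view 600144f0...                     # use the GUID from create-lu
    itadm create-target                              # create the iSCSI target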

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Peter Wood
Great points Jim. I have requested more information about how the gallery share is being used, and any temporary data will be moved out of there. About atime, it is set to "on" right now and I've considered turning it off, but I wasn't sure if this will affect incremental zfs send/receive. 'zfs send -i s
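A minimal sketch of the combination in question, assuming hypothetical dataset and snapshot names; changing the atime property does not modify existing snapshots, so the incremental stream is built the same way either way:

    zfs set atime=off tank/gallery
    zfs snapshot tank/gallery@hourly-2013011800
    zfs send -i tank/gallery@hourly-2013011700 tank/gallery@hourly-2013011800 | \
        ssh backuphost zfs receive backup/gallery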

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Peter Wood
Right on, Tim. Thanks. I didn't know that. I'm sure it's documented somewhere and I should have read it, so double thanks for explaining it. On Thu, Jan 17, 2013 at 4:18 PM, Timothy Coalson wrote: > On Thu, Jan 17, 2013 at 5:33 PM, Peter Wood wrote: > >> >> The 'zpool iostat -v' output is uncomfo

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Jim Klimov
On 2013-01-18 00:42, Bob Friesenhahn wrote: You can install Brendan Gregg's DTraceToolkit and use it to find out who and what is doing all the writing. 1.2GB in an hour is quite a lot of writing. If this is going continuously, then it may be causing more fragmentation in conjunction with your s

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Timothy Coalson
On Thu, Jan 17, 2013 at 5:33 PM, Peter Wood wrote: > > The 'zpool iostat -v' output is uncomfortably static. The values of > read/write operations and bandwidth are the same for hours and even days. > I'd expect at least some variations between morning and night. The load on > the servers is diff
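If the static numbers come from running zpool iostat without an interval argument (in which case it reports averages since the pool was imported), a sketch of the sampled form, with the pool name tank assumed:

    zpool iostat -v tank 10    # first report = average since import,
                               # each following report = activity over the last 10 seconds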

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Bob Friesenhahn
On Thu, 17 Jan 2013, Peter Wood wrote: Unless there is some other way to test what/where these write operations are applied. You can install Brendan Gregg's DTraceToolkit and use it to find out who and what is doing all the writing. 1.2GB in an hour is quite a lot of writing. If this is g
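A hedged sketch of the kind of one-liner the DTraceToolkit scripts (e.g. rwtop, iosnoop) build on, summing write syscall bytes per process; note that writes issued from kernel services such as the NFS server will not appear at the syscall layer, so the io provider may be needed instead:

    dtrace -n 'syscall::write:entry,syscall::pwrite:entry { @bytes[execname] = sum(arg2); }'

Press Ctrl-C to print the per-process totals.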

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Peter Wood
I have a script that rotates hourly, daily and monthly snapshots. Each filesystem has about 40 snapshots (zfsList.png - output of 'zfs list | grep -v home/' - the home directory datasets are snipped from the output; 4 users in total.) I noticed that the hourly snapshots on the heaviest filesyst
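A sketch of one way to see how much each rotation snapshot is holding, assuming a hypothetical dataset name tank/gallery:

    zfs list -t snapshot -o name,used,refer -r tank/gallery

The 'used' column shows space unique to each snapshot, which helps pin down which hourly snapshots are capturing the heavy writes.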

Re: [zfs-discuss] Heavy write IO for no apparent reason

2013-01-17 Thread Ray Arachelian
On 01/16/2013 10:25 PM, Peter Wood wrote: > > Today I started migrating file systems from some old Open Solaris > servers to these Supermicro boxes and noticed the transfer to one of > them was going 10x slower than to the other one (like 10GB/hour). What does "dladm show-link" show? I'm guessing
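For reference, a couple of link checks along the lines Ray is suggesting (the interface name ixgbe0 is only an example):

    dladm show-link                       # link state and configuration
    dladm show-phys                       # negotiated SPEED and DUPLEX per physical NIC
    dladm show-linkprop -p mtu ixgbe0     # check the MTU if jumbo frames are expected

A 10x drop could indicate a link that negotiated 100Mb/s or half duplex, which show-phys would reveal.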

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-17 Thread Jim Klimov
On 2013-01-17 16:04, Bob Friesenhahn wrote: If almost all of the I/Os are 4K, maybe your ZVOLs should use a volblocksize of 4K? This seems like the most obvious improvement. Matching the volume block size to what the clients are actually using (due to their filesystem configuration) should im
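A hedged way to look at the I/O size distribution on the storage host before changing volblocksize, using the DTrace io provider (this measures I/O reaching the disks after ZFS aggregation, so it approximates rather than exactly matches the client pattern):

    dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'

If the distribution clusters at 4K, a 4K volblocksize is the natural match; if it is mixed, the trade-off is less clear.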

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-17 Thread Bob Friesenhahn
On Wed, 16 Jan 2013, Thomas Nau wrote: Dear all I've a question concerning possible performance tuning for both iSCSI access and replicating a ZVOL through zfs send/receive. We export ZVOLs with the default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI. The pool is made of SA
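A minimal sketch of the replication path the question describes, assuming a hypothetical zvol tank/xen/vm01 and a receiving host named replica: an initial full send, then periodic incrementals:

    zfs snapshot tank/xen/vm01@repl-1
    zfs send tank/xen/vm01@repl-1 | ssh replica zfs receive backup/xen/vm01
    zfs snapshot tank/xen/vm01@repl-2
    zfs send -i @repl-1 tank/xen/vm01@repl-2 | ssh replica zfs receive backup/xen/vm01

With a smaller volblocksize the same changed data is tracked in more, smaller blocks, so the incremental stream may carry more records; whether that costs or saves time depends on the write pattern.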