Thanks for all the answers; more inline.

On 01/18/2013 02:42 AM, Richard Elling wrote:
> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us 
> <mailto:bfrie...@simple.dallas.tx.us>> wrote:
> 
>> On Wed, 16 Jan 2013, Thomas Nau wrote:
>>
>>> Dear all
>>> I've a question concerning possible performance tuning for both iSCSI access
>>> and replicating a ZVOL through zfs send/receive. We export ZVOLs with the
>>> default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
>>> The pool is made of SAS2 disks (11 x 3-way mirrored) plus mirrored STEC RAM 
>>> ZIL
>>> SSDs and 128G of main memory
>>>
>>> The iSCSI access pattern (1 hour daytime average) looks like the following
>>> (Thanks to Richard Elling for the dtrace script)
>>
>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a volblocksize 
>> of 4K?  This seems like the most obvious improvement.
> 
> 4k might be a little small. 8k will have less metadata overhead. In some cases
> we've seen good performance on these workloads up through 32k. Real pain
> is felt at 128k :-)

My only pain so far is the time a send/receive takes without really loading the
network at all. VM performance is nothing I worry about, as it's pretty good.
So the key question for me is whether going from 8k to 16k or even 32k would
help with that problem.
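In case it's useful for comparison, my plan is to measure rather than guess: create test ZVOLs at the candidate block sizes and time a send in isolation. This is just a sketch (pool and dataset names are made up, and volblocksize is fixed at creation, so existing volumes would need to be recreated and re-populated):

```shell
# Create test ZVOLs with different block sizes (names are examples only).
# volblocksize cannot be changed after creation.
zfs create -V 10G -o volblocksize=8k  tank/test8k
zfs create -V 10G -o volblocksize=16k tank/test16k
zfs create -V 10G -o volblocksize=32k tank/test32k

# After populating each with a representative workload, snapshot and time
# a send to /dev/null to isolate the send side from the network and the
# receiving pool.
zfs snapshot tank/test16k@bench
time zfs send tank/test16k@bench > /dev/null
```

Comparing the /dev/null send rate against the ~45MB/s we see over the wire should at least show whether the bottleneck is the send side walking blocks or the transport.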


> 
>>
>> [ stuff removed ]
>>
>>> For disaster recovery we plan to sync the pool as often as possible
>>> to a remote location. Running send/receive after a day or so seems to take
>>> a significant amount of time wading through all the blocks and we hardly
>>> see network average traffic going over 45MB/s (almost idle 1G link).
>>> So here's the question: would increasing/decreasing the volblocksize improve
>>> the send/receive operation and what influence might show for the iSCSI side?
>>
>> Matching the volume block size to what the clients are actually using (due 
>> to their filesystem configuration) should improve
>> performance during normal operations and should reduce the number of blocks 
>> which need to be sent in the backup by reducing
>> write amplification due to "overlap" blocks.
> 
> compression is a good win, too 

Thanks for that. I'll use the tools you mentioned to drill down.
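For the archive, I take the compression suggestion to mean something like the following (dataset name is made up; lz4 needs a recent enough build, otherwise lzjb is the fallback):

```shell
# Check the current compression setting and achieved ratio on the volume
zfs get compression,compressratio tank/vol1

# Enable compression; note it only applies to newly written blocks,
# so existing data stays uncompressed until rewritten
zfs set compression=lz4 tank/vol1
```

Since send streams carry the logical data, compression mainly helps the pool and iSCSI side; for the wire it would still need a compressed stream or something like ssh -C / mbuffer in the pipe.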

>  -- richard

Thomas

> 
> --
> 
> richard.ell...@richardelling.com <mailto:richard.ell...@richardelling.com>
> +1-760-896-4422
> 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
