On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> 
wrote:

> On Wed, 16 Jan 2013, Thomas Nau wrote:
> 
>> Dear all
>> I've a question concerning possible performance tuning for both iSCSI access
>> and replicating a ZVOL through zfs send/receive. We export ZVOLs with the
>> default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
>> The pool is made of SAS2 disks (11 x 3-way mirrored) plus mirrored
>> STEC RAM ZIL SSDs and 128G of main memory
>> 
>> The iSCSI access pattern (1 hour daytime average) looks like the following
>> (Thanks to Richard Elling for the dtrace script)
> 
> If almost all of the I/Os are 4K, maybe your ZVOLs should use a volblocksize 
> of 4K?  This seems like the most obvious improvement.

4k might be a little small. 8k will have less metadata overhead. In some cases
we've seen good performance on these workloads up through 32k. Real pain
is felt at 128k :-)
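If you want to experiment, here's a rough sketch (dataset names and the
size are just examples): volblocksize is fixed at creation time, so trying
a different value means creating a new zvol and copying the LUN across.

  # create a replacement zvol, same size as the original, with the
  # candidate block size
  zfs create -V 200G -o volblocksize=8k tank/xen/vm01-new

  # copy the old LUN's contents onto it (illumos/Solaris device paths)
  dd if=/dev/zvol/rdsk/tank/xen/vm01 \
     of=/dev/zvol/rdsk/tank/xen/vm01-new bs=1M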

> 
> [ stuff removed ]
> 
>> For disaster recovery we plan to sync the pool as often as possible
>> to a remote location. Running send/receive after a day or so seems to take
>> a significant amount of time wading through all the blocks and we hardly
>> see network average traffic going over 45MB/s (almost idle 1G link).
>> So here's the question: would increasing/decreasing the volblocksize improve
>> the send/receive operation and what influence might show for the iSCSI side?
> 
> Matching the volume block size to what the clients are actually using (due to 
> their filesystem configuration) should improve performance during normal 
> operations and should reduce the number of blocks which need to be sent in 
> the backup by reducing write amplification due to "overlap" blocks.
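For the replication itself, a minimal sketch of incremental send/receive
(hostnames, pool and snapshot names are just examples); only the blocks
changed since the last common snapshot go over the wire:

  # snapshot on a schedule
  zfs snapshot tank/xen/vm01@2013-01-17

  # send only the delta since the previous snapshot to the DR box
  zfs send -i tank/xen/vm01@2013-01-16 tank/xen/vm01@2013-01-17 | \
      ssh drhost zfs receive -F backup/xen/vm01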

Compression is a good win, too.
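For example (assuming your bits have lz4, otherwise lzjb; note that only
blocks written after the change get compressed):

  zfs set compression=lz4 tank/xen/vm01
  # watch how well it compresses over time
  zfs get compressratio tank/xen/vm01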
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422