On Apr 29, 2010, at 11:55 AM, Katzke, Karl wrote:

>>> The server is a Fujitsu RX300 with a Quad Xeon 1.6GHz, 6G ram, 8x400G
>>> SATA through a U320SCSI<->SATA box - Infortrend A08U-G1410, Sol10u8.
> 
>> slow disks == poor performance
> 
>>> Should have enough oompf, but when you combine snapshot with a
>>> scrub/resilver, sync performance gets abysmal.. Should probably try
>>> adding a ZIL when u9 comes, so we can remove it again if performance
>>> goes crap.
> 
>> A separate log will not help.  Try faster disks.
> 
> We're seeing the same thing in Sol10u8 with both 300gb 15k rpm SAS disks 
> in-board on a Sun x4250 and an external chassis with 1tb 7200 rpm SATA disks 
> connected via SAS. Faster disks aren't the problem; there's a fundamental 
> issue with ZFS [iscsi;nfs;cifs] share performance under scrub & resilver. 

In Solaris 10u8 (and prior releases) the default number of outstanding I/Os is
35 and (I trust, because the Solaris 10 source is not open) the default max
number of scrub I/Os is 10 per vdev. If your disk is slow, the service time
will be long enough that the queue grows to its full 35 entries, and the I/O
scheduler in ZFS has an opportunity to prioritize and reorder the queue. If
your disk is fast, you won't see this and life will be good.
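
As a toy sketch of that reordering opportunity (not ZFS code; the priority
values and I/O counts are made up): if a slow disk drains one I/O per tick
while everything else waits in a priority queue, later application reads can
jump ahead of scrub I/Os already sitting in the queue:

```python
import heapq

# Toy model (not ZFS code; priorities are illustrative): a slow disk
# services one I/O per tick, and pending I/Os wait in a priority queue
# where the scheduler gets its chance to reorder them.
PRIO_READ, PRIO_SCRUB = 0, 10   # lower value = dispatched first

def service_order(arrivals):
    """arrivals: list of (tick, kind); returns the order I/Os hit the disk."""
    heap, order, seq, t, i = [], [], 0, 0, 0
    arrivals = sorted(arrivals)
    while i < len(arrivals) or heap:
        # Admit everything that has arrived by this tick into the queue.
        while i < len(arrivals) and arrivals[i][0] <= t:
            kind = arrivals[i][1]
            prio = PRIO_SCRUB if kind == "scrub" else PRIO_READ
            heapq.heappush(heap, (prio, seq, kind))
            seq += 1
            i += 1
        if heap:                      # disk completes one I/O per tick
            order.append(heapq.heappop(heap)[2])
        t += 1
    return order

# A burst of 10 scrub I/Os at t=0, then application reads trickling in:
ios = [(0, "scrub")] * 10 + [(t, "read") for t in range(1, 6)]
print(service_order(ios))
# -> one scrub, then all 5 reads, then the remaining 9 scrubs
```

Only the first scrub I/O gets in ahead of the reads; once the queue has
depth, the reads are serviced first. On a fast disk the queue never builds,
so there is nothing to reorder.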

In recent OpenSolaris builds, the default number of outstanding I/Os is
reduced to 4-10. For slow disks, the scheduler has a greater probability of
being able to prioritize non-scrub I/Os. Again, if your disk is fast, you
won't see the queue depth reach 10 and life will be good.

iostat is the preferred tool for measuring queue depth, though it would be easy
to write a dedicated tool using DTrace.
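
For example, a few lines of Python can pull the actv column out of
`iostat -xn` output, which is the per-device queue depth in question (the
sample numbers below are made up; the column layout is the standard Solaris
`iostat -xn` one):

```python
# Toy parser for Solaris `iostat -xn` output (sample data invented).
# The actv column is the number of I/Os active on the device, i.e. the
# queue depth discussed above.
SAMPLE = """\
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  120.3    4.1 3401.2   88.0  0.0 34.7    0.0   12.4   0  99 c0t0d0
    1.0    0.5   12.0    4.0  0.0  0.2    0.0    1.1   0   2 c0t1d0
"""

def queue_depths(text):
    """Map device name -> actv (queue depth) from iostat -xn text."""
    depths = {}
    for line in text.splitlines():
        fields = line.split()
        # Data rows have 11 fields and start with a number; the header
        # row starts with "r/s" and is skipped.
        if len(fields) == 11 and fields[0][0].isdigit():
            depths[fields[-1]] = float(fields[5])   # actv column
    return depths

print(queue_depths(SAMPLE))
# -> {'c0t0d0': 34.7, 'c0t1d0': 0.2}
```

Here c0t0d0 is saturated (queue depth near the 35-entry limit) while c0t1d0
is nearly idle. Feed it live output and you can watch the queue build during
a scrub.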

Also in OpenSolaris, there is code to throttle the scrub based on bandwidth. But
we've clearly ascertained that this is not a bandwidth problem, so a bandwidth
throttle is mostly useless... unless the disks are fast.

P.S. I don't consider any HDDs to be fast.  SSDs won.  Game over :-)
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
