On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling <richard.ell...@gmail.com> wrote:

>
> Tim has a valid point. By default, ZFS will queue 35 commands per disk.
> For 46 disks that is 1,610 concurrent I/Os. Historically, it has proven
> to be relatively easy to crater performance or cause problems with very,
> very, very expensive arrays that are easily overrun by Solaris. As a
> result, it is not uncommon to see references to setting throttles,
> especially in older docs.
>
> Fortunately, this is simple to test by reducing the number of I/Os ZFS
> will queue. See the Evil Tuning Guide
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
>
> The mpt source is not open, so the mpt driver's reaction to 1,610
> concurrent I/Os can only be guessed from afar -- public LSI docs mention
> a number of 511 concurrent I/Os for SAS1068, but it is not clear to me
> that is an explicit limit. If you have success with zfs_vdev_max_pending
> set to 10, then the mystery might be solved. Use iostat to observe the
> wait and actv columns, which show the number of transactions in the
> queues. JCMP?
>
> NB sometimes a driver will have the limit be configurable. For example,
> to get high performance out of a high-end array attached to a qlc card,
> I've set the execution-throttle in /kernel/drv/qlc.conf to be more than
> two orders of magnitude greater than its default of 32.
> /kernel/drv/mpt*.conf does not seem to have a similar throttle.
>  -- richard
>
>
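For reference, the tuning Richard describes boils down to the following (syntax as documented in the Evil Tuning Guide; the value 10 is simply the one under test here, not a recommendation):

```shell
# Temporary change (takes effect immediately, lost on reboot).
# 0t10 = decimal 10 pending I/Os per vdev.
echo zfs_vdev_max_pending/W0t10 | mdb -kw

# Persistent change: add this line to /etc/system and reboot.
#   set zfs:zfs_vdev_max_pending = 10

# While testing, watch the queues per device, once per second:
# wait = transactions queued in the host, actv = issued to the device.
iostat -xn 1
```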

I believe there's a caveat here though.  That really only helps if the total
I/O load is something the controller can actually handle.  If the sustained
I/O workload is still 1,600 concurrent I/Os, lowering the queue depth won't
actually make any difference to the timeouts, will it?  It would obviously
eliminate burstiness (yes, I made that word up), but if the total sustained
I/O load is greater than the ASIC can handle, it's still going to fall over
and die with a queue of 10, correct?
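The arithmetic behind this caveat is easy to sketch (the IOPS figures below are purely hypothetical, chosen only to show the shape of the argument):

```python
# Back-of-the-envelope: hypothetical numbers illustrating the caveat.
disks = 46

def outstanding(max_pending):
    """Worst-case I/Os the host can have in flight at once."""
    return disks * max_pending

# Default queue depth: far beyond the SAS1068's documented ~511 limit.
print(outstanding(35))   # 1610

# Throttled to 10: the burst now fits under the controller's limit...
print(outstanding(10))   # 460

# ...but if the *sustained* arrival rate exceeds what the ASIC can
# service, work piles up in the host queue (iostat's wait column)
# instead, and latency still grows: the throttle caps burstiness,
# not total offered load.
iops_capacity = 20_000   # hypothetical controller ceiling
offered_iops = 30_000    # hypothetical sustained workload
print(offered_iops > iops_capacity)   # True: it still falls behind
```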

--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss