ES> Second, you may be able to get more performance from the ZFS filesystem
ES> on the HW lun by tweaking the max pending # of requests.  One thing
ES> we've found is that ZFS currently has a hardcoded limit of how many
ES> outstanding requests to send to the underlying vdev (35).  This works
ES> well for most single devices, but large arrays can actually handle more,
ES> and we end up leaving some performance on the floor.  Currently the only
ES> way to tweak this variable is through 'mdb -kw'.  Try something like:

Well, strange - I did try with values of 1, 60, and 256, and basically I
get the same results from the varmail tests.



If vdev_reopen() is called, it will reset vq_max_pending to the default value from vdev_knob.

So you can set the "global" vq_max_pending in vdev_knob instead (though note this affects all pools and every vdev in each pool):
#mdb -kw
> vdev_knob::print
....
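
Roughly, the write step would then look something like this (just a sketch: the layout of vdev_knob is build-specific, so take the real address from your own '::print -a' output; 0t70 is only an example value, and use /Z instead of /W if the field turns out to be 64-bit):

# mdb -kw
> vdev_knob::print -a
....
> <address of the vq_max_pending entry from the output above>/W 0t70
> $q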

Also, here's a simple D script (it doesn't work on U2 due to a CTF bug, but works on nevada). It reports the average and distribution of the number of pending I/Os each time ZFS tries to issue one. If you find this stays under 35, then upping vq_max_pending won't help. If, however, you're continually hitting the upper limit of 35, upping vq_max_pending should help.

#!/usr/sbin/dtrace -s

vdev_queue_io_to_issue:return
/arg1 != NULL/
{
       @c["issued I/O"] = count();
}

vdev_queue_io_to_issue:return
/arg1 == NULL/
{
       @c["didn't issue I/O"] = count();
}

vdev_queue_io_to_issue:entry
{
@avgers["avg pending I/Os"] = avg(args[0]->vq_pending_tree.avl_numnodes); @lquant["quant pending I/Os"] = quantize(args[0]->vq_pending_tree.avl_numnodes);
       @c["total times tried to issue I/O"] = count();
}

vdev_queue_io_to_issue:entry
/args[0]->vq_pending_tree.avl_numnodes > 34/
{
       @avgers["avg pending I/Os > 34"] = avg(args[0]->vq_pending_tree.avl_numnodes);
       @quant["quant pending I/Os > 34"] = lquantize(args[0]->vq_pending_tree.avl_numnodes, 33, 1000, 1);
       @c["total times tried to issue I/O where > 34"] = count();
}

/* bail after 5 minutes */
tick-300sec
{
       exit(0);
}
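
To actually run it, save the script to a file (the name pending.d below is just for illustration), make it executable, and leave it going while the workload runs; the aggregations get printed when it exits after the 5 minutes (or earlier on Ctrl-C):

# chmod +x pending.d
# ./pending.d

(or equivalently: # dtrace -s pending.d)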


