eric kustarz writes:
 > 
 > >ES> Second, you may be able to get more performance from the ZFS filesystem
 > >ES> on the HW LUN by tweaking the max pending # of requests.  One thing
 > >ES> we've found is that ZFS currently has a hardcoded limit of how many
 > >ES> outstanding requests to send to the underlying vdev (35).  This works
 > >ES> well for most single devices, but large arrays can actually handle more,
 > >ES> and we end up leaving some performance on the floor.  Currently the only
 > >ES> way to tweak this variable is through 'mdb -kw'.  Try something like:
 > >
 > >Well, strange - I did try with values of 1, 60 and 256. And basically I
 > >get the same results from the varmail tests.
 > 
 > If vdev_reopen() is called then it will reset vq_max_pending to the 
 > vdev_knob's default value.
 > 
 > So you can set the "global" vq_max_pending in vdev_knob (though this 
 > affects all pools and all vdevs of each pool):
 > #mdb -kw
 >  > vdev_knob::print
 > ....
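
(If you do go that route, a rough sketch of the mdb session eric
describes is below.  The member layout, the example value 0t70 and
the /W vs /Z write width are my assumptions, not something I have
verified, so check the vdev_knob::print output on your own system
first.)

   # mdb -kw
   > vdev_knob::print -a        <- -a also prints member addresses;
                                   find the max_pending entry and the
                                   address of its default-value member
   > <that address>/W 0t70      <- write a new decimal default, e.g. 70
                                   (use /Z instead if the member turns
                                   out to be 64 bits wide)
   > $q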

I think the interlace on the volume was set to 32K, which means
that each 128K I/O spreads over 4 disks.  So a vq_max_pending of
35 turns into 140 disk I/Os, which, as was observed, seems enough
to drive the 10-20 disk storage.  If the interlace had been set
to 1M or more, then I would expect vq_max_pending to start to
make a difference.
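
(Spelled out: 128K / 32K = 4 disk I/Os per 128K ZFS I/O, and
4 x 35 = 140 disk I/Os in flight, i.e. roughly 7-14 per spindle
for a 10-20 disk volume.)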

What we must try to avoid is ZFS throttling itself on
vq_max_pending when some disks have close to zero requests in
their pipe.

-r

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
