On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
> >> From: Richard Elling [mailto:richard.ell...@gmail.com]
> >> Sent: Saturday, June 18, 2011 7:47 PM
> >> 
> >> Actually, all of the data I've gathered recently shows that for HDDs
> >> running random workloads the number of IOPS does not significantly
> >> increase as you add outstanding I/Os. However, the response time does :-(
> > 
> > Could you clarify what you mean by that?  
> 
> Yes. I've been looking at what the value of zfs_vdev_max_pending should be.
> The old value was 35 (a guess, but a really bad guess) and the new value is
> 10 (another guess, but a better guess). For a fast, modern HDD, I observe
> that with 1-10 threads (outstanding I/Os) the IOPS range from 309 to 333.
> But as we add threads, the average response time increases from 2.3 ms to
> 137 ms. Since the whole idea is to get lower response time, and we know
> disks are not simple queues so there is no direct IOPS-to-response-time
> relationship, maybe it is simply better to limit the number of outstanding
> I/Os.
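
For anyone who wants to experiment with the tunable Richard mentions, my
understanding is that on OpenSolaris it can be set persistently in
/etc/system or poked into the running kernel with mdb (a sketch only; the
value 10 below is just the current default he quotes, not a recommendation):

  # persistent across reboots, in /etc/system:
  set zfs:zfs_vdev_max_pending = 10

  # or on the live kernel (0t marks a decimal value):
  echo zfs_vdev_max_pending/W0t10 | mdb -kw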

How would this work for a storage device with an intelligent
controller that provides only a few LUNs to the host, even though it
contains a much larger number of disks?  I would expect the controller
to be more efficient with a large number of outstanding I/Os because it
could distribute those I/Os across the disks.  It would, of course,
require a non-volatile cache to provide fast turnaround for writes.
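
As a rough back-of-the-envelope (my numbers, purely illustrative: taking
Richard's ~330 IOPS per spindle, assuming ~3 ms per random I/O, close to
his 2.3 ms single-thread figure, and using Little's Law, outstanding I/Os
~= IOPS x response time):

  1 disk :   ~330 IOPS x 0.003 s  ~=  1 outstanding I/O to keep it busy
  8 disks:  ~2640 IOPS x 0.003 s  ~=  8 outstanding I/Os just to keep every
                                      spindle behind the LUN busy

So a per-vdev limit chosen with a single spindle in mind could leave that
kind of LUN partly idle, quite apart from what the write cache buys you.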

-- 
-Gary Mills-        -Unix Group-        -Computer and Network Services-