On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote:

> It's sort of like network QoS, but not quite, because: 
> 
>   (a) you don't know exactly how big the ``pipe'' is, only
>       approximately, 

In an IP network, end nodes generally know no more than the pipe size of
the first hop -- and in some cases (such as true CSMA networks like
classical Ethernet or wireless) they have only an upper bound on even
that.

beyond that, they can only estimate the characteristics of the rest of
the network by observing its behavior -- all they get is end-to-end
latency, and *maybe* a 'congestion observed' mark (as with ECN) set by an
intermediate system.
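To make that concrete, here's a minimal sketch (hypothetical, not real TCP code) of an end node probing an unknown path using only the two signals mentioned above -- observed round-trip latency and an optional congestion mark -- with AIMD-style back-off; the class and thresholds are illustrative assumptions:

```python
class LatencyProbe:
    """Adjust an in-flight window using only end-to-end observations."""

    def __init__(self, base_rtt):
        self.base_rtt = base_rtt   # best latency seen so far (proxy for an idle path)
        self.window = 1.0          # how much work we keep in flight

    def on_ack(self, rtt, congestion_marked=False):
        self.base_rtt = min(self.base_rtt, rtt)
        if congestion_marked or rtt > 2 * self.base_rtt:
            # queueing delay (or an explicit mark) is the only congestion
            # signal available: back off multiplicatively
            self.window = max(1.0, self.window / 2)
        else:
            # no sign of trouble: probe for more capacity additively
            self.window += 1.0
        return self.window

probe = LatencyProbe(base_rtt=10.0)
probe.on_ack(rtt=10.0)   # quiet path: window grows to 2.0
probe.on_ack(rtt=50.0)   # latency spike: window halved back to 1.0
```

the point being: nothing here knows the actual pipe size anywhere past the first hop -- it's all inference from behavior.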

>   (c) all the fabrics are lossless, so while there are queues which
>       undesirably fill up during congestion, these queues never drop
>       ``packets'' but instead exert back-pressure all the way up to
>       the top of the stack.

hmm.  I don't think the back pressure makes it all the way up to zfs
(the top of the block storage stack) except as added latency.  

(on the other hand, if it did, zfs could schedule around it for both
reads and writes, avoiding pouring more work onto already-congested
paths.)
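A rough sketch of what "scheduling around it" could look like -- route the next I/O to whichever path (say, one side of a mirror, or one HBA port) has the lowest recently observed service latency. All names here are illustrative assumptions, not ZFS internals:

```python
from collections import deque

class PathStats:
    """Sliding window of observed service latencies for one I/O path."""

    def __init__(self, name, history=8):
        self.name = name
        self.latencies = deque(maxlen=history)

    def record(self, latency_ms):
        self.latencies.append(latency_ms)

    def estimate(self):
        # no data yet: assume the path is cheap so it gets probed
        if not self.latencies:
            return 0.0
        return sum(self.latencies) / len(self.latencies)

def pick_path(paths):
    """Route the next I/O to the least-congested path we know about."""
    return min(paths, key=lambda p: p.estimate())

a, b = PathStats("hba0"), PathStats("hba1")
for ms in (2.0, 2.5, 40.0):   # hba0 suddenly shows queueing delay
    a.record(ms)
b.record(3.0)
print(pick_path([a, b]).name)   # hba1 -- steer away from the slow path
```

note this only works if the back pressure is actually visible as per-path latency at the top of the stack, which is exactly the "except as added latency" caveat above.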

> I'm surprised we survive as well as we do without disk QoS.  Are the
> storage vendors already doing it somehow?

I bet that (as with networking) overprovisioning the hardware and
running at lower average utilization is often cheaper in practice than
running close to the edge and spending a lot of expensive expert time
monitoring performance and tweaking QoS parameters.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss