Miles Nordin wrote:
>>>>>> "dc" == David Collier-Brown <[EMAIL PROTECTED]> writes:
>>>>>>             
>
>     dc> one discovers latency growing without bound on disk
>     dc> saturation,
>
> yeah, ZFS needs the same thing just for scrub.
>   

ZFS already schedules scrubs at a low priority.  However, once the
I/Os leave ZFS's queue, they can't be rescheduled by ZFS.
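
To make that concrete, here is a toy model (not ZFS code; the queue
depths, priority values, and tags are invented): the host can keep
re-picking the most urgent request as long as it still holds it, but
anything already handed to the device queue is serviced in whatever
order the device chooses, so a late-arriving synchronous read waits
behind scrub I/O that has already been dispatched.

/*
 * Toy model only -- not ZFS code.  Queue depths, priority values,
 * and tags are invented for illustration.
 */
#include <stdio.h>

#define	HOSTQ_MAX	16
#define	DEVQ_MAX	4	/* depth of the (opaque) device queue */

struct req {
	const char *tag;	/* "scrub" or "sync-read" */
	int prio;		/* lower number = more urgent */
};

static struct req hostq[HOSTQ_MAX]; static int nhost;
static struct req devq[DEVQ_MAX];   static int ndev;

static void
submit(const char *tag, int prio)
{
	hostq[nhost].tag = tag;
	hostq[nhost].prio = prio;
	nhost++;
}

/* While a request is still in the host queue we can pick by priority. */
static int
pick_most_urgent(void)
{
	int best = 0;

	for (int i = 1; i < nhost; i++)
		if (hostq[i].prio < hostq[best].prio)
			best = i;
	return (best);
}

/*
 * Hand requests to the device until its queue is full.  Past this
 * point the host has no further say in their ordering.
 */
static void
dispatch(void)
{
	while (ndev < DEVQ_MAX && nhost > 0) {
		int b = pick_most_urgent();

		devq[ndev++] = hostq[b];
		hostq[b] = hostq[--nhost];
	}
}

int
main(void)
{
	for (int i = 0; i < 6; i++)	/* a burst of scrub I/O ... */
		submit("scrub", 10);
	dispatch();			/* ... fills the device queue */

	submit("sync-read", 0);		/* urgent read arrives late */
	dispatch();			/* no room left: it has to wait */

	printf("in the device queue (beyond host control):\n");
	for (int i = 0; i < ndev; i++)
		printf("  %s\n", devq[i].tag);
	printf("still in the host queue (can be reordered):\n");
	for (int i = 0; i < nhost; i++)
		printf("  %s\n", hostq[i].tag);
	return (0);
}

Whether the layers below ZFS honor any priority at all depends on the
transport supporting priority-tagged commands, which is what Miles
gets at below.
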
> I guess if the disks don't let you tag commands with priorities, then
> you have to run them at slightly below max throughput in order to QoS
> them.
>
> It's sort of like network QoS, but not quite, because: 
>
>   (a) you don't know exactly how big the ``pipe'' is, only
>       approximately, 
>
>   (b) you're not QoS'ing half of a bidirectional link---you get
>       instant feedback of how long it took to ``send'' each ``packet''
>       that you don't get with network QoS, and
>
>   (c) all the fabrics are lossless, so while there are queues which
>       undesirably fill up during congestion, these queues never drop
>       ``packets'' but instead exert back-pressure all the way up to
>       the top of the stack.
>
> I'm surprised we survive as well as we do without disk QoS.  Are the
> storage vendors already doing it somehow?
>   

Excellent question.  I hope someone will pipe up with an
answer.  In my experience, they get by through overprovisioning.
But I predict that SSDs will render this question moot, at least
for another generation or so.
 -- richard
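
For what it's worth, the throttle Miles sketches above -- run the
background load just below the unknown ``pipe'' size, using the
per-I/O service time you do get back as the feedback signal -- could
look roughly like the toy below.  It is a sketch only, not ZFS or
vendor code; do_background_io() and all of its numbers are invented.

#include <stdio.h>

/*
 * Pretend I/O: service time climbs as the issue rate approaches the
 * device's capacity, which the caller does not know (point (a) above).
 */
static double
do_background_io(double issue_rate)
{
	double capacity = 100.0;	/* IOPS, hidden from the caller */
	double util = issue_rate / capacity;

	if (util > 0.99)
		util = 0.99;
	return (5.0 / (1.0 - util));	/* ms; grows near saturation */
}

int
main(void)
{
	double rate = 10.0;		/* background IOPS, start low */
	double target_ms = 50.0;	/* latency we're willing to add */

	for (int tick = 0; tick < 20; tick++) {
		/* Point (b): every I/O tells us how long it took. */
		double lat = do_background_io(rate);

		printf("tick %2d: %6.1f IOPS -> %6.1f ms\n",
		    tick, rate, lat);
		if (lat < target_ms)
			rate += 5.0;	/* headroom left: push harder */
		else
			rate *= 0.8;	/* latency climbing: back off */
	}
	return (0);
}

The rate hunts around a point just below where latency takes off,
which is the ``slightly below max throughput'' behaviour Miles
describes; the additive-increase/multiplicative-decrease step is
borrowed from TCP congestion control, which fits the network-QoS
analogy.
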
