On Mar 22, 2010, at 10:36 AM, Svein Skogen wrote:
> On 22.03.2010 18:10, Richard Elling wrote:
>> On Mar 22, 2010, at 7:30 AM, Svein Skogen wrote:
>> 
>>> On 22.03.2010 13:54, Edward Ned Harvey wrote:
>>>>> IIRC it's "zpool scrub", and last time I checked, the zpool command
>>>>> exited (with status 0) as soon as it had started the scrub. Your
>>>>> command would start _ALL_ scrubs in parallel as a result.
>>>> 
>>>> You're right.  I did that wrong.  Sorry 'bout that.
>>>> 
>>>> So either way, if there's a zfs property for scrub, that still doesn't
>>>> prevent multiple scrubs from running simultaneously.  So ... presently
>>>> there's no way to avoid simultaneous scrubs either way, right?  You have
>>>> to home-cook scripts to detect which scrubs are running on which
>>>> filesystems, and serialize the scrubs.  With or without the property.
>>>> 
>>>> Don't get me wrong - I'm not discouraging the creation of the property.
>>>> But if you want to avoid simul-scrub, you'd first have to create a
>>>> mechanism for that, and then you could create the autoscrub.
>>>> 
>>> 
>>> Which is exactly why I wanted it "cooked into" the zfs code itself. zfs
>>> "knows" how many fs'es it's scrubbing.
>> 
>> Nit: ZFS does not scrub file systems.  ZFS scrubs pools.  In most deployments
>> I've done or seen there are very few pools, with many file systems.
>> 
>> For appliances like NexentaStor or Oracle's Sun OpenStorage platforms, the
>> default smallest unit of deployment is one disk. In other words, there is no
>> case where multiple scrubs compete for the resources of a single disk because
>> a single disk only participates in one pool. In general, resource management
>> works when you are resource constrained. Hence, it is quite acceptable to
>> implement concurrent scrubs.
>> 
>> Bottom line: systems engineering is still required for optimal system 
>> operation.
>>  -- richard
> 
> When you hook up a monstrosity like 96 disks (the limit of those Supermicro
> 2.5"-drive SAS enclosures discussed on this list recently) to two 4-lane
> SAS controllers, the bottleneck is likely to be your controller, your
> PCI Express bus, or your memory bandwidth. You still want to be able to put
> some constraints on how hard you're pushing the hardware. ;)
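
For a rough sense of scale, a back-of-envelope calculation (assuming 3 Gb/s SAS
links and roughly 100 MB/s of streaming throughput per 2.5" disk, both of which
are assumptions about the hardware above): 96 disks x 100 MB/s is on the order
of 9.6 GB/s of raw disk bandwidth, while two 4-lane controllers deliver about
2 x 4 x 300 MB/s = 2.4 GB/s, so for purely sequential I/O the fabric would
indeed saturate well before the disks do.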

Scrub tends to be a random workload dominated by IOPS, not bandwidth.
But if you are so inclined to create an unbalanced system...

Bottom line: systems engineering is still required for optimal system operation. :-)
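
That said, for anyone who wants to serialize anyway, a minimal sketch of the
kind of home-grown script Edward describes above could look something like
this (untested; pool discovery via zpool list and the "in progress" match
against zpool status output are assumptions that may need adjusting for your
release):

  #!/bin/sh
  # Scrub every imported pool one at a time instead of all at once.
  for pool in `zpool list -H -o name`; do
          # Skip pools that already have a scrub running.
          if zpool status "$pool" | grep "in progress" > /dev/null; then
                  echo "$pool: scrub already in progress, skipping"
                  continue
          fi
          echo "$pool: starting scrub"
          zpool scrub "$pool"
          # zpool scrub returns immediately, so poll until it finishes.
          while zpool status "$pool" | grep "in progress" > /dev/null; do
                  sleep 60
          done
          echo "$pool: scrub done"
  done

Dropped into cron, something like that gives serialized scrubs today, without
waiting for an autoscrub property in zfs itself.
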
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 