Hi Richard,

> > - scrubbing the same pool, configured as raidz1
> > didn't max out CPU which is no surprise (haha, slow
> > storage...) the notable part is that it didn't slow
> > down payload that much either.
> 
> raidz creates more, smaller writes than a mirror or
> simple stripe. If the disks are slow,
> then the IOPS will be lower and the scrub takes
> longer, but the I/O scheduler can
> manage the queue better (disks are slower).

This wasn't mirror vs. raidz but raidz1 vs. raidz2: the latter maxes out the 
CPU while the former maxes out physical disk I/O. And concurrent payload 
degradation doesn't seem to be nearly as severe on raidz1 pools. Hence the CPU 
theory, which you still seem reluctant to follow.
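
For what it's worth, this is roughly how I watch it while a scrub runs; just a 
sketch, and "tank" is of course a placeholder for the actual pool name:

  zpool scrub tank
  mpstat 5           # per-CPU load: almost no idle time left -> CPU-bound (the raidz2 case)
  iostat -xnz 5      # per-device %b: disks near 100%b while CPU stays idle -> disk-bound (the raidz1 case)
  zpool status tank  # scrub progress and estimated completion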


> There are several
> bugs/RFEs along these lines, something like:
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6743992

Thanks for pointing this out. It seems this has been a known problem for a 
couple of years already. Evidently the shared opinion is that this is a 
management problem, not a HW issue.

As a Project Manager I will soon have to make a purchasing decision for an 
archival storage system (A/V media), and one of the options we are looking into 
is a SAMFS/QFS solution that includes disk tiers on ZFS. I will have to make up 
my mind whether the pool sizes we are considering (typically 150-200 TB) are 
really manageable under the current circumstances once zfs scrub enters the 
picture. From what I have learned here, it rather looks as if this will be an 
extra challenge, if not an outright problem, for the system integrator. That's 
unfortunate.
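
Purely for illustration, with assumed rather than measured throughput: if a 
scrub sustains 500 MB/s across the whole pool, 200 TB works out to roughly 
2x10^8 MB / 500 MB/s = 400,000 s, i.e. close to five days; at 100 MB/s it is 
already more than three weeks. Either way, the scrub window alone becomes a 
significant planning factor at that scale.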

Regards,

Tonmaus