Erik wrote:
> Actually, your biggest bottleneck will be the IOPS limits of the drives.
> A 7200RPM SATA drive tops out at 100 IOPS.  Yup. That's it.
> So, if you need to do 62.5e6 IOPS, and the rebuild drive can do just
> 100 IOPS, that means you will finish (best case) in 62.5e4 seconds.
> Which is over 173 hours. Or, about 7.25 WEEKS.

My OCD is coming out and I will split that hair with you: 62.5e4 seconds is 
about 173.6 hours, which works out to roughly 7.2 days.  So 173 hours is just 
over a week, not 7.25 weeks.
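
For anyone checking the math at home, here is a quick back-of-the-envelope 
sketch in Python using the numbers from Erik's example (62.5e6 I/Os against a 
drive that sustains 100 IOPS); the figures are his, only the script is mine:

    # Back-of-the-envelope rebuild-time check, using the numbers quoted above.
    total_ios = 62.5e6       # I/Os the rebuild has to perform (Erik's figure)
    drive_iops = 100         # what a single 7200RPM SATA drive can sustain

    seconds = total_ios / drive_iops   # 625,000 seconds
    hours = seconds / 3600             # ~173.6 hours
    days = hours / 24                  # ~7.2 days, i.e. just over a week
    print(f"{seconds:,.0f} s = {hours:.1f} h = {days:.1f} days")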

This is a fascinating and timely discussion.  My personal (biased and 
unhindered by facts) preference is wide-stripe RAIDZ3.  Ned is right: I kept 
reading that RAIDZx should not exceed _ devices, but I could never find real 
numbers behind those conclusions.

Discussions in this thread have opened my eyes a little, and I am in the middle 
of deploying a second 22-disk fibre array on my home server, so I have been 
struggling with the best way to allocate pools.  Up until reading this thread, 
the biggest downside to wide stripes that I was aware of was low IOPS.  
And let's be clear: while on paper the IOPS of a wide stripe equals that of a 
single disk, in practice it is worse.  The service time for any request on a 
wide stripe is the service time of the SLOWEST disk for that request.  The 
slowest disk may vary from request to request, but it will always delay the 
entire stripe operation.
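
To illustrate that point, here is a rough simulation sketch in Python (purely 
illustrative: the per-disk service-time distribution below is a made-up 
lognormal, not measured drive behaviour).  It shows that the mean per-request 
latency of a full-stripe operation grows with stripe width, because each 
request waits on the slowest disk:

    # Illustrative only: a full-stripe request completes when its SLOWEST disk
    # completes, so mean per-request latency rises as the stripe gets wider.
    import random

    def disk_service_time():
        # Hypothetical per-disk service time in ms (assumed distribution).
        return random.lognormvariate(1.6, 0.4)   # mean of roughly 5 ms

    def stripe_service_time(width):
        # The request is done only when every disk in the stripe has answered.
        return max(disk_service_time() for _ in range(width))

    trials = 100_000
    for width in (1, 5, 10, 20):
        avg = sum(stripe_service_time(width) for _ in range(trials)) / trials
        print(f"width {width:2d}: mean service time {avg:.2f} ms")
    # Wider stripe -> higher mean latency -> fewer IOPS than a single disk.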

Since all 44 spindles are 15K RPM disks, I am about to convince myself to go 
with two pools of wide stripes and keep several spindles for L2ARC and SLOG.  
The thinking is that background operations (scrub and resilver) can then take 
place with little impact on application performance, since the application 
workload will mostly be served from the L2ARC and SLOG.

Of course, I could be wrong on any of the above.

Cheers,
Marty