Hi,

the points I am basing my previous idea on can be found here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAIDZ_Configuration_Requirements_and_Recommendations
I can already confirm some of the recommendations from personal practice. First 
and foremost this sentence: "The recommended number of disks per group is 
between 3 and 9. If you have more disks, use multiple groups."
One example:
I am running 11+1 disks in a single group now. I recently changed the 
configuration from raidz to raidz2, and scrub performance dropped from 
500 MB/s to approx. 200 MB/s due to the second parity. I am sure that if I had 
chosen two raidz groups instead, performance would have been even better than 
in the original configuration, and I could still lose two drives in the pool, 
as long as both failures didn't occur within the same group.
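Just to make that concrete, the two layouts would look roughly like this (the 
c*t*d* device names are only placeholders, not your actual disks):

   # single 11+1 group: all 12 disks in one raidz2 vdev
   zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
                            c0t6d0 c0t7d0 c0t8d0 c0t9d0 c1t0d0 c1t1d0

   # alternative: two 5+1 raidz groups in the same pool
   zpool create tank raidz  c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
                     raidz  c0t6d0 c0t7d0 c0t8d0 c0t9d0 c1t0d0 c1t1d0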
The bottom line is that as you increase the number of disks in a group, the 
performance, especially for random I/O, converges toward the performance of a 
single group member.
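A rough back-of-the-envelope sketch of why (the ~150 IOPS figure is just an 
assumed value for an ordinary 7200 rpm disk; each raidz/raidz2 group delivers 
roughly the random IOPS of a single member, because every disk in the group 
takes part in every read):

   assumed ~150 random IOPS per disk
   1 x 12-disk raidz2 group  ->  ~1 x 150 = ~150 random IOPS for the pool
   2 x  6-disk raidz groups  ->  ~2 x 150 = ~300 random IOPS for the pool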
The only reason I am sticking with the single-group configuration myself is 
that the performance is "good enough" for what I am doing right now, and that 
"11 is not so far from 9".

In your case, there are two other aspects:
- If you pool small devices as a JBOD below a vdev member, no parity will help 
you when you lose a disk of the underlying JBOD. 
- If you use slices as vdev members, performance will drop dramatically (see 
the sketch below).
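To illustrate the second point (again, the device and slice names are just 
placeholders): when ZFS is given whole disks it can enable the disks' write 
caches, but when it is only given slices it leaves them alone, which is a 
large part of the performance penalty:

   # whole disks as vdev members - ZFS can turn the write caches on
   zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

   # slices as vdev members - works, but the write caches stay off
   zpool create tank raidz c0t0d0s3 c0t1d0s3 c0t2d0s3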

Regards,

tonmaus