Maybe 5 x (3+1) raidz1, using one disk from each controller per vdev; that gives 15TB usable space, and with only 3+1 disks per raidz the rebuild time should be reasonable.
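Something along these lines (a rough sketch only; the cXtYd0 names are placeholders for one disk on each of your four controllers, not your actual device paths):

  # 5 x (3+1) raidz1, one disk from each of the four controllers per vdev,
  # so losing a controller or port multiplier costs at most one disk per vdev.
  # Placeholder device names: cXtYd0 = controller X, target Y.
  zpool create tank \
    raidz1 c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
    raidz1 c1t1d0 c2t1d0 c3t1d0 c4t1d0 \
    raidz1 c1t2d0 c2t2d0 c3t2d0 c4t2d0 \
    raidz1 c1t3d0 c2t3d0 c3t3d0 c4t3d0 \
    raidz1 c1t4d0 c2t4d0 c3t4d0 c4t4d0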

On 9/7/2010 4:40 AM, hatish wrote:
Thanks for all the replies :)

My mindset is split in two now...

Some detail - I'm using four 1-to-5 SATA port multipliers connected to a 4-port 
SATA RAID card.

I only need reliability and size; as long as my performance is equivalent to a 
single drive, I'm happy.

I'm assuming all the data in the group is read once when re-creating a lost 
drive, and that the space consumed is 50%.

So option 1 - stay with the 2 x 10-drive RaidZ2. My concern is the stress on the 
drives when one fails and the others go crazy (read-wise) to re-create the 
replacement. Is there no way to reduce this stress? Maybe limit the data rate so 
it's not quite so stressful, even though the rebuild will take longer? (Quite 
acceptable.)
[Available space: 16TB, redundancy space: 4TB, repair data read: 4.5TB]
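(The sort of knob I have in mind is something like the resilver tunables in the 
new scan code; the names and values below are only my assumption from what I've 
read, not something I've tested:)

  # Untested sketch for 2010-era OpenSolaris builds: throttle resilver I/O.
  # zfs_resilver_delay = ticks to delay each resilver I/O when the pool is busy;
  # zfs_resilver_min_time_ms = minimum time spent resilvering per txg.
  echo "zfs_resilver_delay/W0t10" | mdb -kw
  echo "zfs_resilver_min_time_ms/W0t1000" | mdb -kw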

And option 2 - add a 21st drive to one of the motherboard SATA ports and go with 
3 x 7-drive RaidZ2.
[Available space: 15TB, redundancy space: 6TB, repair data read: 3TB]
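In zpool terms that would look something like this (disk names are placeholders; 
the 21st disk on the motherboard port is just another device in one of the 
vdevs):

  # 3 x 7-drive raidz2, 21 disks total; disk01..disk21 are placeholder names.
  zpool create tank \
    raidz2 disk01 disk02 disk03 disk04 disk05 disk06 disk07 \
    raidz2 disk08 disk09 disk10 disk11 disk12 disk13 disk14 \
    raidz2 disk15 disk16 disk17 disk18 disk19 disk20 disk21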

Sadly, SSDs won't go too well in a PM-based setup like mine. I may add one 
directly to the motherboard if I can afford it, but again, performance is not a 
priority.

Any further thoughts and ideas are much appreciated.

