Hi,

Following the ZFS Best Practices Guide, my understanding is that neither
choice is very good. There is perhaps a third option, namely:

pool
  vdev 1
    disk
    disk
    ...
    disk
  ...
  vdev n
    disk
    disk
    ...
    disk

The capacities of the vdevs add up. As far as I understand, the option to
use a parity-protected stripe set (i.e. raidz) applies at the vdev layer.
The smallest disk limits the capacity of its vdev, not of the pool, so
disk sizes should be kept uniform within each vdev (and ideally across
the pool). Potential hot spares would be universally usable for any vdev
as long as they are at least as large as the largest member of any vdev
(i.e. 2 GB).
The benefits of that layout are that a physical disk failure will affect
no more than one vdev, and that I/O scales across the vdevs just as
capacity does. The drawback is that the per-vdev redundancy comes at a
price in capacity.
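
To make that concrete, such a pool could be built in one step. This is
just a rough sketch with six data disks and one shared spare; the c#t#d#
device names are placeholders, use whatever format(1M) reports on your
system:

  zpool create tank \
      raidz c0t0d0 c0t1d0 c0t2d0 \
      raidz c0t3d0 c0t4d0 c0t5d0 \
      spare c0t6d0

The pool can later be grown by adding another vdev, e.g.
"zpool add tank raidz c1t0d0 c1t1d0 c1t2d0", and "zpool status tank"
will show the disks grouped under their raidz vdevs.
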
I hope I am correct - I am a newbie, just like you.

Regards,

Tonmaus