mike wrote:
> And terminology-wise, one or more zpools create zdevs right?
>
Let's get the terminology right first.

You can have more than one zPool.

Each zPool can have many filesystems which all share *ALL* the space in 
the pool.
Each zPool can get its space from one or more vDevs.

(Yes, you can put more than one vDev in a single pool, and space from all 
vDevs is available to all filesystems - No artificial boundaries here.)
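
As a rough sketch (disk names here are just placeholders), one pool built 
on a mirror vDev, with two filesystems that both draw from the same 
pool-wide free space:

     zpool create tank mirror disk1 disk2
     zfs create tank/home
     zfs create tank/projects
     zfs list     # AVAIL is the same shared pool-wide number for both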

Each vDev can be one of several types.

Single - 1 device      - No redundancy - 100% space usable.
Mirror - 2 devices min - Redundancy increases as you add mirror devices.
                         Available space is equal to smallest device.
RAIDZ1 - 3 devices min - Redundancy allows 1 failure at a time.
                         Available space is (n-1) times smallest device.
RAIDZ2 - 4 devices min - Redundancy allows 2 failures at once.
                         Available space is (n-2) times smallest device.
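
In zpool create terms (disk names are placeholders, and you'd pick just one 
layout per vDev), those types look like:

     zpool create tank disk1                            # single
     zpool create tank mirror disk1 disk2               # mirror
     zpool create tank raidz  disk1 disk2 disk3         # RAIDZ1
     zpool create tank raidz2 disk1 disk2 disk3 disk4   # RAIDZ2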

You can (though I don't know why you'd want to) put vDevs of different 
types in the same pool.
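
If you did want to, it'd look something like this - zpool create should 
complain about the mismatched replication levels, so you'd have to force it:

     zpool create -f tank mirror disk1 disk2 raidz disk3 disk4 disk5
     zpool status tank     # shows both vDevs under the one pool
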
>
> zpool create tank \
> raidz disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
> raidz disk8 disk9 disk10  disk11 disk12 disk13 disk14 \
> spare disk15 
>
> That's pretty much dual parity/dual failure for both pools assuming I swap 
> out the dead drive pretty quickly. Yeah?
>
In this example, you have one pool with 2 vDevs. Each vDev can sustain 
one failure, but 2 failures within the same vDev will take out the whole pool.
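
If a disk in one of the raidz vDevs does die, a manual swap is just 
(disk3 standing in for the failed disk, disk16 for its replacement):

     zpool replace tank disk3 disk16
     zpool status tank     # watch the resilver progress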

If you really can afford to trade performance for redundancy (and no, I 
don't know how much you'd lose), it'd be better to do:

zpool create tank \
     raidz2 disk1  disk2  disk3  disk4  disk5  disk6  disk7 \
            disk8  disk9  disk10 disk11 disk12 disk13 disk14 \
     spare  disk15

Now any 2 disks can fail (3 if the spare has time to get used), and you 
get the same usable space as in your example.
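
The space math, assuming all 14 data disks are the same size: two 7-disk 
RAIDZ1 vDevs give 2 x (7-1) = 12 disks' worth, and one 14-disk RAIDZ2 vDev 
gives 14 - 2 = 12 as well. After creating the pool you can sanity-check 
it with:

     zfs list tank     # AVAIL reflects usable space after parity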

   -Kyle
