> Thus, if you have a 2GB, a 3GB, and a 5GB device in a pool,
 > the pool's capacity is 3 x 2GB = 6GB

If you put the three into one raidz1 vdev, usable capacity will be
2+2 (two data disks, each truncated to the smallest member) until
you replace the 2G disk with a 5G, at which point it will be 3+3;
when you then replace the 3G with a 5G it will be 5+5G. And if you
replace the 5G with a 10G it will still be 5+5G, since the smallest
remaining disk still sets the limit.
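The arithmetic above can be sketched in a few lines (the helper names
are mine, not ZFS's; the rule is that a raidz1 vdev yields (n - 1)
times its smallest disk, while a plain stripe yields the sum):

```python
def raidz1_capacity(disks_gb):
    """Usable capacity of a single raidz1 vdev: one disk's worth of
    parity, and every member is truncated to the smallest disk."""
    return (len(disks_gb) - 1) * min(disks_gb)

def stripe_capacity(disks_gb):
    """Usable capacity when each disk is its own top-level vdev."""
    return sum(disks_gb)

print(raidz1_capacity([2, 3, 5]))    # 2+2  -> 4
print(raidz1_capacity([5, 3, 5]))    # 3+3  -> 6   (2G replaced by 5G)
print(raidz1_capacity([5, 5, 5]))    # 5+5  -> 10  (3G replaced by 5G)
print(raidz1_capacity([5, 5, 10]))   # still 5+5 -> 10
print(stripe_capacity([2, 3, 5]))    # 10, but no redundancy
```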

If one lists out the three disks so they are each their own vdev,
the pool will be roughly 3x faster than the raidz and 2+3+5 in size
(see the example below of mirror and raidz vdevs of different sizes).
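The two layouts come down to a one-word difference in the `zpool
create` line (device names here are hypothetical; run one or the
other, then `zpool list` shows the resulting size):

```shell
# One raidz1 vdev: ~4G usable, single-vdev performance, survives
# the loss of any one disk.
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0

# ...or three single-disk vdevs (a plain stripe): ~10G usable,
# I/O spread across all three disks, but no redundancy for data.
zpool create tank c1t0d0 c1t1d0 c1t2d0

zpool list tank
```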

 > All pools store redundant  metadata, so they can
 > automatically detect  and repair most faults in metadata.

and one can `zfs set copies=2 pool/home` on the 2+3+5
stripe to automatically detect and repair most faults in data
as well, since ZFS attempts to store the extra copies on
different vdevs (mirrors are still the better choice).
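For example (pool and dataset names assumed; note that `copies`
only affects data written after the property is set, not existing
blocks):

```shell
# Keep two copies of every data block in this dataset; ZFS will
# attempt to place the copies on different vdevs of the stripe.
zfs set copies=2 pool/home

# Confirm the property took effect.
zfs get copies pool/home
```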

7 % zpool iostat -v
                capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
root        15.9G   100G      2      0   177K    800
   c2t0d0s7  15.9G   100G      2      0   177K    800
----------  -----  -----  -----  -----  -----  -----
z           3.28T  1.59T    379     19  26.6M   103K
   raidz1    1.83T  1.58T    207     12  14.9M  64.7K
     c0t2d0      -      -     69      6  3.84M  17.1K
     c4t1d0      -      -     69      6  3.84M  17.1K
     c0t6d0      -      -     69      6  3.84M  17.1K
     c0t4d0      -      -     69      6  3.84M  17.1K
     c4t3d0      -      -     69      6  3.84M  17.1K
   raidz1    1.44T  12.0G    172      7  11.7M  37.9K
     c4t4d0      -      -     58      5  3.06M  10.2K
     c4t6d0      -      -     58      5  3.06M  10.2K
     c0t3d0      -      -     58      5  3.06M  10.2K
     c4t2d0      -      -     58      5  3.06M  10.2K
     c0t5d0      -      -     58      5  3.06M  10.2K
----------  -----  -----  -----  -----  -----  -----

1 % zpool iostat -v
                  capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
root          5.28G  24.0G      0      0    863  2.13K
   mirror      5.28G  24.0G      0      0    863  2.13K
     c0t1d0s0      -      -      0      0    297  4.76K
     c0t0d0s0      -      -      0      0    597  4.76K
------------  -----  -----  -----  -----  -----  -----
z              230G   500G     17     76   150K   461K
   mirror      83.8G   182G      6     25  52.1K   158K
     c0t0d0s7      -      -      2     15  85.1K   248K
     c0t1d0s7      -      -      2     15  86.7K   248K
   mirror      72.6G   159G      5     26  49.4K   161K
     c0t2d0        -      -      2     19  82.7K   251K
     c0t3d0        -      -      2     19  81.2K   251K
   mirror      74.0G   158G      5     23  48.3K   142K
     c0t4d0        -      -      2     18  72.3K   232K
     c0t5d0        -      -      2     18  71.9K   232K
------------  -----  -----  -----  -----  -----  -----
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
