Orvar Korvar wrote:
> Ok, so i make one vdev out of 8 discs. And I combine all vdevs into one large 
> zpool? Is it correct?
>
> I have 8 port SATA card. I have 4 drives into one zpool. That is one vdev, 
> right? Now I can add 4 new drives and make them into one zpool. And now I 
> combine both zpool into one zpool? That can not be right? I dont get vdevs. 
> Can someone explain?
>   
A 'vdev' is the basic unit that a zpool is made of.

There are several types of vdevs:

Single device type:

   This vdev is made from one storage device - generally a hard disk 
drive, but there are other possibilities. This type of vdev has *no* 
data redundancy, but ZFS will still be able to notice errors because it 
checksums every block. ZFS also keeps redundant metadata on this one 
device, so metadata has a chance of surviving individual block failures, 
but nothing will save the data in this type of vdev from a full device 
failure. The size of this vdev is the size of the storage device it is 
made from.
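As an illustration (the pool name and device names below are hypothetical, and creating pools requires root on a ZFS-capable system):

```shell
# Each bare device name becomes its own single-device vdev.
# 'tank' and the c*t*d* names are made-up examples.
zpool create tank c0t0d0
```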

Mirrored type:
    This type of vdev is made from 2 or more storage devices. All data 
is written to all devices, so there is data redundancy. The more devices 
in the mirror, the more copies of the data, and the more full device 
failures the vdev can survive.  The size of this vdev is the size of the 
smallest storage device in the mirror. While devices (copies) can be 
added to and removed from a mirror vdev, this only changes the 
redundancy, not the size. However, if the smallest storage device in the 
mirror is replaced with a larger one, the size of the mirror should grow 
to the size of whatever is then the smallest device in the mirror.
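A sketch of how this looks on the command line (pool and device names are hypothetical; this needs root and ZFS):

```shell
# A two-way mirror vdev from two disks:
zpool create tank mirror c0t0d0 c0t1d0

# Attach a third copy to the same mirror - more redundancy, same size:
zpool attach tank c0t0d0 c0t2d0
```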

RAIDZ or RAIDZ1 type:
   This type of vdev is made from 3 or more storage devices. It has data 
redundancy, and can survive the loss of one device in the vdev at a 
time. The available space on a RAIDZ vdev is the size of the smallest 
storage device in the vdev times one less than the number of devices in 
the vdev ( minsize*(n-1) ), because one device's worth of space is used 
for parity information to provide the redundancy. This vdev type cannot 
(currently) have its size changed by adding or removing devices 
(changing 'n'). However, its available space can be increased by 
replacing the current smallest device with a larger one (changing 
'minsize') so that some other device becomes the 'smallest device'. 
NOTE: if the vdev started with identically sized devices, you'll need to 
replace all of them before you see any increase in available space, 
since the 'size of the smallest device' stays the same until they are 
all replaced. Posts by knowledgeable people on this mailing list have 
suggested that there is little benefit to having 10 or more devices in a 
RAIDZ vdev, and that devices should be split into multiple vdevs to keep 
the number in any one vdev in the single digits.
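To make the arithmetic concrete (the sizes here are invented for the example), eight 500 GB devices in one RAIDZ1 vdev work out as:

```shell
# Usable space of a raidz1 vdev: minsize * (n - 1).
# n and minsize_gb are hypothetical example values.
n=8             # number of devices in the vdev
minsize_gb=500  # size of the smallest device, in GB
echo $(( minsize_gb * (n - 1) ))   # one device's worth goes to parity: 3500
```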

RAIDZ2 type:
   This type of vdev is made from 4 or more storage devices. It is 
basically just like RAIDZ1, except it has enough redundancy to survive 2 
device failures at the same time, and the available space is the size of 
the smallest device times *two* less than the number of devices in the 
vdev ( minsize*(n-2) ), because 2 devices' worth of space are used to 
provide the redundancy. Changing the space in this type of vdev is 
limited in the same way as for a RAIDZ vdev.
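The same worked example for RAIDZ2 (again with invented sizes):

```shell
# Usable space of a raidz2 vdev: minsize * (n - 2).
# Hypothetical example: eight devices, smallest is 500 GB.
n=8
minsize_gb=500
echo $(( minsize_gb * (n - 2) ))   # two devices' worth go to parity: 3000
```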

As noted above, a 'storage device' in these descriptions is generally a 
hard disk drive, but it can be other things. ZFS allows you to use files 
on another filesystem, slices (Solaris partitions) of a drive, fdisk 
partitions, hardware RAID LUNs, iSCSI targets, USB thumb drives, etc.
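Files are a convenient way to experiment with these vdev types without spare disks (paths and sizes here are arbitrary; this still needs root on a ZFS-capable system):

```shell
# Create four 256 MB backing files (mkfile is the Solaris idiom).
mkfile 256m /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3

# Build a raidz vdev out of them for testing, then tear it down.
zpool create testpool raidz /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
zpool status testpool
zpool destroy testpool
```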


A zpool is made up of one or more of these vdevs. The size of a zpool is 
the sum of the sizes of the vdevs it is made from. The zpool doesn't add 
any redundancy itself; the vdevs are responsible for that. That is why, 
while a zpool can be made of vdevs of differing types, it's not a good 
idea. The 'zpool create' command will warn you if you try to use a mix 
of redundant and non-redundant vdev types in the same pool. This really 
is a bad idea, since you can't control which data is placed on the 
redundant vdevs and which on the non-redundant ones. If you have data 
with different redundancy needs, you're better off creating more than 
one zpool.

Vdevs can be added to a zpool, but not removed (yet?). Therefore, to 
increase the size of a zpool, you have to either add another full vdev 
to it, or replace one (or more) devices in one of the existing vdevs so 
that the vdev contributes more space to the zpool.
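Both growth paths can be sketched as (pool and device names are hypothetical):

```shell
# Grow a pool by adding a whole new vdev (here, another mirror):
zpool add tank mirror c1t0d0 c1t1d0

# Or grow an existing vdev by replacing a device with a larger one:
zpool replace tank c0t1d0 c2t0d0
```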


I hope this helps.

   -Kyle



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss