Bob Friesenhahn wrote:
> On Wed, 15 Oct 2008, Tomas Ögren wrote:
>>> ZFS does not support RAID0 (simple striping).
>>
>> zpool create mypool disk1 disk2 disk3
>>
>> Sure it does.
>
> This is load-share, not RAID0.  Also, to answer the other fellow, 
> since ZFS does not support RAID0, it also does not support RAID 1+0 
> (10). :-)

Technically correct.  But beware of operational definitions.

From the SNIA Dictionary, http://www.snia.org/education/dictionary

RAID Level 0
    [Storage System] Synonym for data striping.

RAID Level 1
    [Storage System] Synonym for mirroring.

RAID Level 10
    Not defined in the SNIA Dictionary, but generally agreed to be data stripes of
    mirrors.

Data Striping
    [Storage System] A disk array data mapping technique in which
    fixed-length sequences of virtual disk data addresses are mapped to
    sequences of member disk addresses in a regular rotating pattern.

    Disk striping is commonly called RAID Level 0 or RAID 0 because
    of its similarity to common RAID data mapping techniques. It includes
    no redundancy, however, so strictly speaking, the appellation RAID
    is a misnomer.

mirroring
    [Storage System] A configuration of storage in which two or more
    identical copies of data are maintained on separate media; also known
    as RAID Level 1, disk shadowing, real-time copy, and t1 copy.

ZFS dynamic stripes are not restricted to fixed-length sequences, so
they are not, technically, data stripes by the SNIA definition.

ZFS mirrors do fit the SNIA definition of mirroring, though ZFS mirrors
by logical address rather than by physical block offset.

You will often see people describe a pool of multiple top-level mirror
vdevs as RAID-1+0 (or RAID 10), because that is a familiar term.  But if
you see that claim in any of the official documentation, please file a bug.
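
For example, the layout most people would call RAID-1+0 is a dynamic
stripe across mirror vdevs.  A minimal sketch (the pool and disk names
here are made up, only to show the shape):

   zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
   zpool status tank

zpool status will show two top-level mirror vdevs; writes are then
load-shared across them, as described below.
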
>
> With RAID0 and 8 drives in a stripe, if you send a 128K block of data, 
> it gets split up into eight chunks, with a chunk written to each 
> drive.  With ZFS's load share, that 128K block of data only gets 
> written to one of the eight drives and no striping takes place.  The 
> next write is highly likely to go to a different drive.  Load share 
> seems somewhat similar to RAID0 but it is easy to see that it is not 
> by looking at the drive LEDs on a drive array while writes are taking 
> place.

ZFS allocates data in slabs.  By default, the slabs are 1 MByte each.
So a vdev is divided into a collection of slabs, and when ZFS fills a
slab, it moves on to another.  With a dynamic stripe, the next slab may
be on a different vdev, depending on how much free space is available.
So you may see many I/Os hitting one disk, just because they happen
to be allocated on the same vdev, perhaps in the same slab.
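
If you would rather watch this than infer it from drive LEDs, zpool
iostat reports activity per top-level vdev.  A rough example (the pool
name and the 5-second interval are just placeholders):

   zpool iostat -v tank 5

Each top-level vdev and its member disks get their own rows, so you can
see how evenly (or unevenly) a dynamic stripe is spreading the writes at
any given moment.
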
 -- richard
