Hello Tom,

Tuesday, May 23, 2006, 10:37:31 PM, you wrote:


TG> Robert Milkowski wrote:
>> Hello Tom,
>>
>> Tuesday, May 23, 2006, 9:46:24 PM, you wrote:
>>
>> TG> Hi,
>>
>> TG> I have these two pools, four luns each. One has two mirrors of two luns
>> TG> each; the other is one mirror of four luns.
>>
>> TG> I am trying to figure out what the pros and cons are of these two
>> TG> configs.
>>
>> TG> One thing I have noticed is that the single mirror 4 lun config can 
>> TG> survive as many as three lun failures.  The other config only two.
>> TG> I am thinking that space efficiency is similar because zfs stripes across
>> TG> all the luns in both configs.
>>
>> TG> So that being said, I would like to hear from others on the pros and cons
>> TG> of these two approaches.
>>
>> TG> Thanks ahead,
>> TG> -tomg
>>
>> TG>        NAME              STATE     READ WRITE CKSUM
>> TG>        mypool            ONLINE       0     0     0
>> TG>          mirror          ONLINE       0     0     0
>> TG>            /export/lun5  ONLINE       0     0     0
>> TG>            /export/lun2  ONLINE       0     0     0
>> TG>          mirror          ONLINE       0     0     0
>> TG>            /export/lun3  ONLINE       0     0     0
>> TG>            /export/lun4  ONLINE       0     0     0
>>
>> TG>        NAME              STATE     READ WRITE CKSUM
>> TG>        newpool           ONLINE       0     0     0
>> TG>          mirror          ONLINE       0     0     0
>> TG>            /export/luna  ONLINE       0     0     0
>> TG>            /export/lunb  ONLINE       0     0     0
>> TG>            /export/lund  ONLINE       0     0     0
>> TG>            /export/lunc  ONLINE       0     0     0
>>
>>
>> In the first config you should get a pool capacity equal to
>> '2x lun size'. In the second config only '1x lun size'.
>> So in the second config you get better redundancy but only half
>> the usable space.
>>
>>   
TG> OK, I see that; df shows it explicitly.

[EMAIL PROTECTED]>> df -F zfs -h
TG> Filesystem             size   used  avail capacity  Mounted on
TG> mypool                 2.0G    39M   1.9G     2%    /mypool
TG> newpool               1000M     8K  1000M     1%    /newpool

TG> What confused me is that ZFS does dynamic striping, and if I write to the
TG> 2 x 2-lun mirror config all of the disks get IO. But my error in thinking
TG> was in how the data gets spread out. It must be that the writes get striped
TG> for bandwidth utilization, but the blocks and their copies are not spread
TG> across the mirrors. I'd like to understand that better.

TG> It sure is good to be able to experiment with devices.
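
In case anyone wants to reproduce the setup, here's a rough sketch with
file-backed vdevs (the 1g size is just a guess from your df output; any
size will do as long as the files are at least 64MB):

  # backing files for the first pool
  mkfile 1g /export/lun2 /export/lun3 /export/lun4 /export/lun5
  # two 2-way mirrors, striped - usable space ~2x one lun
  zpool create mypool mirror /export/lun5 /export/lun2 \
                      mirror /export/lun3 /export/lun4

  # backing files for the second pool
  mkfile 1g /export/luna /export/lunb /export/lunc /export/lund
  # one 4-way mirror - usable space ~1x one lun
  zpool create newpool mirror /export/luna /export/lunb /export/lunc /export/lund

  # the capacities should then show roughly the 2:1 difference you saw
  zpool list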

Well, 'mirror A B mirror C D' with ZFS actually behaves like RAID-10
(a stripe over mirrors). The main difference here is the variable stripe
width, but when it comes to protection it's just RAID-10 plus checksums
for data and metadata. You can imagine such a config as stacked RAID -
the same as if you had created two mirrors on a HW RAID, exposed the two
resulting LUNs to the host, and then just striped over them with ZFS
(zpool create pool X Y - where X is one mirror of two disks and Y is
another mirror of two disks). The difference is the variable stripe
width and the checksums (and a more clever IO scheduler?).
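
To make the "stacked RAID" analogy concrete, a rough sketch (the c1tXd0
device names are made up just for illustration; X and Y stand for the two
mirrored LUNs the HW RAID would present):

  # ZFS doing both layers itself - a stripe over two ZFS mirrors:
  zpool create pool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

  # vs. HW RAID building the mirrors and ZFS just striping over the
  # two LUNs it presents:
  zpool create pool X Y

Protection-wise the layout is the same; the ZFS-only version adds the
variable stripe width and the end-to-end checksums mentioned above.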

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
