Robert Milkowski wrote:
Hello Tom,

Tuesday, May 23, 2006, 9:46:24 PM, you wrote:

TG> Hi,

TG> I have these two pools, four luns each. One has two mirrors x two luns,
TG> the other is one mirror x 4 luns.

TG> I am trying to figure out what the pros and cons are of these two configs.

TG> One thing I have noticed is that the single mirror 4 lun config can
TG> survive as many as three lun failures. The other config only two.
TG> I am thinking that space efficiency is similar because zfs stripes across
TG> all the luns in both configs.

TG> So that being said, I would like to hear from others on the pros and cons
TG> of these two approaches.

TG> Thanks in advance,
TG> -tomg

TG>         NAME              STATE     READ WRITE CKSUM
TG>         mypool            ONLINE       0     0     0
TG>           mirror          ONLINE       0     0     0
TG>             /export/lun5  ONLINE       0     0     0
TG>             /export/lun2  ONLINE       0     0     0
TG>           mirror          ONLINE       0     0     0
TG>             /export/lun3  ONLINE       0     0     0
TG>             /export/lun4  ONLINE       0     0     0

TG>         NAME              STATE     READ WRITE CKSUM
TG>         newpool           ONLINE       0     0     0
TG>           mirror          ONLINE       0     0     0
TG>             /export/luna  ONLINE       0     0     0
TG>             /export/lunb  ONLINE       0     0     0
TG>             /export/lund  ONLINE       0     0     0
TG>             /export/lunc  ONLINE       0     0     0


In the first config you should get a pool with usable capacity equal to
2x the LUN size; in the second config, only 1x the LUN size.
So with the second config you get better redundancy but only half the
usable storage.
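If you want to reproduce the two configs, here is a rough sketch using
file-backed LUNs. The 1 GB size is a guess based on the df output below;
the paths match the status output above.

# Create eight 1 GB backing files to act as LUNs (size assumed from
# the df output below; adjust to match your setup).
mkfile 1g /export/lun2 /export/lun3 /export/lun4 /export/lun5
mkfile 1g /export/luna /export/lunb /export/lunc /export/lund

# Config 1: two 2-way mirrors, striped -> usable capacity ~2x LUN size.
zpool create mypool mirror /export/lun5 /export/lun2 \
                    mirror /export/lun3 /export/lun4

# Config 2: one 4-way mirror -> usable capacity ~1x LUN size.
zpool create newpool mirror /export/luna /export/lunb /export/lund /export/lunc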

OK, I see that; df shows it explicitly.

[EMAIL PROTECTED]> df -F zfs -h
Filesystem             size   used  avail capacity  Mounted on
mypool                 2.0G    39M   1.9G     2%    /mypool
newpool               1000M     8K  1000M     1%    /newpool

What confused me is that ZFS does dynamic striping: if I write to the pool
with two 2-LUN mirrors, all of the disks get I/O. My error in thought was in
how the data gets spread out. It must be that successive blocks are striped
across the top-level mirrors for bandwidth, but a given block and its
redundant copy always stay within a single mirror, which is why the usable
capacity is the sum of the mirrors rather than the sum of the LUNs. I'd like
to understand that better.
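One way to watch the striping directly is zpool iostat; this is just a
sketch, assuming the mypool layout and mountpoint shown above:

# Watch per-vdev I/O once per second while writing a large file.
# Both top-level mirrors should see writes, but each block's two
# copies land on the two sides of one mirror, never across mirrors.
zpool iostat -v mypool 1 &
dd if=/dev/zero of=/mypool/bigfile bs=128k count=2000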

It sure is good to be able to experiment with file-backed devices.
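For example, you can poke at the failure-tolerance point above without
risking real disks; a sketch, assuming the newpool layout:

# Take three sides of the 4-way mirror offline; the pool should stay
# up (DEGRADED) because one valid copy of every block remains.
zpool offline newpool /export/luna
zpool offline newpool /export/lunb
zpool offline newpool /export/lund
zpool status newpool

# Bring them back. In the 2x2 config, losing both halves of the same
# mirror would lose the pool, so it tolerates two failures only if
# they hit different mirrors.
zpool online newpool /export/luna /export/lunb /export/lund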

-tomg
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
