On 1/12/07, Kyle McDonald <[EMAIL PROTECTED]> wrote:
Patrick P Korsnick wrote:
> hi,
>
> i just set up snv_54 on an old p4 celeron system and even tho the processor
> is crap, it's got 3 7200RPM HDs: 1 80GB and 2 40GBs.  so i'm wondering if there is
> an optimal way to lay out the ZFS pool(s) to make this old girl as fast as
> possible....
>
> as it stands now i've got the following drive layout:
>
> pri master: 80GB (call it drive 1)
> pri slave: 40GB (drive 2)
>
> sec master: 40GB (drive 3)
> sec slave: DVD
>
> (all connected with 80 conductor ribbons)
>
> my partitions are:
> drive 1: i've got 2 10GB UFS root slices (so i can do live upgrades), and a 1GB
> swap slice.
> i've got one big zpool consisting of a 50GB slice on drive 1 and all of drives 2
> & 3.
>
> i'm not sure that this is the optimal layout for striping.  i don't need
> mirroring or redundancy -- just speed.  i'm thinking maybe i'd be better
> booting off one of the smaller drives and putting the other two on one
> controller and putting the zpool on those only.  spanning the two
> IDE controllers with the single zpool seems like it might be a bad
> idea, but i am just postulating here....

That is backwards: you want them split, one on each controller, so that if
one controller chip dies, the data stays intact and safe, provided the two
drives are mirrored.

For the most space while keeping safety, take a 40GB slice off the
80GB drive and combine it with the other 2x 40GB drives in a raidz group.
Composed of 3x 40GB pieces, that gives you about 80GB of usable disk
space.
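As a sketch, that layout would look something like this (the device names
c0d0, c0d1, c1d0 and slice s3 are made up for illustration; check your
actual ones with `format`):

```shell
# raidz across a 40GB slice of the 80GB drive plus both 40GB drives
# c0d0s3 = hypothetical 40GB slice on drive 1; c0d1, c1d0 = the 40GB drives
zpool create tank raidz c0d0s3 c0d1 c1d0

# confirm the pool layout and health
zpool status tank
```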

Your other choice is to mirror the 2x 40GB drives together in one pool, so
it can survive the loss of a drive and still keep your data, and then use
the remaining slice off the 80GB drive as a single-drive pool that holds
temporary data that isn't important, because that data is gone if
the drive dies.
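In command form that would be roughly the following (again, device and pool
names here are hypothetical placeholders):

```shell
# mirrored pool on the two 40GB drives -- survives the loss of one drive
zpool create safe mirror c0d1 c1d0

# separate single-device pool on the leftover slice of the 80GB drive;
# no redundancy, so only put scratch data here
zpool create scratch c0d0s3
```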

Needless to say I would recommend the first idea, unless you can find
another 40GB drive and another controller and use 4x pieces to make a
120GB pool. But for best performance it's best to allocate whole drives,
so ZFS can enable write caching on them.
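The whole-drive vs. slice distinction shows up in how you name the device
when creating the pool (device names again hypothetical):

```shell
# whole disk (no slice suffix): ZFS labels the disk itself and can
# safely turn on the drive's write cache
zpool create tank c0d1

# slice (sN suffix): other slices may share the disk, so ZFS leaves
# the drive's write cache setting alone
zpool create tank c0d1s0
```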

James Dickens
uadmin.blogspot.com



>
>
Why not take all but 40GB of the 80GB drive for the OS/boot, then take the
remaining 40GB plus the 40GB each from drive 2 and drive 3 and put them in
a 3-device raidz? This will at least give you some redundancy.

    -Kyle
>
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
