On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
>
>> Leon Koll wrote:
>>
>> > <...>
>> >
>> >> So having 4 pools isn't a recommended config - i would destroy
>> those 4
>> >> pools and just create 1 RAID-0 pool:
>> >> #zpool create sfsrocks c4t001738010140000Bd0 c4t001738010140000Cd0
>> >> c4t001738010140001Cd0 c4t0017380101400012d0
>> >>
>> >> each of those devices is a 64GB lun, right?
>> >
>> >
>> > I did it - created one pool, 4*64GB in size, and I'm running the
>> > benchmark now. I'll update you on the results, but one pool is
>> > definitely not what I need. My target is SunCluster with HA-ZFS,
>> > where I need 2 or 4 pools per node.
>> >
>> Why do you need 2 or 4 pools per node?
>>
>> If you're doing HA-ZFS (which is SunCluster 3.2 - only available in beta
>> right now), then you should divide your storage up according to the number of
>
>
> I know; I'm running 3.2 now.
>
>> *active* pools. So say you have 2 nodes and 4 luns (each lun being
>> 64GB), and only need one active node - then you can create one pool of
>
>
> Having only one active node doesn't look smart to me. I want to
> distribute the load between 2 nodes, not have 1 active and 1 standby.
> The LUN size in this test is 64GB, but in the real configuration it
> will be 6TB.
>
>> all 4 luns, and attach the 4 luns to both nodes.
>>
>> The way HA-ZFS basically works is that when the "active" node fails, it
>> does a 'zpool export', and the takeover node does a 'zpool import'. So
>> both nodes are using the same storage, but they cannot use the same
>> storage at the same time, see:
>> http://www.opensolaris.org/jive/thread.jspa?messageID=49617
>
>
> Yes, it works this way.
>
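For reference, the manual hand-off boils down to roughly this (reusing the
"sfsrocks" pool name from the example above; SunCluster normally drives
these steps itself):

  # zpool export sfsrocks     (on the node giving up the pool)
  # zpool import sfsrocks     (on the takeover node)
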
>>
>> If however, you have 2 nodes, 4 luns, and wish both nodes to be active,
>> then you can divvy up the storage into two pools. So each node has one
>> active pool of 2 luns. All 4 luns are doubly attached to both nodes,
>> and when one node fails, the takeover node then has 2 active pools.
>
>
> I agree with you - I can have 2 active pools, not 4, in the case of a
> dual-node cluster.
>
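As a sketch of that split, reusing the four 64GB luns from the earlier
example (the pool names here are made up):

  # zpool create pool-a c4t001738010140000Bd0 c4t001738010140000Cd0
  # zpool create pool-b c4t001738010140001Cd0 c4t0017380101400012d0

Each node keeps one pool imported; after a failover the surviving node
imports both.
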
>>
>> So how many nodes do you have? And how many do you wish to be "active"
>> at a time?
>
>
> Currently - 2 nodes, both active. If I define 4 pools, I can easily
> expand the cluster to a 4-node configuration; that may be a good
> reason to have 4 pools.
Ok, that makes sense.
>>
>> And what was your configuration for VxFS and SVM/UFS?
>
>
> 4 SVM concat volumes (I need a concatenation of 1TB LUNs because I'm on
> SC3.1, which doesn't support EFI labels) with UFS or VxFS on top.
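(As a rough sketch, an SVM concat of two such 1TB LUNs with UFS on top
would look something like the following - the device names and mount
point are hypothetical:

  # metainit d10 2 1 c1t0d0s0 1 c1t1d0s0    (concat of two one-slice stripes)
  # newfs /dev/md/rdsk/d10
  # mount /dev/md/dsk/d10 /export/bench
)
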
So you have 2 nodes, 2 file systems (of either UFS or VxFS) on each node?
I have 2 nodes, 2 file systems per node. One share is served via bge0,
the second via bge1.
I'm just trying to make sure it's a fair comparison between ZFS, UFS, and
VxFS.
After I saw that ZFS performance (when the box isn't stuck) is about 3
times lower than UFS/VxFS, I realized I should hold off on ZFS until the
official Solaris 11 release.
I don't believe it's possible to do some magic with my setup and
increase ZFS performance 3 times. Correct me if I'm wrong.
>
> And now comes the question - my short test showed that the 1-pool
> config doesn't behave better than the 4-pool one: with the first the
> box hung, with the second it didn't.
> Why do you think the 1-pool config is better?
I suggested the 1-pool config before I knew you were doing HA-ZFS :)
Purposely dividing up your storage (by creating separate pools) in a
non-clustered environment usually doesn't make sense (root being one
notable exception).
I see.
Thanks,
-- Leon
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss