Leon Koll wrote:

<...>
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 striped (RAID-0) pool:
# zpool create sfsrocks c4t001738010140000Bd0 c4t001738010140000Cd0 \
    c4t001738010140001Cd0 c4t0017380101400012d0
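Once it's created, you can sanity-check the pool's size and health
with, e.g.:

# zpool list sfsrocks
# zpool status sfsrocks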

Each of those devices is a 64GB LUN, right?

I did it - I created one pool, 4*64GB in size, and I'm running the benchmark now.
I'll update you on the results, but one pool is definitely not what I need.
My target is SunCluster with HA-ZFS, where I need 2 or 4 pools per node.

Why do you need 2 or 4 pools per node?

If you're doing HA-ZFS (which requires SunCluster 3.2 - only available in beta right now), then you should divide your storage up according to the number of *active* pools you need. So say you have 2 nodes and 4 LUNs (each LUN being 64GB), and only need one active node - then you can create one pool out of all 4 LUNs, and attach the 4 LUNs to both nodes.
The way HA-ZFS basically works is that when the "active" node fails, it 
does a 'zpool export', and the takeover node does a 'zpool import'.  So 
both nodes are attached to the same storage, but they cannot use it at 
the same time; see:
http://www.opensolaris.org/jive/thread.jspa?messageID=49617
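
In practice the handoff boils down to something like this (a rough
sketch using the example pool name from above; the cluster framework
normally drives these commands for you):

(on the node giving up the pool)
# zpool export sfsrocks

(on the takeover node)
# zpool import sfsrocks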

If, however, you have 2 nodes and 4 LUNs and wish both nodes to be active, then you can divvy up the storage into two pools, so that each node has one active pool of 2 LUNs. All 4 LUNs are attached to both nodes, and when one node fails, the takeover node then has 2 active pools.
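For example, the two-pool split could look something like this (the
pool names here are just placeholders; the LUNs are the ones from
above):

# zpool create pool-a c4t001738010140000Bd0 c4t001738010140000Cd0
# zpool create pool-b c4t001738010140001Cd0 c4t0017380101400012d0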
So how many nodes do you have?  And how many do you wish to be "active" 
at a time?
And what was your configuration for VxFS and SVM/UFS?

eric

