Hi,

I am setting up a Pacemaker/Corosync/GlusterFS HA cluster.

Pacemaker ver. 1.0.9.1

With GlusterFS I have 4 nodes serving replicated (RAID1-style) storage back-ends and up to 5 servers mounting the store. Without getting into the specifics of how Gluster works: as long as any one of the 4 backend nodes is running, all 5 servers will be able to mount the store.
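For context, the replication is done client-side; a client volfile for this kind of setup looks roughly like the sketch below (illustrative only -- the brick/volume names are made up, not my actual config):

    # illustrative client volfile (e.g. repstore.vol)
    volume remote1
      type protocol/client
      option remote-host 192.168.5.221    # backend 1's service IP
      option remote-subvolume brick
    end-volume

    # remote2..remote4 are defined the same way for the other backends

    volume repstore
      type cluster/replicate              # RAID1-style replication
      subvolumes remote1 remote2 remote3 remote4
    end-volume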

I have started setting up a test cluster and have the following (`crm configure show` output):

node test1
node test2
node test3
node test4
primitive glfs ocf:cybersites:glusterfs \
        params volfile="repstore.vol" mount_dir="/home" \
        op monitor interval="10s" timeout="30"
primitive glfsd-1 ocf:cybersites:glusterfsd \
        params volfile="glfs.vol" \
        op monitor interval="10s" timeout="30" \
        meta target-role="Started"
primitive glfsd-1-IP ocf:heartbeat:IPaddr2 \
        params ip="192.168.5.221" nic="eth1" cidr_netmask="24" \
        op monitor interval="5s"
primitive glfsd-2 ocf:cybersites:glusterfsd \
        params volfile="glfs.vol" \
        op monitor interval="10s" timeout="30" \
        meta target-role="Started"
primitive glfsd-2-IP ocf:heartbeat:IPaddr2 \
        params ip="192.168.5.222" nic="eth1" cidr_netmask="24" \
        op monitor interval="5s" \
        meta target-role="Started"
primitive glfsd-3 ocf:cybersites:glusterfsd \
        params volfile="glfs.vol" \
        op monitor interval="10s" timeout="30" \
        meta target-role="Started"
primitive glfsd-3-IP ocf:heartbeat:IPaddr2 \
        params ip="192.168.5.223" nic="eth1" cidr_netmask="24" \
        op monitor interval="5s"
primitive glfsd-4 ocf:cybersites:glusterfsd \
        params volfile="glfs.vol" \
        op monitor interval="10s" timeout="30" \
        meta target-role="Started"
primitive glfsd-4-IP ocf:heartbeat:IPaddr2 \
        params ip="192.168.5.224" nic="eth1" cidr_netmask="24" \
        op monitor interval="5s"
group glfsd-1-GROUP glfsd-1-IP glfsd-1
group glfsd-2-GROUP glfsd-2-IP glfsd-2
group glfsd-3-GROUP glfsd-3-IP glfsd-3
group glfsd-4-GROUP glfsd-4-IP glfsd-4
clone clone-glfs glfs \
        meta clone-max="4" clone-node-max="1" target-role="Started"
location block-glfsd-1-GROUP-test2 glfsd-1-GROUP -inf: test2
location block-glfsd-1-GROUP-test3 glfsd-1-GROUP -inf: test3
location block-glfsd-1-GROUP-test4 glfsd-1-GROUP -inf: test4
location block-glfsd-2-GROUP-test1 glfsd-2-GROUP -inf: test1
location block-glfsd-2-GROUP-test3 glfsd-2-GROUP -inf: test3
location block-glfsd-2-GROUP-test4 glfsd-2-GROUP -inf: test4
location block-glfsd-3-GROUP-test1 glfsd-3-GROUP -inf: test1
location block-glfsd-3-GROUP-test2 glfsd-3-GROUP -inf: test2
location block-glfsd-3-GROUP-test4 glfsd-3-GROUP -inf: test4
location block-glfsd-4-GROUP-test1 glfsd-4-GROUP -inf: test1
location block-glfsd-4-GROUP-test2 glfsd-4-GROUP -inf: test2
location block-glfsd-4-GROUP-test3 glfsd-4-GROUP -inf: test3


Now I need a way of saying that clone-glfs can start once any one of glfsd-1, glfsd-2, glfsd-3, or glfsd-4 has started.
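The obvious thing, a plain order constraint, only lets me name a single backend, e.g. (constraint name made up):

    order glfs-after-glfsd-1 inf: glfsd-1-GROUP clone-glfs

but that ties clone-glfs to glfsd-1 alone: if test1 is down, the clients never start even though the other three backends are fine.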

Any ideas? I have read the crm CLI document, as well as many iterations of Clusters from Scratch, etc.

I just can't seem to find an answer. Can it be done?

Pat.

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker