Greetings,

I am getting closer to what I need, but I am having trouble figuring out a rule for p_VIPeth1_1 that will cause its score to be -inf if p_R_NODE1 is not running. Here is the rule that I am struggling with:

location p_VIPeth1_1_loc2 p_VIPeth1_1 \
        rule $id="p_VIPeth1_1_loc2-rule" -inf: p_R_NODE1 eq Stopped
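From what I can tell, a location rule expression tests node attributes (like #uname or the ping attribute) rather than resource state, so "p_R_NODE1 eq Stopped" would be looking for a node attribute literally named p_R_NODE1, which no node ever has. If that is right, a colocation constraint might be the proper tool for tying the VIP to the running rabbitmq instance. A rough, untested sketch (the constraint name c_vip_with_rabbit is just a placeholder):

# Untested: only allow p_VIPeth1_1 on a node where p_R_NODE1 is running.
# If p_R_NODE1 stops on that node, the VIP can no longer run there and
# must move away, which is the -inf behavior I am after.
colocation c_vip_with_rabbit inf: p_VIPeth1_1 p_R_NODE1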
\/ The entire config so far is below \/

root@clust1:~# crm configure show
node $id="404ea40c-f92d-4649-869d-41beaf261d87" clust1 \
        attributes standby="off"
node $id="ae33be72-ccbb-4f54-859f-fd400efeb60b" clust2 \
        attributes standby="off"
primitive p_PINGDB ocf:pacemaker:ping \
        params host_list="192.168.254.42" name="p_PINGDB" \
        op monitor interval="15s" timeout="5s"
primitive p_R_NODE1 lsb:rabbitmq-server \
        op monitor interval="15s" timeout="15s" \
        meta target-role="Started" is-managed="true"
primitive p_R_NODE2 lsb:rabbitmq-server \
        op monitor interval="15s" timeout="15s" \
        meta target-role="Started"
primitive p_VIPeth1_1 ocf:heartbeat:IPaddr \
        params ip="192.168.254.78" cidr_netmask="255.255.255.0" nic="eth1" \
        op monitor interval="40s" timeout="20s" \
        meta target-role="Started" is-managed="true"
clone p_PINGDB_clone p_PINGDB
location cli-standby-p_R_NODE1 p_R_NODE1 \
        rule $id="cli-standby-rule-p_R_NODE1" -inf: #uname eq clust2
location p_R_NODE1_loc p_R_NODE1 -inf: clust2
location p_R_NODE1_pref p_R_NODE1 inf: clust1
location p_R_NODE2_loc p_R_NODE2 -inf: clust1
location p_R_NODE2_pref p_R_NODE2 inf: clust2
location p_VIPeth1_1_loc p_VIPeth1_1 \
        rule $id="p_VIPeth1_1_loc-rule" -inf: p_PINGDB lte 0
location p_VIPeth1_1_loc2 p_VIPeth1_1 \
        rule $id="p_VIPeth1_1_loc2-rule" -inf: p_R_NODE1 eq Stopped
property $id="cib-bootstrap-options" \
        dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        default-action-timeout="120" \
        last-lrm-refresh="1395671783"
root@clust1:~#

Thanks,
Steve

From: and...@beekhof.net
Date: Wed, 19 Mar 2014 09:37:43 +1100
To: pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] Don't want to stop lsb resource on migration

On 19 Mar 2014, at 6:56 am, Bingham <knee-jerk-react...@hotmail.com> wrote:

>
> My problem is that I need to have rabbitmq running on both node1 and node2.
> I also need the IP to fail over if rabbitmq were to fail on the current node.
>
> The 2 rabbitmq services are communicating with each other.
> Data is pushed to the clients.
>
> Even though the IP may currently live on node1, data may flow through node1
> then through node2 (via rabbit) and out to client.
>
>  Rnode1 -------> client1
>  /         /|\
> DB---->VIP  |
>  \         \|/
>  Rnode2 --------> client2
>
> Maybe I should not have these resources grouped together since that implies
> collocation infinity for IP and rabbitmq?

Correct. It also sounds like rabbitmq should be a master/slave resource

>
> Steve
>
>
> From: and...@beekhof.net
> Date: Tue, 18 Mar 2014 11:44:34 +1100
> To: pacemaker@oss.clusterlabs.org
> Subject: Re: [Pacemaker] Don't want to stop lsb resource on migration
>
>
> On 14 Mar 2014, at 1:00 am, Bingham <knee-jerk-react...@hotmail.com> wrote:
>
> > Hello,
> >
> > My setup:
> > I have a 2 node cluster using pacemaker and heartbeat. I have 2
> > resources, ocf::heartbeat:IPaddr and lsb:rabbitmq-server.
> > I have these 2 resources grouped together and they will fail over
> > to the other node.
> >
> > question:
> > When rabbitmq is migrated to node1 from node2 I would like to
> > 'not' have the </etc/init.d/rabbitmq-server stop> happen on the failed
> > server (node1 in this example).
>
> 'migrate' has special meaning here.
> After a failure rabbitmq is moved (stopped on the old node and started on the
> new one), which is different from a migration.
>
> Leaving rabbitmq in an unclean state on node1 would definitely not be a good
> idea.
>
> > Is it possible to do this in crm?
>
> > I realize that I could hack the initscript's case statement for
> > stop to just "exit 0", but I am hoping there is a way to do this in crm.
> >
> > Thanks for any help,
> > Steve
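P.S. On the master/slave suggestion above: my understanding is that lsb:rabbitmq-server cannot be promoted directly, since a multistate resource needs an OCF agent that implements promote/demote. Assuming such an agent existed (p_rabbit below is hypothetical), I imagine the configuration would look roughly like this:

# Hypothetical sketch: p_rabbit would have to be an OCF agent with
# promote/demote support; the stock lsb:rabbitmq-server script has none.
ms ms_rabbit p_rabbit \
        meta master-max="1" clone-max="2" notify="true"
# Keep the VIP on whichever node currently holds the master role.
colocation c_vip_on_master inf: p_VIPeth1_1 ms_rabbit:Master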
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org