I'm not sure if I've configured things correctly, as the last time I did this was on Heartbeat 2.0.7 or so. So it's either a bug, or (far more likely) I've stuffed something up. Here's the setup:

* there are two resource groups, one with higher priority ("master") and one with lower priority ("slave") - note that I'm not actually configuring them as master/slave resources
* there are two nodes in the cluster (A and B)
* the groups are constrained to run (see the sketch after this list):
  - only if a pingd instance is running successfully
  - colocated with -INFINITY against each other (i.e. they cannot run together)
  - in the order of the "master" resource group first, then the "slave" resource group
  - preferring the "master" resource group to run on node A
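For reference, the order constraint and the pingd-based location rules are expressed roughly along these lines (only a sketch - the ids, scores and the "pingd" attribute name here are illustrative rather than copied verbatim from my CIB):

  <constraints>
    <!-- start "master" before "slave" -->
    <rsc_order id="order-master-then-slave" first="master" then="slave"/>
    <!-- keep "master" off any node where the pingd attribute is missing or 0 -->
    <rsc_location id="master-connected" rsc="master">
      <rule id="master-connected-rule" score="-INFINITY" boolean-op="or">
        <expression id="master-connected-undef" attribute="pingd" operation="not_defined"/>
        <expression id="master-connected-zero" attribute="pingd" operation="lte" value="0"/>
      </rule>
    </rsc_location>
    <!-- same rule for "slave" -->
    <rsc_location id="slave-connected" rsc="slave">
      <rule id="slave-connected-rule" score="-INFINITY" boolean-op="or">
        <expression id="slave-connected-undef" attribute="pingd" operation="not_defined"/>
        <expression id="slave-connected-zero" attribute="pingd" operation="lte" value="0"/>
      </rule>
    </rsc_location>
  </constraints>

The colocation, node preference and the pingd clone itself are in the CIB snippets further down.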
With the cluster in its default running state of "master" on A and "slave" on B, everything seems fine. When I "fail" node A, "slave" stops on node B, but then no resources are started on node B at all. It's clearly either a conflict of constraints or the resource group priorities being ignored, but I can't pick which. Here are (hopefully) the relevant snippets of the CIB:

<resources>
  <group id="master">
    <meta_attributes id="master-meta_attributes">
      <nvpair id="master-meta_attributes-priority" name="priority" value="1000"/>
    </meta_attributes>
    <!-- group members omitted -->
  </group>
  <group id="slave">
    <meta_attributes id="slave-meta_attributes">
      <nvpair id="slave-meta_attributes-priority" name="priority" value="0"/>
    </meta_attributes>
    <!-- group members omitted -->
  </group>
  <clone id="pingdclone">
    <meta_attributes id="pingdclone-meta_attributes">
      <nvpair id="pingdclone-meta_attributes-globally-unique" name="globally-unique" value="false"/>
    </meta_attributes>
    <primitive class="ocf" id="pingd" provider="pacemaker" type="pingd">
      <instance_attributes id="pingd-instance_attributes">
        <nvpair id="pingd-instance_attributes-host_list" name="host_list" value="X.X.X.X"/>
        <nvpair id="pingd-instance_attributes-multiplier" name="multiplier" value="100"/>
      </instance_attributes>
      <operations>
        <op id="pingd-monitor-15s" interval="15s" name="monitor" timeout="5s"/>
      </operations>
    </primitive>
  </clone>
</resources>

<constraints>
  <rsc_location id="cli-prefer-master0" node="nodeA" rsc="master" score="1000"/>
  <rsc_location id="cli-prefer-master1" node="nodeB" rsc="master" score="0"/>
  <rsc_colocation id="separate_master_and_slave" rsc="master" score="-INFINITY" with-rsc="slave"/>
</constraints>

Any help would be appreciated.

--
Regards,
Oliver Hookins
Anchor Systems