Hello, I have a two-node Pacemaker + CMAN cluster on CentOS 6.4 with the configuration shown below. I'm struggling to get the resources contained in the EIP-AND-VARNISH group back online, after a failover, on the node that failed.
I start with all varnish resources online on both nodes and the EIP resource online on node1. I stop the varnish service on node1, and all resources then fail over to node2 as expected. However, if I then restart the varnish services on node1 and run a crm_resource --cleanup on those services, the cluster is disrupted and a full failover happens again.

My question is: how can I restart the varnish resources on a previously failed node without causing a failover, so that they are marked online (Started) on that node again? Is it a matter of the order of the cleanup and the starting, or have I done something wrong in my configuration?

CLUSTER CONFIG

[root@node1 ~]# pcs config
Corosync Nodes:

Pacemaker Nodes:
 node1 node2

Resources:
 Resource: ClusterEIP_1.2.3.4 (provider=pacemaker type=EIP class=ocf)
  Attributes: first_network_interface_id=eni-e4e0b68c second_network_interface_id=eni-35f9af5d first_private_ip=10.50.3.191 second_private_ip=10.50.3.91 eip=1.2.3.4 alloc_id=eipalloc-376c3c5f
  Operations: monitor interval=30s
 Clone: EIP-AND-VARNISH-clone
  Group: EIP-AND-VARNISH
   Resource: Varnish (provider=redhat type=varnish.sh class=ocf)
    Operations: monitor interval=30s
   Resource: Varnishlog (provider=redhat type=varnishlog.sh class=ocf)
    Operations: monitor interval=30s
   Resource: Varnishncsa (provider=redhat type=varnishncsa.sh class=ocf)
    Operations: monitor interval=30s

Location Constraints:
Ordering Constraints:
 ClusterEIP_1.2.3.4 then Varnish
 Varnish then Varnishlog
 Varnishlog then Varnishncsa
Colocation Constraints:
 Varnish with ClusterEIP_1.2.3.4
 Varnishlog with Varnish
 Varnishncsa with Varnishlog

Cluster Properties:
 dc-version: 1.1.8-7.el6-394e906
 cluster-infrastructure: cman
 last-lrm-refresh: 1381020426
 expected-quorum-votes: 2
 stonith-enabled: false
 no-quorum-policy: ignore

CONSTRAINT AND RSC DEFAULTS

[root@node1 ~]# pcs constraint all
Location Constraints:
Ordering Constraints:
 ClusterEIP_1.2.3.4 then Varnish (Mandatory) (id:order-ClusterEIP_1.2.3.4-Varnish-mandatory)
 Varnish then Varnishlog (Mandatory) (id:order-Varnish-Varnishlog-mandatory)
 Varnishlog then Varnishncsa (Mandatory) (id:order-Varnishlog-Varnishncsa-mandatory)
Colocation Constraints:
 Varnish with ClusterEIP_1.2.3.4 (INFINITY) (id:colocation-Varnish-ClusterEIP_1.2.3.4-INFINITY)
 Varnishlog with Varnish (INFINITY) (id:colocation-Varnishlog-Varnish-INFINITY)
 Varnishncsa with Varnishlog (INFINITY) (id:colocation-Varnishncsa-Varnishlog-INFINITY)

[root@node1 ~]# pcs resource rsc defaults
resource-stickiness: 100
migration-threshold: 1
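For clarity, the recovery sequence I'm attempting on node1 after the failover looks roughly like this (a sketch; the exact init-script names on your system may differ, and this is the order I've been using, which may itself be the problem):

```shell
# On node1, after the group has failed over to node2:

# 1. Restart the varnish daemons that the OCF agents manage
service varnish start
service varnishlog start
service varnishncsa start

# 2. Clear the failed-operation history so Pacemaker considers
#    node1 eligible for these resources again
crm_resource --cleanup --resource Varnish
crm_resource --cleanup --resource Varnishlog
crm_resource --cleanup --resource Varnishncsa
```

It's at step 2, the cleanup, that the cluster is disrupted and the full failover happens again.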
_______________________________________________ Pacemaker mailing list: Pacemaker@oss.clusterlabs.org http://oss.clusterlabs.org/mailman/listinfo/pacemaker Project Home: http://www.clusterlabs.org Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf Bugs: http://bugs.clusterlabs.org