On Oct 15, 2013, at 7:12 PM, Andrew Beekhof <and...@beekhof.net> wrote:

> 
> On 04/10/2013, at 12:51 AM, Sean Lutner <s...@rentul.net> wrote:
> 
>> Hello,
>> I'm hoping to get some assistance with a cluster configuration I'm currently 
>> working on.
>> 
>> The cluster is built on CentOS 6.4 Amazon EC2 systems with:
>>      - pacemaker-1.1.8-7.el6.x86_64
>>      - cman-3.0.12.1-49.el6_4.2.x86_64
>> 
>> Within the cluster I have four resources: one for a floating Elastic IP 
>> (IP anonymized) and three for the varnish services. I've configured each 
>> service, placed them all into a group, and also configured ordering and 
>> colocation constraints.
>> 
>> The varnish resource agents were written by us, since varnish is installed 
>> from custom in-house packages and the version of the resource-agents 
>> package shipped with CentOS 6.4 doesn't include a varnish agent.
>> 
>> Currently all the resources are online on the current DC. If I stop the 
>> cluster services to simulate a failure, the EIP resource is brought up on 
>> the second node as expected, but the other resources are not.
> 
> Is crm_mon (I think 'pcs status' is the equivalent) reporting any errors?
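
(For anyone reading this in the archives, the quick checks, using flags that 
exist in this era's crm_mon:)

    crm_mon -1     # one-shot status, including failed actions
    crm_mon -rf1   # also list inactive resources and per-resource fail counts
    pcs status     # roughly the same picture via pcs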

This turned out to be a bug in the status() function in our OCF script. Once 
that was fixed, things started working; a minimal sketch of a correct check is 
below for anyone who hits the same thing. Sorry for the noise.
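
For context: the usual bug in a status/monitor action is returning 0 
unconditionally, which makes Pacemaker believe the resource is still active 
on the old node, so it never starts it elsewhere. Here is a sketch of an 
OCF-correct check; the pidfile path and function name are illustrative, not 
our actual agent:

    #!/bin/sh
    # Pull in the OCF shell helpers that define the $OCF_* return codes.
    : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
    . ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

    # Assumption: our packages write varnishd's pid here.
    PIDFILE="/var/run/varnishd.pid"

    varnish_monitor() {
        # monitor/status must distinguish "running" (OCF_SUCCESS, 0)
        # from "cleanly stopped" (OCF_NOT_RUNNING, 7); returning 0
        # in both cases breaks failover exactly as described above.
        if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            return $OCF_SUCCESS
        fi
        return $OCF_NOT_RUNNING
    }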

> 
>> I'm having trouble determining whether the problem is with my 
>> configuration or with something else. The end goal is for all services in 
>> the group (or the group itself) to move to the secondary node on failure. 
>> Any advice or pointers are welcome.
>> 
>> RSC defaults are:
>> # pcs resource rsc defaults
>> resource-stickiness: 100
>> migration-threshold: 1
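
(Aside for the archives: these defaults are set one key at a time with pcs; 
the syntax below is from the 0.9-series pcs shipped with EL6 and may differ 
in later releases.)

    pcs resource defaults resource-stickiness=100
    pcs resource defaults migration-threshold=1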
>> 
>> Current configuration is:
>> Corosync Nodes:
>> 
>> Pacemaker Nodes:
>> ip-10-50-3-122 ip-10-50-3-251 
>> 
>> Resources:
>>  Group: EIP-AND-VARNISH
>>   Resource: ClusterEIP_1.2.3.4 (provider=pacemaker type=EIP class=ocf)
>>    Attributes: first_network_interface_id=eni-e4e0b68c second_network_interface_id=eni-35f9af5d first_private_ip=10.50.3.191 second_private_ip=10.50.3.91 eip=1.2.3.4 alloc_id=eipalloc-376c3c5f
>>    Operations: monitor interval=30s
>>   Resource: Varnish (provider=redhat type=varnish.sh class=ocf)
>>    Operations: monitor interval=30s
>>   Resource: Varnishlog (provider=redhat type=varnishlog.sh class=ocf)
>>    Operations: monitor interval=30s
>>   Resource: Varnishncsa (provider=redhat type=varnishncsa.sh class=ocf)
>>    Operations: monitor interval=30s
>> 
>> Location Constraints:
>> Resource: ClusterEIP_1.2.3.4
>>   Rule: #uname eq ip-10-50-3-251 (score:INFINITY) 
>> Ordering Constraints:
>> ClusterEIP_1.2.3.4 then Varnish
>> Varnish then Varnishlog
>> Varnishlog then Varnishncsa
>> Colocation Constraints:
>> Varnish with ClusterEIP_1.2.3.4
>> Varnishlog with Varnish
>> Varnishncsa with Varnishlog
>> 
>> Cluster Properties:
>> dc-version: 1.1.8-7.el6-394e906
>> cluster-infrastructure: cman
>> last-lrm-refresh: 1380767822
>> expected-quorum-votes: 2
>> stonith-enabled: false
>> no-quorum-policy: ignore
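
One note for anyone copying this config: members of a Pacemaker group are 
already started in listed order and kept on the same node, so the explicit 
ordering and colocation constraints above duplicate what the EIP-AND-VARNISH 
group provides. The group alone is enough, e.g. (pcs syntax varies by 
version, so treat this as a sketch):

    pcs resource group add EIP-AND-VARNISH ClusterEIP_1.2.3.4 Varnish \
        Varnishlog Varnishncsa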


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
