Hi,

On Wed, Sep 26, 2012 at 04:44:51PM -0500, Viviana Cuellar Rivera wrote:
> Hi all,
> I'm trying to setup HAProxy with pacemaker, the scenario is as follows:
> 
>                          |-Server 1
> vip1---balancer 1--------|
>                          |-Server 2
> 
> 
>                          |-Server 3
> vip2---balancer 2--------|
>                          |-Server 4
> 
> My configuration is:
> lvs1:~#crm configure edit
> node lvs1
> node lvs2
> primitive haproxy lsb:haproxy \
> op monitor interval="30s" \
> meta is-managed="true" target-role="Started"
> primitive vip1 ocf:heartbeat:IPaddr2 \
> params ip="10.200.2.231" cidr_netmask="255.255.255.0" nic="eth0" \
> op monitor interval="40s" timeout="20s" \
> meta target-role="Started"
> primitive vip2 ocf:heartbeat:IPaddr2 \
> params ip="10.200.2.224" cidr_netmask="255.255.255.0" nic="eth0" \
> op monitor interval="40s" timeout="20s" \
> meta target-role="Started"
> location vip1_pref_1 vip1 100: lvs1
> location vip1_pref_2 vip1 50: lvs2
> location vip2_pref_1 vip2 100: lvs2
> location vip2_pref_2 vip2 50: lvs1

If you have only two nodes, the vip1_pref_2 and vip2_pref_2
constraints are not needed: with a symmetric cluster (the default),
each VIP can already fail over to the other node.

> colocation haproxy-with-failover inf: haproxy vip1 vip2

You want here:

colocation haproxy-with-failover inf: vip1 vip2 haproxy

In resource sets, the colocation constraint uses the same left-to-right
order as the order constraint (here: VIPs first, then haproxy).
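
So the corrected pair reads in the same direction, a sketch of how the
two constraints would line up in your configuration:

```
colocation haproxy-with-failover inf: vip1 vip2 haproxy
order haproxy-after-failover-ip inf: ( vip1 vip2 ) haproxy
```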

> order haproxy-after-failover-ip inf: ( vip1 vip2 ) haproxy
> property $id="cib-bootstrap-options" \
> dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
> cluster-infrastructure="openais" \
> expected-quorum-votes="3" \

Eh? expected-quorum-votes should be "2".
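
If the value is simply stale, you can correct the cluster property
from the shell (adjust to your setup):

```
crm configure property expected-quorum-votes="2"
```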

> stonith-enabled="false" \

You need stonith (fencing) in two-node clusters: without it, a split
brain can end with both nodes claiming the same resources.
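
A minimal sketch of per-node fencing with the external/ipmi agent; the
IP addresses and credentials below are placeholders, and the right
agent depends on your hardware:

```
primitive st-lvs1 stonith:external/ipmi \
        params hostname="lvs1" ipaddr="192.168.0.101" \
        userid="admin" passwd="secret" interface="lan"
primitive st-lvs2 stonith:external/ipmi \
        params hostname="lvs2" ipaddr="192.168.0.102" \
        userid="admin" passwd="secret" interface="lan"
location l-st-lvs1 st-lvs1 -inf: lvs1
location l-st-lvs2 st-lvs2 -inf: lvs2
property stonith-enabled="true"
```

The location constraints keep each fencing device away from the node
it is supposed to shoot.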

> no-quorum-policy="ignore"
> 
> root@lvs1:~# crm status
> ============
> Last updated: Wed Sep 26 15:37:22 2012
> Last change: Wed Sep 26 15:37:20 2012 via cibadmin on lvs1
> Stack: openais
> Current DC: lvs2 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 3 Nodes configured, 3 expected votes

Hmm, do you really have three nodes?
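
You can check what the cluster sees and drop a stale entry if there is
one (the node name "lvs3" here is just an example):

```
crm node list
crm node delete lvs3
```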

> 3 Resources configured.
> ============
> 
> Online: [ lvs1 lvs2 ]
> 
>  vip1 (ocf::heartbeat:IPaddr2): Started lvs1
>  vip2 (ocf::heartbeat:IPaddr2): Started lvs2
>  haproxy (lsb:haproxy): Started lvs1
> 
> But haproxy is not being monitored on both nodes, I don't know what I'm
> doing wrong :(

How do you know it's not monitored? Note that lrmd logs the first run
of a recurring monitor operation and afterwards only once an hour.
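
You can also see the recurring monitor operations directly, without
digging through logs:

```
crm_mon -1 -o
```

The -o (--operations) flag lists the operation history per resource,
so an active monitor on haproxy will show up there.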

Thanks,

Dejan

> I apologize for my English ;)
> 
> Thanks!
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems