Hi all, can someone please help me with my setup below?
I have a two-node setup with Heartbeat + Pacemaker. My app is already running on both nodes before Heartbeat and Pacemaker start. I then configured the CRM as follows:

# crm configure primitive havip ocf:heartbeat:IPaddr2 params ip=192.168.101.205 cidr_netmask=32 nic=eth1 op monitor interval=30s
# crm configure primitive oc_proxyapp lsb:proxyapp meta allow-migrate="true" migration-threshold="3" failure-timeout="30s" op monitor interval="5s"
# crm configure colocation oc-havip INFINITY: havip oc_proxyapp

My intention is to monitor the already-running instance, with the VIP attached to only one node at a time. If the app fails three times on the current node, the VIP should move automatically to the other node, but the app itself should not be restarted. With the above config, however, the app gets stopped on the second node and restarted on the first node. From the logs I see:

WARN: native_create_actions: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.

I also tried is-managed="false", but in that case, as I understand it, the app would never be restarted at all. So how can I monitor an already-running instance on both nodes, with a migration threshold?

Thanks,
Eswar
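For context, from the "Resource is Too Active" FAQ it sounds like the warning comes from a plain primitive being found active on both nodes, and that a clone is the usual way to tell Pacemaker an instance is expected everywhere. Is something like the sketch below the intended approach? (Resource and constraint names here are just illustrative, not from my current config.)

```
# Sketch only: define the app once, clone it so Pacemaker expects
# one instance per node, then colocate the VIP with a running
# clone instance instead of the bare primitive.
crm configure primitive oc_proxyapp lsb:proxyapp \
    meta migration-threshold="3" failure-timeout="30s" \
    op monitor interval="5s"
crm configure clone proxyapp_clone oc_proxyapp
crm configure colocation vip-with-app INFINITY: havip proxyapp_clone
```

My hope is that the monitor on each clone instance would track failures per node, and after migration-threshold failures on the VIP's node, only the VIP would move, since the clone instance on the other node is already running.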
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org