When the meta attribute clone-max is updated, all instances of the clone are
stopped and immediately restarted.

The following configuration is in place (the cluster is not symmetric):
primitive resMux_gw ocf:heartbeat:Dummy \
        op start interval="0" timeout="10" \
        op stop interval="0" timeout="10" \
        op monitor interval="10" timeout="3" on-fail="restart" start-delay="10" \
        meta failure-timeout="15m" migration-threshold="3"
clone cloneMux_gw resMux_gw \
        meta clone-max="2" target-role="Started" is-managed="true"
location locMux_gwmanagement1 cloneMux_gw 1000: management1
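
For reference, the clone definition and its meta attributes can be displayed
and validated with the crm shell; this is only a sketch of a possible check,
using the IDs from above:

# Show only the clone and its meta attributes as stored in the CIB
crm configure show cloneMux_gw
# Let the shell check the complete configuration for errors
crm configure verify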

crm resource status cloneMux_gw shows
resource cloneMux_gw is running on: management1
which is correct, because a location constraint is present only for node
management1.
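
The placement can also be cross-checked via the allocation scores; the
following is just a sketch, and the exact options may differ in older tool
versions:

# Read the live CIB and print the per-node allocation scores
# (with a non-symmetric cluster and only one location constraint,
# only management1 should get a score for cloneMux_gw)
crm_simulate -sL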

When clone-max is now updated with
crm resource meta cloneMux_gw set clone-max 1
resMux_gw is immediately restarted on management1. In the Pacemaker log I see a
stop call to the resource agent, followed a few milliseconds later by a start.
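
The stop/start pair is also visible in the operation history, which can be
easier to read than the log; a sketch, assuming these crm_mon options are
available in this version:

# One-shot status including the recorded operations for each resource
# -1 = run once and exit, -r = also list inactive resources, -o = show operation history
crm_mon -1 -r -o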

My question: is there any reason to stop all instances when clone-max is
updated?
After the update of clone-max in the case above, the same resources run on the
same nodes as before.
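
In case it is useful for reproducing this: the transition that such a change
triggers can be previewed against a copy of the CIB before touching the live
configuration. This is only a sketch; /tmp/cib-test.xml is an arbitrary
scratch file, the sed edit assumes the attribute order shown, and option names
may differ in 1.1.5:

# Dump the live CIB to a scratch file
cibadmin --query > /tmp/cib-test.xml
# Crude edit: change clone-max from 2 to 1 in the copy
# (assumes the nvpair is written as name="clone-max" value="2")
sed -i 's/name="clone-max" value="2"/name="clone-max" value="1"/' /tmp/cib-test.xml
# Show the actions the policy engine would schedule for the modified copy;
# a stop/start of resMux_gw should show up in the transition summary
crm_simulate --xml-file /tmp/cib-test.xml --simulate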

The Pacemaker version is 1.1.5.

Thanks, Rainer