Hi,

vsanqa27 was promoted to master and vsanqa28 was the slave. Suddenly, vsanqa27 was demoted and vsanqa28 was promoted.
[root@vsanqa28 vsh-mp-05]# rpm -qa | grep pcs; rpm -qa | grep ccs ; rpm -qa | grep pacemaker ; rpm -qa | grep corosync
pcs-0.9.90-2.el6.centos.2.noarch
ccs-0.16.2-69.el6_5.1.x86_64
pacemaker-cli-1.1.8-7.el6.x86_64
pacemaker-cluster-libs-1.1.8-7.el6.x86_64
pacemaker-1.1.8-7.el6.x86_64
pacemaker-libs-1.1.8-7.el6.x86_64
corosync-1.4.1-15.el6_4.1.x86_64
corosynclib-1.4.1-15.el6_4.1.x86_64
[root@vsanqa28 vsh-mp-05]# uname -a
Linux vsanqa28 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@vsanqa28 vsh-mp-05]# cat /etc/redhat-release
CentOS release 6.4 (Final)
[root@vsanqa28 vsh-mp-05]#

May 13 01:38:06 vsanqa27 crmd[6961]: notice: process_lrm_event: LRM operation vha-924bf029-93a2-41a0-adcf-f1c1a42956e5_promote_0 (call=704, rc=0, cib-update=412, confirmed=true) ok
May 13 01:38:06 vsanqa27 crmd[6961]: notice: process_lrm_event: LRM operation vha-924bf029-93a2-41a0-adcf-f1c1a42956e5_monitor_30000 (call=707, rc=8, cib-update=413, confirmed=false) master
May 13 01:38:36 vsanqa27 cib[6956]: notice: cib_process_diff: Diff 0.9967.124 -> 0.9967.125 from vsanqa28 not applied to 0.9967.124: Failed application of an update diff
May 13 01:38:36 vsanqa27 cib[6956]: notice: cib_server_process_diff: Not applying diff 0.9967.125 -> 0.9967.126 (sync in progress)
May 13 01:38:36 vsanqa27 crmd[6961]: notice: process_lrm_event: LRM operation vha-924bf029-93a2-41a0-adcf-f1c1a42956e5_demote_0 (call=713, rc=0, cib-update=415, confirmed=true) ok   <<<<< Why did this happen ?
May 13 01:38:36 vsanqa27 crmd[6961]: notice: process_lrm_event: LRM operation vha-924bf029-93a2-41a0-adcf-f1c1a42956e5_monitor_31000 (call=716, rc=8, cib-update=416, confirmed=false) master
May 13 01:38:36 vsanqa27 attrd[6959]: notice: attrd_ais_dispatch: Update relayed from vsanqa28
May 13 01:38:36 vsanqa27 attrd[6959]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vha-924bf029-93a2-41a0-adcf-f1c1a42956e5 (1)
May 13 01:38:36 vsanqa27 attrd[6959]: notice: attrd_perform_update: Sent update 6442: fail-count-vha-924bf029-93a2-41a0-adcf-f1c1a42956e5=1
May 13 01:38:36 vsanqa27 attrd[6959]: notice: attrd_ais_dispatch: Update relayed from vsanqa28
May 13 01:38:36 vsanqa27 attrd[6959]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-vha-924bf029-93a2-41a0-adcf-f1c1a42956e5 (1399970316)

May 13 01:38:06 vsanqa28 pengine[4310]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 13 01:38:06 vsanqa28 pengine[4310]: notice: LogActions: Promote vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:1#011(Slave -> Master vsanqa27)
May 13 01:38:06 vsanqa28 pengine[4310]: notice: process_pe_message: Calculated Transition 223: /var/lib/pacemaker/pengine/pe-input-817.bz2
May 13 01:38:06 vsanqa28 crmd[4311]: notice: run_graph: Transition 223 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-817.bz2): Complete
May 13 01:38:06 vsanqa28 crmd[4311]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 13 01:38:36 vsanqa28 attrd[4309]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-924bf029-93a2-41a0-adcf-f1c1a42956e5 (1)
May 13 01:38:36 vsanqa28 attrd[4309]: notice: attrd_perform_update: Sent update 3511: master-vha-924bf029-93a2-41a0-adcf-f1c1a42956e5=1
May 13 01:38:36 vsanqa28 crmd[4311]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 13 01:38:36 vsanqa28 pengine[4310]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 13 01:38:36 vsanqa28 pengine[4310]: notice: LogActions: Promote vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:0#011(Slave -> Master vsanqa28)
May 13 01:38:36 vsanqa28 pengine[4310]: notice: LogActions: Demote vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:1#011(Master -> Slave vsanqa27)   <<<<< Why did this happen ?
May 13 01:38:36 vsanqa28 pengine[4310]: notice: process_pe_message: Calculated Transition 224: /var/lib/pacemaker/pengine/pe-input-818.bz2
May 13 01:38:36 vsanqa28 crmd[4311]: notice: run_graph: Transition 224 (Complete=5, Pending=0, Fired=0, Skipped=4, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-818.bz2): Stopped
May 13 01:38:36 vsanqa28 pengine[4310]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 13 01:38:36 vsanqa28 pengine[4310]: notice: LogActions: Promote vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:0#011(Slave -> Master vsanqa28)
May 13 01:38:36 vsanqa28 crmd[4311]: warning: destroy_action: Cancelling timer for action 14 (src=1629)
May 13 01:38:36 vsanqa28 pengine[4310]: notice: process_pe_message: Calculated Transition 225: /var/lib/pacemaker/pengine/pe-input-819.bz2
May 13 01:38:36 vsanqa28 vgc-vha-config: /usr/bin/vgc-vha-config --promote /dev/vgca0_VHA13
May 13 01:38:36 vsanqa28 vgc-vha-config: Success
May 13 01:38:36 vsanqa28 crmd[4311]: warning: status_from_rc: Action 219 (vha-924bf029-93a2-41a0-adcf-f1c1a42956e5_monitor_31000) on vsanqa27 failed (target: 0 vs. rc: 8): Error
May 13 01:38:36 vsanqa28 crmd[4311]: warning: update_failcount: Updating failcount for vha-924bf029-93a2-41a0-adcf-f1c1a42956e5 on vsanqa27 after failed monitor: rc=8 (update=value++, time=1399970316)
May 13 01:38:36 vsanqa28 crmd[4311]: warning: update_failcount: Updating failcount for vha-924bf029-93a2-41a

Regards,
Kiran
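P.S. If it helps, I can replay the transition that scheduled the demote and post the allocation scores. A rough sketch of what I would run on vsanqa28 with the stock pacemaker-cli tools, assuming the pe-input file named in the log above is still present:

# Replay transition 224, which demoted vsanqa27 (path taken from the log above)
crm_simulate --simulate --xml-file=/var/lib/pacemaker/pengine/pe-input-818.bz2

# Show the allocation scores for the same input, to see why the master role moved
crm_simulate --show-scores --xml-file=/var/lib/pacemaker/pengine/pe-input-818.bz2

# Current cluster status including resource fail counts
crm_mon --one-shot --failcounts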