Hi All,

I have a (hopefully) simple problem that I need to fix, but I feel like I'm missing a key concept that's causing it. I have two nodes, genome-ldap1 and genome-ldap2, running the latest corosync, pacemaker and openais from the EPEL and ClusterLabs repos, on CentOS 5.4.

Both nodes run OpenLDAP daemons: genome-ldap1 is the primary and genome-ldap2 is the replication slave. Both are simple LSB services, and I want them both active all the time. Here's my CRM config:

node genome-ldap1
node genome-ldap2
primitive LDAP lsb:ldap \
        op monitor interval="10s" timeout="15s" \
        meta target-role="Started"
primitive LDAP-IP ocf:heartbeat:IPaddr2 \
        params ip="10.1.1.83" nic="eth0" cidr_netmask="16" \
        op monitor interval="30s" timeout="20s" \
        meta target-role="Started"
clone LDAP-clone LDAP \
        meta clone-max="2" clone-node-max="1" globally-unique="false"
location LDAP-IP-placement-1 LDAP-IP 100: genome-ldap1
location LDAP-IP-placement-2 LDAP-IP 50: genome-ldap2
location LDAP-placement-1 LDAP-clone 100: genome-ldap1
location LDAP-placement-2 LDAP-clone 100: genome-ldap2
colocation LDAP-with-IP inf: LDAP-IP LDAP-clone
order LDAP-after-IP inf: LDAP-IP LDAP-clone
property $id="cib-bootstrap-options" \
        dc-version="1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        symmetric-cluster="false" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1268024522"
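
(For reference, the obvious sanity checks against the live CIB would be something like the following -- standard pacemaker CLI tools, exact flags may vary a little by version:)

        # check the live CIB for configuration errors/warnings
        crm_verify -L -V

        # one-shot view of node and resource status
        crm_mon -1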

So I want LDAP-IP to stay on genome-ldap1 if possible, float to genome-ldap2 when genome-ldap1 goes down, and then float back to genome-ldap1 when that machine comes back online. But what actually happens is that when genome-ldap1 goes down, the LDAP clones on BOTH nodes get stopped, and the LDAP-IP disappears. When genome-ldap1 comes back, LDAP gets started ONLY on genome-ldap1 (not on genome-ldap2), and the IP returns to genome-ldap1.

Running 'crm resource cleanup LDAP-clone' brings everything back to normal whenever it gets into this state.
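
(For reference, this is roughly what I mean -- checking the failcount the PE complains about and then clearing it. The crm shell syntax here is from memory, so double-check it on your version:)

        # show the failcount recorded for the LDAP resource on genome-ldap2
        crm resource failcount LDAP show genome-ldap2

        # clear failed ops and failcounts for the whole clone
        crm resource cleanup LDAP-clone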

If genome-ldap2 goes offline, the IP stays with genome-ldap1 and LDAP stays started on genome-ldap1; when genome-ldap2 comes back, LDAP starts on it OK. It's only when genome-ldap1 goes offline that LDAP stops everywhere and doesn't come back...
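
(When it's stuck in that state, dumping the allocation scores should show where the policy engine thinks each resource is allowed to run -- I believe ptest can do this against the live CIB, but treat the flags as approximate:)

        # show placement/allocation scores computed from the live CIB
        ptest -L -s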

Looking at /var/log/messages on genome-ldap2:

-----
Mar  8 11:41:22 genome-ldap2 corosync[1991]:   [CLM   ] New Configuration:
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [CLM ] r(0) ip(10.1.1.85)
Mar  8 11:41:22 genome-ldap2 corosync[1991]:   [CLM   ] Members Left:
Mar  8 11:41:22 genome-ldap2 corosync[1991]:   [CLM   ] Members Joined:
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 152: memb=1, new=0, lost=0
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: pcmk_peer_update: memb: genome-ldap2 1426129162
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [CLM ] CLM CONFIGURATION CHANGE
Mar  8 11:41:22 genome-ldap2 corosync[1991]:   [CLM   ] New Configuration:
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [CLM ] r(0) ip(10.1.1.84)
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [CLM ] r(0) ip(10.1.1.85)
Mar 8 11:41:22 genome-ldap2 cib: [2037]: notice: ais_dispatch: Membership 152: quorum acquired
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: notice: ais_dispatch: Membership 152: quorum acquired
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: crm_update_peer: Node genome-ldap1: id=1409351946 state=member (new) addr=r(0) ip(10.1.1.84) votes=1 born=136 seen=152 proc=00000000000000000000000000013312
Mar  8 11:41:22 genome-ldap2 corosync[1991]:   [CLM   ] Members Left:
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: ais_status_callback: status: genome-ldap1 is now member (was lost)
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: ais_dispatch: Membership 152: quorum retained
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: crm_update_peer: Node genome-ldap1: id=1409351946 state=member (new) addr=r(0) ip(10.1.1.84) votes=1 born=136 seen=152 proc=00000000000000000000000000013312
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: crm_update_quorum: Updating quorum status to true (call=44)
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_delete for section //node_sta...@uname='genome-ldap1']/lrm (origin=local/crmd/40, version=0.53.4): ok (rc=0)
Mar  8 11:41:22 genome-ldap2 corosync[1991]:   [CLM   ] Members Joined:
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [CLM ] r(0) ip(10.1.1.84)
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_delete for section //node_sta...@uname='genome-ldap1']/transient_attributes (origin=local/crmd/41, version=0.53.5): ok (rc=0)
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 152: memb=2, new=1, lost=0
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: update_member: Node 1409351946/genome-ldap1 is now: member
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: pcmk_peer_update: NEW: genome-ldap1 1409351946
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: pcmk_peer_update: MEMB: genome-ldap1 1409351946
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: pcmk_peer_update: MEMB: genome-ldap2 1426129162
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: send_member_notification: Sending membership update 152 to 2 children
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=0.53.5): ok (rc=0)
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: update_member: 0x1f77bd70 Node 1409351946 (genome-ldap1) born on: 152
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [pcmk ] info: send_member_notification: Sending membership update 152 to 2 children
Mar 8 11:41:22 genome-ldap2 corosync[1991]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="53" num_updates="6" />
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="54" num_updates="1" />
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/44, version=0.54.1): ok (rc=0)
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=LDAP-IP_monitor_0, magic=0:7;6:48:7:cda9768d-23dc-4984-ac96-0756c2c1ae37, cib=0.53.4) : Resource op removal
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: erase_xpath_callback: Deletion of "//node_sta...@uname='genome-ldap1']/lrm": ok (rc=0)
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=genome-ldap1, magic=NA, cib=0.53.5) : Transient attribute: removal
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: erase_xpath_callback: Deletion of "//node_sta...@uname='genome-ldap1']/transient_attributes": ok (rc=0)
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: need_abort: Aborting on change to have-quorum
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: ais_dispatch: Membership 152: quorum retained
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/46, version=0.54.1): ok (rc=0)
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/47, version=0.54.1): ok (rc=0)
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: do_state_transition: Membership changed: 148 -> 152 - join restart
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: do_pe_invoke: Query 51: Requesting the current CIB: S_POLICY_ENGINE
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: update_dc: Unset DC genome-ldap2
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: join_make_offer: Making join offers based on membership 152
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Mar 8 11:41:22 genome-ldap2 crmd: [2041]: info: update_dc: Set DC to genome-ldap2 (3.0.1)
Mar 8 11:41:22 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/50, version=0.54.1): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: update_dc: Unset DC genome-ldap2
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_dc_join_offer_all: A new node joined the cluster
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: update_dc: Set DC to genome-ldap2 (3.0.1)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_dc_join_finalize: join-3: Syncing the CIB from genome-ldap2 to the rest of the cluster
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/54, version=0.54.1): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/55, version=0.54.1): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_dc_join_ack: join-3: Updating node state to member for genome-ldap2
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/56, version=0.54.1): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_delete for section //node_sta...@uname='genome-ldap2']/lrm (origin=local/crmd/57, version=0.54.2): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: erase_xpath_callback: Deletion of "//node_sta...@uname='genome-ldap2']/lrm": ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_dc_join_ack: join-3: Updating node state to member for genome-ldap1
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_delete for section //node_sta...@uname='genome-ldap1']/lrm (origin=local/crmd/59, version=0.54.3): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: erase_xpath_callback: Deletion of "//node_sta...@uname='genome-ldap1']/lrm": ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: crm_update_quorum: Updating quorum status to true (call=63)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_pe_invoke: Query 64: Requesting the current CIB: S_POLICY_ENGINE
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-LDAP:1 (INFINITY)
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/61, version=0.54.4): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/63, version=0.54.4): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_delete for section //node_sta...@uname='genome-ldap1']/transient_attributes (origin=genome-ldap1/crmd/6, version=0.54.4): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 cib: [2037]: info: cib_process_request: Operation complete: op cib_delete for section //node_sta...@uname='genome-ldap1']/lrm (origin=genome-ldap1/crmd/7, version=0.54.5): ok (rc=0)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_pe_invoke_callback: Invoking the PE: query=64, ref=pe_calc-dc-1268077286-24, seq=152, quorate=1
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap2 is online
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: WARN: unpack_rsc_op: Processing failed op LDAP:1_start_0 on genome-ldap2: unknown error (1)
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap1 is online
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: clone_print: Clone Set: LDAP-clone
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: short_print: Stopped: [ LDAP:0 LDAP:1 ]
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: native_print: LDAP-IP (ocf::heartbeat:IPaddr2): Stopped
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: info: get_failcount: LDAP-clone has failed 1000000 times on genome-ldap2
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: WARN: common_apply_stickiness: Forcing LDAP-clone away from genome-ldap2 after 1000000 failures (max=1000000)
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: WARN: native_color: Resource LDAP:1 cannot run anywhere
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: RecurringOp: Start recurring monitor (10s) for LDAP:0 on genome-ldap1
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: RecurringOp: Start recurring monitor (30s) for LDAP-IP on genome-ldap1
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: LogActions: Start LDAP:0 (genome-ldap1)
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:1 (Stopped)
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: notice: LogActions: Start LDAP-IP (genome-ldap1)
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: WARN: process_pe_message: Transition 1: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-20.bz2
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: unpack_graph: Unpacked transition 1: 11 actions in 11 synapses
Mar 8 11:41:26 genome-ldap2 pengine: [2040]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1268077286-24) derived from /var/lib/pengine/pe-warn-20.bz2
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 5: monitor LDAP:0_monitor_0 on genome-ldap1
Mar 8 11:41:26 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 6: monitor LDAP-IP_monitor_0 on genome-ldap1
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-LDAP:0 (<null>)
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-LDAP:1 (1268077069)
Mar 8 11:41:26 genome-ldap2 attrd: [2039]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-LDAP:0 (<null>)
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: match_graph_event: Action LDAP:0_monitor_0 (5) confirmed on genome-ldap1 (rc=0)
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: match_graph_event: Action LDAP-IP_monitor_0 (6) confirmed on genome-ldap1 (rc=0)
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on genome-ldap1 - no waiting
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 13: start LDAP-IP_start_0 on genome-ldap1
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: match_graph_event: Action LDAP-IP_start_0 (13) confirmed on genome-ldap1 (rc=0)
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_pseudo_action: Pseudo action 9 fired and confirmed
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 14: monitor LDAP-IP_monitor_30000 on genome-ldap1
Mar 8 11:41:27 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 7: start LDAP:0_start_0 on genome-ldap1
Mar 8 11:41:28 genome-ldap2 crmd: [2041]: info: match_graph_event: Action LDAP-IP_monitor_30000 (14) confirmed on genome-ldap1 (rc=0)
Mar 8 11:41:28 genome-ldap2 crmd: [2041]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=genome-ldap1, magic=NA, cib=0.54.10) : Transient attribute: update
Mar 8 11:41:28 genome-ldap2 crmd: [2041]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Mar 8 11:41:28 genome-ldap2 crmd: [2041]: info: update_abort_priority: Abort action done superceeded by restart
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: match_graph_event: Action LDAP:0_start_0 (7) confirmed on genome-ldap1 (rc=0)
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: te_pseudo_action: Pseudo action 10 fired and confirmed
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: run_graph: ====================================================
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: notice: run_graph: Transition 1 (Complete=10, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-20.bz2): Stopped
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: te_graph_trigger: Transition 1 is now complete
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke: Query 65: Requesting the current CIB: S_POLICY_ENGINE
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke_callback: Invoking the PE: query=65, ref=pe_calc-dc-1268077289-31, seq=152, quorate=1
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap2 is online
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: WARN: unpack_rsc_op: Processing failed op LDAP:1_start_0 on genome-ldap2: unknown error (1)
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap1 is online
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: clone_print: Clone Set: LDAP-clone
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: short_print: Started: [ genome-ldap1 ]
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: short_print: Stopped: [ LDAP:1 ]
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: native_print: LDAP-IP (ocf::heartbeat:IPaddr2): Started genome-ldap1
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: info: get_failcount: LDAP-clone has failed 1000000 times on genome-ldap2
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: WARN: common_apply_stickiness: Forcing LDAP-clone away from genome-ldap2 after 1000000 failures (max=1000000)
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: WARN: native_color: Resource LDAP:1 cannot run anywhere
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: RecurringOp: Start recurring monitor (10s) for LDAP:0 on genome-ldap1
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:0 (Started genome-ldap1)
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:1 (Stopped)
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP-IP (Started genome-ldap1)
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: unpack_graph: Unpacked transition 2: 1 actions in 1 synapses
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1268077289-31) derived from /var/lib/pengine/pe-warn-21.bz2
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: te_rsc_command: Initiating action 8: monitor LDAP:0_monitor_10000 on genome-ldap1
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-21.bz2
Mar 8 11:41:29 genome-ldap2 pengine: [2040]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: match_graph_event: Action LDAP:0_monitor_10000 (8) confirmed on genome-ldap1 (rc=0)
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: run_graph: ====================================================
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: notice: run_graph: Transition 2 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-21.bz2): Complete
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: te_graph_trigger: Transition 2 is now complete
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: notify_crmd: Transition 2 status: done - <null>
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 8 11:41:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Starting PEngine Recheck Timer
Mar 8 11:46:37 genome-ldap2 cib: [2037]: info: cib_stats: Processed 164 operations (4878.00us average, 0% utilization) in the last 10min
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped!
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke: Query 66: Requesting the current CIB: S_POLICY_ENGINE
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke_callback: Invoking the PE: query=66, ref=pe_calc-dc-1268078189-33, seq=152, quorate=1
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap2 is online
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: WARN: unpack_rsc_op: Processing failed op LDAP:1_start_0 on genome-ldap2: unknown error (1)
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap1 is online
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: clone_print: Clone Set: LDAP-clone
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: short_print: Started: [ genome-ldap1 ]
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: short_print: Stopped: [ LDAP:1 ]
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: native_print: LDAP-IP (ocf::heartbeat:IPaddr2): Started genome-ldap1
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: info: get_failcount: LDAP-clone has failed 1000000 times on genome-ldap2
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: WARN: common_apply_stickiness: Forcing LDAP-clone away from genome-ldap2 after 1000000 failures (max=1000000)
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: WARN: native_color: Resource LDAP:1 cannot run anywhere
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:0 (Started genome-ldap1)
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:1 (Stopped)
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP-IP (Started genome-ldap1)
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: unpack_graph: Unpacked transition 3: 0 actions in 0 synapses
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1268078189-33) derived from /var/lib/pengine/pe-warn-22.bz2
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: run_graph: ====================================================
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: notice: run_graph: Transition 3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-22.bz2): Complete
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: te_graph_trigger: Transition 3 is now complete
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: notify_crmd: Transition 3 status: done - <null>
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 8 11:56:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Starting PEngine Recheck Timer
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: WARN: process_pe_message: Transition 3: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-22.bz2
Mar 8 11:56:29 genome-ldap2 pengine: [2040]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Mar 8 11:56:37 genome-ldap2 cib: [2037]: info: cib_stats: Processed 1 operations (0.00us average, 0% utilization) in the last 10min
Mar  8 11:57:07 genome-ldap2 crm_shadow: [2826]: info: Invoked: crm_shadow
Mar  8 11:57:07 genome-ldap2 cibadmin: [2827]: info: Invoked: cibadmin -Ql
Mar 8 12:06:37 genome-ldap2 cib: [2037]: info: cib_stats: Processed 1 operations (0.00us average, 0% utilization) in the last 10min
Mar  8 12:06:53 genome-ldap2 ntpd[2227]: time reset +0.388044 s
Mar 8 12:10:49 genome-ldap2 ntpd[2227]: synchronized to LOCAL(0), stratum 10
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped!
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke: Query 67: Requesting the current CIB: S_POLICY_ENGINE
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke_callback: Invoking the PE: query=67, ref=pe_calc-dc-1268079089-34, seq=152, quorate=1
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap2 is online
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: WARN: unpack_rsc_op: Processing failed op LDAP:1_start_0 on genome-ldap2: unknown error (1)
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap1 is online
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: clone_print: Clone Set: LDAP-clone
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: short_print: Started: [ genome-ldap1 ]
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: short_print: Stopped: [ LDAP:1 ]
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: native_print: LDAP-IP (ocf::heartbeat:IPaddr2): Started genome-ldap1
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: info: get_failcount: LDAP-clone has failed 1000000 times on genome-ldap2
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: WARN: common_apply_stickiness: Forcing LDAP-clone away from genome-ldap2 after 1000000 failures (max=1000000)
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: WARN: native_color: Resource LDAP:1 cannot run anywhere
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:0 (Started genome-ldap1)
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:1 (Stopped)
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP-IP (Started genome-ldap1)
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: unpack_graph: Unpacked transition 4: 0 actions in 0 synapses
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1268079089-34) derived from /var/lib/pengine/pe-warn-23.bz2
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: run_graph: ====================================================
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: notice: run_graph: Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-23.bz2): Complete
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: te_graph_trigger: Transition 4 is now complete
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: notify_crmd: Transition 4 status: done - <null>
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 8 12:11:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Starting PEngine Recheck Timer
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: WARN: process_pe_message: Transition 4: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-23.bz2
Mar 8 12:11:29 genome-ldap2 pengine: [2040]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Mar  8 12:11:52 genome-ldap2 ntpd[2227]: synchronized to 10.1.1.5, stratum 3
Mar 8 12:16:37 genome-ldap2 cib: [2037]: info: cib_stats: Processed 1 operations (0.00us average, 0% utilization) in the last 10min
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped!
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke: Query 68: Requesting the current CIB: S_POLICY_ENGINE
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_pe_invoke_callback: Invoking the PE: query=68, ref=pe_calc-dc-1268079989-35, seq=152, quorate=1
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap2 is online
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: WARN: unpack_rsc_op: Processing failed op LDAP:1_start_0 on genome-ldap2: unknown error (1)
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: info: determine_online_status: Node genome-ldap1 is online
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: clone_print: Clone Set: LDAP-clone
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: short_print: Started: [ genome-ldap1 ]
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: short_print: Stopped: [ LDAP:1 ]
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: native_print: LDAP-IP (ocf::heartbeat:IPaddr2): Started genome-ldap1
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: info: get_failcount: LDAP-clone has failed 1000000 times on genome-ldap2
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: WARN: common_apply_stickiness: Forcing LDAP-clone away from genome-ldap2 after 1000000 failures (max=1000000)
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: WARN: native_color: Resource LDAP:1 cannot run anywhere
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:0 (Started genome-ldap1)
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP:1 (Stopped)
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: notice: LogActions: Leave resource LDAP-IP (Started genome-ldap1)
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: unpack_graph: Unpacked transition 5: 0 actions in 0 synapses
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1268079989-35) derived from /var/lib/pengine/pe-warn-24.bz2
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: run_graph: ====================================================
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: notice: run_graph: Transition 5 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-24.bz2): Complete
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: te_graph_trigger: Transition 5 is now complete
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: notify_crmd: Transition 5 status: done - <null>
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 8 12:26:29 genome-ldap2 crmd: [2041]: info: do_state_transition: Starting PEngine Recheck Timer
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: WARN: process_pe_message: Transition 5: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-24.bz2
Mar 8 12:26:29 genome-ldap2 pengine: [2040]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Mar 8 12:26:37 genome-ldap2 cib: [2037]: info: cib_stats: Processed 1 operations (0.00us average, 0% utilization) in the last 10min
-----

Does any of this raise red flags? Any insight greatly appreciated!!

Cheers,
erich

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
