Just a trial ... could you check the fail counts on both nodes? We may also need more of your logs: your excerpt only shows the transition into the policy engine, not the state after that. In the few lines you posted the cluster has not yet returned to S_IDLE, so there could still be pending actions or something like that.
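To check the fail counts and overall state mentioned above, something like the following should work on either node (a sketch, assuming the crmsh/pacemaker tools from your Lucid packages; run as root):

```shell
# One-shot cluster status including per-resource fail counts.
crm_mon -1 -f

# Fail count for the group on a specific node, via the crm shell.
crm resource failcount proxyfloat3 show proxy1
crm resource failcount proxyfloat3 show proxy2
```

If a fail count has hit migration-threshold on the target node, the group will refuse to start there even with the -INFINITY constraint on proxy1 in place.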
Sent from a Samsung tablet

Marcus Bointon <[email protected]> wrote:

I'm running crm with heartbeat 3.0.5 and pacemaker 1.1.6 on Ubuntu Lucid 64-bit. I have a small resource group containing an IP, ARP, and email notifier on a cluster of two nodes called proxy1 and proxy2. I asked it to move nodes, and it seems to say that was OK, but it hasn't actually moved, and crm_mon still shows it on the original node.

# crm resource move proxyfloat3
WARNING: Creating rsc_location constraint 'cli-standby-proxyfloat3' with a score of -INFINITY for resource proxyfloat3 on proxy1.
        This will prevent proxyfloat3 from running on proxy1 until the constraint is removed using the 'crm_resource -U' command or manually with cibadmin
        This will be the case even if proxy1 is the last node in the cluster
        This message can be disabled with -Q

This was in syslog:

Apr 16 13:32:35 proxy1 cib: [2948]: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=local/crm_resource/3, version=0.57.2): ok (rc=0)
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: - <cib admin_epoch="0" epoch="57" num_updates="2" />
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: + <cib validate-with="pacemaker-1.0" crm_feature_set="3.0.5" have-quorum="1" admin_epoch="0" epoch="58" num_updates="1" cib-last-written="Tue Apr 16 08:52:01 2013" dc-uuid="68890308-615b-4b28-bb8b-5aa00bdbf65c" >
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +   <configuration >
Apr 16 13:32:35 proxy1 crmd: [2952]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.58.1) : Non-status change
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +     <constraints >
Apr 16 13:32:35 proxy1 crmd: [2952]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +       <rsc_location id="cli-standby-proxyfloat3" rsc="proxyfloat3" >
Apr 16 13:32:35 proxy1 crmd: [2952]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +         <rule id="cli-standby-rule-proxyfloat3" score="-INFINITY" boolean-op="and" >
Apr 16 13:32:35 proxy1 crmd: [2952]: info: do_pe_invoke: Query 150: Requesting the current CIB: S_POLICY_ENGINE
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +           <expression id="cli-standby-expr-proxyfloat3" attribute="#uname" operation="eq" value="proxy1" type="string" __crm_diff_marker__="added:top" />
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +         </rule>
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +       </rsc_location>
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +     </constraints>
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +   </configuration>
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: + </cib>
Apr 16 13:32:35 proxy1 cib: [2948]: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=local/crm_resource/4, version=0.58.1): ok (rc=0)

Yet crm status still shows:

 Resource Group: proxyfloat3
     ip3        (ocf::heartbeat:IPaddr2):       Started proxy1
     ip3arp     (ocf::heartbeat:SendArp):       Started proxy1
     ip3email   (ocf::heartbeat:MailTo):        Started proxy1

So if all that's true, why is that resource group still on the original node? Is there something else I need to do?

Marcus
-- 
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK info@hand CRM solutions
[email protected] | http://www.synchromedia.co.uk/

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
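For anyone hitting the same symptom: the move command only adds the cli-standby constraint shown in the CIB diff above; if the group then cannot start anywhere else, it simply stays put. A sketch of how to inspect and undo that, using the commands the warning itself names (naming the target node explicitly is an assumption about the intended destination):

```shell
# Show any cli-* constraints left behind by earlier move/standby commands.
crm configure show | grep cli-

# Remove the move constraint, as suggested by the WARNING in the output.
crm_resource -U -r proxyfloat3

# Optionally retry the move with an explicit destination node.
crm resource move proxyfloat3 proxy2
```

If the group still does not start on proxy2 after this, the PE logs on the DC (grep for pengine) should say why it considers proxy2 ineligible.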
