[Pacemaker] Unable to configure Pacemaker with cibadmin

2011-07-22 Thread Kelly Wong
Hello, I am trying to update the configuration of my cluster through the cibadmin command, but the command always fails: "cibadmin --replace --scope resources --xml-file r.xml" returns "Call cib_replace failed (-41): Remote node did not respond". I was able to replace the initial blank configuration, but up…
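
For reference, a minimal sketch of the kind of file and invocation being discussed; the IPaddr2 primitive, its id and the address below are illustrative assumptions rather than details from the thread:

    <!-- r.xml: a minimal <resources> section; the resource id and IP
         address are placeholders chosen for illustration -->
    <resources>
      <primitive id="demo_ip" class="ocf" provider="heartbeat" type="IPaddr2">
        <instance_attributes id="demo_ip-attrs">
          <nvpair id="demo_ip-addr" name="ip" value="192.168.122.10"/>
        </instance_attributes>
      </primitive>
    </resources>

    # Replace only the resources section of the running CIB
    cibadmin --replace --scope resources --xml-file r.xml
    # A -41 "Remote node did not respond" reply suggests the request
    # never got an answer from the DC, so checking node status first
    # (e.g. with crm_mon -1) is a sensible step before retrying.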

Re: [Pacemaker] Sending message via cpg FAILED: (rc=12) Doesn't exist

2011-07-22 Thread Proskurin Kirill
22.07.2011 20:30, Steven Dake writes: On 07/22/2011 01:15 AM, Proskurin Kirill wrote: Hello all. pacemaker-1.1.5 corosync-1.4.0 11:50:07 corosync [TOTEM ] Retransmit List: e4 e5 e7 e8 ea eb ed ee Jul 22 11:50:07 corosync [TOTEM ] Retransmit List: e4 e5 e7 e8 ea eb ed ee Is this a problem?

Re: [Pacemaker] Sending message via cpg FAILED: (rc=12) Doesn't exist

2011-07-22 Thread Steven Dake
On 07/22/2011 01:15 AM, Proskurin Kirill wrote: > Hello all. > > pacemaker-1.1.5 > corosync-1.4.0 > > 4 nodes in cluster, 3 online, 1 not. > In logs: > > Jul 22 11:50:23 my106.example.com crmd: [28030]: info: > pcmk_quorum_notification: Membership 0: quorum retained (0) > Jul 22 11:50:23 my106…

[Pacemaker] Cluster type is: corosync

2011-07-22 Thread Proskurin Kirill
Hello again! I hope I'm not flooding too much here, but I have another problem. I installed the same RPMs of corosync, openais, pacemaker and cluster_glue on all nodes; I checked it twice. But when I start some of them, they can't connect to the cluster and stay offline. In the logs I see that they see other nod…
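
Nodes that stay offline even though the same packages are installed often come down to configuration or network differences between hosts; a quick consistency check along these lines may help (host names and the config path are placeholders, and the sketch assumes passwordless ssh between nodes):

    # Verify every node runs the identical corosync configuration
    for h in node1 node2 node3 node4; do
        ssh "$h" md5sum /etc/corosync/corosync.conf
    done

    # Also confirm that totem traffic (UDP, port 5405 by default)
    # is not blocked between the nodes by a firewall.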

[Pacemaker] Sending message via cpg FAILED: (rc=12) Doesn't exist

2011-07-22 Thread Proskurin Kirill
Hello all. pacemaker-1.1.5, corosync-1.4.0. 4 nodes in the cluster, 3 online and 1 not. In the logs: Jul 22 11:50:23 my106.example.com crmd: [28030]: info: pcmk_quorum_notification: Membership 0: quorum retained (0) Jul 22 11:50:23 my106.example.com crmd: [28030]: info: do_started: Delaying start, no memb…
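
"Membership 0" in pcmk_quorum_notification suggests the node never joined a corosync membership, so crmd keeps delaying its start. A minimal check on the affected node, using standard corosync 1.x tools, might look like the following (the output line is illustrative):

    # Ask corosync for the state of its totem ring(s)
    corosync-cfgtool -s
    #   status = ring 0 active with no faults   <- expected on a healthy node

    # Ask Pacemaker which nodes it currently sees as members
    crm_mon -1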

[Pacemaker] Problem with colocation

2011-07-22 Thread Taneli Leppä
Hello, I'm having a problem with colocation (namely that services end up on different nodes): Online: [ cluster1.intra cluster2.intra ] OFFLINE: [ cluster3.intra ] Sphinx_IP (ocf::heartbeat:IPaddr2): Started cluster1.intra Sphinx (lsb:sphinx): Started cluster2.intra As per reque…
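
The status output shows Sphinx_IP and Sphinx started on different nodes; keeping them together normally takes a colocation constraint, usually paired with an ordering constraint. A sketch in crm shell syntax, reusing the resource ids from the output above (the constraint ids are made up):

    # Place the sphinx daemon on whichever node holds its IP address
    crm configure colocation sphinx-with-ip inf: Sphinx Sphinx_IP
    # Bring the IP up before starting sphinx on that node
    crm configure order ip-before-sphinx inf: Sphinx_IP Sphinx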