> I'm trying to set up a three node cluster using pacemaker+corosync

But you are actually trying to start cman. Depending on your distro, you can
use a different cluster stack, e.g. cman+pacemaker or corosync+pacemaker.
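If you go the corosync+pacemaker route, quorum is handled inside corosync
(by the votequorum service on corosync 2.x), not inside pacemaker; pacemaker
just consumes the quorum state corosync computes. A rough sketch of the
relevant part of /etc/corosync/corosync.conf for three nodes could look like
this - corosync 2.x syntax only (corosync 1.x, which ships with Ubuntu
precise, uses a different format), and the addresses are placeholders:

    quorum {
        provider: corosync_votequorum   # quorum computed by corosync, not pacemaker
        expected_votes: 3               # three nodes, one vote each; quorum = 2
        wait_for_all: 1                 # after a cold start, don't grant quorum until
                                        # all nodes have been seen at least once, so a
                                        # single rebooted node can't start alone
        last_man_standing: 1            # recalculate expected_votes as nodes leave,
                                        # so a gracefully shrinking cluster stays quorate
    }

    nodelist {
        node {
            ring0_addr: 192.168.122.11  # placeholder address, node 1
            nodeid: 1
        }
        node {
            ring0_addr: 192.168.122.12  # placeholder address, node 2
            nodeid: 2
        }
        node {
            ring0_addr: 192.168.122.13  # placeholder address, node 3
            nodeid: 3
        }
    }

The wait_for_all option addresses the "power everything off, reboot one node"
worry below in the safe direction: the lone node waits instead of declaring
itself a one-node cluster, which is the same split-brain risk you identified
with CMAN_QUORUM_TIMEOUT=0.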
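And yes, corosync maintains the membership list and vote counts itself; you
can inspect them at runtime, and how pacemaker reacts to quorum loss is
controlled separately through its no-quorum-policy cluster property. A small
sketch, assuming corosync 2.x and the crmsh shell (pcs has an equivalent):

    # print corosync's current quorum state: expected votes,
    # total votes, and whether this partition is quorate
    sudo corosync-quorumtool -s

    # tell pacemaker to stop all resources in a partition that
    # has lost quorum (other values: ignore, freeze, suicide)
    sudo crm configure property no-quorum-policy=stop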
2014-06-27 2:22 GMT+02:00 Vijay B <os.v...@gmail.com>:
> Hi,
>
> I'm trying to set up a three-node cluster using pacemaker+corosync. I
> installed the required packages on each node, checked their network
> connectivity so they can see each other, added the required startup
> scripts, and edited the cluster.conf file so it includes all three nodes.
>
> Now, when I attempt to start cman on the first node using "service cman
> start", it times out:
>
> vagrant@precise64-pmk1:~$ sudo service cman start
> Starting cluster:
>    Checking if cluster has been disabled at boot...        [  OK  ]
>    Checking Network Manager...                             [  OK  ]
>    Global setup...                                         [  OK  ]
>    Loading kernel modules...                               [  OK  ]
>    Mounting configfs...                                    [  OK  ]
>    Starting cman...                                        [  OK  ]
>    Waiting for quorum... Timed-out waiting for cluster     [FAILED]
> vagrant@precise64-pmk1:~$
>
> Why is this? Is it because I have three nodes to begin with in my
> /etc/cluster/cluster.conf, so this node expects the cluster quorum to be
> 2 and should therefore be able to talk to at least one other node? At
> this point, I haven't started the cman or pacemaker services on the
> other nodes.
>
> If this is the case, what will happen when two of the three nodes die?
> Even if cluster.conf changes accordingly to reflect the new cluster
> membership, what happens if all three nodes are simply powered off and
> one rebooted? The cluster will be down, won't it?
>
> What is the best way to get around this? I don't want to set
> CMAN_QUORUM_TIMEOUT=0, since as I understand it, the node would then go
> ahead and start itself as a cluster without waiting for the other nodes,
> and if this causes my service to start up while it is already running on
> another node, it could cause issues.
>
> Also, I don't know how to configure quorum disks for pacemaker - is it
> possible to do this with pacemaker? How does it work? What are the
> recommended ways to address the above problem? I infer that if this disk
> is configured, the node that grabs the disk first becomes the president
> of the pacemaker cluster. In this context, I have another question -
> does corosync keep its own cluster membership state distributed across
> all cluster nodes? If so, I guess quorum is configured at the corosync
> level rather than at the pacemaker level?
>
> Apologies in advance if my queries above are addressed in the
> documentation already - I felt it would be quicker and more accurate to
> ask the community for reliable info.
>
> Thanks!
> Regards,
> Vijay

--
this is my life and I live it as long as God wills

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org