On Tue, Jan 3, 2012 at 11:22 AM, Fil <li...@internyc.net> wrote:
> nothing from the pacemaker, but if I do:
>
> cd /etc/init.d/; ./pacemaker start
>
> it works every time, while:
>
> /etc/init.d/pacemaker start
> or
> systemctl start pacemaker.service
>
> fails. Which leads me to believe upstart is to blame for this.

You mean systemd, right?
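Since the script apparently behaves differently depending on the directory it is started from, a rough first pass at narrowing that down might be (just a sketch, assuming the stock Fedora 16 packages, where both systemd and the cluster daemons log to syslog):

    systemctl status pacemaker.service                # unit state plus the exit code of the failed control process
    grep -i pacemaker /var/log/messages               # whatever the init script or daemons printed before dying
    cd /tmp && bash -x /etc/init.d/pacemaker start    # trace the script with a working directory other than /etc/init.d

Comparing that last trace with one taken from inside /etc/init.d should show which command in the script actually cares about the current directory.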
> Weird thing is, this works in the corosync/pacemaker scenario but not in
> cman/pacemaker.

Is SELinux enabled, perhaps?

> thanks
> fil
>
>
> On 01/02/2012 06:05 PM, Andrew Beekhof wrote:
>> On Sat, Dec 31, 2011 at 12:24 PM, Fil <li...@internyc.net> wrote:
>>> Hi Andreas,
>>>
>>> That is exactly how I am starting the cluster: first cman and then
>>> pacemaker. For some reason pacemaker doesn't start until I run
>>> pacemakerd by hand and then kill it. After that I can run
>>>
>>> systemctl start pacemaker.service   (or /etc/init.d/pacemaker start)
>>>
>>> This is the only thing which shows up in the log files:
>>>
>>> Dec 30 20:03:49 server01 systemd[1]: pacemaker.service: control process
>>> exited, code=exited status=200
>>> Dec 30 20:03:49 server01 systemd[1]: pacemaker.service holdoff time
>>> over, scheduling restart.
>>> Dec 30 20:03:49 server01 systemd[1]: Job pending for unit, delaying
>>> automatic restart.
>>> Dec 30 20:03:49 server01 systemd[1]: Unit pacemaker.service entered
>>> failed state.
>>> Dec 30 20:03:49 server01 systemd[1]: pacemaker.service start request
>>> repeated too quickly, refusing to start.
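Those messages are systemd's side of the story rather than pacemaker failing on its own: if I remember right, status=200 is systemd's EXIT_CHDIR code, i.e. the control process could not change into its working directory, which would line up with the script only behaving when started from /etc/init.d. The "repeated too quickly" line just means the unit hit its start-rate limit, so once the real problem is fixed the failed state needs clearing before another manual attempt. A rough sketch, assuming nothing beyond stock systemctl:

    systemctl reset-failed pacemaker.service   # clear the rate-limit / failed state
    systemctl start pacemaker.service          # then retry
    systemctl status pacemaker.service         # and check the reported exit code again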
>> Anything from pacemaker itself?
>>
>>> Here are the configs:
>>>
>>> node server01
>>> node server02
>>> primitive clvmd lsb:clvmd
>>> primitive resDLM ocf:pacemaker:controld \
>>>         params daemon="dlm_controld" \
>>>         op start interval="0" timeout="90s" \
>>>         op stop interval="0" timeout="100s" \
>>>         op monitor interval="120s"
>>> primitive stonith_sbd stonith:external/sbd \
>>>         params sbd_device="/dev/disk/by-path/ip-192.168.10.5\:3260-iscsi-iqn.2004-04.com.qnap\:ts-459proii\:iscsi.sbd01.cb4d16-lun-0" \
>>>         meta target-role="Started"
>>> clone cloneDLM resDLM \
>>>         meta interleave="true"
>>> clone clone_clvmd clvmd \
>>>         meta interleave="true"
>>> property $id="cib-bootstrap-options" \
>>>         dc-version="1.1.6-4.fc16-89678d4947c5bd466e2f31acd58ea4e1edb854d5" \
>>>         cluster-infrastructure="cman" \
>>>         expected-quorum-votes="2" \
>>>         stonith-enabled="true" \
>>>         no-quorum-policy="ignore" \
>>>         default-resource-stickiness="100" \
>>>         last-lrm-refresh="1325237993" \
>>>         stonith-timeout="60s" \
>>>         stonith-action="reboot"
>>>
>>> <?xml version="1.0"?>
>>> <cluster config_version="4" name="adriatic">
>>>   <logging debug="on"/>
>>>   <clusternodes>
>>>     <clusternode name="server01" nodeid="1">
>>>       <fence>
>>>         <method name="pcmk-redirect">
>>>           <device name="pcmk" port="server01"/>
>>>         </method>
>>>       </fence>
>>>     </clusternode>
>>>     <clusternode name="server02" nodeid="2">
>>>       <fence>
>>>         <method name="pcmk-redirect">
>>>           <device name="pcmk" port="server02"/>
>>>         </method>
>>>       </fence>
>>>     </clusternode>
>>>   </clusternodes>
>>>   <fencedevices>
>>>     <fencedevice name="pcmk" agent="fence_pcmk"/>
>>>   </fencedevices>
>>>   <cman two_node="1" expected_votes="1" port="5405">
>>>     <multicast addr="226.94.1.2"/>
>>>   </cman>
>>> </cluster>
>>>
>>> thanks
>>> fil
>>>
>>>
>>> On 12/28/2011 06:43 PM, Andreas Kurz wrote:
>>>> Hello,
>>>>
>>>> On 12/24/2011 09:13 AM, Fil wrote:
>>>>> Hi everyone,
>>>>>
>>>>> Happy holidays!
>>>>>
>>>>> I need some help with adding CMAN to my current cluster config.
>>>>> Currently I have a two-node Corosync/Pacemaker (active/passive) cluster.
>>>>> It works as expected. Now I need to add a distributed filesystem to my
>>>>> setup. I would like to test GFS2. As far as I understand, I need to set
>>>>> up CMAN to manage dlm/gfs_controld, am I correct? I have followed the
>>>>> Clusters_from_Scratch document, but I am having issues starting
>>>>> pacemakerd once cman is up and running. Is it possible to use
>>>>> dlm/gfs_controld without cman, directly from pacemaker? How do I start
>>>>> pacemaker when CMAN is running, do I even need to, and if not, how do
>>>>> I manage my resources? Currently I am using:
>>>>>
>>>>> Fedora 16
>>>>> corosync-1.4.2-1.fc16.x86_64
>>>>> pacemaker-1.1.6-4.fc16.x86_64
>>>>> cman-3.1.7-1.fc16.x86_64
>>>>
>>>> Only start the cman service -- not corosync -- and then start the
>>>> pacemaker service; that should be enough. What is the error you get
>>>> when starting pacemaker via its init script?
>>>>
>>>> Regards,
>>>> Andreas
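As an aside, the sequence Andreas describes comes down to roughly this on Fedora 16 (just a sketch with the stock service names):

    chkconfig corosync off        # cman starts its own corosync instance, so the plain corosync service stays disabled
    service cman start
    service pacemaker start       # or: systemctl start pacemaker.service
    crm_mon -1                    # both nodes online, cloneDLM and clone_clvmd started?

Once those clones are running, the GFS2 mount itself can be just another clone; the device and mount point below are placeholders, assuming the filesystem sits on a clustered LV:

    primitive resGFS2 ocf:heartbeat:Filesystem \
            params device="/dev/vg_cluster/lv_gfs2" directory="/mnt/gfs2" fstype="gfs2" \
            op monitor interval="30s"
    clone cloneGFS2 resGFS2 \
            meta interleave="true"
    order ord_clvmd_before_gfs2 inf: clone_clvmd cloneGFS2
    colocation col_gfs2_with_clvmd inf: cloneGFS2 clone_clvmd

With interleave="true" the order and colocation constraints are evaluated per node, so each node only mounts once its local clvmd copy is up.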