Hi.
First of all I wish to say thank you to everyone who helped on this topic.
Taking a look at the logs, it seems everything is working fine: stonith is
able to read the status of the other machine.
But while reading the stonith man page a doubt arose. I tried to test the
configuration by hand using
# stonith -t external/ipmi ipaddr=172.31.0.240 userid=root password=somepass -S
and some errors were displayed:
external/ipmi[15985]: ERROR: ipaddr, userid or passwd missing; check configuration
external/ipmi[15982]: ERROR: error executing ipmitool:
WARN: external_status: 'ipmi status' failed with rc 1
ERROR: external/ipmi device not accessible.
However, if I run the ipmitool command directly, it works fine:
# /usr/bin/ipmitool -I lan -U root -P genese -H 172.31.0.240 chassis power status
Chassis Power is on
Am I missing something?
Best regards,
Carlos.
On Sunday, September 23, 2012 4:39 PM
Volker Dormeyer <vol...@ixolution.de> wrote:
Hi
On Fri, Sep 21, 2012 at 07:35:35PM -0300,
Carlos Xavier <cbas...@connection.com.br> wrote:
> Hi.
>
> I'm running a Pacemaker cluster on Dell R610 machines, so I enabled
> the iDRAC6 and configured the following external/ipmi resources:
> primitive resIPMI-1 stonith:external/ipmi \
> params hostname=apolo ipaddr=172.31.0.240 \
> userid=root passwd=somepass interface=lan \
> op monitor interval=600 timeout=240
> primitive resIPMI-2 stonith:external/ipmi \
> params hostname=diana ipaddr=172.31.0.241 \
> userid=root passwd=somepass interface=lan \
> op monitor interval=600 timeout=240
>
> When the resources are started, I see one running at each host:
> resIPMI-1 (stonith:external/ipmi): Started apolo
> resIPMI-2 (stonith:external/ipmi): Started diana
>
> But as can be noticed, the resource resIPMI-1 that controls the
> node 172.31.0.240 is running on its own host, and so is
> resIPMI-2.
>
> Is that correct, or should they be running on the opposite host, or be
> a clone resource? Do I need to set locations for those resources?
In general, it depends on the fence device and agent. If the fence device
can control both nodes, you can run a cloned resource. In that case the
stonithd parameters pcmk_host_list, pcmk_host_map, etc. might be of
interest to you.
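As a rough sketch only (resFence, cloFence, and the shared-controller
address 172.31.0.250 are made-up placeholders here; external/ipmi normally
talks to one BMC per host, so this shape applies only if a single device
really covers both nodes):

primitive resFence stonith:external/ipmi \
    params pcmk_host_list="apolo diana" \
    ipaddr=172.31.0.250 userid=root passwd=somepass interface=lan \
    op monitor interval=600 timeout=240
clone cloFence resFence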
In your case, I think one device can control a single host only, which
means the resource for node 1 should run on node 2 and the one for node 2
should run on node 1. This implies that you need to set the appropriate
location constraints to prevent the situation you described above. It
could look like this:
location locIPMI-1 resIPMI-1 -inf: apolo
location locIPMI-2 resIPMI-2 -inf: diana
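With these constraints in place, crm_mon should show each resource started
on the opposite node, roughly like this (placement sketched from your
earlier status listing):

 resIPMI-1 (stonith:external/ipmi): Started diana
 resIPMI-2 (stonith:external/ipmi): Started apolo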
Best Regards,
Volker