I seem to have another instance where pacemaker fails to exit at the end
of a shutdown. Here's the log from the start of the "service pacemaker
stop":
Dec 3 13:00:39 wtm-60vm8 crmd[14076]: notice: do_state_transition: State
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCES
*From: *Lars Marowsky-Bree
*Sent: * 2013-12-06 13:44:53 E
*To: *The Pacemaker cluster resource manager
*Subject: *Re: [Pacemaker] monitor on-fail=ignore not restarting when
resource reported as stopped
> On 2013-12-06T1
On 2013-12-06T11:21:02, Patrick Hemmer wrote:
> > So where is the problem? If the script returns "ERROR" then pacemaker has
> > to act accordingly.
> If the script returns "ERROR" the `on-fail=ignore` should make it do
> nothing. Amazon's API failed, we need to just retry again later.
> If
*From: *Michael Schwartzkopff
*Sent: * 2013-12-06 11:16:17 E
*To: *pacemaker@oss.clusterlabs.org
*Subject: *Re: [Pacemaker] monitor on-fail=ignore not restarting when
resource reported as stopped
> On Friday, 6 December
On 06/12/13 10:57, Michael Schwartzkopff wrote:
> On Friday, 6 December 2013 at 16:49:32, Dvorak Andreas wrote:
>> Dear all
>>
>> I would like to configure stonith and found example like this:
>> pcs cluster cib stonith_cfg
>> pcs -f stonith_cfg stonith
>> pcs -f stonith_cfg stonith create impi-fen
On Thursday, December 5, 2013 9:55:47 PM "Vladislav Bogdanov"
wrote:
> Does 'db0' resolve to a correct IP address? If not, then you probably
> want to either fix that or use the remote-addr option as well. I saw that
> you can ping/ssh that container, but it is not clear whether you used the
> 'db0' name for th
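If 'db0' does not resolve, the address can be carried on the container's resource explicitly via the remote-addr meta attribute. A minimal sketch, assuming the container is managed by a resource named vm-db0 and the guest listens on 192.168.122.10 (both names are placeholders, not from the thread):

```shell
# Declare the container a guest node 'db0' reachable at an explicit address,
# so hostname resolution is not needed.
pcs resource update vm-db0 meta remote-node=db0 remote-addr=192.168.122.10
```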
On Friday, 6 December 2013 at 11:02:11, you wrote:
>
> *From: *Michael Schwartzkopff
> *Sent: * 2013-12-06 10:50:19 E
> *To: *The Pacemaker cluster resource manager
> *Subject: *Re: [Pacemaker] monitor on-fail=ignore not
make two resources
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list="sv2836" ipaddr=10.0.0.1 login=testuser passwd=acd123 op
monitor interval=60s
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list="sv2837" ipaddr=10.0.0.2 login=testuser passwd=a
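For completeness, the two resources above need distinct IDs (two resources cannot share the name impi-fencing), and the shadow CIB is applied with a final push. A sketch assuming pcs 0.9.x syntax; the -1/-2 suffixes are placeholders:

```shell
# Build the changes in a shadow CIB file, then push them to the live cluster.
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create impi-fencing-1 fence_ipmilan \
    pcmk_host_list="sv2836" ipaddr=10.0.0.1 login=testuser passwd=acd123 \
    op monitor interval=60s
pcs -f stonith_cfg stonith create impi-fencing-2 fence_ipmilan \
    pcmk_host_list="sv2837" ipaddr=10.0.0.2 login=testuser passwd=acd123 \
    op monitor interval=60s
pcs cluster cib-push stonith_cfg
```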
On Friday, 6 December 2013 at 16:49:32, Dvorak Andreas wrote:
> Dear all
>
> I would like to configure stonith and found example like this:
> pcs cluster cib stonith_cfg
> pcs -f stonith_cfg stonith
> pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
> pcmk_host_list="sv2836 sv2837" ip
On Friday, 6 December 2013 at 10:11:07, Patrick Hemmer wrote:
> I have a resource which updates DNS records (Amazon's Route53). When it
> performs its `monitor` action, it can sometimes fail because of issues
> with Amazon's API. So I want failures to be ignored for the monitor
> action, and so I
Dear all
I would like to configure stonith and found example like this:
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list="sv2836 sv2837" ipaddr=10.0.0.1 login=testuser passwd=acd123 op
monitor interval=60s
pcs -f s
Greetings,
This is to announce version 0.6.2 of Hawk, a web-based GUI for managing
and monitoring Pacemaker High-Availability clusters.
Notable features include:
- View cluster status (summary and detailed views).
- Examine potential failure scenarios via simulator mode.
- History explorer for a
[ Hopefully this doesn't cause a duplicate post but my first attempt
returned an error. ]
Using pacemaker 1.1.10 (but I think this issue is more general than that
release), I want to enforce a policy that once a node fails, no
resources can be started/run on it until the user permits it.
I have b
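One common way to approximate such a policy (a sketch, not taken from the truncated message; `my_resource` is a placeholder) is a migration-threshold of 1 with no failure-timeout, so a single failure bans the node until an operator clears the failure by hand:

```shell
# Ban a resource from a node after its first failure; with no
# failure-timeout set, the ban persists until a manual cleanup.
pcs resource defaults migration-threshold=1

# Operator action to permit the node to run the resource again:
crm_resource --cleanup --resource my_resource
```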
I have a resource which updates DNS records (Amazon's Route53). When it
performs its `monitor` action, it can sometimes fail because of issues
with Amazon's API. So I want failures to be ignored for the monitor
action, and so I set `op monitor on-fail=ignore`. However now when the
monitor action c
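The operation described might be declared like this in pcs. A minimal sketch; the resource name and the Dummy agent are placeholders standing in for the actual Route53 agent, which is not shown in the thread:

```shell
# on-fail=ignore: monitor failures are logged but trigger no recovery.
pcs resource create route53 ocf:heartbeat:Dummy \
    op monitor interval=60s on-fail=ignore
```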
I installed crmsh and configured it via crm commands.
best regards,
m.
On 12/06/2013 12:05 PM, Bauer, Stefan (IZLBW Extern) wrote:
Any news on this? I'm facing the same issue.
Stefan
-Original Message-
From: Chris Feist [mailto:cfe...@redhat.com]
Sent: Tuesday, 3 December
Any news on this? I'm facing the same issue.
Stefan
-Original Message-
From: Chris Feist [mailto:cfe...@redhat.com]
Sent: Tuesday, 3 December 2013 01:49
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] pcs ping connectivity rule
On 11/20/2013 03:30 PM, Mar
Hi Vladislav,
I used the advisory colocation below, but it's not working.
On 3 node setup:
I have configured all 3 resources in clone mode to start only on node1 and
node2 with a fail-count of only 1.
+++
+ crm configure primitive res_dummy_1 lsb::dummy_1 meta al
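An advisory (non-mandatory) colocation in crmsh uses a finite score rather than inf. A sketch reusing the resource names from the message; the score of 500 is an arbitrary choice:

```shell
# Finite score = advisory: the cluster prefers co-location but may
# place the resources apart if necessary.
crm configure colocation col_dummy 500: res_dummy_1 res_dummy_2
```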
06.12.2013 11:41, Lars Marowsky-Bree wrote:
> On 2013-12-06T08:55:47, Vladislav Bogdanov wrote:
>
>> BTW, pacemaker cib accepts any meta attributes (and that is a very
>> convenient way for me to store some 'meta' information), while crmsh
>> limits them to a pre-defined list. While that is probabl
On 2013-12-06T09:54:19, Gaëtan Slongo wrote:
> I know this is caused by the "-inf" but I didn't explicitly create this
> constraint ... Pacemaker did it itself... :-(
No, it did this because you *asked it to*.
> This constraint is also created when the resource moves automatically.
No. This i
Hi !
I know this is caused by the "-inf" but I didn't explicitly create this
constraint ... Pacemaker did it itself... :-(
This constraint is also created when the resource moves automatically.
Then after a successful (and automatic) "move" the resource is
"blocked" on the current node until I
On 2013-12-06T08:55:47, Vladislav Bogdanov wrote:
> BTW, pacemaker cib accepts any meta attributes (and that is a very
> convenient way for me to store some 'meta' information), while crmsh
> limits them to a pre-defined list. While that is probably fine for
> novices, that limits some advanced usa
On 2013-12-06T09:00:32, Gaëtan Slongo wrote:
> OK I understand, but this causes trouble for me... Example: When the
> node holding the resource (and the constraint) reboots, the resource does
> not move to the other node (because of this constraint; I see in the
> debug logs that no node can hold the r
Hi
Thank you for your answer.
OK I understand, but this causes trouble for me... Example: When the
node holding the resource (and the constraint) reboots, the resource does
not move to the other node (because of this constraint; I see in the
debug logs that no node can hold the resource). As soon as I r
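The location constraint left behind by a manual move can be removed explicitly once it is no longer wanted. A sketch; `my_resource` is a placeholder name:

```shell
# crmsh: drop the location constraint created by 'crm resource move',
# letting the cluster place the resource freely again.
crm resource unmove my_resource
```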