On Saturday, 1 March 2014, 00:14:25, Matthew O'Connor wrote:
Hi,
I have had a few instances recently where circumstances conspired to
bring my cluster down completely and most non-gracefully (and this was
in spite of a relatively new 10kVA UPS). When bringing the nodes back
online, it would be enormously useful to me if they would go
automatically into standby mode.
Yes, the issue is seen only with multi-state resources; non-multi-state
resources work fine. It looks like the is_resource_started function in
utils.py does not compare the resource name properly. Let fs be the resource
name: is_resource_started compares fs against fs:0 and fs:1, so no match is
found and the check fails.
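For illustration, a minimal sketch of the comparison being described (this is
not the actual utils.py code, and the helper names and signatures here are
made up): clone/master instances report their names with an ":<n>" suffix, so
a plain equality test against "fs" never matches, while stripping the suffix
first does.

# Hypothetical sketch, not pcs's utils.py.
def base_resource_name(reported_name):
    # Strip a trailing ":<number>" clone instance suffix, if present.
    name, sep, suffix = reported_name.rpartition(":")
    if sep and suffix.isdigit():
        return name
    return reported_name

def is_resource_started(resource_name, reported_names):
    # True if any reported running resource matches, ignoring clone suffixes.
    return any(
        base_resource_name(reported) == resource_name
        for reported in reported_names
    )

# The multi-state case above now matches:
assert is_resource_started("fs", ["fs:0", "fs:1"])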
----- Original Message -----
> From: "K Mehta"
> To: "The Pacemaker cluster resource manager"
> Sent: Friday, February 28, 2014 7:05:47 AM
> Subject: Re: [Pacemaker] Stopping resource using pcs
>
Can anyone tell me why the --wait parameter always causes pcs resource disable
to return failure, even though the resource actually stops within the timeout?
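(Roughly, a --wait check is just a polling loop with a timeout; the sketch
below is hypothetical and not pcs's actual implementation, with made-up names
wait_until and check_stopped. If the status check never reports the expected
state, for example because of the name comparison described above, the loop
runs out of time and failure is returned even though the resource stopped.)

import time

# Hypothetical polling loop, not pcs's implementation.
def wait_until(check_stopped, timeout_seconds, poll_interval=1.0):
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if check_stopped():
            return True   # desired state observed within the timeout
        time.sleep(poll_interval)
    return False          # timed out; reported to the caller as failure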
On Wed, Feb 26, 2014 at 10:45 PM, K Mehta wrote:
> Deleting the master resource id does not work; I see the same issue.
> However, uncloning helps; delete works after uncloning.
2014-02-24 12:00 GMT+09:00 Andrew Beekhof:
>
> On 21 Feb 2014, at 9:35 pm, Kazunori INOUE wrote:
>
>> 2014-02-20 18:59 GMT+09:00 Andrew Beekhof:
>>>
>>> On 20 Feb 2014, at 8:37 pm, Kazunori INOUE wrote:
>>>
Hi,
Is this by design, even though log levels differ with a stonith resource?
pcs constraint colocation set fs_ldap-clone sftp01-vip ldap1 sequential=true
Let me know if this does or doesn't work for you.
I have been testing this for a couple of days now and I think I must be
doing something wrong. First, though, the command itself completes
successfully:
# pcs constraint colocation set fs_ldap-clone sftp01-vip ldap1 sequential=true