On 02/08/2013, at 11:42 AM, Andrew Beekhof wrote:
>
> On 02/08/2013, at 11:33 AM, Andrew Beekhof wrote:
>
>>
>> On 01/08/2013, at 5:38 PM, Johan Huysmans wrote:
>>
>>> I forgot to mention:
>>>
>>> I'm using a build from git (Version: 1.1.11-1.el6-42f2063).
>>> I used the same config on an old 1.1.10 rc (rc6 or before) and that worked,
>>> as of rc7 it didn't work anymore.
On 02/08/2013, at 11:33 AM, Andrew Beekhof wrote:
>
> On 01/08/2013, at 5:38 PM, Johan Huysmans wrote:
>
>> I forgot to mention:
>>
>> I'm using a build from git (Version: 1.1.11-1.el6-42f2063).
>> I used the same config on an old 1.1.10 rc (rc6 or before) and that worked,
>> as of rc7 it didn't work anymore.
On 01/08/2013, at 5:38 PM, Johan Huysmans wrote:
> I forgot to mention:
>
> I'm using a build from git (Version: 1.1.11-1.el6-42f2063).
> I used the same config on an old 1.1.10 rc (rc6 or before) and that worked,
> as of rc7 it didn't work anymore.
I will have a look, but why are you setting
On 01/08/2013, at 10:24 PM, Xzarth wrote:
> Hi,
>
> I updated from pacemaker 1.0.9 to 1.1.7
Distro? Seems strange to be upgrading to a release from 1.5 years ago.
We're up to 1.1.10 now.
> After the update, cluster behaves differently than before. I have a
> resource with migration-threshold="1", once that resource fails
> everything used to migrate to another node (what I would expect).
Thanks for the explanation. But I'm quite confused about the SBD stonith
resource configuration, as the SBD fencing wiki clearly states:
"The sbd agent does not need to and should not be cloned. If all of your
nodes run SBD, as is most likely, not even a monitor action provides a real
benefit, since
Hi Jan,
First of all, I don't know the SBD fencing infrastructure (I have just read
the article you linked). But as far as I understand, the "normal" fencing
(initiated on behalf of Pacemaker) is done in the following way:
the SBD fencing resource (agent) writes a request for self-stonithing into
the target node's slot on the shared disk.
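(For illustration only: a minimal sketch of that request path using the sbd command-line tool; the device path and node name are placeholders, not taken from this thread.)

```shell
# List the message slots on the shared SBD device
# (replace the device path with your actual SBD partition).
sbd -d /dev/disk/by-id/my-sbd-disk list

# Write a "reset" request into node2's slot; the sbd daemon
# polling that slot on node2 will then reset (self-fence) the node.
sbd -d /dev/disk/by-id/my-sbd-disk message node2 reset
```

The same slot mechanism is what the stonith agent uses when Pacemaker asks it to fence a peer.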
Hi All,
I have a problem with creating active-active Apache cluster based on 2 VM
guests inside one KVM physical host.
My environment and configuration are:
1. VM1: aa-node1: 10.0.0.243 (eth0)
2. VM2: aa-node2: 10.0.0.213 (eth0)
3. VIP: 10.0.0.210
4. KVM Host: host: 10.0.0.1 (virbr1)
I fo
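A minimal crm-shell sketch of such a two-node active-active setup (resource names and the Apache config path are assumptions; the addresses are the ones listed above):

```shell
primitive vip ocf:heartbeat:IPaddr2 \
    params ip=10.0.0.210 cidr_netmask=24 clusterip_hash=sourceip \
    op monitor interval=30s
primitive web ocf:heartbeat:apache \
    params configfile=/etc/httpd/conf/httpd.conf \
    op monitor interval=60s
clone vip-clone vip \
    meta globally-unique=true clone-max=2 clone-node-max=2
clone web-clone web
colocation web-with-vip inf: web-clone vip-clone
order vip-before-web inf: vip-clone web-clone
```

globally-unique=true with clusterip_hash makes IPaddr2 use the iptables CLUSTERIP target, so both nodes can answer on the VIP at the same time.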
Hi,
I am evaluating the SLES HA Extension 11 SP3 product. The cluster consists
of 2-nodes (active/passive), using SBD stonith resource on a shared SAN
disk. Configuration according to http://www.linux-ha.org/wiki/SBD_Fencing
The SBD daemon is running on both nodes, and the stonith resource (defined
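For comparison, a typical un-cloned SBD stonith primitive as the SBD_Fencing wiki describes it (crm shell; the device path is a placeholder):

```shell
primitive stonith-sbd stonith:external/sbd \
    params sbd_device="/dev/disk/by-id/my-sbd-disk-part1"
```

A single instance is enough: Pacemaker starts it on one node and moves it as needed, while the sbd daemons on all nodes watch their own slots.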
Hi,
I updated from pacemaker 1.0.9 to 1.1.7
After the update, the cluster behaves differently than before. I have a
resource with migration-threshold="1", once that resource fails
everything used to migrate to another node (what I would expect).
After the upgrade, once that resource fails, the cluster stops
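A minimal way to express the expected behaviour (ocf:pacemaker:Dummy and the resource name are stand-ins, not from the original post):

```shell
primitive test-rsc ocf:pacemaker:Dummy \
    op monitor interval=10s \
    meta migration-threshold=1
```

With migration-threshold=1, a single monitor failure should be enough to move test-rsc to the other node; crm_mon -f shows the per-node fail counts that feed into this decision.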
On 01/08/2013, at 6:53 PM, Rainer Brestan wrote:
> I can confirm the patch is working as well.
>
> To be sure that it had to do with notify, I created a clone resource
> with notify=true, and it behaved the same way: after a notify, monitor was
> not called again.
>
> With the patch applied it also works for clone resources.
I can confirm the patch is working as well.
To be sure that it had to do with notify, I created a clone resource with notify=true, and it behaved the same way: after a notify, monitor was not called again.
With the patch applied it also works for clone resources.
And from my output of the modified r
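A sketch of the reproduction scenario described above (resource names are hypothetical):

```shell
# Clone with notifications enabled:
crm configure primitive dummy ocf:pacemaker:Dummy op monitor interval=10s
crm configure clone dummy-clone dummy meta notify=true interleave=true

# After triggering a failure, verify the recurring monitor is
# rescheduled once the notify round completes:
crm_resource --resource dummy-clone --list-operations
```

Without the patch, the recurring monitor operation disappeared from that list after the first notify round.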
I forgot to mention:
I'm using a build from git (Version: 1.1.11-1.el6-42f2063).
I used the same config on an old 1.1.10 rc (rc6 or before) and that
worked, as of rc7 it didn't work anymore.
On 01-08-13 09:35, Johan Huysmans wrote:
Hi,
I have a cloned resource and a resource group. They have
Thanks for letting us know!
On 01/08/2013, at 4:55 PM, Kazunori INOUE wrote:
> Hi,
>
> I confirmed that this problem was fixed.
> Thanks.
>
>
> (13.08.01 15:26), Andrew Beekhof wrote:
>> Fixed:
>> https://github.com/beekhof/pacemaker/commit/0c996a1
>>
>> On 01/08/2013, at 2:00 AM, David
Hi,
I confirmed that this problem was fixed.
Thanks.
(13.08.01 15:26), Andrew Beekhof wrote:
Fixed:
https://github.com/beekhof/pacemaker/commit/0c996a1
On 01/08/2013, at 2:00 AM, David Vossel wrote:
- Original Message -
From: "Kazunori INOUE"
To: "pacemaker@oss"
Sent: We