On Tue, May 15, 2012 at 4:48 AM, Larry Brigman wrote:
> On Tue, May 8, 2012 at 9:45 PM, Andrew Beekhof wrote:
>> On Wed, May 9, 2012 at 6:43 AM, Larry Brigman
>> wrote:
>>> I must be coming to the party late. I just noticed that version 1.1.7
>>> of pacemaker is out.
>>> We are running 1.1.5 o
On Mon, May 14, 2012 at 4:21 PM, Parshvi wrote:
> Andrew Beekhof writes:
>
>>
>> On Fri, May 11, 2012 at 4:26 PM, Parshvi wrote:
>> > Hi,
>> > I am upgrading the HA packages to the higher versions. The build failed of
>> > Pacemaker, with the following error:
>> >
>> > cc1: warnings being treate
On Mon, May 14, 2012 at 2:13 PM, David Vossel wrote:
> - Original Message -
>> From: "Larry Brigman"
>> To: "The Pacemaker cluster resource manager"
>> Sent: Monday, May 14, 2012 1:30:22 PM
>> Subject: Re: [Pacemaker] Removed nodes showing back in status
>>
>> On Mon, May 14, 2012 at 9:5
- Original Message -
> From: "Larry Brigman"
> To: "The Pacemaker cluster resource manager"
> Sent: Monday, May 14, 2012 1:30:22 PM
> Subject: Re: [Pacemaker] Removed nodes showing back in status
>
> On Mon, May 14, 2012 at 9:54 AM, Larry Brigman
> wrote:
> > I have a 5 node cluster (bu
Hi!
I ran into the issue of ocfs2_controld.pcmk consuming vast amounts of CPU
again - twice, actually. The most recent occurrence was after a multi-node
failure. One node stayed alive; two nodes had to be rebooted. After
the reboots, one of the two came back without issue, and was able to
mount the OCFS
On Tue, May 8, 2012 at 9:45 PM, Andrew Beekhof wrote:
> On Wed, May 9, 2012 at 6:43 AM, Larry Brigman wrote:
>> I must be coming to the party late. I just noticed that version 1.1.7
>> of pacemaker is out.
>> We are running 1.1.5 on centos5 and would like to upgrade to 1.1.7 but I
>> am not find
On Mon, May 14, 2012 at 9:54 AM, Larry Brigman wrote:
> I have a 5 node cluster (but it could be any number of nodes, 3 or larger).
> I am testing some scripts for node removal.
> I remove a node from the cluster and everything looks correct from crm
> status standpoint.
> When I remove a second n
I have a 5 node cluster (but it could be any number of nodes, 3 or larger).
I am testing some scripts for node removal.
I remove a node from the cluster and everything looks correct from crm
status standpoint.
When I remove a second node, the first node that was removed now shows back
in the crm st
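One likely cause of a removed node reappearing is that its entries linger in the CIB. A hedged sketch of a fuller removal with 1.1-era tools (node3 is a placeholder; verify the flags against your local crm_node and cibadmin man pages):

```shell
# Remove the departed node from the cluster layer's membership cache.
crm_node --force -R node3
# Delete its <node> entry from the CIB configuration section...
cibadmin --delete -o nodes -X '<node uname="node3"/>'
# ...and its <node_state> entry from the status section, which is what
# typically makes a "removed" node show back up in crm status.
cibadmin --delete -o status -X '<node_state uname="node3"/>'
```

These commands need a live cluster, so treat this as a sketch rather than a tested recipe.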
- Original Message -
> From: "Paul Damken"
> To: pacema...@clusterlabs.org
> Sent: Monday, May 14, 2012 9:45:30 AM
> Subject: Re: [Pacemaker] VIP on Active/Active cluster
>
> Jake Smith writes:
>
> >
> >
> > clone-node-max="2" should only be one. How about the output from
> > crm_
Cloning IPAddr2 resources relies on the iptables CLUSTERIP rule. It is
probably a good idea to start looking at it with tcpdump: see whether
either box gets the ICMP echo-request packet (from a ping), and determine
whether it just doesn't respond properly, doesn't get the packet at all,
or something else is going on.
I'd say
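Concretely, the suggested check could look roughly like this (interface name and address are placeholders, not taken from the thread):

```shell
# Inspect the CLUSTERIP rule that the cloned IPAddr2 resource installed.
iptables -L INPUT -n | grep CLUSTERIP
# Watch for the ping's ICMP echo-request on each node; if a node never
# sees the packet, the problem is on the multicast-MAC/switch side
# rather than in the responder.
tcpdump -ni eth0 'icmp[icmptype] = icmp-echo and host 192.168.1.100'
```

Run the tcpdump on both nodes at once while pinging the clustered IP from a third machine, so you can tell "packet never arrived" apart from "arrived but unanswered".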
Hi Dejan,
I log in Fedora (Gnome) desktop as myself (i.e. guillaume) and then I
switch (in the terminal) to root using su.
In my system, the USER variable is actually set to guillaume and not
root. So this might be where the problem is.
Anyway, I eventually added a symbolic link making /root/.cib
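The USER mix-up described above can be reproduced without root: `su` without `-` keeps the invoking user's environment, while a login shell rebuilds it. A minimal illustration, using a plain subshell and `env -i` as stand-ins for the two su modes (`guillaume` is just the name from this thread):

```shell
export USER=guillaume
# A plain subshell inherits the environment, like 'su' without '-':
sh -c 'echo "inherited: $USER"'
# 'env -i' starts from an empty environment, like the login shell that
# 'su -' would give you, where USER/HOME get reset:
env -i sh -c 'echo "cleaned: ${USER:-unset}"'
```

So using `su -` (a login shell) instead of plain `su` would likely have set USER to root and avoided the workaround.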
Jake Smith writes:
>
>
> clone-node-max="2" should only be one. How about the output from
> crm_mon -fr1 and "ip a s" on each node?
> Jake
> - Reply message -
> From: "Paul Damken" gmail.com>
> To: oss.clusterlabs.org>
> Subject: [Pacemaker] VIP on Active/Active cluster
> Date: Sat, May 12, 2012 2:4
Hi,
On Fri, May 11, 2012 at 06:05:27PM +0300, Spiros Ioannou wrote:
> Hello all,
> Corosync/pacemaker with 3 nodes (vserver1, vserver2, vserver3), 28 resources
> defined, with quorum, stonith (via ipmi/ilo).
> Most of the time all these work correctly but when we shutdown servers and
> try to resta
On Fri, May 11, 2012 at 03:45:43PM +0100, Guillaume Belrose wrote:
> I am running as root.
>
> The shadow file gets created under /var/lib/heartbeat/crm and is owned by
> root.
Strange. I think that most users run crm as root and I'd expect
a general outcry if creating shadow cibs didn't work.
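For reference, a shadow-CIB session with the crm shell looks roughly like this (the shadow name `test` is arbitrary; commands as in crm(8), so treat this as a sketch):

```shell
crm cib new test       # creates e.g. shadow.test under /var/lib/heartbeat/crm
crm cib use test       # point subsequent crm commands at the shadow copy
# ...edit the configuration against the shadow...
crm cib commit test    # push the shadow into the live CIB
```

Since the shadow file is created by the invoking user, running crm under plain `su` versus `su -` can change where it lands and who owns it, which matches the symptom above.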
Hi! Thanks for your reply! That makes perfect sense.
Thanks again!!
-- Matt
On 5/14/2012 10:44 AM, David Vossel wrote:
> - Original Message -
>> From: "Matthew O'Connor"
>> To: "The Pacemaker cluster resource manager"
>> Sent: Friday, May 11, 2012 11:49:04 PM
>> Subject: [Pacemaker] n
On Mon, May 14, 2012 at 01:01:48PM +0200, Greg wrote:
> On 05/09/12 13:57, Dejan Muhamedagic wrote:
>> Hi,
>>
>> On Wed, Apr 25, 2012 at 10:41:05PM +0200, Greg wrote:
>>>
>>> Hi,
>>>
>>> I am trying to write a redis resource agent working in master-slave mode. My
>>
>> Are you aware of a pull request for o
- Original Message -
> From: "Matthew O'Connor"
> To: "The Pacemaker cluster resource manager"
> Sent: Friday, May 11, 2012 11:49:04 PM
> Subject: [Pacemaker] new node causes spurious evil
>
> My question: Why will a node that is not allowed to start a resource
> attempt to start a moni
On 05/09/12 13:57, Dejan Muhamedagic wrote:
> Hi,
>
> On Wed, Apr 25, 2012 at 10:41:05PM +0200, Greg wrote:
>> Hi,
>> I am trying to write a redis resource agent working in master-slave mode. My
> Are you aware of a pull request for one redis resource agent:
> https://github.com/ClusterLabs/resource-agents/pull/3
Hi Andrew,
I have reported it in Bugzilla. Please take a look:
* http://bugs.clusterlabs.org/show_bug.cgi?id=5064
Best Regards,
Kazunori INOUE
(12.05.09 16:12), Andrew Beekhof wrote:
On Mon, May 7, 2012 at 7:53 PM, Kazunori INOUE
wrote:
Hi,
I am using Pacemaker-1.1 (devel: db5e167).
When stonith
Hi Dejan,
Thanks for your reply.
I have opened an enhancement request. Please take a look:
* http://bugs.clusterlabs.org/show_bug.cgi?id=5065
Best Regards,
Kazunori INOUE
(12.05.11 23:20), Dejan Muhamedagic wrote:
Hi Kazunori-san,
On Fri, May 11, 2012 at 08:31:51PM +0900, Kazunori INOUE wrote:
Hi