Sent: Thu Aug 30 2012 23:00:25 GMT-0400 (EDT)
From: Andrew Beekhof
To: Patrick H.; The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] override node name when using cman
On Wed, Aug 29, 2012 at 11:59 PM, Patrick H. wrote:
Sent: Wed Aug 29 2012 08:00:53 GMT-0400 (EDT)
From: Andrew Beekhof
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] override node name when using cman
On Wed, Aug 29, 2012 at 4:56 AM, Patrick Hemmer wrote:
It looks like when using pacemaker with cman, pacemaker gets the nam
Sent: Wed Jan 18 2012 11:21:29 GMT-0500 (EST)
From: will mad
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] IPaddr2 for two nodes in a different VLAN
Hi,
I have a two nodes cluster but each node resides in a specific VLAN :
node1 : ip=10.32.3.59 on VLAN1 10.32.3.0
node2 : ip=10.32.2.59
So I'm trying to set up a cluster with a secondary communication ring in
case the first ring fails. The cluster operates fine, but doesn't seem to
handle path failure properly. When I break the path between the 2 nodes
on ring 1, I get the following in the logs:
Jan 6 16:55:17 syslog02.cms.usa.
Yes.
P.S. Don't remove the mailing list from the recipients. Bad etiquette :-)
Sent: Thu Dec 01 2011 14:50:21 GMT-0700 (MST)
From: Charles DeVoe
To: Patrick H.
Subject: Re: [Pacemaker] Fedora 16 Cluster
That document says to remove the pacemaker plugin. Should I start
pacemaker as a
From: Charles DeVoe
To: Patrick H.
Subject: Re: [Pacemaker] Fedora 16 Cluster
Is there some documentation on how to set up option 3?
corosync + cpg + cman + mcp
Do I simply start up pacemaker
Also, what is cpg?
--- On Thu, 12/1/11, Patrick H. wrote:
From: Patrick H.
Subject: Re
Sent: Thu Dec 01 2011 11:34:39 GMT-0700 (MST)
From: Charles DeVoe
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] Fedora 16 Cluster
I've spent the last month or so building linux clusters. 2 of them in
a VM environment on fedora 15. I went through the Clusters from
Scratch Tutorial; wh
Sent: Thu Dec 01 2011 10:17:06 GMT-0700 (MST)
From: Max Williams
To: The Pacemaker cluster resource manager
(pacemaker@oss.clusterlabs.org)
Subject: [Pacemaker] What versions should we be using and where to get
packages?
Hi All,
Are the very latest versions of pacemaker and corosync the mos
Sent: Mon Nov 28 2011 16:10:01 GMT-0700 (MST)
From: Patrick H.
To: The Pacemaker cluster resource manager; Andreas Kurz
Subject: Re: [Pacemaker] colocation issue with master-slave resources
Sent: Mon Nov 28 2011 15:27:10 GMT-0700 (MST)
From: Andrew Beekhof
To: The Pacemaker cluster resource
upgrade might also help
On Tue, Nov 29, 2011 at 1:38 AM, Patrick H. wrote:
Sent: Mon Nov 28 2011 01:31:22 GMT-0700 (MST)
From: Andreas Kurz
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] colocation issue with master-slave resources
On 11/28/2011 04:51 AM, Patrick H. wrote:
I'm trying to set up a colocation rule so that a couple of master-slave
resources can't be master unless another resource is running on the same
node, and am getting the exact opposite of what I want. The master-slave
resources are getting promoted to master on the node which this other
resource
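The usual cause of this "exact opposite" behaviour is operand order: in crm shell syntax the dependent resource comes first, and swapping the operands inverts the placement. A minimal sketch with hypothetical resource names (ms_db for the master-slave resource, p_other for the plain one), not the poster's actual config:

```
# "colocation <id> <score>: <dependent>[:role] <target>" places the
# dependent where the target runs; reversing the two operands inverts
# the behaviour described above.
colocation master-with-other inf: ms_db:Master p_other
```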
Sent: Mon Nov 21 2011 13:27:02 GMT-0700 (MST)
From: Trevor Hemsley
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Syntax highlighting in vim for crm configure edit
Patrick H. wrote:
Sent: Tue Nov 15 2011 02:47:59 GMT-0700 (MST)
From: Raoul Bhatia [IPAX]
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Syntax highlighting in vim for crm configure edit
hi!
On 2011-08-19 16:28, Dan Frincu wrote:
Hi,
On Thu, Aug 18, 2011 at 5:53 PM, Digimer wrote:
On 08
I'm using cman and have been trying to set the syslog facility for all
of the pacemaker processes (crmd, lrmd, etc) without much luck. So after
google searches failing me on how to configure the syslog facility for
pacemaker when launched from cman, I went digging through the source and
found m
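The rest of the message is lost, but one mechanism worth checking (an assumption, since the source finding is truncated): the pacemaker daemons read PCMK_* environment variables from /etc/sysconfig/pacemaker at startup, which can set the syslog facility in one place. A sketch:

```
# /etc/sysconfig/pacemaker  (path may vary by distribution)
# PCMK_logfacility is read by the pacemaker daemons at startup; whether
# the cman-launched processes honour it depends on the build.
PCMK_logfacility=local5
```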
Sent: Mon Aug 29 2011 02:50:33 GMT-0600 (MST)
From: alexander.kra...@basf.com
To: The Pacemaker cluster resource manager
Subject: [Pacemaker] Re: IPaddr2 resource IP unavailable on 'lo'
interface for brief period after start
"Patrick H." wrote on 27.08.2011 1
So the issue is that whenever I start up an IP with an IPaddr2 resource,
the IP is unavailable when attempting to connect via lo interface for
approximately 21 seconds after the resource is started.
What I am doing is starting up the IP resource, and then I have another
resource that tries to
Chepkov wrote:
On Feb 1, 2011 7:12 PM, "Larry Brigman" wrote:
On Tue, Feb 1, 2011 at 12:16 PM, Patrick H.
wrote:
I'm coming from bug 2542 where it was recommended that I try release
1.1.4. Well I'm having problems simply getting it installed.
This is on a RHEL5 box
Firstly the rpm source file for fedora-13 is broken
(http://clusterlabs.org/rpm-next/fedora-13/src/pacemaker-1.1.4-1.3.fc13.src.rpm).
When at
Pacemaker is already in the base RHEL6 repo. However when I tried to use
this package, it produced tons of python errors upon launching 'crm'.
Your mileage may vary.
-Patrick
Sent: Tue Jan 25 2011 12:36:45 GMT-0500 (Eastern Standard Time)
From: Ryan Kish
To: pacemaker@oss.clusterlabs.org
Subj
Oh, and it's not waiting for the resource to stop on the other node
before it starts it up either.
Here's the lrmd log for resource vip_55.63 from the 'ha02' node (the
node I put into standby)
Jan 12 16:10:24 ha02 lrmd: [5180]: info: rsc:vip_55.63:1444: stop
Jan 12 16:10:24 ha02 lrmd: [5180]:
Sent: Wed Jan 12 2011 09:25:39 GMT-0700 (Mountain Standard Time)
From: Patrick H.
To: pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] Speed up resource failover?
Sent: Wed Jan 12 2011 01:56:31 GMT-0700 (Mountain Standard Time)
From: Lars Ellenberg
To: pacemaker@oss.clusterlabs.org
@oss.clusterlabs.org;
From: Patrick H.
Sent: Wed 12-01-2011 00:06
Subject:[Pacemaker] Speed up resource failover?
Attachment: inline.txt
As it is right now, pacemaker seems to take a long time (in computer
terms) to fail over resources from one node to the other. Right now, I
have 477 IPaddr2 resources evenly distributed among 2 nodes. When I put
one node in standby, it takes approximately 5 minutes to move the half
of those fro
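With hundreds of primitives, the number of operations the cluster will run in parallel is often the bottleneck for a mass-migration like this. One knob that exists for that is the batch-limit cluster property; whether it helps in this particular setup is an assumption, not something the thread confirms:

```
# batch-limit caps how many jobs the transition engine executes in
# parallel; raising it can shorten mass-failovers at the cost of
# burstier load on the nodes.
property batch-limit=100
```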
Sent: Sat Nov 13 2010 04:20:56 GMT-0700 (Mountain Standard Time)
From: Andrew Beekhof
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] symmetric anti-collocation
On Fri, Nov 12, 2010 at 5:27 PM, Alan Jones wrote:
On Thu, Nov 11, 2010 at 11:31 PM, Andrew Beekhof wrote:
Sent: Wed Dec 15 2010 09:10:12 GMT-0700 (Mountain Standard Time)
From: Dejan Muhamedagic
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] colocation issues (isn't it always)
Hi,
On Mon, Dec 13, 2010 at 07:08:52PM -0700, Patrick H. wrote:
Sent: Mon Dec 13 2010 15:19:48
Sent: Tue Dec 14 2010 11:37:06 GMT-0700 (Mountain Standard Time)
From: Dejan Muhamedagic
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] continue starting chain with failed group
resources
Hi,
On Mon, Dec 13, 2010 at 10:43:36PM -0700, Patrick H. wrote:
After
colocation and order. This *appears* to
function as intended, but if anyone can point out any pitfalls I'd
appreciate it
-Patrick
Sent: Mon Dec 13 2010 21:12:04 GMT-0700 (Mountain Standard Time)
From: Patrick H.
To: The Pacemaker cluster resource manager
Subject: [Pacemaker] continue starting
Is there a way to continue down a chain of starting resources once a
previous resource has tried to start, whether or not the attempt
succeeded?
I've got 3 iSCSI resources which are in a group, and then an md raid-5
array as another resource. I have the raid array resource set to start
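One shape such a constraint can take (a sketch under the assumption that an advisory, score-0 ordering fits here; hypothetical IDs):

```
# An advisory (score 0) order constraint: the start order is respected
# when both resources are starting in the same transition, but it is not
# mandatory, so the chain is not blocked the way an inf: order would be.
order raid-after-iscsi 0: grp_iscsi md_raid5
```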
Sent: Mon Dec 13 2010 15:19:48 GMT-0700 (Mountain Standard Time)
From: Pavlos Parissis
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] colocation issues (isn't it always)
If you put all of them in a group and have nfs_sdb1 as the last
resource, you will manage to have what you
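The suggestion can be sketched as follows; only nfs_sdb1 is from the thread, the other member names are hypothetical stand-ins for the iSCSI/md/lvm/filesystem stack:

```
# A group is an implicit ordered + colocated chain: each member starts
# after, and on the same node as, the previous one, with nfs_sdb1 last.
group grp_storage p_iscsi p_md p_lvm p_fs nfs_sdb1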
So colocation is biting me in the ass again and I can't figure this one out.
I have a group of iSCSI devices that then go into an md raid device that
then goes into an lvm device which then gets mounted and then exported
by nfs. Throughout this whole project I've had the resources trying to
start
From: Andrew Beekhof
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] colocation rule not obeyed
In general, it's better to have the IP (primitive) depend on the clone, e.g.:
With that it behaved properly here.
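The actual example lines were lost in the archive; a sketch of the shape being recommended, with hypothetical names (p_vip for the IP primitive, cl_service for the clone):

```
# The primitive IP depends on the clone: it is placed with, and started
# after, whatever clone instance runs on its node.
colocation vip-with-clone inf: p_vip cl_service
order vip-after-clone inf: cl_service p_vip
```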
On Mon, Nov 29, 2010 at 4:44 PM, Patrick H. wrote:
Err, now it's attached. Promise :-)
-Patrick
Sent: Mon Nov 29 2010 08:36:04 GMT-0700 (Mountain Standard Time)
From: Patrick H.
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] colocation rule not obeyed
`cibadmin -Q` output is attached
From what I can see, it did run a
, Patrick H. wrote:
I've got a replicated mysql with vip setup I've been trying to create, and
it doesn't seem to be obeying my colocation rule much.
Situation:
I had shut down corosync on all (3) nodes and verified all associated
services were off.
I then started it up on all 3 nodes.
Wh
dard Time)
From: Andrew Beekhof
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] colocation rule not obeyed
On Mon, Nov 29, 2010 at 6:55 AM, Patrick H. wrote:
I've got a replicated mysql with vip setup I've been trying to create,
and it doesn't seem to be obeying my colocation rule much.
Situation:
I had shut down corosync on all (3) nodes and verified all associated
services were off.
I then started it up on all 3 nodes.
When it came back up, the