On Thu, 16 Dec 2010 08:27:51 +0100, Andrew Beekhof wrote:
> On Wed, Dec 15, 2010 at 8:30 AM, Chris Picton
>> Why would a resource cleanup remove the resource from the lrm, even
>> though it is still running correctly,
>
> That's what cleanup does.
> What is supposed to
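For anyone following along: a cleanup is typically issued from the crm
shell as something like

  # "my_resource" is a placeholder name
  crm resource cleanup my_resource

which wipes the resource's operation history from the status section;
the cluster then re-probes the resource to learn its current state.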
On Tue, 14 Dec 2010 18:55:06 +0100, Dejan Muhamedagic wrote:
> Hi,
>
> On Tue, Dec 14, 2010 at 12:16:22PM +0200, Chris Picton wrote:
>> Hi
>>
>> I have noticed this happening a few times on several of my clusters.
>> The monitor operation for some resources st
:48:22, the monitor is running correctly (the app logs the
"Deleting context for MONTEST-" line when the monitor is run).
After that, the monitor is not run again on this node.
I have the logs for the other nodes, if they are needed to help debug
this.
--
Chris Picton
Executive Manager
Hi all
I had said last week I would post these, but it slipped my mind.
I have two stonith scripts here - they may not be complete, but they
work in my circumstances. Feel free to take these and
update/modify/fix/improve.
1. stonith_multi
This is a wrapper around multiple stonith agents to allo
On 2010/11/18 1:01 AM, Andrew Daugherity wrote:
In production I am planning to have two separate AP7900 units, each
plugged into a different APC UPS unit, to achieve that. I would then have
the single node name on each, one entry for each of the two power
supplies in the individual systems.
...
Right, there's curren
On Tue, 16 Nov 2010 10:52:44 +0100, Andrew Beekhof wrote:
> On Mon, Nov 15, 2010 at 2:38 PM, Chris Picton
> wrote:
>> On Mon, 15 Nov 2010 08:37:52 +0100, Andrew Beekhof wrote:
>>
>>> On Fri, Nov 12, 2010 at 7:41 AM, Chris Picton
>>> wrote:
>>
On Mon, 15 Nov 2010 08:37:52 +0100, Andrew Beekhof wrote:
> On Fri, Nov 12, 2010 at 7:41 AM, Chris Picton
> wrote:
>> I have attached the output as requested
>
> Normally it would get balanced, but it's being pushed to 01 because there
> are so many resources on 02
>
>
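For anyone hitting the same placement question: one way to bias where a
particular resource lands is an explicit location preference, roughly

  # id, resource, score and node name are all made up
  location prefer-node02 my_resource 50: node02

though, as noted above, the default placement already tries to balance
resource counts across the nodes.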
se?
>
>
>>> On Tue, Nov 9, 2010 at 5:51 PM, Chris Picton
>>> wrote:
>>>> From a previous thread (crm_resource - migrating/halt a cloned
>>>> resource)
>>>>
>>>> Andrew Beekhof wrote:
>>>>> bottom line, you d
On Wed, 10 Nov 2010 09:32:00 +0100, Andrew Beekhof wrote:
> what version is this?
This is 1.0.9
>
> On Tue, Nov 9, 2010 at 5:51 PM, Chris Picton
> wrote:
>> From a previous thread (crm_resource - migrating/halt a cloned
>> resource)
>>
>> Andrew Beekhof w
On 2010/11/09 7:07 PM, Vladimir Legeza wrote:
The only solution I know of is to change the clone-node-max parameter on the fly.
See
http://oss.clusterlabs.org/pipermail/pacemaker/2010-November/008148.html
for details.
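For completeness, changing it on the fly should be possible with
crm_resource against the clone's meta attributes, along these lines
(the clone id cl-clusterip-9 is assumed):

  crm_resource --resource cl-clusterip-9 --meta \
      --set-parameter clone-node-max --parameter-value 2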
I have read the thread - it is a slightly different problem. In your
case, it is
From a previous thread (crm_resource - migrating/halt a cloned resource)
Andrew Beekhof wrote:
> bottom line, you don't get to choose where specific clone instances
> get placed.
In my case, I have a clone:
primitive clusterip-9 ocf:heartbeat:IPaddr2 \
params ip="192.168.0.9" cidr_netmask
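For reference, the clone wrapping such a primitive typically looks
something like this; the netmask, hash and ids below are illustrative,
since the original config is truncated above:

  primitive clusterip-9 ocf:heartbeat:IPaddr2 \
      params ip="192.168.0.9" cidr_netmask="24" \
          clusterip_hash="sourceip-sourceport"
  clone cl-clusterip-9 clusterip-9 \
      meta globally-unique="true" clone-max="2" clone-node-max="1"

It is globally-unique="true" that makes IPaddr2 use the iptables
CLUSTERIP target, with one hash bucket per clone instance.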
On Fri, 13 Aug 2010 14:37:18 +0000, Chris Picton wrote:
>>> On Fri, Aug 13, 2010 at 01:44:28PM +0000, Chris Picton wrote: I have a
>>> DRBD-backed MySQL server which has the following resources:
>>>
>>> drbd0 -> lvm_data -> mount_data
>>> drbd1
>> On Fri, Aug 13, 2010 at 01:44:28PM +0000, Chris Picton wrote:
>> I have a DRBD-backed MySQL server which has the following resources:
>>
>> drbd0 -> lvm_data -> mount_data
>> drbd1 -> lvm_logs -> mount_logs
>> mysqld
>> floatingip
>>
On Fri, 13 Aug 2010 13:44:28 +0000, Chris Picton wrote:
> Hi all
>
> I have a DRBD-backed MySQL server which has the following resources:
>
> drbd0 -> lvm_data -> mount_data
> drbd1 -> lvm_logs -> mount_logs
> mysqld
> floatingip
>
> I would like
Hi all
I have a DRBD-backed MySQL server which has the following resources:
drbd0 -> lvm_data -> mount_data
drbd1 -> lvm_logs -> mount_logs
mysqld
floatingip
I would like the DRBD-based filesystems to start up in parallel. Once
they have started, start mysql and the IP address. Obviously the
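A sketch of one way to express that in crm syntax; the group and
constraint ids are made up, and drbd0/drbd1 are assumed to be
master/slave resources ms_drbd0/ms_drbd1:

  group g_data lvm_data mount_data
  group g_logs lvm_logs mount_logs
  group g_mysql mysqld floatingip
  order o_data inf: ms_drbd0:promote g_data:start
  order o_logs inf: ms_drbd1:promote g_logs:start
  order o_mysql_a inf: g_data g_mysql
  order o_mysql_b inf: g_logs g_mysql
  colocation c_all inf: g_mysql g_data g_logs

Because there is no ordering between g_data and g_logs, the two
filesystem chains start in parallel, and g_mysql waits for both.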
On Fri, 13 Aug 2010 12:06:27 +0200, Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Aug 13, 2010 at 11:20:38AM +0200, Chris Picton wrote:
>> Hi all
>>
>> I have seen the following behaviour on a few occasions in the past few
>> months. It seems as if the resource s
known problem, or can I generate extra logging to help debug?
Chris
--
Chris Picton
Executive Manager - Systems
ECN Telecommunications (Pty) Ltd
t: 010 590 0031 m: 079 721 8521
f: 087 941 0813
e: ch...@ecntelecoms.com
"Lowering the cost of doing bus
On Tue, 15 Dec 2009 07:13:29 +0000, Chris Picton wrote:
>>> > The monitor op shouldn't make any changes. If the rule has gone
>>> > away, the monitor op should return failure to indicate the resource
>>> > is broken, which will result in Pacemaker tell
>> > The monitor op shouldn't make any changes. If the rule has gone
>> > away, the monitor op should return failure to indicate the resource
>> > is broken, which will result in Pacemaker telling the failed
>> > resource to stop and start again. Actually, from the logs it looks
>> > like
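For what it's worth, stop-then-start is indeed the default recovery for
a failed monitor; it can also be spelled out explicitly on the op
(resource name and values below are just an example):

  primitive myapp ocf:heartbeat:anything \
      op monitor interval="10s" timeout="20s" on-fail="restart"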
Hi all
I am doing some tests with clusterip and pacemaker/heartbeat on CentOS
5.4, using the clusterlabs repo.
My resource looks like:
primitive CLUSTERIP_21 ocf:heartbeat:IPaddr2 \
op monitor interval="10" timeout="20" start-delay="0" \
params ip="10.202.4.21" nic="eth0" cidr_net
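To see where the individual clone instances end up during a test like
this, a one-shot status dump is usually enough:

  crm_mon -1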