> On 23 Jan 2015, at 9:13 am, brook davis wrote:
>
> < snip >
>> It sounds like default-resource-stickiness does not kick in; and with
>> default-resource-stickiness=1 it is expected (10 > 6). Documentation
>> says default-resource-stickiness is deprecated, so maybe it is ignored
>> in your ver
< snip >
It sounds like default-resource-stickiness does not kick in; and with
default-resource-stickiness=1 it is expected (10 > 6). Documentation
says default-resource-stickiness is deprecated, so maybe it is ignored
in your version altogether? What does "ptest -L -s" show?
I see now that defaul
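For reference, the non-deprecated way to get the same effect is the resource-stickiness resource default; a crm-shell sketch (the value 100 is purely illustrative):

```
# Cluster-wide default stickiness, replacing the deprecated
# default-resource-stickiness cluster property
crm configure rsc_defaults resource-stickiness=100
```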
On Wed, Jan 21, 2015 at 11:06 PM, brook davis wrote:
> Hi,
>
> I've got a master-slave resource and I'd like to achieve the following
> behavior with it:
>
> * Only ever run (as master or slave) on 2 specific nodes (out of N possible
> nodes). These nodes are predetermined and are specified at re
Hi,
I've got a master-slave resource and I'd like to achieve the following
behavior with it:
* Only ever run (as master or slave) on 2 specific nodes (out of N
possible nodes). These nodes are predetermined and are specified at
resource creation time.
* Prefer one specific node (of the 2 se
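Such placement rules are typically written as location constraints; a crm-shell sketch, assuming a master/slave resource ms_res and designated nodes node1 and node2 (all names hypothetical):

```
# Forbid every node except the two designated ones
location ms_res-only-two ms_res \
    rule -inf: #uname ne node1 and #uname ne node2
# Prefer node1 for the Master role
location ms_res-prefer-node1 ms_res \
    rule $role=Master 100: #uname eq node1
```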
Andrei, Andrew,
I'm afraid the machine is not available to me anymore, I'm sorry.
I reproduced the problem on my laptop. It seems that a colocated
resource failing to start caused some of the other resources to be
demoted and stopped.
So this is most likely a non issue.
I'll monitor this issue and l
> On 24 Oct 2014, at 9:00 pm, Sékine Coulibaly wrote:
>
> Hi Andrew,
>
> Yep, forgot the attachments. I did reproduce the issue, please find
> the bz2 files attached. Please tell if you need hb_report being used.
Yep, I need the log files to put these into context
>
> Thank you !
On Fri, Oct 24, 2014 at 2:00 PM, Sékine Coulibaly wrote:
> Hi Andrew,
>
> Yep, forgot the attachments. I did reproduce the issue, please find
> the bz2 files attached. Please tell if you need hb_report being used.
>
You forgot to attach the logs matching these files.
Hi Andrew,
Yep, forgot the attachments. I did reproduce the issue, please find
the bz2 files attached. Please tell if you need hb_report being used.
Thank you !
2014-10-07 5:07 GMT+02:00 Andrew Beekhof :
> I think you forgot the attachments (and my eyes are going blind trying to
> read the wor
I think you forgot the attachments (and my eyes are going blind trying to read
the word-wrapped logs :-)
On 26 Sep 2014, at 6:37 pm, Sékine Coulibaly wrote:
> Hi everyone,
>
> I'm trying my best to diagnose a strange behaviour of my cluster.
>
> My cluster is basically a Master-Slave Postgre
Hi everyone,
I'm trying my best to diagnose a strange behaviour of my cluster.
My cluster is basically a Master-Slave PostgreSQL cluster, with a VIP.
Two nodes (clustera and clusterb). I'm running RHEL 6.5, Corosync
1.4.1-1 and Pacemaker 1.1.10.
For the sake of diagnostic simplicity, I took
On 7 Mar 2014, at 2:39 am, Jay Janssen wrote:
>
> primitive p_service ... \
>op monitor interval="2s" role="Master" \
>op monitor interval="5s" role="Slave" \
>op start timeout="1s" interval="0"
> ms ms_service p_service \
>meta master-max="3" clone-max="3" t
primitive p_service ... \
op monitor interval="2s" role="Master" \
op monitor interval="5s" role="Slave" \
op start timeout="1s" interval="0"
ms ms_service p_service \
meta master-max="3" clone-max="3" target-role="Started"
is-managed="true" ordered="false" int
On 10/17/13 11:15, Sam Gardner wrote:
I have a two-node, six resource cluster configured.
Two VIP addresses w/link monitoring, and a DRBD master/slave set configured
exactly as in the Clusters from Scratch documentation.
I want to make the DRBD master always be on the same node as the ExternalV
I have a two-node, six resource cluster configured.
Two VIP addresses w/link monitoring, and a DRBD master/slave set configured
exactly as in the Clusters from Scratch documentation.
I want to make the DRBD master always be on the same node as the
ExternalVIP in my configuration.
To do this, I r
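One common way to express this is a colocation plus an order constraint; a crm-shell sketch, assuming the resources are named ExternalVIP and ms_drbd (adjust to the actual configuration):

```
# Place the DRBD Master on whichever node runs ExternalVIP,
# and bring the VIP up before promoting DRBD there
colocation drbd-master-with-vip inf: ms_drbd:Master ExternalVIP
order vip-before-drbd-promote inf: ExternalVIP:start ms_drbd:promote
```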
Hello Rainer,
> Hi Felix,
> maybe my hint is worthless, but have you implemented the crm_master calls
> in your RA ?
I had not. And that was it - so simple :-)
Thanks
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.or
Hi Felix,
maybe my hint is worthless, but have you implemented the crm_master calls in your RA?
See Stateful RA demo $CRM_MASTER calls.
Rainer
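For reference, the crm_master calls in question are usually a pair like the following inside the RA (scores are arbitrary; crm_master is a convenience wrapper around crm_attribute):

```
# In the RA's monitor/start path: advertise this node's promotion preference
crm_master -l reboot -v 100
# In the demote/stop path: withdraw the preference
crm_master -l reboot -D
```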
Sent: Wednesday, 10 April 2013 at 09:58
From: "Felix Zachlod"
To: "'The Pacemaker cluster resource manager'"
Hello!
I have another problem with my resource agent, which should run in a
master/slave fashion.
I successfully tested the RA with ocf-tester and it completes any promote or
demote action:
fctarget[14659]: DEBUG: Resource is running
fctarget[14659]: DEBUG: Resource is currently running as Slave
f
Subject: [Pacemaker] master-slave resource repeats restart
Hi,
I am using Pacemaker-1.1.
- pacemaker f722cf1ff9 (2012 Oct10)
- corosync dc7002195a (2012 Oct11)
If the monitor (_on-fail is stop_) of a master resource fails, the
resource repeatedly restarts on other
- Original Message -
> From: "Kazunori INOUE"
> To: "pacemaker@oss"
> Cc: shimaza...@intellilink.co.jp
> Sent: Monday, October 15, 2012 4:21:27 AM
> Subject: [Pacemaker] master-slave resource repeats restart
>
> Hi,
>
> I am using Pace
Hi Andrew,
I confirmed that both problems were fixed.
Thanks.
(12.09.20 07:58), Andrew Beekhof wrote:
On Fri, Sep 14, 2012 at 7:26 PM, Kazunori INOUE
wrote:
Hi Andrew,
I confirmed that this problem had been resolved.
- ClusterLabs/pacemaker : 7a9bf21cfc
However, I found two problems.
Ah,
On Fri, Sep 14, 2012 at 7:26 PM, Kazunori INOUE
wrote:
> Hi Andrew,
>
> I confirmed that this problem had been resolved.
> - ClusterLabs/pacemaker : 7a9bf21cfc
>
> However, I found two problems.
Ah, I see what you mean.
I believe https://github.com/beekhof/pacemaker/commit/7ecc279 should
fix both
- Original Message -
> From: "Kazunori INOUE"
> To: "The Pacemaker cluster resource manager"
> Sent: Friday, September 14, 2012 4:26:52 AM
> Subject: Re: [Pacemaker] master/slave resource does not stop (tries start
> repeatedly)
>
> Hi Andre
Hi Andrew,
I confirmed that this problem had been resolved.
- ClusterLabs/pacemaker : 7a9bf21cfc
However, I found two problems.
(1) An orphan is output in crm_mon.
# crm_mon -rf1
:
Full list of resources:
Master/Slave Set: msAP [prmAP]
Stopped: [ prmAP:0 prmAP:1 ]
Mig
On Tue, Sep 11, 2012 at 9:13 PM, Andrew Beekhof wrote:
> Yikes!
>
> Fixed in:
>https://github.com/beekhof/pacemaker/commit/7d098ce
That link should have been:
https://github.com/beekhof/pacemaker/commit/c1f409baaaf388d03f6124ec0d9da440445c4a23
>
> On Fri, Sep 7, 2012 at 7:49 PM, Kazuno
Yikes!
Fixed in:
https://github.com/beekhof/pacemaker/commit/7d098ce
On Fri, Sep 7, 2012 at 7:49 PM, Kazunori INOUE
wrote:
> Hi,
>
> I am using Pacemaker-1.1.
> - ClusterLabs/pacemaker : 872a2f1af1 (Sep 07)
>
> Though a monitor of master resource fails and there is no node which
> the master/
> Firewall:slave cannot run on a different node. I assume you are
> trying to have the DRBD masters on the same node? That should be
> something like:
>
> colocation FirewallDiskWithWork inf: Firewall:Master Config:Master
that's it! thanks
- Original Message -
> From: "Luca Meron"
> To: "The Pacemaker cluster resource manager"
> Sent: Monday, September 10, 2012 12:39:30 PM
> Subject: [Pacemaker] Master/Slave drbd setup not working as expected
>
> Hi.
> I've a simple confi
Hi.
I've a simple config with Corosync 1.4.2 on Ubuntu 12.04 which is causing me
some trouble.
I'm actually handling two drbd master/slave resources. The problem is that one
of the two is not activated on the slave, causing it to remain out of sync!
The setup is very simple (follows), the Confi
I don't use this resource in production, but I think you could
experiment with a *location* constraint.
You can also see my cluster resource Mysql Ping (I use it in production
for MySQL master-master HA): https://github.com/Sov1et/ocf-mysqlping. And
some explanation (in Russian): http://habrah
Thanks very much for the link. The percona mysql script does pretty much
exactly what I need in regards to master/slave promotion/demotion. I did
run into a couple of issues revolving around virtual IPs and how they should
follow the master/slave(s) around.
The problem that I'm now running into
Hello.
Check it out https://code.launchpad.net/percona-prm.
And presentation:
http://www.percona.com/files/presentations/percona-live/nyc-2011/PerconaLiveNYC2011-MySQL-High-Availability-with-Pacemaker.pdf
2011/8/15 Michael Szilagyi
> I'm already using the mysql RA file from
> https://github.com/
I'm already using the mysql RA file from
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/mysql
(which also seems to have replication support in it).
Basically what seems to be happening is that pacemaker detects that the
master has dropped and promotes a slave up to master. Ho
Hi,
On Sat, Aug 13, 2011 at 2:53 AM, Michael Szilagyi wrote:
> I'm new to Pacemaker and trying to understand exactly what it can and can't
> do.
> I currently have a small, mysql master/slave cluster setup that is getting
> monitored within Heartbeat/Pacemaker: What I'd like to be able to do (an
I'm new to Pacemaker and trying to understand exactly what it can and can't
do.
I currently have a small, mysql master/slave cluster setup that is getting
monitored within Heartbeat/Pacemaker: What I'd like to be able to do (and
am hoping Pacemaker will do) is to have 1 node designated as Master
Hi,
On Wed, Mar 16, 2011 at 08:47:18AM +0100, Uwe Grawert wrote:
> Hi,
>
> On 16.03.11 02:27, Sam Pinar wrote:
> > I've setup a two node cluster for testing using the "Clusters from Scratch -
> > Apache, DRBD and GFS2" guide. I've set it up and fail over works like a
> > charm, but I want one o
Hi,
On 16.03.11 02:27, Sam Pinar wrote:
> I've setup a two node cluster for testing using the "Clusters from Scratch -
> Apache, DRBD and GFS2" guide. I've set it up and fail over works like a
> charm, but I want one of the nodes to be a master; and fail back resources
> to the master when it co
Hi,
I've setup a two node cluster for testing using the "Clusters from Scratch -
Apache, DRBD and GFS2" guide. I've set it up and fail over works like a
charm, but I want one of the nodes to be a master; and fail back resources
to the master when it comes back up. At the moment, the resource stays
Actually this is the correct patch. Sorry for the delay.
diff -r 3a1cab4892a4 pengine/master.c
--- a/pengine/master.c Fri Jan 14 11:23:56 2011 +0100
+++ b/pengine/master.c Wed Jan 19 13:12:50 2011 +0100
@@ -299,10 +299,12 @@ static void master_promotion_order(resou
* master instance sho
On Wed, Jan 19, 2011 at 9:47 AM, Andrew Beekhof wrote:
> On Sat, Dec 25, 2010 at 12:44 AM, ruslan usifov
> wrote:
>>
>>
>> 2010/12/20 Andrew Beekhof
>>>
>>> Actually the libxml2 guy and I figured out that problem just now.
>>> I can't find any memory corruption, but on the plus side, it does see
On Tue, Dec 21, 2010 at 06:35:04PM +0100, Marc Wilmots wrote:
> Hi,
>
> I have two nodes rspa and rspa2 (both Centos 5.3 32bits) with the following
> packages:
>
> drbd83-8.3.8-1.el5.centos
Not sure what exactly that is, but if it is equivalent to "8.3.8", not
"8.3.8.1", as tagged in git, then t
On Sat, Dec 25, 2010 at 12:44 AM, ruslan usifov wrote:
>
>
> 2010/12/20 Andrew Beekhof
>>
>> Actually the libxml2 guy and I figured out that problem just now.
>> I can't find any memory corruption, but on the plus side, it does seem
>> that 1.1 is unaffected - perhaps you could try that until I ma
On Tue, Dec 21, 2010 at 6:35 PM, Marc Wilmots wrote:
> Hi,
>
> I have two nodes rspa and rspa2 (both Centos 5.3 32bits) with the following
> packages:
>
> drbd83-8.3.8-1.el5.centos
> heartbeat-3.0.3-2.3.el5
> pacemaker-1.0.10-1.4.el5
>
> rspa is stopped, and rspa2 has all the resources (IP, FileSy
2010/12/20 Andrew Beekhof
> Actually the libxml2 guy and I figured out that problem just now.
> I can't find any memory corruption, but on the plus side, it does seem
> that 1.1 is unaffected - perhaps you could try that until I manage to
> track down the problem.
>
> Sorry, I misunderstood
Hi,
I have two nodes rspa and rspa2 (both Centos 5.3 32bits) with the following
packages:
drbd83-8.3.8-1.el5.centos
heartbeat-3.0.3-2.3.el5
pacemaker-1.0.10-1.4.el5
rspa is stopped, and rspa2 has all the resources (IP, FileSystem, Mysql,
Apache and DRBD Master)
When I start heartbeat on rspa, for
On Mon, Dec 20, 2010 at 1:27 PM, Andrew Beekhof wrote:
> On Sun, Dec 19, 2010 at 11:12 PM, ruslan usifov
> wrote:
>>
>>
>> 2010/12/11 Andrew Beekhof
>>>
>>> On Fri, Dec 10, 2010 at 4:59 PM, ruslan usifov
>>> wrote:
>>> > and to me what to do?
>>>
>>> Nothing yet, there looks to be some memory
On Sun, Dec 19, 2010 at 11:12 PM, ruslan usifov wrote:
>
>
> 2010/12/11 Andrew Beekhof
>>
>> On Fri, Dec 10, 2010 at 4:59 PM, ruslan usifov
>> wrote:
>> > and to me what to do?
>>
>> Nothing yet, there looks to be some memory corruption going on.
>> With that file I've been able to reproduce loc
2010/12/11 Andrew Beekhof
> On Fri, Dec 10, 2010 at 4:59 PM, ruslan usifov
> wrote:
> > and to me what to do?
>
> Nothing yet, there looks to be some memory corruption going on.
> With that file I've been able to reproduce locally. I'll let you know
> when there is a fix (hopefully very soon).
On Fri, Dec 10, 2010 at 4:59 PM, ruslan usifov wrote:
> and to me what to do?
Nothing yet, there looks to be some memory corruption going on.
With that file I've been able to reproduce locally. I'll let you know
when there is a fix (hopefully very soon).
>
> 2010/12/10 Andrew Beekhof
>>
>> On
And what should I do?
2010/12/10 Andrew Beekhof
> On Fri, Dec 10, 2010 at 11:16 AM, ruslan usifov
> wrote:
> > you mean some think like this:
> >
> > Dec 07 15:14:05 storage1 crmd: [16003]: notice: save_cib_contents: Saved
> CIB
> > contents after PE crash to /var/lib/pengine/pe-core
> > -121f
On Fri, Dec 10, 2010 at 11:16 AM, ruslan usifov wrote:
> you mean something like this:
>
> Dec 07 15:14:05 storage1 crmd: [16003]: notice: save_cib_contents: Saved CIB
> contents after PE crash to /var/lib/pengine/pe-core
> -121f59f1-ca5c-4ce4-996c-53f4aa617ac3.bz2
perfect
>
>
> ??
> If so, i a
you mean something like this:
Dec 07 15:14:05 storage1 crmd: [16003]: notice: save_cib_contents: Saved CIB
contents after PE crash to /var/lib/pengine/pe-core
-121f59f1-ca5c-4ce4-996c-53f4aa617ac3.bz2
??
If so, I attached it to this email
2010/12/10 Andrew Beekhof
> On Fri, Dec 10, 2010 at
On Fri, Dec 10, 2010 at 10:18 AM, ruslan usifov wrote:
> I don't know how to see version of pacemaker, crm doesn't provide -v (or -V
> or --version) option, but I got source from here
> http://hg.clusterlabs.org/pacemaker/stable-1.0/archive/tip.tar.bz2, as
> result I download Pacemaker-1-0-b0266dd
I don't know how to see the version of pacemaker; crm doesn't provide a -v
(or -V or --version) option, but I got the source from
http://hg.clusterlabs.org/pacemaker/stable-1.0/archive/tip.tar.bz2, and as a
result I downloaded Pacemaker-1-0-b0266dd5ffa9.tar.bz2
and here is my backtrace:
gdb /usr/lib/heartbea
On Thu, Dec 9, 2010 at 3:06 PM, ruslan usifov wrote:
> Hi! Thanks for your reply!
>
> I made some mistakes in configuration, and now I again have a segfault on
> the latest sources (pacemaker-1-0_b0266dd5ffa9):
>
> Dec 9 16:51:29 storage0 kernel: [ 407.923417] pengine[891]: segfault at 8
> ip b77289
Hi! Thanks for your reply!
I made some mistakes in configuration, and now I again have a segfault on
the latest sources (pacemaker-1-0_b0266dd5ffa9):
Dec 9 16:51:29 storage0 kernel: [ 407.923417] pengine[891]: segfault at 8
ip b77289b8 sp bfe38120 error 4 in libpengine.so.3.0.0[b771d000+33000]
Dec
On Wed, Dec 8, 2010 at 12:26 PM, ruslan usifov wrote:
> hello
>
> I have a 2-node cluster with the following conf:
> node storage0
> node storage1
> primitive drbd_web ocf:linbit:drbd \
> params drbd_resource="web" \
> op monitor interval="30s" timeout="60s"
> primitive iscsi_ip ocf:heartbe
hello
I have a 2-node cluster with the following conf:
node storage0
node storage1
primitive drbd_web ocf:linbit:drbd \
params drbd_resource="web" \
op monitor interval="30s" timeout="60s"
primitive iscsi_ip ocf:heartbeat:IPaddr2 \
params ip="192.168.17.19" nic="eth1:1" cidr_netmask
On Fri, Jul 23, 2010 at 5:30 PM, Freddie Sessler wrote:
> Dan, thanks for your reply and the documentation. That all makes sense. I
> guess what I am confused about is how to tell pacemaker to start mysql on both
> servers. My limited experience so far has been that pacemaker starts up
> mysql on the p
Dan, thanks for your reply and the documentation. That all makes sense. I
guess what I am confused about is how to tell pacemaker to start mysql on both
servers. My limited experience so far has been that pacemaker starts up
mysql on the primary and in the event of a failure it will move the virtual
ip
First take a look at this
http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
It contains all you need for this kind of setup. I'm not sure whether the
M/S relationship extends to resources other than DRBD, but in this case
you don't actually need an M/S relationship (from my point of view).
1.
I have a quick question: is the Master/Slave setting in pacemaker only
allowed for DRBD devices? Can you use it to create other Master/Slave
relationships? Do all the resource agents potentially involved in this
need to be aware of the Master/Slave relationship? I am trying to set up a
pair
Subject: Re: [Pacemaker] Master/Slave not failing over
I modified my resource to set "migration-threshold=1" and "failure-timeout=5s".
Now the resource is finally switching to Master on the slave node when the
original master fails. However, shortly after it switches t
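For reference, those meta attributes can be set from the crm shell on an existing resource; a sketch, with p_service standing in for the actual resource name:

```
# Fail over after a single failure, and expire the failure record after 5s
crm resource meta p_service set migration-threshold 1
crm resource meta p_service set failure-timeout 5s
```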
-Original Message-
From: Eliot Gable [mailto:ega...@broadvox.com]
Sent: Friday, June 25, 2010 1:45 PM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Master/Slave not failing over
When I issue the 'ip addr flush eth1' co
-Original Message-
From: Eliot Gable [mailto:ega...@broadvox.com]
Sent: Friday, June 25, 2010 1:08 PM
To: The Pacemaker cluster resou
From: Eliot Gable [mailto:ega...@broadvox.com]
Sent: Friday, June 25, 2010 12:27 PM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Master/Slave not failing over
After looking at the drbd master/slave RA, I think it is now clear. It looks
like crm_master, being a wrapper for crm_attribute, actually specifies
ev
-Original Message-
From: Eliot Gable [mailto:ega...@broadvox.com]
Sent: Friday, June 25, 2010 12:17 PM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Master/Slave not failing over
Thanks. Should I upda
-Original Message-
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Friday, June 25, 2010 8:26 AM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Master/Sl
Please let me know if I am wrong: This requirement can be satisfied by
customizing the used RA.
Thanks,
Michael
> On Fri, Jun 25, 2010 at 12:43 AM, Eliot Gable wrote:
>> I am still having issues with the master/slave resource. When I cause one of
>> the monitoring actions to fail,
>
> as well
>
>
> -Original Message-
> From: Dejan Muhamedagic [mailto:deja...@fastmail.fm]
> Sent: Thursday, June 24, 2010 12:37 PM
> T
-Original Message-
From: Dejan Muhamedagic [mailto:deja...@fastmail.fm]
Sent: Thursday, June 24, 2010 12:37 PM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Master/Slave not failing over
Hi
From: Eliot Gable [mailto:ega...@broadvox.com]
Sent: Thursday, June 24, 2010 11:55 AM
To: The Pacemaker cluster resource manager
Subject: [Pacemaker] Master/Slave not failing over
I am using the latest CentOS 5.5 packages for pacemaker/corosync. I have a
master/slave resourc
I am using the latest CentOS 5.5 packages for pacemaker/corosync. I have a
master/slave resource up and running, and when I make the master fail, instead
of immediately promoting the slave, it restarts the failed master and
re-promotes it back to master. This takes longer than if it would just
> So, application acts as "master" if it was able to bind to the pre-configured
> IP and as a "node" if it wasn't. If it's a master it listens on an additional
> port and receives updates from nodes. Each application pulls video feed out
> of attached video cameras and stores them on the local d
On May 28, 2010, at 11:17 AM, Florian Haas wrote:
> On 05/28/2010 02:37 PM, Vadym Chepkov wrote:
>>
>> On May 28, 2010, at 8:27 AM, Florian Haas wrote:
The operative word was "started". Do you think I should still go multi-state RA
for this application?
>>>
>>> If the application whi
On 05/28/2010 02:37 PM, Vadym Chepkov wrote:
>
> On May 28, 2010, at 8:27 AM, Florian Haas wrote:
>>>
>>> The operative word was "started". Do you think I should still go multi-state RA
>>> for this application?
>>
>> If the application which that RA applies to distinguishes between roles
>> equivalent
On May 28, 2010, at 8:27 AM, Florian Haas wrote:
>>
>> The operative word was "started". Do you think I should still go multi-state RA
>> for this application?
>
> If the application which that RA applies to distinguishes between roles
> equivalent to a Master and a Slave, and you want the RA to mana
On 2010-05-28 14:20, Vadym Chepkov wrote:
>
> On May 28, 2010, at 8:12 AM, Florian Haas wrote:
>
>> On 2010-05-28 14:01, Vadym Chepkov wrote:
>>> Hi,
>>>
>>> I want to convert our home-made application to be managed by pacemaker
>>> cluster.
>>> The way it works now: application starts, discove
On May 28, 2010, at 8:12 AM, Florian Haas wrote:
> On 2010-05-28 14:01, Vadym Chepkov wrote:
>> Hi,
>>
>> I want to convert our home-made application to be managed by pacemaker
>> cluster.
>> The way it works now: application starts, discovers all IPs configured on
>> the system and if it see
On 2010-05-28 14:01, Vadym Chepkov wrote:
> Hi,
>
> I want to convert our home-made application to be managed by pacemaker
> cluster.
> The way it works now: application starts, discovers all IPs configured on the
> system and if it sees preconfigured IP it becomes "master" and will serve
> co
Hi,
I want to convert our home-made application to be managed by pacemaker cluster.
The way it works now: application starts, discovers all IPs configured on the
system and if it sees preconfigured IP it becomes "master" and will serve
configuration requests,
if not - "node" and will try to co
On Tue, Feb 2, 2010 at 9:36 PM, Erich Weiler wrote:
>> Thanks for the tip, I think I'm closer... I set a preference and then
>> tried to start the LDAP service, and the crm monitor shows testvm3 (my
>> preferred master) trying to start LDAP as a master repeatedly but failing.
>> It does this lik
As I understand the documentation, you need an OCF agent, since you have to
implement the monitor, promote and demote methods, which are not supported in
LSB-conformant agents.
On 02.02.2010 at 21:36, Erich Weiler wrote:
>> Thanks for the tip, I think I'm closer... I set a preference and then tried
>> to
Thanks for the tip, I think I'm closer... I set a preference and then
tried to start the LDAP service, and the crm monitor shows testvm3 (my
preferred master) trying to start LDAP as a master repeatedly but
failing. It does this like 2 times per second. I think I'm very, very
close to nailin
The script should set a preference for being promoted using crm_master.
Have a look at LinBit's drbd script for a good example.
Thanks for the tip, I think I'm closer... I set a preference and then
tried to start the LDAP service, and the crm monitor shows testvm3 (my
preferred master) trying
On Tue, Feb 2, 2010 at 5:10 AM, Erich Weiler wrote:
> How does one promote a slave to master automatically?
The script should set a preference for being promoted using crm_master.
Have a look at LinBit's drbd script for a good example.
OK - it seems I've achieved what I want via the following configuration:
node testvm1
node testvm2
node testvm3
primitive LDAP lsb:ldap \
op monitor interval="40s" \
op monitor interval="41s" role="Master"
primitive LDAP-IP ocf:heartbeat:IPaddr2 \
params ip="10.1.1.80" cid
if you make the LDAP daemon listen on all available interfaces, it will accept
connections on the on-demand activated floating-ip.
Well, I'm trying to get this to work and running into a wall... I've
got 3 servers, I want LDAP to run on testvm2 and testvm3. I've
configured LDAP on those 2 s
Thanks! This will be helpful...
Rafał Kupka wrote:
On Sun, Jan 31, 2010 at 06:39:28PM -0800, Erich Weiler wrote:
Hi,
However, it seems that when LDAP starts, the IP needs to be live on each
node for the LDAP server to bind on that IP. Is that how the
master/slave setup works in pacemaker?
On Sun, Jan 31, 2010 at 06:39:28PM -0800, Erich Weiler wrote:
Hi,
> However, it seems that when LDAP starts, the IP needs to be live on each
> node for the LDAP server to bind on that IP. Is that how the
> master/slave setup works in pacemaker? Does it use iptables or
> something to block
Hi,
if you make the LDAP daemon listen on all available interfaces, it will accept
connections on the on-demand activated floating-ip.
Or, if you really want to make the ldap daemon listen only on the floating-ip,
you would have to write a resource agent for ldap which will edit the ldap
config
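A third option sometimes used in this situation is the kernel's nonlocal-bind switch, which lets a daemon bind an IP address that is not (yet) configured on any local interface; whether that is acceptable depends on the setup:

```
# Allow binding to an address not currently present on this host
sysctl -w net.ipv4.ip_nonlocal_bind=1
```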
Hi All,
Forgive a probably elementary question, but I'm new to Pacemaker and am
not clear on exactly how Master/Slave relationships work. Here's my
confusion:
My initial thought was that with a master/slave service, the service is
started on both nodes (assuming I have 2 nodes). But, I w
On Mon, Aug 24, 2009 at 2:52 PM, Andrew Beekhof wrote:
> On Mon, Aug 24, 2009 at 2:33 PM, Diego
> Remolina wrote:
>> I was noticing this even before the 1.0.5 update right after I changed from
>> heartbeat to openais. I assume there may be some files in that folder from
>> back when I was using hea
On Mon, Aug 24, 2009 at 2:33 PM, Diego
Remolina wrote:
> I was noticing this even before the 1.0.5 update right after I changed from
> heartbeat to openais. I assume there may be some files in that folder from
> back when I was using heartbeat which were causing the problem even with the
> older pa
I was noticing this even before the 1.0.5 update right after I changed
from heartbeat to openais. I assume there may be some files in that
folder from back when I was using heartbeat which were causing the
problem even with the older pacemaker version.
If I want to delete all files in /var/lib
The stack trace makes it look like a logging deadlock.
I'll ask the openais maintainer about it.
On Fri, Aug 21, 2009 at 5:11 PM, Diego
Remolina wrote:
> Here is what I am seeing now right after stopping openais, updating
> heartbeat and pacemaker and trying to start openais again:
>
> [r...@phys-
On Fri, Aug 21, 2009 at 9:15 PM, hj lee wrote:
> Hi,
>
> I had the same problem after upgrading to pacemaker 1.0.5 in RHEL 5.3. After
> deleting all the files in /var/lib/pengine/ directory, this problem seems
> gone, I haven't seen it so far. Maybe it is related the UID change in
> pengine(haclust
Hi,
I had the same problem after upgrading to pacemaker 1.0.5 in RHEL 5.3. After
deleting all the files in /var/lib/pengine/ directory, this problem seems
gone, I haven't seen it so far. Maybe it is related to the UID change in
pengine (hacluster to daemon) in 1.0.5, but I'm not exactly sure.
hj
On Fri,
1 - 100 of 125 matches