You only show a piece of your config. I think you have the XML of your
VM under Filesystem_CDrive1; that filesystem needs to be available on
both nodes.
2015-12-04 17:14 GMT+01:00 Klecho :
> Hi list,
> My issue is the following:
>
> I have a very stable cluster, using Corosync 2.1.0.26 and Pacemaker 1
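For context, a minimal crm shell sketch of the usual arrangement: the filesystem that holds the VM definition is grouped with the VM so it is mounted wherever the VM runs. This assumes shared storage and the ocf:heartbeat:VirtualDomain agent; the device, paths and names below are placeholders, not the poster's actual config.
# hypothetical sketch -- filesystem holding the VM XML, grouped with the VM itself
primitive Filesystem_CDrive1 ocf:heartbeat:Filesystem \
        params device="/dev/shared_vg/vm_lv" directory="/vmstore" fstype="ext4" \
        op monitor interval="20s" timeout="40s"
primitive VM1 ocf:heartbeat:VirtualDomain \
        params config="/vmstore/VM1.xml" \
        op monitor interval="30s" timeout="60s"
# a group implies both colocation and ordering: mount first, then start the VM
group grp_VM1 Filesystem_CDrive1 VM1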
Hi,
Thanks for the help Andrew! It turns out that I mistakenly started the f5
agent's unix service on all three nodes before adding its resource to
pacemaker, and this was causing the above errors. Once I ensured that only
one service was brought up (on the node on which I added it as a resource
t
1.1.6 is really too old
in any case, rc=5 'not installed' means we can't find an init script of that
name in /etc/init.d
On 2 Jul 2014, at 2:07 pm, Vijay B wrote:
> Hi,
>
> I'm puppetizing resource deployment for pacemaker and corosync, and as part
> of it, am creating a resource on one of thr
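As a quick sanity check for the rc=5 case above (a sketch; "myservice" is a placeholder for whatever name the lsb: resource uses):
# lsb:<name> maps straight to /etc/init.d/<name>, which must exist and be
# executable on every node, not just the one where the resource was added
ls -l /etc/init.d/myservice
# once the script is in place, clear the failed probe
crm_resource --resource myservice --cleanup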
On 11 Jun 2014, at 10:59 pm, Patrick Hemmer wrote:
> From: Andrew Beekhof
> Sent: 2014-06-11 02:36:15 EDT
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] resources not rebalancing
>
>> On 11 Jun 2014, at 3:44 pm, Patrick Hemmer
>> wrote:
From: Andrew Beekhof
Sent: 2014-06-11 02:36:15 EDT
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] resources not rebalancing
> On 11 Jun 2014, at 3:44 pm, Patrick Hemmer wrote:
>
>>> Right. But each node still has 4998000+ units with which
On 11 Jun 2014, at 3:44 pm, Patrick Hemmer wrote:
>>>
>> Right. But each node still has 4998000+ units with which to accommodate
>> something that only requires 1.
>> That's about 0.2% of the remaining capacity, so wherever it starts, it's
>> hardly making a dint.
>>
> You're thinking of t
From: Andrew Beekhof
Sent: 2014-06-10 02:25:09 EDT
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] resources not rebalancing
> On 5 Jun 2014, at 10:38 am, Patrick Hemmer wrote:
>
>> From: Andrew Beekhof
>> Sent: 2014-06-04 20:15:22 EDT
>> T
On 5 Jun 2014, at 10:38 am, Patrick Hemmer wrote:
> From: Andrew Beekhof
> Sent: 2014-06-04 20:15:22 EDT
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] resources not rebalancing
>
>> On 5 Jun 2014, at 12:57 am, Patrick Hemmer
>> wrot
From: Andrew Beekhof
Sent: 2014-06-04 20:15:22 EDT
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] resources not rebalancing
> On 5 Jun 2014, at 12:57 am, Patrick Hemmer wrote:
>
>> From: Andrew Beekhof
>> Sent: 2014-06-04 04:15:48 E
>> T
On 5 Jun 2014, at 12:57 am, Patrick Hemmer wrote:
> From: Andrew Beekhof
> Sent: 2014-06-04 04:15:48 E
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] resources not rebalancing
>
>> On 4 Jun 2014, at 4:22 pm, Patrick Hemmer
>> wrot
From: Andrew Beekhof
Sent: 2014-06-04 04:15:48 E
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] resources not rebalancing
> On 4 Jun 2014, at 4:22 pm, Patrick Hemmer wrote:
>
>> Testing some different scenarios, and after bringing a node back online,
On 4 Jun 2014, at 4:22 pm, Patrick Hemmer wrote:
> Testing some different scenarios, and after bringing a node back online, none
> of the resources move to it unless they are restarted. However
> default-resource-stickiness is set to 0, so they should be able to move
> around freely.
>
> # p
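For reference, the two settings this thread keeps circling around, shown as generic crm shell one-liners rather than the poster's configuration:
# stickiness 0 = no penalty for moving a resource once placed;
# the "units"/capacity discussion above only applies when a placement strategy is enabled
crm configure rsc_defaults resource-stickiness="0"
crm configure property placement-strategy="balanced"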
On 27 May 2014, at 8:23 pm, Danilo Malcangio wrote:
> I've removed the location constraint and it seems the resources don't move
> anymore if I reboot BX-1.
> During reboot I noticed on crm_mon that resources for one second appeared
> offline and then they stayed on BX-2. Does anyone know why
I've removed the location constraint and it seems the resources don't
move anymore if I reboot BX-1.
During reboot I noticed on crm_mon that resources for one second
appeared offline and then they stayed on BX-2. Does anyone know why that
happened?
I've tried reconfiguring my cluster following
On 22 May 2014, at 9:00 pm, Danilo Malcangio wrote:
> Hi Andrew, first of all thanks for answering.
>
>> Almost certainly the node is configured to start those resources at bootup.
>> Don't do that :)
>>
>
> Are you advising me to delete the location constraint? (location
> prefer-et-ipbx-1
Hi Andrew, first of all thanks for answering.
Danilo Malcangio - Eletech
Almost certainly the node is configured to start those resources at bootup.
Don't do that :)
Are you advising me to delete the location constraint? (location
prefer-et-ipbx-1 cluster-group 100: BX-1)
Or is it something el
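The earlier "don't start those resources at bootup" advice usually translates to disabling the init scripts, so that only Pacemaker starts the services; a sketch for Debian, with a placeholder service name:
# cluster-managed services must not also be launched by the init system,
# otherwise they come up on every node at boot and the cluster finds duplicates
update-rc.d asterisk disable
# repeat on every node, for each service that Pacemaker manages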
On 22 May 2014, at 7:04 pm, emmanuel segura wrote:
> This isn't related to your problem, but I saw this in your cluster config:
> primitive cluster-ntp lsb:ntp. I don't think it is a good idea to have NTP in
> failover (it is a local service); in a cluster the time needs to be synchronized on
> all nod
On 22 May 2014, at 5:31 pm, Danilo Malcangio wrote:
> Hi everyone,
> I've created an active/passive 2 node cluster following the documentation on
> clusterlabs.
> My cluster has the following characteristics
> Debian Wheezy 7.2.0
> Pacemaker 1.1.7
> Corosync 1.4.2
>
> I've made it with the fol
This isn't related to your problem, but I saw this in your cluster config:
primitive cluster-ntp lsb:ntp. I don't think it is a good idea to have NTP in
failover (it is a local service); in a cluster the time needs to be synchronized
on all nodes
2014-05-22 9:31 GMT+02:00 Danilo Malcangio :
> Hi everyone
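A sketch of that cleanup in crm shell plus Debian init tools; the resource name cluster-ntp comes from the config quoted above, the rest is generic:
# stop managing NTP in the cluster and run it as an ordinary local service everywhere
crm resource stop cluster-ntp
crm configure delete cluster-ntp     # remove it from any group/constraints first
# then, on each node:
update-rc.d ntp defaults
service ntp start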
Lars,
It's 1:10 seconds, not 0:10! :-)
I will share the configs tomorrow when the cluster is available.
2013/10/31 Lars Marowsky-Bree
> On 2013-10-29T18:12:51, Саша Александров wrote:
>
> > Oct 29 13:04:21 wcs2 pengine[2362]: warning: stage6: Scheduling Node
> wcs1
> > for STONITH
> > Oct 29
On 2013-10-29T18:12:51, Саша Александров wrote:
> Oct 29 13:04:21 wcs2 pengine[2362]: warning: stage6: Scheduling Node wcs1
> for STONITH
> Oct 29 13:04:21 wcs2 crmd[2363]: notice: te_fence_node: Executing reboot
> fencing operation (53) on wcs1 (timeout=6)
> Oct 29 13:05:33 wcs2 stonith-n
On 30 Oct 2013, at 1:12 am, Саша Александров wrote:
> Hi!
>
> I have a 2-node cluster with shared storage and SBD-fencing.
> One node was down for maintenance.
> Due to external reasons, the second node was rebooted. After reboot the service never
> got up:
>
> Oct 29 13:04:21 wcs2 pengine[2362]: wa
On Mon, Nov 26, 2012 at 11:37 AM, Pedro Sousa wrote:
> Hi,
>
> thank you for your answer.
>
> Do you think that if I increase the resource timeout that's failing it will
> solve the problem?
It couldn't hurt.
>
> Regards,
> Pedro Sousa
>
>
> On Mon, Nov 26, 2012 at 12:23 AM, Andrew Beekhof wro
Hi,
thank you for your answer.
Do you think that if I increase the resource timeout that's failing it
will solve the problem?
Regards,
Pedro Sousa
On Mon, Nov 26, 2012 at 12:23 AM, Andrew Beekhof wrote:
> On Wed, Nov 21, 2012 at 2:02 AM, Pedro Sousa wrote:
> > Hi all,
> >
> > some strange b
On Wed, Nov 21, 2012 at 2:02 AM, Pedro Sousa wrote:
> Hi all,
>
> some strange behavior is happening when I do some more intensive work on my
> cluster like running a bash script or wireshark, some pacemaker resources
> start to time out and fail back to the other node. I was running this
> script
On Mon, Jul 30, 2012 at 11:52 PM, Phil Frost wrote:
> On 07/29/2012 11:15 PM, Andrew Beekhof wrote:
>>
>> If I run:
>>
>> tools/crm_simulate -x ~/Dropbox/phil.xml -Ss | grep "promotion score"
>>
>> I see:
>>
>> drbd_exports:1 promotion score on storage02: 110
>> drbd_exports:0 promotion score on s
On 07/29/2012 11:15 PM, Andrew Beekhof wrote:
If I run:
tools/crm_simulate -x ~/Dropbox/phil.xml -Ss | grep "promotion score"
I see:
drbd_exports:1 promotion score on storage02: 110
drbd_exports:0 promotion score on storage01: 6
The 100 comes from one of your rules, which says:
On Sat, Jun 30, 2012 at 1:59 AM, Phil Frost wrote:
> On 06/28/2012 01:29 PM, David Vossel wrote:
>>
>> I've been looking into multistate resource colocations quite a bit this
>> week. I have a branch I'm working with that may improve this situation for
>> you.
>>
>> If you are feeling brave, test
On 06/28/2012 01:29 PM, David Vossel wrote:
I've been looking into multistate resource colocations quite a bit this week.
I have a branch I'm working with that may improve this situation for you.
If you are feeling brave, test this branch out with your configuration and see
if it fares better
- Original Message -
> From: "Phil Frost"
> To: pacemaker@oss.clusterlabs.org
> Sent: Tuesday, June 26, 2012 9:23:51 AM
> Subject: Re: [Pacemaker] resources not migrating when some are not runnable
> on one node, maybe because of groups or
> master/slave clon
On 06/22/2012 04:40 AM, Andreas Kurz wrote:
I took a look at the cib in case2 and saw this in the status for storage02.
> [quoted status XML not preserved in the archive]
> storage02 will not give up the drbd master since it has a higher score than
> storage01. This coupled with
On 06/21/2012 11:30 PM, David Vossel wrote:
> - Original Message -
>> From: "Phil Frost"
>> To: pacemaker@oss.clusterlabs.org
>> Sent: Tuesday, June 19, 2012 4:25:53 PM
>> Subject: Re: [Pacemaker] resources not migrating when some are not runnable
- Original Message -
> From: "Phil Frost"
> To: pacemaker@oss.clusterlabs.org
> Sent: Tuesday, June 19, 2012 4:25:53 PM
> Subject: Re: [Pacemaker] resources not migrating when some are not runnable
> on one node, maybe because of groups or
> master/slave clon
On 06/19/2012 04:31 PM, David Vossel wrote:
Can you attach a crm_report of what happens when you put the two nodes in
standby please? Being able to see the xml and how the policy engine evaluates
the transitions is helpful.
The resulting reports were a bit big for the list, so I put them in
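For anyone reproducing this, generating the report itself is a one-liner; the time window and output path below are placeholders:
# gather logs, the CIB and PE inputs from all nodes for the given window into a tarball
crm_report -f "2012-06-19 15:00" /tmp/standby-test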
- Original Message -
> From: "Phil Frost"
> To: "The Pacemaker cluster resource manager"
> Sent: Monday, June 18, 2012 8:39:48 AM
> Subject: [Pacemaker] resources not migrating when some are not runnable on
> one node, maybe because of groups or
> master/slave clones?
>
> I'm attempting
On 06/18/2012 04:14 PM, Vladislav Bogdanov wrote:
> 18.06.2012 16:39, Phil Frost wrote:
>> I'm attempting to configure an NFS cluster, and I've observed that under
>> some failure conditions, resources that depend on a failed resource
>> simply stop, and no migration to another node is attempted, e
On 06/18/2012 10:05 AM, Jake Smith wrote:
Why don't you have vg_nfsexports in the group? Not really any point to
a group with only one resource...
You need an order constraint here too... Pacemaker needs to know in
what order to start/stop/promote things. Something like: order
ord_drbd_maste
On 06/18/2012 10:14 AM, Vladislav Bogdanov wrote:
Sets (constraints with more than two members) are evaluated in a
different order.
Try
colocation colo_drbd_master inf: ( drbd_nfsexports_ms:Master ) (
vg_nfsexports ) ( test )
I'm sure that's the wrong order. I've put the parens on each resour
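When only a couple of resources are involved, plain two-resource constraints sidestep the set-ordering question entirely; a sketch reusing the resource names from this thread:
# vg_nfsexports runs where the DRBD master is, and only after promotion
colocation colo_vg_with_master inf: vg_nfsexports drbd_nfsexports_ms:Master
order ord_vg_after_promote inf: drbd_nfsexports_ms:promote vg_nfsexports:start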
18.06.2012 16:39, Phil Frost wrote:
> I'm attempting to configure an NFS cluster, and I've observed that under
> some failure conditions, resources that depend on a failed resource
> simply stop, and no migration to another node is attempted, even though
> a manual migration demonstrates the other
- Original Message -
> From: "Phil Frost"
> To: "The Pacemaker cluster resource manager"
> Sent: Monday, June 18, 2012 9:39:48 AM
> Subject: [Pacemaker] resources not migrating when some are not runnable on
> one node, maybe because of groups or
> master/slave clones?
>
> I'm attemptin
On Wed, Mar 28, 2012 at 5:07 PM, Brian J. Murrell wrote:
> On 12-03-28 10:39 AM, Florian Haas wrote:
>>
>> Probably because your resource agent reports OCF_SUCCESS on a probe
>> operation
>
> To be clear, is this the "status" $OP in the agent?
Nope, monitor. Of course, in your implementation moni
On 12-03-28 10:39 AM, Florian Haas wrote:
>
> Probably because your resource agent reports OCF_SUCCESS on a probe
> operation
To be clear, is this the "status" $OP in the agent?
Cheers,
b.
On 03/28/2012 04:39 PM, Florian Haas wrote:
[...]
Clearly this resource is not running on all nodes, so why is it
being reported as such?
Probably because your resource agent reports OCF_SUCCESS on a probe
operation when it ought to be returning OCF_NOT_RUNNING. Pastebin the
source of ocf:hydra
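For context, a generic sketch of the monitor logic an OCF agent needs so that probes behave; this is not the ocf:hydra agent under discussion, PIDFILE is a placeholder, and it assumes ocf-shellfuncs has been sourced so the OCF_* return codes are defined.
# a probe is just a one-off monitor on a node where the resource may be inactive,
# so "not running" has to be reported honestly instead of OCF_SUCCESS
agent_monitor() {
    if [ ! -f "$PIDFILE" ]; then
        return $OCF_NOT_RUNNING      # cleanly stopped -- the answer a probe expects
    elif kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        return $OCF_SUCCESS          # process is alive
    else
        return $OCF_ERR_GENERIC      # stale pidfile: a real failure
    fi
}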
On Wed, Mar 28, 2012 at 4:26 PM, Brian J. Murrell wrote:
> We seem to have occasion where we find crm_resource reporting that a
> resource is running on more (usually all!) nodes when we query right
> after adding it:
>
> # crm_resource --resource chalkfs-OST_3 --locate
> resource chalkfs-OST00
Dejan:
> How long does the monitor take? I didn't see your configuration,
> but if it takes longer than the interval you set for monitor,
> this looks like exactly that case.
I have run monitor several times and it has never taken as long as the interval.
Under heavy load it might take as long, I su
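The two numbers involved are set per operation; a generic crm shell sketch (ocf:pacemaker:Dummy is just a stand-in resource):
# interval = how often monitor runs; timeout = how long one run may take
# before it is treated as failed -- raise the timeout if monitor is slow under load
primitive example ocf:pacemaker:Dummy \
        op monitor interval="30s" timeout="60s"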
Hi,
On Tue, Nov 08, 2011 at 04:46:53PM +0200, Matti Linnanvuori wrote:
> Andreas Kurz:
> > full logs around this restart would be interesting ... you tested all
> > your own OCF scripts with ocf-tester and they are sane?
>
> Yes, I tested all of our own OCF scripts and they seem to work most of
Andreas Kurz:
> full logs around this restart would be interesting ... you tested all
> your own OCF scripts with ocf-tester and they are sane?
Yes, I tested all of our own OCF scripts and they seem to work most of the time.
The following is the tail of file /var/log/corosync.log after crm resou
On 11/08/2011 09:29 AM, Matti Linnanvuori wrote:
> Andreas Kurz:
>> Beside an update to 1.1.5 or later ... does restarting DB-daemon
>> resource trigger the wanted starts?
>
> No, restarting DB-daemon resource does not trigger the wanted starts. I tried
> "crm resource restart DB-daemon" and dire
Andreas Kurz:
> Beside an update to 1.1.5 or later ... does restarting DB-daemon
> resource trigger the wanted starts?
No, restarting DB-daemon resource does not trigger the wanted starts. I tried
"crm resource restart DB-daemon" and direct init script restart. Pacemaker
seems stuck. I got the f
On 11/07/2011 11:15 AM, Matti Linnanvuori wrote:
>
> On Nov 4, 2011, at 3:57 PM, Andreas Kurz wrote:
>
>> On 11/04/2011 01:52 PM, Matti Linnanvuori wrote:
>>>
>>> On Nov 4, 2011, at 2:37 PM, Andreas Kurz wrote:
>>>
On 11/04/2011 01:01 PM, Matti Linnanvuori wrote:
> I think I have found a
On Nov 4, 2011, at 3:57 PM, Andreas Kurz wrote:
> On 11/04/2011 01:52 PM, Matti Linnanvuori wrote:
>>
>> On Nov 4, 2011, at 2:37 PM, Andreas Kurz wrote:
>>
>>> On 11/04/2011 01:01 PM, Matti Linnanvuori wrote:
I think I have found a bug. Resources are not started and I don't see a
rea
On 11/04/2011 01:52 PM, Matti Linnanvuori wrote:
>
> On Nov 4, 2011, at 2:37 PM, Andreas Kurz wrote:
>
>> On 11/04/2011 01:01 PM, Matti Linnanvuori wrote:
>>> I think I have found a bug. Resources are not started and I don't see a
>>> reason why not. All resources except PSQL-slave should be sta
On Nov 4, 2011, at 2:37 PM, Andreas Kurz wrote:
> On 11/04/2011 01:01 PM, Matti Linnanvuori wrote:
>> I think I have found a bug. Resources are not started and I don't see a
>> reason why not. All resources except PSQL-slave should be started. There is
>> only one node. The operating system is
On 11/04/2011 01:01 PM, Matti Linnanvuori wrote:
> I think I have found a bug. Resources are not started and I don't see a
> reason why not. All resources except PSQL-slave should be started. There is
> only one node. The operating system is SuSE Linux Enterprise Server 11 SP1.
>
> crm_mon -1 -r
Hello,
On 10/23/2011 01:32 AM, ge...@riseup.net wrote:
> Hello all,
>
> Got a problem with the automatic failover if one node is rebooted.
> I recently got the info that I maybe forgot to unmigrate the resource with
> 'crm resource unmigrate'. I just did this now on both nodes and
> tested it wit
04.08.2011 06:08, Andrew Beekhof wrote:
> On Wed, Aug 3, 2011 at 7:35 PM, Vladislav Bogdanov
> wrote:
>> 01.08.2011 02:05, Andrew Beekhof wrote:
>>> On Wed, Jul 27, 2011 at 11:46 AM, Andrew Beekhof wrote:
On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
> Hmm. Interesting. I will
On Wed, Aug 3, 2011 at 7:35 PM, Vladislav Bogdanov wrote:
> 01.08.2011 02:05, Andrew Beekhof wrote:
>> On Wed, Jul 27, 2011 at 11:46 AM, Andrew Beekhof wrote:
>>> On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
Hmm. Interesting. I will investigate.
>>>
>>> This is an unfortunate side
01.08.2011 02:05, Andrew Beekhof wrote:
> On Wed, Jul 27, 2011 at 11:46 AM, Andrew Beekhof wrote:
>> On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
>>> Hmm. Interesting. I will investigate.
>>
>> This is an unfortunate side-effect of my history compression patch.
>
> Actually I'm mistake
On Wed, Jul 27, 2011 at 11:46 AM, Andrew Beekhof wrote:
> On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
>> Hmm. Interesting. I will investigate.
>
> This is an unfortunate side-effect of my history compression patch.
Actually I'm mistaken on this. There should be enough information in
On Wed, Jul 27, 2011 at 6:12 PM, Florian Haas wrote:
> On 2011-07-27 03:46, Andrew Beekhof wrote:
>> On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
>>> Hmm. Interesting. I will investigate.
>>
>> This is an unfortunate side-effect of my history compression patch.
>>
>> Since we only store
On 2011-07-27 03:46, Andrew Beekhof wrote:
> On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
>> Hmm. Interesting. I will investigate.
>
> This is an unfortunate side-effect of my history compression patch.
>
> Since we only store the last successful and last failed operation, we
> don't h
On Fri, Jul 1, 2011 at 4:59 PM, Andrew Beekhof wrote:
> Hmm. Interesting. I will investigate.
This is an unfortunate side-effect of my history compression patch.
Since we only store the last successful and last failed operation, we
don't have the md5 of the start operation around to check when
Hmm. Interesting. I will investigate.
On Tue, Jun 28, 2011 at 3:46 AM, Vladislav Bogdanov
wrote:
> Hi all,
>
> I'm pretty sure I bisected commit which breaks restart of (node local)
> resources after definition change.
>
> Nodes which have f59d7460bdde applied (v03-a and v03-b in my case) do not
ts $id="rsc-options" \
resource-stickiness="1000"
- Original Message -
From: "mark - pacem
Hi Phil,
On Tue, Apr 19, 2011 at 3:36 PM, Phil Hunt wrote:
> Hi
> I have iscsid running, no iscsi.
Good. You don't want the system to auto-connect the iSCSI disks on
boot; Pacemaker will do that for you.
>
>
>
> Here is the crm status:
>
> Last updated: Tue Apr 19 12:39:03 2011
>
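In practice that means keeping iscsid running but turning off the automatic login at boot; a sketch, noting that the exact service name and config path vary by distribution:
# let Pacemaker perform the logins (e.g. via ocf:heartbeat:iscsi) instead of the boot scripts
chkconfig iscsi off                                   # RHEL/CentOS-style auto-login service
sed -i 's/^node.startup = automatic/node.startup = manual/' /etc/iscsi/iscsid.conf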
Florian Haas wrote:
On 02/11/2011 07:58 PM, paul harford wrote:
Hi Florian
I had seen apache 2 in one of the pacemaker mails, it may have been a
typo but I just wanted to check, thanks for your help
Welcome. And I noticed I left out a "2" in the dumbest of places in my
original reply, but I tr
On 02/11/2011 07:58 PM, paul harford wrote:
> Hi Florian
> I had seen apache 2 in one of the pacemaker mails, it may have been a
> typo but I just wanted to check, thanks for your help
Welcome. And I noticed I left out a "2" in the dumbest of places in my
original reply, but I trust you figured th
Hi Florian
I had seen apache 2 in one of the pacemaker mails, it may have been a typo
but I just wanted to check, thanks for your help
paul
On 11 February 2011 11:00, Florian Haas wrote:
> On 2011-02-11 11:53, paul harford wrote:
> > Hi Guys
> > Could anyone tell me what the difference betwee
On 2011-02-11 11:53, paul harford wrote:
> Hi Guys
> Could anyone tell me what the difference between (resources)
>
> IPaddr and IPaddr2
IPaddr uses ifconfig and is meant to be portable across platforms;
IPaddr2 uses ip, has more features, but is Linux only.
> and
>
> Apache and Apache2
Huh?
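For illustration, a typical IPaddr2 primitive in crm shell; the address, netmask and interface are placeholders:
# IPaddr2 adds the address with "ip addr add", so it shows up in "ip addr" output
# but not necessarily in plain ifconfig
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.100" cidr_netmask="24" nic="eth0" \
        op monitor interval="10s"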
On 30 November 2010 19:11, Anton Altaparmakov wrote:
> Hi,
>
> I have set up a three node cluster (running Ubuntu 10.04 LTS server with
> Corosync 1.2.0, Pacemaker 1.0.8, drbd 8.3.7), where one node is only present
> to provide quorum to the other two nodes in case one node fails but it itself
On 12/1/2010 at 05:11 AM, Anton Altaparmakov wrote:
> Hi,
>
> I have set up a three node cluster (running Ubuntu 10.04 LTS server with
> Corosync 1.2.0, Pacemaker 1.0.8, drbd 8.3.7), where one node is only present
> to provide quorum to the other two nodes in case one node fails but it its
On 5 October 2010 11:15, Andrew Beekhof wrote:
> On Fri, Oct 1, 2010 at 9:53 AM, Pavlos Parissis
> wrote:
> > Hi,
> > It seems that it happens every time PE wants to check the conf
> > 09:23:55 crmd: [3473]: info: crm_timer_popped: PEngine Recheck Timer
> > (I_PE_CALC) just popped!
> >
> > and t
On Fri, Oct 1, 2010 at 9:53 AM, Pavlos Parissis
wrote:
> Hi,
> It seems that it happens every time PE wants to check the conf
> 09:23:55 crmd: [3473]: info: crm_timer_popped: PEngine Recheck Timer
> (I_PE_CALC) just popped!
>
> and then check_rsc_parameters() wants to reset my resources
>
> 09:23:
Hi,
It seems that it happens every time PE wants to check the conf
09:23:55 crmd: [3473]: info: crm_timer_popped: PEngine Recheck Timer
(I_PE_CALC) just popped!
and then check_rsc_parameters() wants to reset my resources
09:23:55 pengine: [3979]: notice: check_rsc_parameters: Forcing restart of
p
Hi
Could be related to a possible bug mentioned here[1]?
BTW here is the conf of pacemaker
node $id="b8ad13a6-8a6e-4304-a4a1-8f69fa735100" node-02
node $id="d5557037-cf8f-49b7-95f5-c264927a0c76" node-01
node $id="e5195d6b-ed14-4bb3-92d3-9105543f9251" node-03
primitive drbd_01 ocf:linbit:drbd \
Hi Andrew
Thanks for your reply. I got it sorted. I had resource fencing enabled in
drbd.conf and forgot to disable the init script for drbd.
Best regards,
Gerry kernan
InfinityIT
On 7 Sep 2010, at 07:22, Andrew Beekhof wrote:
>
>
> On Mon, Sep 6, 2010 at 5:03 PM, Gerry Kernan
> wrote:
On Mon, Sep 6, 2010 at 5:03 PM, Gerry Kernan wrote:
> Hi
>
>
>
> I have a 2 node cluster. I have a drbd:filesystem resource plus an IPaddr2
> resource and 3 LSB init resources to start https, asterisk and
> orderlystatse. I can migrate the resources manually but if I power off the
> primary node t
On Fri, Jul 2, 2010 at 6:51 AM, levin wrote:
> Hi,
>
> I have a two node cluster (A/S) running on a SuSE 11 box with clvm on SAN
> shared disk. It was found that there is strange behavior in clone resource
> dependency (order) which causes the whole resource tree to be restarted in the
> event of 1 clus
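One setting worth checking in this situation (offered as a general pointer, not as the diagnosis of this particular report) is clone interleaving; without it, resources ordered after a clone depend on the whole clone set rather than on the local instance. A sketch with placeholder names:
# with interleave=true, a dependent on this node only waits for the clone
# instance on this node, so restarting one instance does not ripple cluster-wide
clone cl_clvmd p_clvmd \
        meta interleave="true"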
lto:marco.vanput...@tudelft.nl]
Sent: Sat 10.04.2010 00:19
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Resources don't start on second node after ping fails
Hi Benjamin,
Congratulations!
Do you mean not connected as in physically not connected?
I'm no expert on the
Hi Benjamin,
Congratulations!
Do you mean not connected as in physically not connected?
I'm no expert on the matter but I just ran into the "number" problem a
couple of weeks ago myself.
Maybe in a newer version this is no longer an issue...
Bye,
Marco.
benjamin.b...@t-systems.com wrote:
Hi e
Hi everybody!
I fixed this 'problem'...
My drbd-resource wasn't connected. m(
The configuration of the ping resource and location were correct. I implemented
Marco's advice but I'm sure my solution would've also worked.
The failover works just fine right now.
Thanks for reading!
Benjamin Benz
0 15:46
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Resources don't start on second node after ping fails
Hi Benjamin,
> rule $id="ms_drbd_ora_on_connected_node-rule" -inf: not_defined pingval
> or pingval lte 0
You coul
Hi Benjamin,
rule $id="ms_drbd_ora_on_connected_node-rule" -inf: not_defined pingval
or pingval lte 0
You could give this a try instead:
rule $id="ms_drbd_ora_on_connected_node-rule" -inf: not_defined pingval
or pingval number:lte 0
Good luck,
Marco.
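In context, that rule sits inside a location constraint on the DRBD master resource; a sketch assuming the resource is the ms_drbd_ora implied by the rule id and that pingval is the ping attribute name:
# keep the master off nodes with no ping connectivity; "number:" forces a numeric comparison
location ms_drbd_ora_on_connected_node ms_drbd_ora \
        rule -inf: not_defined pingval or pingval number:lte 0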
Raoul Bhatia [IPAX] wrote:
> Lasantha Marian wrote:
>> I have a two node cluster setup using Heartbeat 2.99.3/Pacemaker 1.0.2
>> on an Ubuntu 8.10 server built on two identical Dell 2900 servers.
>> Basically Heartbeat works as expected. My configuration uses CRM to
>> manage and DRBD for storage.
Lasantha Marian wrote:
> I have a two node cluster setup using Heartbeat 2.99.3/Pacemaker 1.0.2
> on an Ubuntu 8.10 server built on two identical Dell 2900 servers.
> Basically Heartbeat works as expected. My configuration uses CRM to
> manage and DRBD for storage.
>
> I have noticed the followin