> On 14 Nov 2014, at 5:52 am, Scott Donoho wrote:
>
> Here is a simple Active/Passive configuration with a single Dummy resource
> (see end of message). The resource-stickiness default is set to 100. I was
> assuming that this would be enough to keep the Dummy resource on the active
> node as long as the active node stays healthy.
We are running the following versions:
crmsh 1.2.6
pacemaker 1.1.10
corosync 1.4.1
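For reference, a minimal crm sketch of the kind of setup described above; the original configuration is truncated out of these previews, so the resource name and agent here are only illustrative:

    primitive dummy ocf:pacemaker:Dummy \
        op monitor interval="10s"
    rsc_defaults resource-stickiness="100"

With no location constraints scoring above 100, a stickiness of 100 is normally enough to keep such a resource on the node where it is currently running.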
Here is a simple Active/Passive configuration with a single Dummy resource (see
end of message). The resource-stickiness default is set to 100. I was assuming
that this would be enough to keep the Dummy resource on the active node as long
as the active node stays healthy. However, stickiness is
From: Allen Pomeroy
To: pacemaker@oss.clusterlabs.org
Sent: Thursday, February 28, 2013 2:49:40 PM
Subject: [Pacemaker] Resource stickiness not working as expected?
Hi guys,
I have a two node cluster (corosync + pacemaker) on Fedora Core 17.
Works
well to move resources over to the secondary cluster node, but when an
"unmove" command is issued now the resources fail back to the primary
cluster node - seemingly ignoring the resource-stickiness settings.
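What usually matters in this situation is the location constraint that move/unmove manages and the scores the policy engine ends up comparing. A hedged sketch of how to check both; "webserver" and "node2" are placeholder names, not taken from the original post:

    crm resource move webserver node2         # adds a cli-prefer location constraint
    crm resource unmove webserver             # removes that constraint again
    crm configure show | grep -i stickiness   # confirm a non-zero default is actually set
    crm_simulate -sL | grep webserver         # compare the allocation scores per node

If no default resource-stickiness has been configured, it is 0 on the Pacemaker 1.1 series, so even a small preference for the primary node is enough to pull the resource back.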
From: Michael Schwartzkopff [m...@sys4.de]
Sent: Friday, February 01, 2013 2:39 PM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Resource Stickiness
On Friday, 1 February 2013 at 19:33:15, Keith Ouellette wrote:
> Takatoshi,
>
> I do have PostgreSQL running in a Master/Slave mode; however, I do not
> think the LSB (postgres-9.2) actually supports any master/slave functions
> like "promote" or "demote". My colleague believes that Pacemaker will
Subject: Re: [Pacemaker] Resource Stickiness
Hi Keith
It seems that you use LSB.
primitive PostgreSQL lsb:postgresql-9.2
And you use it with Master/Slave.
ms msPostresql PostgreSQL
Does your LSB support Master/Slave configuration?
I think LSB can't support it.
Thanks,
Takatoshi MATSUO
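Since an LSB init script only implements start/stop/status, a promotable (master/slave) setup normally needs an OCF agent that provides promote and demote, such as the resource-agents pgsql agent. A rough crm sketch along those lines; the paths, addresses and replication parameters below are assumptions for a stock PostgreSQL 9.2 install, and a real streaming-replication setup needs more pgsql parameters than shown here:

    primitive PostgreSQL ocf:heartbeat:pgsql \
        params pgctl="/usr/pgsql-9.2/bin/pg_ctl" pgdata="/var/lib/pgsql/9.2/data" \
            rep_mode="sync" node_list="node1 node2" master_ip="192.168.0.10" \
        op monitor interval="30s" \
        op monitor interval="29s" role="Master"
    ms msPostresql PostgreSQL \
        meta master-max="1" clone-max="2" notify="true"

The two monitor operations use different intervals so that the Master and Slave roles are monitored as separate operations.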
On Tue, Jan 22, 2013 at 2:35 AM, Keith Ouellette wrote:
Sorry if this sounds like a simple issue, but for some reason I can not get
this to work properly. I have two openSuSE servers running in a cluster (one
Master and one Slave). I have an OCF resource defined using Ipaddr2 for a
virtual IP (ClusterIP). The ClusterIP resource fails over to the slave
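A hedged sketch of the kind of ClusterIP definition being described, with stickiness set directly on the resource; the address and netmask are placeholders, not values from the original message:

    primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.100" cidr_netmask="24" \
        op monitor interval="30s" \
        meta resource-stickiness="100"

A stickiness value set as resource meta data like this overrides the cluster-wide rsc_defaults value for just this one resource.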
On 11-09-19 11:02 PM, Andrew Beekhof wrote:
> On Wed, Aug 24, 2011 at 6:56 AM, Brian J. Murrell
> wrote:
>>
>> 2. preventing the active node from being STONITHed when the resource
>> is moved back to its failed-and-restored node after a failover.
>> IOW: BAR1 is available on foo1, which fail
Hi All,
I am trying to configure pacemaker (1.0.10) to make a single filesystem
highly available by two nodes (please don't be distracted by the dangers
of multiply mounted filesystems and clustering filesystems, etc., as I
am absolutely clear about that -- consider that I am using a filesystem
resource
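A rough sketch of the shape of such a configuration, reusing the BAR1 and foo1 names mentioned above; the device, mount point and filesystem type are assumptions, and the STONITH resources discussed in the thread are omitted:

    primitive BAR1 ocf:heartbeat:Filesystem \
        params device="/dev/sdb1" directory="/mnt/bar1" fstype="ext3" \
        op monitor interval="20s"
    location BAR1-prefers-foo1 BAR1 20: foo1
    rsc_defaults resource-stickiness="100"

Because the stickiness (100) is higher than the location preference (20), the filesystem should stay on whichever node currently runs it instead of failing back.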
On Tue, 2010-03-23 at 16:26 +0100, Andrew Beekhof wrote:
> > Killed Corosync on data01, the node goes down as expected and the
> > resource fails over to data02. After data01 is up again the failover-ip
> > moves back to data01.
> >
> > Any ideas?
>
> yes, you told it to:
>
> > location cli-pref
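The constraint being pointed at here is the one that a manual "crm resource move" (migrate) leaves behind. A hedged reconstruction of what the truncated line most likely contains, plus the step that removes it; the constraint id follows the usual cli-prefer naming and the INFINITY score is the default for a manual move, neither of which is quoted in full above:

    location cli-prefer-failover-ip failover-ip inf: data01

    crm resource unmove failover-ip    # drops the cli-prefer constraint so that
                                       # stickiness decides placement again

With an INFINITY preference in place, any finite stickiness loses the comparison, which is why the address returns to data01 as soon as that node comes back.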
On Tue, Mar 23, 2010 at 3:58 PM, frank wrote:
Hey Guys,
wondering why resource stickiness does not work.
node data01 \
attributes standby="off"
node data02 \
attributes standby="off"
primitive data01-stonith stonith:external/riloe \
params hostlist="data01" ilo_user="root"
ilo_hostname="data01-ilo" ilo_password="x
On Tue, 2009-10-13 at 13:52 +0200, Dejan Muhamedagic wrote:
> crm knows when the user's not in the interactive mood, so it may
> behave accordingly. Though the error message is still going to
> remain, it will be less obtrusive and go to stderr.
Awesome!
Thanks,
J.
Hi Dejan,
On Wed, 2009-10-07 at 17:06 +0200, Dejan Muhamedagic wrote:
> Yes, that's no problem, it's just that I'm not sure about how to
> design it since the language is, well, rather flat.
Might it be possible to at least let crm recognise the configuration as
valid or ignore it even if it isn't
On Sun, 2009-10-11 at 21:57 +0200, Andrew Beekhof wrote:
> just fixed it now, thanks
> also needed to change days -> weekdays and add a score to the rule itself.
It works now. Thank you for your help and confirmation.
J.
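For reference, a sketch of the corrected XML with the two fixes mentioned above folded in (weekdays rather than days, and a score on the rule itself); the ids and the 9-17 working hours are illustrative rather than taken from the thread:

    <rsc_defaults>
      <meta_attributes id="working-hours-stickiness">
        <rule id="working-hours-rule" score="INFINITY">
          <date_expression id="working-hours" operation="date_spec">
            <date_spec id="working-hours-spec" hours="9-17" weekdays="1-5"/>
          </date_expression>
        </rule>
        <nvpair id="working-hours-value" name="resource-stickiness" value="INFINITY"/>
      </meta_attributes>
    </rsc_defaults>

Loaded into the rsc_defaults section (for example with cibadmin), this raises the default stickiness to INFINITY during weekday working hours and lets it drop back to whatever is configured otherwise for the rest of the week.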
On Thu, 2009-10-08 at 12:03 +0200, Johan Verrept wrote:
> thank you for answering. cibadmin does not want to accept the snippet
> though:
>
> Call cib_modify failed (-47): Update does not conform to the configured
> schema/DTD
I have been playing with this and it only applies if I remove the rule
Hi Andrew,
thank you for answering. cibadmin does not want to accept the snippet
though:
Call cib_modify failed (-47): Update does not conform to the configured
schema/DTD
I have corrected the end tag which should be
? (same mistake in the manual)
I used this:
On Fri, Sep 25, 2009 at 11:21 AM, Johan Verrept wrote:
Hi,
I have seen this mentioned in the "Configuration Explained" manual and
it listed the rules to use, but it didn't specify how to actually apply
that rule to the stickiness attribute. I have looked with google and
through the crm help but that didn't do me much good either.
Can I create this