Hi,
On Fri, Mar 18, 2011 at 10:38:08PM +, Robert Schumann wrote:
> Charles KOPROWSKI writes:
11.03.2011 16:27, Andrew Beekhof:
> On Fri, Mar 11, 2011 at 2:19 PM, Charles KOPROWSKI wrote:
>> Is there any possibility to manually move back a part of the ClusterIP
>> resource (for example ClusterIP:1) to the other node? Or is it just
>> impossible with this version?
> I _think_ it's impossible - whic
On Fri, Mar 11, 2011 at 2:19 PM, Charles KOPROWSKI wrote:
> On 11/03/2011 11:47, Andrew Beekhof wrote:
>> Essentially you have encountered a limitation in the allocation
>> algorithm for clones in 1.0.x
>> The recently released 1.1.5 has the behavior you're looking for, but
>> the patch is far
Hello,
I set up a two-node cluster (active/active) to build an HTTP reverse
proxy/firewall. There is one VIP shared by both nodes and an Apache
instance running on each node.
Here is the configuration:
node lpa \
        attributes standby="off"
node lpb \
        attributes standby="off"
pr
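For illustration, an active/active ClusterIP setup of this shape typically continues with an IPaddr2 primitive and a globally-unique clone, along the lines of the sketch below (the address and resource names are invented, not taken from the post):

```
primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="192.168.0.100" cidr_netmask="24" clusterip_hash="sourceip" \
        op monitor interval="30s"
clone WebIP ClusterIP \
        meta globally-unique="true" clone-max="2" clone-node-max="2"
```

With globally-unique="true", each clone instance (ClusterIP:0, ClusterIP:1) owns a share of the hash buckets, which is what makes moving a single instance between nodes meaningful in the first place.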
- Original Message -
From: "Andrew Beekhof"
Sent: Sunday, October 11, 2009 4:04 PM
Subject: Re: [Pacemaker] failback off
Should be. Did you try it?

On Wed, Oct 7, 2009 at 5:36 PM, E-Blokos wrote:
> Hi,
>
> Is it possible to have a resource-stickiness in clone or group
> meta-attribute?
> I'd like to keep the state of resource location even after a failback.
>
> Thanks
>
> Franck Chionna
Hi,
Is it possible to have a resource-stickiness in clone or group
meta-attribute?
I'd like to keep the state of resource location even after a failback.
Thanks
Franck Chionna
___
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.c
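A stickiness meta-attribute on a clone or group, as asked about in the thread above, can be written like this in the crm shell (the resource names and the score are invented for illustration):

```
group ip-group vip1 vip2 \
        meta resource-stickiness="100"
clone ip-clone ip-group \
        meta resource-stickiness="100"
```

A positive score biases each instance toward the node where it is currently running, which is what keeps resources in place after a failed node returns.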
2009/6/9 Димитър Бойн:
> Thanks Andrew!
>
> Why even allow "-INFINITY" then?
> Shouldn't we hard-limit the "stickiness" to ">= 0" in the code, then?
-INFINITY makes more sense in other contexts, and is great for
exercising the cluster :-)
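One context where -INFINITY is clearly useful is a location constraint meaning "never run this resource on that node"; a crm-shell sketch, with hypothetical resource and node names:

```
location no-web-on-lpa WebSite -inf: lpa
```

As a stickiness value, by contrast, -INFINITY would actively push a resource off its current node, which is rarely what anyone wants.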
2009/6/8 Димитър Бойн:
> Hi,
>
> Check if you have something like
>
> value="INFINITY"/>
>
> In your current
>
> Or similar setting by resources.
>
> The ability to set resource stickiness controls the "fail back on recovery".
>
> If you want your resources to failback on default set:
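A cluster-wide stickiness default of the kind described above is typically set like this in the crm shell (the score is an example, not from the thread; INFINITY pins resources where they are, while a finite value lets stronger placement scores override it):

```
rsc_defaults resource-stickiness="100"
```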
From: E-Blokos [mailto:in...@e-blokos.com]
Sent: Monday, June 08, 2009 1:22 PM
To: pacema...@clusterlabs.org
Subject: [Pacemaker] failback
Hi,
I configured a clone for 4 nodes with a group of 30 IPaddr2 resources inside.
When I reboot a node, the group resources are taken by another node, but once
the rebooted node comes back, the failed node's resources don't go back.
What are the settings to do it right?
Thanks
Franck Chionna