Hi,
> rrp_mode
> This specifies the mode of redundant ring, which may be none,
> active, or passive. Active replication offers slightly lower
> latency from transmit to delivery in faulty network environments
> but with less performance.
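The rrp_mode option quoted above lives in the totem section of the corosync/openais configuration. A minimal sketch with two rings, assuming passive mode and placeholder network addresses (none of these values come from the thread):

```
totem {
    version: 2
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.0.0
        mcastaddr: 226.94.2.1
        mcastport: 5407
    }
}
```

With passive mode, traffic alternates between rings; with active mode it is sent on both, which is the lower-latency-but-lower-throughput trade-off the man page text describes.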
On Wed, Dec 16, 2009 at 8:56 AM, Alain.Moulle wrote:
>> No. It is a split-brain situation as soon as nodes can't
>> communicate.
>>
> Ok, you're right; in fact, I wanted to talk about the risk of shared
> resources mounted on both sides, which is in fact the worst thing that
> could happen in ca…
perhaps try the openais mailing list rather than their competitor ;-)
On Wed, Dec 16, 2009 at 9:18 AM, Alain.Moulle wrote:
> Hi,
>> rrp_mode
>> This specifies the mode of redundant ring, which may be none,
>> active, or passive. Active replication offers slightly…
On Wed, Dec 16, 2009 at 8:48 AM, artur.k wrote:
> I have built a cluster with two nodes on Pacemaker 1.0.4 + DRBD (8.0.14).
> If one machine is restarted, after it returns Pacemaker tries to switch all
> services back to this server. How can I prevent it?
Set default-resource-stickiness to something higher than 0.
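A sketch of how that advice could be applied with the crm shell on Pacemaker 1.0.x; the value 100 is an arbitrary example, not taken from the thread:

```
# Cluster-wide default, as suggested above:
crm configure property default-resource-stickiness=100

# On newer 1.0.x configurations the preferred form is a resource default:
crm configure rsc_defaults resource-stickiness=100
```

Any positive stickiness makes a resource prefer its current node over moving back to a rejoining node, which prevents the unwanted failback artur.k describes.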
--
Network Management Group, Online Games Technology Department
Li Sen (Jason)
POPO: listen1...@163.com
Email: li...@corp.netease.com
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
Andrew Beekhof-3 wrote:
>
> On Wed, Dec 2, 2009 at 5:23 AM, Jessy wrote:
>>> Yes, but did you add a monitor action to the resource's definition in
>>> the configuration?
>>>
>>> [Jessy] : I have added a monitor operation definition with a certain
>>> interval to the cib.xml file, as below:
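Jessy's actual XML is cut off in the archive. For context, a monitor operation inside a primitive in cib.xml typically looks like the following hypothetical sketch; the resource id, type, and timings are assumptions, not Jessy's values:

```xml
<primitive id="my-ip" class="ocf" provider="heartbeat" type="IPaddr2">
  <operations>
    <!-- Recurring health check; without this op the cluster never
         re-probes the resource after it starts. -->
    <op id="my-ip-monitor" name="monitor" interval="30s" timeout="20s"/>
  </operations>
</primitive>
```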
Hi,
On Tue, Dec 15, 2009 at 09:52:14AM -0600, justin.kin...@academy.com wrote:
> Hello everyone.
>
> I'm configuring a new 2 node cluster using SLES11 and the HAE using
> openais 0.80.3-26.1 and pacemaker 1.0.3-4.1
>
> The problem I'm having is that the nodes do not seem to find each other as
> the documentation says they should.
>
> Here's a brief rundown of what I've done:
>
On Wednesday, 16 December 2009 14:43:33, justin.kin...@academy.com wrote:
> > > I'm configuring a new 2 node cluster using SLES11 and the HAE using
> > > openais 0.80.3-26.1 and pacemaker 1.0.3-4.1 …
Hi,
On Wed, Dec 16, 2009 at 08:56:00AM +0100, Alain.Moulle wrote:
> Hi Dejan, and thanks for responses,
> yet several remarks below ...
> Alain
> > Hi,
> > >
> > > I'm trying to clearly evaluate the risk of split brain and the risk of
> > > dual-fencing with pacemaker/openais in the case I…
> > I've captured some packets using tcpdump, and indeed, I never see the
> > multicast traffic being received, only sent. The odd thing is that these
> > machines respond to other multicast traffic, like pinging 224.0.0.1.
> >
> > Is there a kernel option that anyone is aware of that could be ca…
Andrew Beekhof wrote:
> On Wed, Dec 16, 2009 at 8:48 AM, artur.k wrote:
>
>> I have built a cluster with two nodes on Pacemaker 1.0.4 + DRBD (8.0.14).
>> If one machine is restarted, after it returns Pacemaker tries to switch all
>> services back to this server. How can I prevent it?
>>
>
> Set default-resource-stickiness to something higher than 0.
Not anything to do with this is it?
https://lists.linux-foundation.org/pipermail/openais/2007-November/009478.html
-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of
justin.kin...@academy.com
Sent: 16 December 2009 16:0
> Not anything to do with this is it?
> https://lists.linux-foundation.org/pipermail/openais/2007-November/009478.html
It looks like it is an issue with Cisco Catalyst switches (we are using
Catalyst 3750s).
The resolution to the problem is documented here in case anyone is
interested:
htt
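The link to Justin's resolution is truncated in the archive. The workaround commonly cited for this symptom (multicast sent but never received on Catalyst switches with IGMP snooping and no multicast router) is to give the VLAN an IGMP querier, or to turn snooping off for that VLAN. A hedged Cisco IOS sketch; VLAN 10 is an assumption:

```
! Option 1: let the switch itself send IGMP queries so snooping
! learns which ports want the cluster's multicast group.
ip igmp snooping querier

! Option 2: disable IGMP snooping on the cluster VLAN entirely,
! flooding multicast to all ports in that VLAN.
no ip igmp snooping vlan 10
```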
On Fri, Dec 11, 2009 at 11:28 AM, Andrew Beekhof wrote:
> On Fri, Dec 11, 2009 at 2:17 AM, infernix wrote:
>> Are these location constraints conflicting with the order constraints? I
>> mean, the cluster shouldn't care where they [start|migrate_to], as long as
>> they [start|migrate_to] in order,
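For readers following the constraint discussion, location and order constraints coexist like this in the crm shell (a hypothetical sketch for Pacemaker 1.0.x; the resource and node names vm1, vm2, and node-a are assumptions, not infernix's configuration):

```
# Location: a placement preference, scored, not mandatory.
location prefer-vm1 vm1 100: node-a

# Order: vm2 may only start (or migrate) after vm1 has.
order vm1-then-vm2 inf: vm1 vm2
```

The two are independent: the order constraint serializes the actions regardless of which node the location scores eventually select, which is the behavior the question is probing.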