On 25/10/14 05:09 PM, Vladimir wrote:
Hi,

currently I'm testing a 2-node setup using Ubuntu trusty.

# The scenario:

All communication links between the 2 nodes are cut off. This results
in a split brain situation and both nodes take their resources online.
When the communication links come back, I see the following behaviour:
On drbd l
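The snippet is cut off here, but the behaviour it describes (both nodes bringing their resources up during a split brain) is what a two-node cluster does once it is allowed to run without quorum. Purely for illustration, a minimal sketch of that kind of setup, assuming corosync 2.x as shipped with trusty; the cluster name, node addresses and IDs below are made up, not taken from the thread:

    # /etc/corosync/corosync.conf (illustrative sketch, not Vladimir's config)
    totem {
        version: 2
        cluster_name: testcluster
        transport: udpu
    }

    nodelist {
        node {
            ring0_addr: 192.168.122.11
            nodeid: 1
        }
        node {
            ring0_addr: 192.168.122.12
            nodeid: 2
        }
    }

    quorum {
        provider: corosync_votequorum
        two_node: 1    # stay quorate when only one node is reachable
    }

Older setups get the same effect with the Pacemaker property no-quorum-policy=ignore. Either way, when every link between the nodes drops, each side still considers itself quorate, and without fencing both sides start the resources, which is exactly the split-brain behaviour described above.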
In case anybody comes across this on Google, the solution for me was:
In /etc/corosync/corosync.conf, enable the "redundant ring protocol":
totem {
    ...
    rrp_mode: active
    ...
}
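For context, rrp_mode only changes anything when corosync actually has two rings to use. A sketch of what the surrounding totem section typically looks like with two redundant rings; the bind addresses and multicast settings are illustrative, not taken from the post:

    totem {
        version: 2
        rrp_mode: active
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0
            mcastaddr: 226.94.1.1
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.0.0.0
            mcastaddr: 226.94.1.2
            mcastport: 5407
        }
    }

In active mode corosync sends its traffic over both rings in parallel, so restarting the network that carries ring 0 does not take the membership down as long as ring 1 runs over a separate interface.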
Additionally, my IPaddr2 resources (with their near-instant start/stop times) were
reaching an INFINITY failcount
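The snippet ends mid-thought, but for reference, failcounts like this are normally bounded with the standard resource meta attributes and cleared with the usual cleanup command. A sketch using crmsh; ip_service and its parameters are made-up examples, not the poster's actual resource:

    # tolerate a few failures before migrating, and let the failcount expire
    crm configure primitive ip_service ocf:heartbeat:IPaddr2 \
        params ip=192.168.1.100 cidr_netmask=24 \
        meta migration-threshold=3 failure-timeout=120s

    # clear an accumulated (or INFINITY) failcount by hand
    crm resource cleanup ip_service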
Dan Frincu writes:
Thanks, this gives me a great entry point for research.
Alex
On 08/11/2011 12:58 PM, Alex Forster wrote:
I have a two-node Pacemaker/Corosync cluster with no resources configured yet.
I'm running RHEL 6.1 with the official 1.1.5-5.el6 package.

While doing various network configuration work, I happened to notice that if I issue
a "service network restart" on one node, then approx. four seconds later issue