On 13/06/2013, at 8:08 PM, Arvydas <arvy...@artogama.lt> wrote:

> i wonder what is the mechanics behind fence looping? Could anyone explain it 
> in detail.
> 
> Since in master/slave scenario, slave shoots master and becomes master, why 
> would shot-down-node shoot new-master when it is back?

Because it can't reach the master and therefore has no way of knowing what 
state the master is in (good or bad).
It also can't become the master until it has shot the other guy - to make sure 
they're not both the master.
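
Concretely, each node in a two-node cluster carries a fence device for its 
peer, and the survivor has to fire it before taking over. A minimal sketch 
using the crm shell - the IPMI addresses and credentials here are invented:

    primitive fence-node1 stonith:fence_ipmilan \
        params ipaddr=10.0.0.1 login=admin passwd=secret pcmk_host_list=node1
    primitive fence-node2 stonith:fence_ipmilan \
        params ipaddr=10.0.0.2 login=admin passwd=secret pcmk_host_list=node2
    # never let a node run the device that would shoot itself
    location l-fence-node1 fence-node1 -inf: node1
    location l-fence-node2 fence-node2 -inf: node2

So a node that boots up, can't see its peer, and wants to take over will 
happily use that device - which is the loop you're asking about.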

> i only see it like this: a shot-down node comes back and can't see the state 
> of the new master, but it would take some really nasty glitch in networking

Or a switch gone bad, or someone misconfiguring a firewall, or an upgrade where 
some idiot (me) forgot to reapply a patch and now the two versions can't talk 
to each other. There are plenty of ways, and people inevitably find them.

> or some bug. since we can specify two rings, for example, the only way to not 
> see the state of the other node is to have no network connectivity at all, and 
> thus not be able to shoot anything
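
For reference, two rings would look something like this in corosync.conf 
(corosync 1.x syntax, addresses invented):

    totem {
        version: 2
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0
            mcastaddr: 239.255.1.1
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.0.0.0
            mcastaddr: 239.255.2.1
            mcastport: 5407
        }
    }

But note that the fencing path (IPMI, a PDU, etc.) is usually a third, 
independent network - which is exactly how a node can be blind to its peer 
and still able to shoot it.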
> 
> 
> sincerely,
> arvydas
> ----- Original Message ----- From: "Lars Marowsky-Bree" <l...@suse.com>
> To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
> Cc: "Michael Schwartzkopff" <mi...@clusterbau.com>
> Sent: Thursday, June 13, 2013 12:33 PM
> Subject: Re: [Pacemaker] Two resource nodes + one quorum node
> 
> 
> On 2013-06-13T07:45:09, Andrew Beekhof <and...@beekhof.net> wrote:
> 
>> It's certainly possible to build a decent 2-node cluster, but there are 
>> several non-obvious steps that are required - preventing fencing loops being 
>> one.
> 
> Given that 2-node clusters are probably still the 90%+ majority, I
> wonder if we shouldn't make them easier somehow.
> 
> One of the caveats is that no-quorum-policy defaults to a value that
> doesn't make sense for <=2 nodes; maybe we should change that again?
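
For anyone following along at home, the usual 2-node workaround is to set it 
by hand:

    crm configure property no-quorum-policy=ignore

That tells Pacemaker to keep running resources when quorum is lost, which is 
only safe because fencing still serialises the two nodes.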
> 
> And maybe we can add generic code to pacemaker/corosync to avoid fence
> loops: don't start automatically after an unclean restart, generic delay
> of the not-yet-DC node if a 2-node cluster splits, etc.
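
Both of those can be approximated by hand today. A sketch, assuming a 
RHEL-style init and a Pacemaker recent enough to honour pcmk_delay_max 
(device details invented, as above):

    # don't rejoin the cluster automatically after an unclean reboot
    chkconfig corosync off
    chkconfig pacemaker off

    # give one node's fence device a random head start in the shootout
    crm configure primitive fence-node1 stonith:fence_ipmilan \
        params ipaddr=10.0.0.1 login=admin passwd=secret \
               pcmk_host_list=node1 pcmk_delay_max=15

The random delay means a clean 50/50 split no longer ends with both nodes 
pulling the trigger at the same instant.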
> 
> What other caveats are there?
> 
> 
> Regards,
>   Lars
> 
> -- 
> Architect Storage/HA
> SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, 
> HRB 21284 (AG Nürnberg)
> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde

