Booth version 0.1.0 has no retry mechanism for packets: a single lost packet and the election no longer works.
It also has broken checking of ballot values against promised values; in that case it keeps communicating but does not behave correctly.
Booth 0.1.0 is also very sensitive to start order and to ballot values saved as ticket attributes, so the fault might come from your start procedure and the state it remembers from previous runs.
And it communicates in two different ways: one is the UDP port you configure in booth.conf, used for lease renewal; the other is a fixed TCP port somewhere around 23000, used for catch-up.
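You can check that both listeners are actually up with something like the following (a sketch; it assumes the daemon shows up as boothd, and the exact TCP port in the 23000 range may differ on your build):

    # Show the boothd sockets: the UDP port from booth.conf plus the
    # fixed TCP catch-up port somewhere around 23000.
    netstat -tulpn | grep boothd

Both ports must be open in any firewall between the sites.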
I might be able to give you a hint if you attach the complete logs from Aug 25 10:07:10 until Aug 25 10:08:10 for all three sites, along with the snippet of booth.conf showing the order of the site and arbitrator lines.
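For reference, a minimal booth.conf in the 0.1.0 syntax looks roughly like this (the addresses, port and ticket name are placeholders; the number after the semicolon is the lease expiry in seconds):

    transport="UDP"
    port="9929"
    site="10.0.1.1"
    site="10.0.2.1"
    arbitrator="10.0.3.1"
    ticket="geo-ticket;60"

The order of the site and arbitrator lines is what I want to compare across your nodes.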
In any case, booth 0.1.0 is really unusable in a production environment; it has so many bugs that you cannot rely on it.
The booth version from Git after 4 August 2014 behaves very well, even in the double-fault case (a restart while the network is down), and now builds cleanly under SLES and RHEL.
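If you want to try it, a build from the current Git tree is roughly as follows (repository URL assumed to be the ClusterLabs one; the usual autotools and pacemaker development packages are required):

    git clone https://github.com/ClusterLabs/booth.git
    cd booth
    ./autogen.sh && ./configure
    make && sudo make install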
Rainer
 
Sent: Tuesday, August 26, 2014 at 14:58
From: "Sutherland, Rob" <rsutherl...@broadviewnet.com>
To: "pacemaker@oss.clusterlabs.org" <pacemaker@oss.clusterlabs.org>
Subject: Re: [Pacemaker] SLES 11 SP3 boothd behaviour

All nodes in question sync via NTP from the same time source (yes, we have run into time-sync issues in the past).

 

Interestingly, increasing the lease from 60 seconds to 120 seconds did not affect the behaviour.

 

Rob

 

From: John Lauro [mailto:john.la...@covenanteyes.com]
Sent: Monday, August 25, 2014 6:17 PM
To: Sutherland, Rob
Subject: Re: [Pacemaker] SLES 11 SP3 boothd behaviour

 

You probably already checked this, but just in case...

I have no experience at all with geo-redundancy, but this sounds suspiciously like it could be a time-sync problem.  Have you tried something like "ntpq -np" on all three nodes and verified that the offsets are all low (i.e. within +/- 10 ms) and the times are in sync?
(Assuming you are running ntpd and the process didn't stop.)
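A quick way to compare all three at once (hostnames are placeholders for your two sites and the arbitrator):

    # The 'offset' column of ntpq -np is in milliseconds and should be
    # close to zero on every node.
    for h in site1 site2 arbitrator; do
        echo "== $h =="
        ssh "$h" ntpq -np
    done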

 


From: "Rob Sutherland" <rsutherl...@broadviewnet.com>
To: pacemaker@oss.clusterlabs.org
Sent: Monday, August 25, 2014 3:43:34 PM
Subject: [Pacemaker] SLES 11 SP3 boothd behaviour

Hello all,

 

We’re in the process of implementing geo-redundancy on SLES 11 SP3 (booth version 0.1.0). We are seeing behaviour in which site 2 in a geo-cluster decides that the ticket has expired long before the actual expiry. Here’s an example time-line:

 

1 - All sites (site 1, site 2 and the arbitrator) agree on ticket owner and expiry; i.e. site 2 has the ticket with a 60-second expiry:

Aug 25 10:07:10 linux-4i31 booth-arbitrator: [22526]: info: command: 'crm_ticket -t geo-ticket -S expires -v 1408975690' was executed

Aug 25 10:07:10 bb5Btas0 booth-site: [27782]: info: command: 'crm_ticket -t geo-ticket -S expires -v 1408975690' was executed

Aug 25 10:07:10 bb5Atas1 booth-site: [7826]: info: command: 'crm_ticket -t geo-ticket -S expires -v 1408975690' was executed

 

2 - After 48 seconds (80% of the way into the lease), all three nodes are still in agreement:

Site 2:

Aug 25 10:07:58 bb5Btas0 booth-site: [27782]: info: command: 'crm_ticket -t geo-ticket -S owner -v 2' was executed 

Aug 25 10:07:58 bb5Btas0 booth-site: [27782]: info: command: 'crm_ticket -t geo-ticket -S expires -v 1408975738' was executed

 

The arbitrator:

Aug 25 10:07:58 linux-4i31 crm_ticket[23836]:   notice: crm_log_args: Invoked: crm_ticket -t geo-ticket -S owner -v 2

Aug 25 10:07:58 linux-4i31 booth-arbitrator: [22526]: info: command: 'crm_ticket -t geo-ticket -S expires -v 1408975738' was executed

 

Site 1:

Aug 25 10:07:58 bb5Atas1 booth-site: [7826]: info: command: 'crm_ticket -t geo-ticket -S owner -v 2' was executed

Aug 25 10:07:58 bb5Atas1 booth-site: [7826]: info: command: 'crm_ticket -t geo-ticket -S expires -v 1408975738' was executed

 

3 - Site 2 decides that the ticket has expired (at the expiry time set in step 1):

Aug 25 10:08:10 bb5Btas0 booth-site: [27782]: debug: lease expires ...

 

4 - At 10:08:58, both site 1 and the arbitrator expire the lease and pick a new master.
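The expires values line up exactly with this time-line. They are plain Unix timestamps, so they can be checked with date(1) (assuming the nodes run in US/Eastern, which matches the log times):

    $ date -d @1408975690
    Mon Aug 25 10:08:10 EDT 2014   # grant at 10:07:10 + 60 s: when site 2 expires the ticket
    $ date -d @1408975738
    Mon Aug 25 10:08:58 EDT 2014   # renewal at 10:07:58 + 60 s: when site 1 and the arbitrator expire it

So site 2 appears to act on the stale expiry from step 1 even though it logged the renewal in step 2.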

 

I presume that there was some missed communication between site 2 and the rest of the geo-cluster. There is nothing in the logs to help debug this, though. Any hints on debugging this?
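One thing we can try is capturing the booth traffic on site 2 during a lease cycle to see whether the renewal packets actually arrive (a sketch; the interface and UDP port are placeholders for our actual values from booth.conf):

    # On site 2: watch for renewal datagrams from site 1 and the arbitrator.
    tcpdump -ni eth0 udp port 9929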

 

BTW: we only ever see this on site 2, never on site 1. This is consistent across several labs. Is there a bias towards site 1?

 

Thanks in advance,

 

Rob

 

 


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org

 
