[ https://issues.apache.org/jira/browse/SOLR-15672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17423739#comment-17423739 ]

Mark Robert Miller edited comment on SOLR-15672 at 10/3/21, 11:56 PM:
----------------------------------------------------------------------

h3. Suggestions
 * The LeaderElector class could be modified to contain all leader election code. Largely, this would mean moving ElectionContext into LeaderElector. You would no longer create more than one LeaderElector for a given shard election: it would have a start or join method to enter an election and a close method to stop or cancel it, and you would call start or join again to re-enter the election. Once you had a LeaderElector for a shard, you would never create new instances for it.
 * You could move the actual LeaderElector and runLeaderProcess code out of the Watcher event threads and simply have those event threads trigger the correct methods on the LeaderElector. The LeaderElector could own something like a single-threaded Executor that actually executes any logic that needs to run on a different thread. If something was already in flight, you could request that it stop and, when it did, run the requested logic on that single-threaded Executor (see the executor sketch below).
 * You could remove Watchers when appropriate. This is useful in many other instances beyond leader election. Often, when an object is closed (or in this case cancelled), the Watcher involved with that object may still be active; close/cancel could call removeWatcher (see the watch-removal sketch below).
 * You could simplify the various methods involved to be easier to understand. Most of these methods score very high on the common complexity metrics used to evaluate human understandability and testability.
 * You could add better targeted testing using code coverage tools, Mockito, and Awaitility (see the test sketch below). Failure and connection-loss/expiration behavior is not tested well by our suite of tests, and even with effort on that, it's difficult to verify that election code is correct in all cases.
 * You could pull in Curator for the raw ZK election itself (see the LeaderLatch sketch below). It can be tricky to use Curator in just a single location, as it adds another ZK client, which has some ramifications and, to a less important degree, costs. If it's for an isolated enough situation though, that may be just fine. In a larger move, Curator could be brought in incrementally by keeping our SolrZkClient and backing it by a Curator instance that could also be obtained from that client for direct Curator usage and recipes.
 * You could inspect and consider where retry on connection-loss is used in leader election and the ramifications with the current implementation (see the connection-loss sketch below). In most cases you want to retry on connection-loss, but in some cases it is a tricky business, made trickier by the current retry implementation. When you get a connection-loss exception back, you don't know whether your request succeeded or failed; in some cases that may mean you don't know whether your watcher was created. Connection-loss can happen, the connection can be reestablished, and then connection-loss can happen again; that sequence, for example, is a tricky situation for code that is sensitive to multiple watchers firing for a single event.
 * You could remove recursion from the election algorithm. For example, currently checkIfIamLeader will recursively call itself once called from the joinElection code. Instead, it could return a boolean indicating whether to retry, and you could have something like the following:

{code:java}
boolean tryAgain = true;
while (tryAgain) {
  tryAgain = checkIfIamLeader(context, replacement);
}
{code}
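h3. Sketches

Below is a minimal sketch of the single-threaded Executor idea from the second bullet: the Watcher event thread only schedules work, and the election logic runs on an executor owned by the elector. The class and the handleElectionEvent method are hypothetical, not existing Solr code.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Hypothetical watcher owned by a LeaderElector-like class.
class ElectionWatcher implements Watcher {

  // Single thread so election actions are serialized and never race each other.
  private final ExecutorService electionExecutor = Executors.newSingleThreadExecutor();

  @Override
  public void process(WatchedEvent event) {
    // The ZK event thread only schedules work; the executor runs it.
    electionExecutor.submit(() -> handleElectionEvent(event));
  }

  private void handleElectionEvent(WatchedEvent event) {
    // Hypothetical: check the predecessor node, run or cancel the leader process, etc.
  }
}
{code}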
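The watch-removal bullet could look something like the following, using ZooKeeper's removeWatches API (available since 3.5). The class, method, and parameters are illustrative, not an existing Solr signature.

{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

class ElectionCancellation {

  // Called from close/cancel so the watcher does not keep firing after the
  // election participation has been torn down.
  void removeElectionWatch(ZooKeeper zk, String electionNodePath, Watcher watcher)
      throws KeeperException, InterruptedException {
    // local=true allows the watch to be removed even without a server connection.
    zk.removeWatches(electionNodePath, watcher, Watcher.WatcherType.Any, true);
  }
}
{code}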
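For the targeted-testing bullet, a test could use Awaitility to wait on an election outcome instead of ad-hoc sleeps; a minimal sketch, assuming the refactored elector exposes some way to check leadership state (the check passed in here is a hypothetical placeholder):

{code:java}
import static org.awaitility.Awaitility.await;

import java.time.Duration;
import java.util.concurrent.Callable;

class LeaderElectionTestSketch {

  // isLeaderCheck would poll whatever the elector exposes for leadership state
  // (hypothetical; the real accessor depends on the refactored API).
  void assertEventuallyBecomesLeader(Callable<Boolean> isLeaderCheck) {
    // No ad-hoc sleeps: poll until the condition holds or 30 seconds pass.
    await().atMost(Duration.ofSeconds(30)).until(isLeaderCheck);
  }
}
{code}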
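For the Curator bullet, the raw ZK election could be handled by Curator's LeaderLatch recipe, roughly as sketched below. The election path and participant id are illustrative, and the listener bodies are placeholders for the existing leader process.

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;

class CuratorElectionSketch {

  LeaderLatch joinElection(CuratorFramework client, String electionPath, String coreNodeName)
      throws Exception {
    LeaderLatch latch = new LeaderLatch(client, electionPath, coreNodeName);
    latch.addListener(new LeaderLatchListener() {
      @Override
      public void isLeader() {
        // Placeholder: run the leader process here.
      }

      @Override
      public void notLeader() {
        // Placeholder: relinquish leadership here.
      }
    });
    latch.start();  // enter the election; latch.close() leaves/cancels it
    return latch;
  }
}
{code}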
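Finally, the connection-loss bullet: a blind retry is risky because the original request may have succeeded even though the client saw a ConnectionLossException, so retrying a create can register the same replica in the election twice. One common workaround (roughly what Curator's protected create mode does, sketched here by hand) is to embed a unique marker in the node name and check for it before retrying. The class, paths, and marker are illustrative.

{code:java}
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

class ElectionNodeCreator {

  String createElectionNode(ZooKeeper zk, String electionPath, String uniqueId)
      throws KeeperException, InterruptedException {
    String prefix = electionPath + "/" + uniqueId + "-n_";
    while (true) {
      try {
        return zk.create(prefix, null, ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.EPHEMERAL_SEQUENTIAL);
      } catch (KeeperException.ConnectionLossException e) {
        // The create may or may not have happened; look for our marker before retrying.
        List<String> children = zk.getChildren(electionPath, false);
        for (String child : children) {
          if (child.startsWith(uniqueId + "-n_")) {
            return electionPath + "/" + child;
          }
        }
        // No node with our marker: the create did not go through, so retry.
      }
    }
  }
}
{code}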



> Leader Election is flawed. 
> ---------------------------
>
>                 Key: SOLR-15672
>                 URL: https://issues.apache.org/jira/browse/SOLR-15672
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Mark Robert Miller
>            Priority: Major
>
> Filing this not as a work item I'm assigning to myself, but to note an open 
> issue where some notes can accumulate. 


