Hi, Satish,

Thanks for the KIP.

1. For Solution 2, we probably want to be a bit careful with letting each
broker automatically relinquish leadership. The danger is that if all
brokers start doing the same (say, due to increased data volume), the
whole cluster could end up in a state with no leaders.

2. For Solution 1, I am wondering to what extent it solves the problem. As
Lucas mentioned earlier, if the disk is slow, eventually it will slow down
the produce requests and delay the processing of the follower fetch
requests. Do you know how well this approach works in practice?

3. I am thinking that yet another approach is to introduce some kind of
pluggable failure detection module to detect individual broker failures.
Admins can then build a plugin for their environment, configure
replica.lag.max.time.ms to match how quickly failures can be detected,
and build tools to determine what to do with detected failures.
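
To make this concrete, here is a rough sketch of what such a pluggable
module could look like. The interface name, methods, and health states
below are made up for illustration; nothing like this exists in Kafka
today.

// Hypothetical sketch of a pluggable broker failure detector; not an
// existing Kafka interface.
import java.io.Closeable;
import java.util.Map;

public interface BrokerFailureDetector extends Closeable {

    // Health states the plugin can report for the local broker.
    enum Health { HEALTHY, DEGRADED, FAILED }

    // Receives plugin-specific configs, e.g. disk latency thresholds that
    // line up with the chosen replica.lag.max.time.ms.
    void configure(Map<String, ?> configs);

    // Called periodically; admin tooling decides what to do with the
    // result, e.g. demote the broker or move leadership elsewhere.
    Health checkHealth();
}

Admin tooling could then poll checkHealth() and, for example, trigger a
preferred leader election away from a DEGRADED broker.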

Jun

On Tue, Jun 29, 2021 at 11:49 PM Satish Duggana <satish.dugg...@gmail.com>
wrote:

> > That clarification in the document helps. But then setting the first
> > option to true does not necessarily mean that the condition is
> > happening. Did you mean that the leader relinquishes leadership if it
> > is taking longer than leader.fetch.process.time.max.ms AND there are
> > pending fetch requests whose fetch offsets are >= the log-end-offset
> > of the earlier fetch request?
>
> Right. This config triggers relinquishing the leadership only in the
> cases mentioned in the KIP.
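>
> A rough sketch of that condition (the method and parameter names below
> are made up for illustration; they are not from the KIP or from Kafka):
>
> // Hypothetical illustration only. Relinquish leadership when the earlier
> // fetch has been outstanding longer than leader.fetch.process.time.max.ms
> // AND a pending follower fetch already asks for offsets at or beyond that
> // earlier fetch's log-end-offset.
> boolean shouldRelinquishLeadership(long fetchProcessTimeMs,
>                                    long maxFetchProcessTimeMs,
>                                    long pendingFetchOffset,
>                                    long earlierFetchLogEndOffset) {
>     return fetchProcessTimeMs > maxFetchProcessTimeMs
>         && pendingFetchOffset >= earlierFetchLogEndOffset;
> }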
>
> Thanks,
> Satish.
>
> On Mon, 28 Jun 2021 at 23:11, Mohan Parthasarathy <mposde...@gmail.com>
> wrote:
> >
> > Hi Satish,
> >
> >
> > >
> > >
> > > >It is not clear to me whether Solution 2 can happen independently.
> > > >For example, if the leader exceeds *leader.fetch.process.time.max.ms*
> > > >due to a transient condition, should it relinquish leadership
> > > >immediately? That might be aggressive in some cases. Whether a leader
> > > >is slow cannot be determined by just one occurrence, right?
> > >
> > > Solution (2) is an extension of Solution (1), as mentioned earlier in
> > > the KIP. This config is applicable only if
> > > `follower.fetch.pending.reads.insync.enable` is set to true. I have
> > > also updated the config description in the KIP to make that clear.
> > > In our observations, we do not always see this behavior continuously.
> > > It occurs intermittently and makes all the other requests pile up in
> > > the request queue. Sometimes, the broker goes down and makes the
> > > partitions offline. Users need to set the config based on their
> > > host's configuration and behavior. We can also think about extending
> > > this config based on others' observations.
> > >
> > >
> > That clarification in the document helps. But then setting the first
> > option to true does not necessarily mean that the condition is
> > happening. Did you mean that the leader relinquishes leadership if it
> > is taking longer than leader.fetch.process.time.max.ms AND there are
> > pending fetch requests whose fetch offsets are >= the log-end-offset
> > of the earlier fetch request?
> >
> > -Thanks
> > Mohan
> >
> > > Thanks,
> > > Satish.
> > >
> > > On Mon, 28 Jun 2021 at 04:36, Mohan Parthasarathy <mposde...@gmail.com>
> > > wrote:
> > > >
> > > > Hi Satish,
> > > >
> > > > One small clarification regarding the proposal. I understand how
> > > > Solution (1) enables the other replicas to be chosen as the leader.
> > > > But it is possible that the other replicas may not be in sync yet,
> > > > and if unclean leader election is not enabled, the other replicas
> > > > may not become the leader, right?
> > > >
> > > > It is not clear to me whether Solution 2 can happen independently.
> > > > For example, if the leader exceeds *leader.fetch.process.time.max.ms*
> > > > due to a transient condition, should it relinquish leadership
> > > > immediately? That might be aggressive in some cases. Whether a leader
> > > > is slow cannot be determined by just one occurrence, right?
> > > >
> > > > Thanks
> > > > Mohan
> > > >
> > > >
> > > > On Sun, Jun 27, 2021 at 4:01 AM Satish Duggana <satish.dugg...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Dhruvil,
> > > > > Thanks for looking into the KIP and providing your comments.
> > > > >
> > > > > There are two problems in the scenario raised in this KIP:
> > > > >
> > > > > a) The leader is slow and is not available for reads or writes.
> > > > > b) The leader causes the followers to fall out of sync, which can
> > > > > make the partitions unavailable.
> > > > >
> > > > > (a) should be detected and mitigated so that the broker can become
> > > > > a leader, or be replaced with a different node if it continues
> > > > > having issues.
> > > > >
> > > > > (b) will cause the partition to drop below the minimum ISR and
> > > > > eventually make that partition go offline if the leader goes down.
> > > > > In this case, users have to enable unclean leader election to make
> > > > > the partition available. This may cause data loss, depending on the
> > > > > replica chosen as the leader. This is what several folks (including
> > > > > us) observed in their production environments.
> > > > >
> > > > > Solution (1) in the KIP addresses (b) and avoids offline partitions
> > > > > by not removing the replicas from the ISR. This allows the partition
> > > > > to remain available if leadership is moved to one of the other
> > > > > replicas in the ISR.
> > > > >
> > > > > Solution (2) in the KIP extends Solution (1) by relinquishing the
> > > > > leadership and allowing one of the other in-sync replicas to become
> > > > > the leader.
> > > > >
> > > > > ~Satish.
> > > > >
> > >
>
