[ https://issues.apache.org/jira/browse/KAFKA-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514401#comment-16514401 ]

Anna Povzner commented on KAFKA-7040:
-------------------------------------

Hi [~luwang],

Did you actually see data loss (or log divergence)? If so, what were your 
producer's acks config and the topic's min.insync.replicas?
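
For context, here is a minimal producer sketch (broker addresses and 
serializers are placeholders, not from this ticket) of the durable setup I 
have in mind: with acks=all and min.insync.replicas=2 on a 2-replica topic, a 
produce request is only acknowledged once both brokers have the message.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class DurableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder addresses for broker0 and broker1.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker0:9092,broker1:9092");
        // acks=all: the leader waits for the full in-sync replica set
        // before acknowledging the write.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // min.insync.replicas=2 is a topic/broker-side setting, e.g. passed
        // at topic creation: --config min.insync.replicas=2
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) calls would go here
        }
    }
}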

If I am understanding this correctly, this is not a correctness issue, but 
rather an efficiency issue. In other words, broker0 may truncate based on an 
OffsetsForLeaderEpoch response that is no longer valid, but it will then 
re-fetch the truncated messages from the leader (and since this is a fast 
leader change, there should be very few messages to re-fetch). If a message 
that got truncated is not on the new leader, it would have had to be truncated 
anyway. I see that in the example in your last comment, offset 100 got 
replicated to broker1, so this is the case where broker0 may truncate that 
offset and then re-fetch it.
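
To make that concrete, a toy simulation (not Kafka's actual fetcher code; the 
offsets follow your example) of truncate-then-re-fetch on the follower:

import java.util.ArrayList;
import java.util.List;

public class TruncateRefetchSketch {
    public static void main(String[] args) {
        // Both logs hold offsets 0..100; offset 100 is replicated to broker1.
        List<Integer> leaderLog = new ArrayList<>();
        List<Integer> followerLog = new ArrayList<>();
        for (int offset = 0; offset <= 100; offset++) {
            leaderLog.add(offset);
            followerLog.add(offset);
        }

        // A stale OffsetsForLeaderEpoch response reports the epoch ends at
        // offset 100, so the follower truncates from that offset onward.
        int staleEpochEndOffset = 100;
        followerLog.subList(staleEpochEndOffset, followerLog.size()).clear();

        // On the next fetch, the follower asks for data from its new log end
        // offset and re-fetches the truncated message from the leader, so
        // nothing replicated to the leader is ultimately lost.
        int fetchOffset = followerLog.size();
        for (int offset = fetchOffset; offset < leaderLog.size(); offset++) {
            followerLog.add(leaderLog.get(offset));
        }
        System.out.println("follower log end offset after re-fetch: "
                + followerLog.size()); // 101 again
    }
}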

So, at this moment, I don't see a possibility of losing data (with the 
appropriate configs/settings). However, I agree that it would be useful to 
fence the replica fetcher from processing an OffsetsForLeaderEpoch response 
that arrived before the partition was removed and then re-added to the 
fetcher. 
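
One possible shape for that fencing (class and method names here are 
hypothetical, not Kafka's actual internals): keep a per-partition generation 
counter in the fetcher, snapshot it when the OffsetsForLeaderEpoch request is 
sent, bump it whenever the partition is removed from or re-added to the 
fetcher, and drop any response whose snapshot is stale.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class EpochRequestFencingSketch {
    private final Map<String, AtomicInteger> generations = new ConcurrentHashMap<>();

    // Called whenever a partition is removed from or re-added to the
    // fetcher thread; invalidates all in-flight requests for it.
    public void onPartitionStateChange(String topicPartition) {
        generations.computeIfAbsent(topicPartition, tp -> new AtomicInteger())
                   .incrementAndGet();
    }

    // Snapshot taken when the OffsetsForLeaderEpoch request is sent.
    public int generationAtSend(String topicPartition) {
        return generations.computeIfAbsent(topicPartition, tp -> new AtomicInteger())
                          .get();
    }

    // On response: truncate only if the partition's fetch state has not
    // changed since the request was issued.
    public boolean mayTruncate(String topicPartition, int generationAtSend) {
        return generationAtSend
                == generations.computeIfAbsent(topicPartition, tp -> new AtomicInteger())
                              .get();
    }
}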

 

> The replica fetcher thread may truncate accepted messages during multiple 
> fast leadership transitions
> -----------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-7040
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7040
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Lucas Wang
>            Priority: Minor
>
> Problem Statement:
> Consider the scenario where there are two brokers, broker0 and broker1, and 
> two partitions "t1p0" and "t1p1"[1], both of which have broker1 as the 
> leader and broker0 as the follower. The following sequence of events happens 
> on broker0:
> 1. The replica fetcher thread on broker0 issues an OffsetsForLeaderEpoch 
> request to broker1 and waits for the response.
> 2. A LeaderAndISR request causes broker0 to become the leader for partition 
> t1p0, which in turn removes t1p0 from the replica fetcher thread.
> 3. Broker0 accepts some messages from a producer
> 4. A 2nd LeaderAndISR request causes broker1 to become the leader, and 
> broker0 to become the follower for partition t1p0. This will cause the 
> partition t1p0 to be added back to the replica fetcher thread on broker0.
> 5. The replica fetcher thread on broker0 receives the response to the 
> OffsetsForLeaderEpoch request issued in step 1, and truncates the messages 
> accepted in step 3 (see the toy simulation after this description).
> The issue can be reproduced with the test from 
> https://github.com/gitlw/kafka/commit/8956e743f0e432cc05648da08c81fc1167b31bea
> [1] Initially we set up broker0 to be the follower for two partitions 
> instead of just one, to prevent the replica fetcher thread from shutting 
> down when it becomes idle.
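
For illustration, a toy walk-through of the five steps above (hypothetical 
code, unrelated to the reproducing test linked in the description), showing 
how the stale response truncates the messages accepted in step 3:

public class StaleEpochResponseRace {
    public static void main(String[] args) {
        int logEndOffset = 100;        // broker0's log end offset at step 1
        int staleEpochEndOffset = 100; // what broker1 reports for the old epoch

        // Step 1: broker0's fetcher sends OffsetsForLeaderEpoch; response pending.
        // Step 2: broker0 becomes leader for t1p0; t1p0 leaves the fetcher.
        // Step 3: broker0 accepts new messages as leader.
        logEndOffset += 5;             // e.g. five messages at offsets 100..104

        // Step 4: broker0 becomes follower again; t1p0 rejoins the fetcher.
        // Step 5: the stale response from step 1 arrives and drives truncation
        //         of the messages accepted in step 3.
        int truncated = logEndOffset - staleEpochEndOffset;
        logEndOffset = staleEpochEndOffset;
        System.out.println("messages truncated by stale response: " + truncated); // 5
    }
}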


