[ https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715742#comment-14715742 ]

Håkon Hitland edited comment on KAFKA-2477 at 8/26/15 11:38 PM:
----------------------------------------------------------------

We use a replication factor of 3.
The only line with "start offset" that day is the one in the attached log:
[2015-08-24 18:32:32,299] WARN [ReplicaFetcherThread-3-0], Replica 3 for partition [log.event,3] reset its fetch offset from 10200597616 to current leader 0's start offset 10200597616 (kafka.server.ReplicaFetcherThread)

Edit: the leader error reads:
[2015-08-24 18:32:32,145] ERROR [Replica Manager on Broker 0]: Error when processing fetch request for partition [log.event,3] offset 10349592111 from follower with correlation id 141609587. Possible cause: Request for offset 10349592111 but we only have log segments in the range 10200597616 to 10349592109. (kafka.server.ReplicaManager)
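One hedged way to read the two log lines together (a sketch, not Kafka source; the function and variable names are mine): a follower whose fetch offset is *above* the leader's log end offset should truncate back to the leader's end offset, and only a follower that has fallen *behind* the leader's earliest retained segment should reset to the leader's start offset. By that logic the WARN line's reset to the start offset 10200597616 looks like the wrong branch was taken:

```python
# Illustrative sketch of follower out-of-range handling (not Kafka's
# actual code). All names here are hypothetical.

def handle_offset_out_of_range(fetch_offset, leader_start, leader_end):
    """Return the offset the follower should resume fetching from."""
    if fetch_offset > leader_end:
        # Follower is ahead of the leader: truncate local log back to
        # the leader's log end offset. No segments need deleting.
        return leader_end
    # Follower is behind the leader's earliest retained segment:
    # restart from the leader's start offset, discarding local segments.
    return leader_start

# Values from the ERROR line: requested offset 10349592111, leader has
# segments only in [10200597616, 10349592109] -> follower is ahead.
print(handle_offset_out_of_range(10349592111, 10200597616, 10349592109))
```

Under this reading, the follower should have truncated to 10349592109 instead of resetting to the start offset and re-replicating the whole partition.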



> Replicas spuriously deleting all segments in partition
> ------------------------------------------------------
>
>                 Key: KAFKA-2477
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2477
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.2.1
>            Reporter: Håkon Hitland
>         Attachments: kafka_log.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicated topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration; the topics 
> where we've seen this happen are hitting the size limit.
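For reference, per-topic retention of the kind described above is typically set with the stock admin tool in 0.8.2 (topic name, ZooKeeper address, and the retention values below are illustrative, not the reporter's actual settings):

```shell
# Hypothetical example of per-topic time and size retention overrides.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic log.event \
  --config retention.ms=604800000 \
  --config retention.bytes=107374182400
```

When retention.bytes is the binding limit, the log cleaner deletes whole segments from the head of the log, which is why a spurious reset to the leader's start offset forces re-replication of nearly the full partition.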



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
