[ https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730102#comment-14730102 ]
Jun Rao commented on KAFKA-2477:
--------------------------------

Could you then try the following? In the above situation, go to broker 0's log dir for partition [log.event,3]. Get the name of the last log segment (the .log file). Then run the following:

bin/kafka-run-class.sh kafka.tools.DumpLogSegments [logsegmentname]

This will print out the offset of each message. In the normal case, those offsets should be monotonically increasing. Could you check whether there are any out-of-sequence offsets in the output, especially close to 10349592109?

> Replicas spuriously deleting all segments in partition
> ------------------------------------------------------
>
>                 Key: KAFKA-2477
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2477
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.2.1
>            Reporter: Håkon Hitland
>         Attachments: kafka_log.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes schedule all segments in a partition for deletion, and then immediately start replicating them back, triggering our check for under-replicated topics.
> This happens on average a couple of times a week, for different brokers and topics.
> We have per-topic retention.ms and retention.bytes configuration; the topics where we've seen this happen are hitting the size limit.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
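
For reference, a minimal shell sketch of the check Jun describes above. The partition directory and segment file name are placeholders, the awk filter assumes DumpLogSegments prints one "offset: <n> position: ..." line per message, and it assumes this Kafka build takes the segment via the tool's --files option; adjust to match your version.

# Placeholder path under broker 0's log dir for [log.event,3]
SEGMENT=/var/kafka-logs/log.event-3/00000000010349000000.log

bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files "$SEGMENT" |
  awk '/^offset: / {
         # $2 is the message offset; flag any offset that does not increase
         if (seen && $2 + 0 <= prev + 0)
           printf "out-of-sequence offset %s after %s\n", $2, prev
         prev = $2; seen = 1
       }'

If nothing is printed, the offsets in the segment are monotonically increasing; any flagged line, especially near 10349592109, would be the out-of-sequence case being asked about.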