[ https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952600#comment-14952600 ]
Jiangjie Qin commented on KAFKA-2477:
-------------------------------------

[~cpsoman] I think the likelihood of the issue is related to the scale, as you observed, because there would potentially be more threads trying to read from and write to the same log segment. There have been a few other patches to the files touched by this patch since 0.8.2.0. However, I checked the 0.8.2.0 code, and the code blocks related to this patch are still the same as on the latest trunk. So you should be able to port the fix to 0.8.2.0 easily, although the patch itself may not apply cleanly.

> Replicas spuriously deleting all segments in partition
> ------------------------------------------------------
>
>                 Key: KAFKA-2477
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2477
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.2.1
>            Reporter: Håkon Hitland
>            Assignee: Jiangjie Qin
>             Fix For: 0.9.0.0
>
>         Attachments: Screen Shot 2015-10-10 at 6.54.44 PM.png, kafka_log.txt, kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes schedule all segments in a partition for deletion, and then immediately start replicating them back, triggering our check for under-replicated topics.
> This happens on average a couple of times a week, for different brokers and topics.
> We have per-topic retention.ms and retention.bytes configuration; the topics where we've seen this happen are hitting the size limit.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
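For context, per-topic retention overrides of the kind the reporter describes are typically set with the topic tool in this Kafka era. A minimal sketch; the topic name, ZooKeeper address, and values are illustrative, not taken from this report:

```shell
# Illustrative only: set per-topic retention overrides (0.8.2-era tooling).
# Topic name and values are hypothetical, not from this issue.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
  --config retention.ms=604800000 \
  --config retention.bytes=10737418240
```

With both overrides set, a segment becomes eligible for deletion when either the time limit or the size limit is hit; the reporter notes the affected topics were hitting the size limit.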