[ https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954511#comment-14954511 ]

Ewen Cheslack-Postava commented on KAFKA-2477:
----------------------------------------------

[~cpsoman] Rather than applying the patch on top of 0.8.2.0, is there any 
reason not to update to 0.8.2.2 and apply the patch on top of that, where it 
definitely applies cleanly? It looks like only 8 patches, and some of the 
patches on top of 0.8.2.0 are likely to be useful if you have a large number 
of partitions or use snappy compression, among other key fixes. Maybe you're 
not hitting any of the critical bugs fixed in those releases, but since 
they're low risk, catching up with the latest release and carrying only a 
minor patch might simplify things?

> Replicas spuriously deleting all segments in partition
> ------------------------------------------------------
>
>                 Key: KAFKA-2477
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2477
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.2.1
>            Reporter: Håkon Hitland
>            Assignee: Jiangjie Qin
>             Fix For: 0.9.0.0
>
>         Attachments: Screen Shot 2015-10-10 at 6.54.44 PM.png, kafka_log.txt, 
> kafka_log_trace.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicated topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration; the topics 
> where we've seen this happen are hitting the size limit.
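
For reference, per-topic overrides like the retention.ms and retention.bytes
settings described above are stored in ZooKeeper and can be set through
Kafka's admin API. Below is a minimal sketch against the 0.8.2-era Scala API;
the topic name, retention values, and object name are made up for
illustration, not taken from the reporter's cluster:

    import java.util.Properties
    import kafka.admin.AdminUtils
    import kafka.utils.ZKStringSerializer
    import org.I0Itec.zkclient.ZkClient

    object SetRetention {
      def main(args: Array[String]): Unit = {
        // Connect to the ZooKeeper ensemble backing the Kafka cluster
        val zkClient = new ZkClient("localhost:2181", 30000, 30000, ZKStringSerializer)
        try {
          val props = new Properties()
          props.put("retention.ms", "604800000")      // 7 days (hypothetical value)
          props.put("retention.bytes", "1073741824")  // 1 GiB per partition (hypothetical value)
          // Write the per-topic overrides to ZooKeeper; brokers pick them up from there
          AdminUtils.changeTopicConfig(zkClient, "example-topic", props)
        } finally {
          zkClient.close()
        }
      }
    }

The same overrides can also be set with the bundled kafka-topics.sh tool via
--alter --config; the API route is shown here only to make the per-topic
configuration concrete.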


