[ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106442#comment-14106442
 ] 

Steven Zhen Wu commented on KAFKA-1489:
---------------------------------------

Retention among replicas may be somewhat different. I also think that should be 
OK, because this is a safety net; normally we should plan capacity so that we 
avoid the scenario in the first place.

Yeah, a disk-full policy is what I am looking for. "Drop latest" would be a 
strange option/policy, though, because it can trigger offset gap/jump errors on 
the consumer side, and in general it is rare for a business use case to want to 
drop "new" data.
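(For reference, the closest existing consumer-side knob is the 0.8 high-level 
consumer's auto.offset.reset property, which only decides what the consumer does 
once its offset is already out of range; a minimal sketch of the relevant 
consumer properties, with placeholder group and ZooKeeper values:

    # consumer.properties (group id and ZooKeeper address are placeholders)
    group.id=example-group
    zookeeper.connect=zk1:2181
    # when the requested offset no longer exists on the broker, reset to the
    # oldest ("smallest") or newest ("largest") available offset
    auto.offset.reset=smallest
)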

I didn't quite understand "per-data-dir". I thought each Kafka server/process 
can only have one root/data dir, specified by the "log.dir" property, and so it 
can't use multiple volumes. Please correct me if I am wrong here.
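(For what it's worth, a minimal sketch of the related broker settings: the 
plural "log.dirs" property accepts a comma-separated list of data directories, 
which is how a single broker can spread partition logs over several volumes; 
the paths below are only placeholders.

    # server.properties (paths are illustrative)
    # single-directory fallback:
    # log.dir=/tmp/kafka-logs
    # one data directory per disk/volume:
    log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs
)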



> Global threshold on data retention size
> ---------------------------------------
>
>                 Key: KAFKA-1489
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1489
>             Project: Kafka
>          Issue Type: New Feature
>          Components: log
>    Affects Versions: 0.8.1.1
>            Reporter: Andras Sereny
>            Assignee: Jay Kreps
>              Labels: newbie
>
> Currently, Kafka has per-topic settings to control the size of a single log 
> (log.retention.bytes). With lots of topics of different volumes, and as they 
> grow in number, it can become tedious to maintain topic-level settings that 
> apply to a single log. 
> Often, a dedicated chunk of disk space hosts all the logs Kafka stores, so it 
> would make sense to have a configurable threshold to control how much space 
> *all* data in Kafka can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E
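
To make the request concrete: the first setting below is the existing 
broker-wide default for size-based retention applied to each log, while the 
commented-out line is purely hypothetical and only illustrates the kind of 
global cap this issue asks for.

    # server.properties (sizes are illustrative)
    # existing: default size-based retention applied to each log,
    # overridable per topic
    log.retention.bytes=1073741824
    # hypothetical property (not in Kafka): cap on the total size of
    # all logs hosted by this broker
    # log.retention.bytes.global=107374182400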



