[ https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964382#comment-14964382 ]

Jay Kreps commented on KAFKA-2580:
----------------------------------

Yeah, as [~toddpalino] says, it is totally not graceful--it's a hard limit, like 
disk space or memory. We do have per-ip connection limits in place now, though, 
so if you use that, the cluster overall should not be impacted by client leaks; 
you would have to actually have more clients than your limit can support.
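For reference, a minimal sketch of what that per-ip limit looks like in a 
broker's server.properties (the property names are the broker's connection-limit 
settings; the values and the override host below are illustrative assumptions, 
not recommendations):

    # Cap the number of connections the broker accepts from any one IP
    # (illustrative value -- tune to your environment)
    max.connections.per.ip=100

    # Optional per-IP overrides for known heavy clients
    # (hypothetical host name, shown only as an example)
    max.connections.per.ip.overrides=mirrormaker.example.com:200

With a cap like this in place, a single leaking client host exhausts its own 
quota rather than the broker's whole file-descriptor budget.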

> Kafka Broker keeps file handles open for all log files (even if they are not 
> written to/read from)
> ---------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-2580
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2580
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8.2.1
>            Reporter: Vinoth Chandar
>            Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open 
> even for non-active files (not written to or read from). In fact, there are 
> some threads on this going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever
> Needless to say, this is a problem, and it forces us to either artificially 
> bump up ulimit (it's already at 100K) or expand the cluster (even though we 
> have sufficient IO and everything). 
> Filing this ticket since I couldn't find anything similar. I'm very 
> interested to know whether there are plans to address this (given that 
> Samza's changelog topic is meant to be a persistent, large-state use case).  
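
As a rough way to quantify what the reporter describes, one can count a 
broker's open file descriptors from the shell (a minimal sketch; assumes a 
Linux host with /proc, run as the broker user or root, and that the broker 
runs the standard kafka.Kafka entry point):

    # Find the broker process (assumes the usual kafka.Kafka main class)
    BROKER_PID=$(pgrep -f kafka.Kafka)

    # Count the file descriptors the broker currently holds open
    ls /proc/$BROKER_PID/fd | wc -l

    # Compare against the per-process limit ulimit grants it
    grep 'open files' /proc/$BROKER_PID/limits

If the first number tracks the total number of retained log segment files 
rather than just the actively written or read ones, that is the behavior this 
ticket describes.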



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
