[ https://issues.apache.org/jira/browse/KAFKA-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212560#comment-14212560 ]
Ewen Cheslack-Postava commented on KAFKA-1745:
----------------------------------------------

[~Vishal M] I'm not sure what to do about it. If my analysis is correct, this is internal to NIO and we don't really have any control over it -- we just allocate the socket and use it normally, albeit from multiple threads. The new producer uses a dedicated thread for I/O, which explains why it doesn't exhibit the same behavior. The two options I can see are to switch to the new producer (which I realize isn't an option for your current Kafka version) or to reorganize your code to give each producer a dedicated thread and have your existing send operations just push data to that thread for processing instead.

> Each new thread creates a PIPE and KQUEUE as open files during
> producer.send() and does not get cleared when the thread that creates them is
> cleared.
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-1745
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1745
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.1.1
>         Environment: Mac OS Mavericks
>            Reporter: Vishal
>            Priority: Critical
>
> Hi,
> I'm using the Java client API for Kafka. I wanted to send data to Kafka
> through a producer pool, as I'm using a sync producer. The thread that sends
> the data comes from a thread pool that grows and shrinks depending on
> usage. So, when I send data from one thread, 1 KQUEUE and 2 PIPEs are
> created (found via lsof). If I keep using the same thread it's
> fine, but when a new thread sends data to Kafka (using producer.send()) a new
> KQUEUE and 2 PIPEs are created.
> This is okay, but when the thread is cleared from the thread pool and a new
> thread is created, new KQUEUEs and PIPEs are created.
> The problem is that the old ones are not getting destroyed and they keep
> showing up as open files. This is causing a major problem, as the number of
> open files keeps increasing and never decreases.
> Please suggest any solutions.
> FYI, the number of TCP connections established from the producer system to
> the Kafka broker remains constant throughout.


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
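The workaround Ewen suggests -- one dedicated I/O thread per producer, with pool threads handing messages to it through a queue -- can be sketched as below. This is a minimal illustration, not the actual Kafka API: `kafkaSend` is a hypothetical stand-in for the real `producer.send(new KeyedMessage<>(...))` call in 0.8.1.1, so the sketch stays self-contained. The point is that only the single `kafka-producer-io` thread ever touches the producer (and hence the underlying NIO socket), so only one KQUEUE/PIPE set is ever allocated no matter how many pool threads come and go.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class DedicatedProducerThread {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> delivered = new ArrayList<>(); // stand-in for the broker
    private final Thread ioThread;
    private volatile boolean running = true;

    public DedicatedProducerThread() {
        // The ONLY thread that ever calls the producer -- mirrors what the
        // new producer does internally with its dedicated I/O thread.
        ioThread = new Thread(() -> {
            try {
                while (running || !queue.isEmpty()) {
                    String msg = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (msg != null) {
                        kafkaSend(msg);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "kafka-producer-io");
        ioThread.start();
    }

    // Pool threads call this instead of producer.send(); it only enqueues,
    // so short-lived threads never touch the producer's socket.
    public void send(String message) {
        queue.add(message);
    }

    // Drain remaining messages, then stop the I/O thread.
    public void close() throws InterruptedException {
        running = false;
        ioThread.join();
    }

    // Hypothetical stand-in for the real producer.send(...) in 0.8.1.1.
    private void kafkaSend(String message) {
        synchronized (delivered) {
            delivered.add(message);
        }
    }

    public int deliveredCount() {
        synchronized (delivered) {
            return delivered.size();
        }
    }
}
```

Usage: each sync-producer instance gets one `DedicatedProducerThread`; the existing pool threads call `send(...)` on it, and `close()` is called once at shutdown after draining.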