Maybe you are exhausting your sockets rather than file handles for some reason?
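
A quick way to check (a sketch; assumes lsof is available on the broker
host and that pgrep -f kafka.Kafka matches only the broker process):

    KAFKA_PID=$(pgrep -f kafka.Kafka)

    # total descriptors the broker currently holds
    ls /proc/$KAFKA_PID/fd | wc -l

    # break them down by type (sockets vs. regular files, etc.)
    lsof -p $KAFKA_PID | awk '{print $5}' | sort | uniq -c | sort -rn

    # system-wide socket summary
    ss -s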

________________________________________
From: Kessiler Rodrigues [kessi...@callinize.com]
Sent: 31 July 2016 22:14
To: users@kafka.apache.org
Subject: Re: Too Many Open Files

I’m still experiencing this issue…

Here are the kafka logs.

[2016-07-31 20:10:35,658] ERROR Error while accepting connection (kafka.network.Acceptor)
java.io.IOException: Too many open files
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
        at kafka.network.Acceptor.accept(SocketServer.scala:323)
        at kafka.network.Acceptor.run(SocketServer.scala:268)
        at java.lang.Thread.run(Thread.java:745)
[the same error and stack trace repeat twice more with the same timestamp]

My ulimit is 1 million; how is that possible?

Can someone help with this?
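
(For what it's worth, the shell's ulimit doesn't always match the limit
the broker process itself got, e.g. if Kafka was started by an init
script or another user, so it's worth checking the running process
directly:

    grep 'Max open files' /proc/$(pgrep -f kafka.Kafka)/limits

assuming pgrep -f kafka.Kafka matches only the broker.)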

> On Jul 30, 2016, at 5:05 AM, Kessiler Rodrigues <kessi...@callinize.com> 
> wrote:
>
> I have changed it a bit.
>
> I have 10 brokers and 20k topics with 1 partition each.
>
> I looked at Kafka’s log dir and I only have 3318 files.
>
> I’m doing some tests to see how many topics/partitions I can have, but it
> starts throwing "too many open files" once it hits 15k topics.
>
> Any thoughts?
>
>
>
>> On Jul 29, 2016, at 10:33 PM, Gwen Shapira <g...@confluent.io> wrote:
>>
>> woah, it looks like you have 15,000 replicas per broker?
>>
>> You can go into the directory you configured for kafka's log.dir and
>> see how many files you have there. Depending on your segment size and
>> retention policy, you could have hundreds of files per partition
>> there...
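>>
>> Something like this (a sketch; assumes log.dir is /var/lib/kafka/data,
>> adjust the path to your config):
>>
>>   # total number of files under the data dir
>>   find /var/lib/kafka/data -type f | wc -l
>>
>>   # file count per partition directory, largest first
>>   for d in /var/lib/kafka/data/*/; do
>>     echo "$(ls "$d" | wc -l) $d"
>>   done | sort -rn | head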
>>
>> Make sure you have at least that many file handles and then also add
>> handles for the client connections.
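>>
>> For example, one common way to raise the limit persistently on Ubuntu
>> (a sketch; assumes the broker runs as a "kafka" user, adjust to your
>> setup) is /etc/security/limits.conf:
>>
>>   kafka  soft  nofile  1000000
>>   kafka  hard  nofile  1000000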
>>
>> 1 million file handles sounds like a lot, but you are running lots of
>> partitions per broker...
>>
>> We normally don't see more than maybe 4000 per broker and most
>> clusters have a lot fewer, so consider adding brokers and spreading
>> partitions around a bit.
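>>
>> (After adding brokers, kafka-reassign-partitions.sh can generate a
>> plan to move existing partitions onto them. A sketch, with
>> hypothetical new broker IDs 6 and 7 and a topics.json file listing
>> the topics to move:
>>
>>   bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
>>     --topics-to-move-json-file topics.json \
>>     --broker-list "6,7" --generate
>>
>> and then feed the generated plan back with --execute.)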
>>
>> Gwen
>>
>> On Fri, Jul 29, 2016 at 12:00 PM, Kessiler Rodrigues
>> <kessi...@callinize.com> wrote:
>>> Hi guys,
>>>
>>> I have been experiencing some issues with Kafka, where it’s throwing
>>> "too many open files" errors.
>>>
>>> I have around 6k topics with 5 partitions each.
>>>
>>> My cluster has 6 brokers. All of them are running Ubuntu 16, and the
>>> file limit settings are:
>>>
>>> `cat  /proc/sys/fs/file-max`
>>> 2000000
>>>
>>> `ulimit -n`
>>> 1000000
>>>
>>> Has anyone experienced this before?
>
