What are the producers/consumers for the Kafka cluster?
Remember that it's not just files but also sockets that count toward the limit.
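As a quick sanity check, the kernel lists every descriptor a process holds under /proc, and sockets appear there right alongside regular files. A minimal sketch (using the current shell's PID as a stand-in; substitute your broker's PID, which is an assumption of this example):

```shell
# Count every open descriptor for a process: regular files, sockets, pipes, etc.
# $$ (this shell's PID) is only a placeholder for the Kafka broker's PID.
PID=$$
FD_COUNT=$(ls /proc/"$PID"/fd | wc -l)
echo "process $PID holds $FD_COUNT descriptors"

# Sockets show up as symlinks of the form 'socket:[inode]':
SOCKETS=$(ls -l /proc/"$PID"/fd | grep -c 'socket:' || true)
echo "$SOCKETS of them are sockets"
```

If the socket count dominates, the problem is client connections rather than log segments.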

I have seen issues like this when we had a network switch problem and Storm consumers.
The switch would cause connectivity issues between the Kafka brokers, ZooKeeper nodes,
and clients, causing a flood of connections from everyone to everyone else.

On 8/1/16, 7:14 AM, "Scott Thibault" <scott.thiba...@multiscalehn.com> wrote:

    Did you verify that the process has the correct limit applied?
    cat /proc/<your PID>/limits
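For instance, a minimal check against a live process (the current shell's PID is used as a placeholder here; on a broker host you would substitute the broker's PID, perhaps found with something like `pgrep -f kafka.Kafka`, an assumed process name):

```shell
# Show the effective open-file limit actually applied to a running process.
# $$ (this shell's PID) is a placeholder; substitute the broker's PID.
PID=$$
grep 'Max open files' /proc/"$PID"/limits
```

If this prints a smaller number than `ulimit -n` in your shell, the broker was likely started under a different limit (for example by an init system) and needs its own limit raised.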
    
    --Scott Thibault
    
    
    On Sun, Jul 31, 2016 at 4:14 PM, Kessiler Rodrigues <kessi...@callinize.com>
    wrote:
    
    > I’m still experiencing this issue…
    >
    > Here are the kafka logs.
    >
    > [2016-07-31 20:10:35,658] ERROR Error while accepting connection
    > (kafka.network.Acceptor)
    > java.io.IOException: Too many open files
    >         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    >         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    >         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    >         at kafka.network.Acceptor.accept(SocketServer.scala:323)
    >         at kafka.network.Acceptor.run(SocketServer.scala:268)
    >         at java.lang.Thread.run(Thread.java:745)
    > [the same ERROR and stack trace repeat two more times]
    >
    > My ulimit is 1 million; how is that possible?
    >
    > Can someone help with this?
    >
    >
    > > On Jul 30, 2016, at 5:05 AM, Kessiler Rodrigues <kessi...@callinize.com>
    > wrote:
    > >
    > > I have changed it a bit.
    > >
    > > I have 10 brokers and 20k topics with 1 partition each.
    > >
    > > I looked at Kafka’s log dir and I only have 3318 files.
    > >
    > > I’m doing some tests to see how many topics/partitions I can have, but
    > it throws “too many open files” once it hits 15k topics.
    > >
    > > Any thoughts?
    > >
    > >
    > >
    > >> On Jul 29, 2016, at 10:33 PM, Gwen Shapira <g...@confluent.io> wrote:
    > >>
    > >> woah, it looks like you have 15,000 replicas per broker?
    > >>
    > >> You can go into the directory you configured for kafka's log.dir and
    > >> see how many files you have there. Depending on your segment size and
    > >> retention policy, you could have hundreds of files per partition
    > >> there...
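A rough way to get that count, assuming the directory below is only an illustrative default (point it at whatever log.dirs is set to in server.properties):

```shell
# Count files under the Kafka log directory (segments, indexes, checkpoints).
# LOG_DIR defaults to /tmp here only so the sketch runs anywhere;
# substitute your real log.dirs value.
LOG_DIR="${KAFKA_LOG_DIR:-/tmp}"
find "$LOG_DIR" -type f | wc -l
```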
    > >>
    > >> Make sure you have at least that many file handles and then also add
    > >> handles for the client connections.
    > >>
    > >> 1 million file handles sounds like a lot, but you are running lots of
    > >> partitions per broker...
    > >>
    > >> We normally don't see more than maybe 4000 per broker and most
    > >> clusters have a lot fewer, so consider adding brokers and spreading
    > >> partitions around a bit.
    > >>
    > >> Gwen
    > >>
    > >> On Fri, Jul 29, 2016 at 12:00 PM, Kessiler Rodrigues
    > >> <kessi...@callinize.com> wrote:
    > >>> Hi guys,
    > >>>
    > >>> I have been experiencing some issues on Kafka, where it’s throwing “too
    > many open files” errors.
    > >>>
    > >>> I have around 6k topics with 5 partitions each.
    > >>>
    > >>> My cluster has 6 brokers. All of them are running Ubuntu 16
    > and the file limit settings are:
    > >>>
    > >>> `cat  /proc/sys/fs/file-max`
    > >>> 2000000
    > >>>
    > >>> `ulimit -n`
    > >>> 1000000
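Worth noting: those two numbers measure different things, which is one way a process can still hit the limit despite a large `ulimit -n`. A small illustration (values will differ per host):

```shell
# fs.file-max is the kernel-wide ceiling across ALL processes combined;
# ulimit -n is the per-process limit of THIS shell only.
# A daemon started by an init system inherits its own limit,
# which may be lower than what an interactive shell reports.
cat /proc/sys/fs/file-max
ulimit -n
```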
    > >>>
    > >>> Has anyone experienced this before?
    > >
    >
    >
    
    




