If you're on a systemd-based OS you'll need to set the open file limit in the
unit file instead, since limits.conf does not apply to services managed by systemd.

LimitNOFILE=100000
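
For example, a minimal drop-in override would look like this (assuming the
broker runs as a unit named kafka.service; adjust the name to match your setup):

# /etc/systemd/system/kafka.service.d/limits.conf
[Service]
LimitNOFILE=100000

Then run "systemctl daemon-reload" and restart the broker so the new limit
takes effect.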

https://kafka.apache.org/documentation/#upgrade_10_1_breaking also notes some
changes related to file handle usage that are worth reading.
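
To confirm the new limit is actually in effect, check the running broker's
limits in /proc, e.g.:

grep 'Max open files' /proc/$(pgrep -f kafka.Kafka)/limits

(Using "pgrep -f kafka.Kafka" assumes the broker's main class appears on its
command line; substitute the broker PID however you normally find it.)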


__

Sam Pegler

PRODUCTION ENGINEER



On 13 May 2017 at 02:57, Caleb Welton <ca...@autonomic.ai> wrote:

> You need to up your OS open file limits, something like this should work:
>
> # /etc/security/limits.conf
> * - nofile 65536
>
>
>
>
> On Fri, May 12, 2017 at 6:34 PM, Yang Cui <y...@freewheel.tv> wrote:
>
> > Our Kafka cluster has gone down three times in three weeks because of
> > the error “java.io.IOException: Too many open files”.
> >
> > We have encountered this problem on both the 0.9.0.1 and 0.10.2.1 versions.
> >
> > The error is like:
> >
> > java.io.IOException: Too many open files
> >         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> >         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
> >         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
> >         at kafka.network.Acceptor.accept(SocketServer.scala:340)
> >         at kafka.network.Acceptor.run(SocketServer.scala:283)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> > Has anyone encountered a similar problem?
> >
> >
> >
>
