Too many open files in kafka 0.9
There is KAFKA-3317 which is still open.
Have you seen this?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOLOGIES) wrote:
…difference with the other brokers, so is it safe to remove these
__consumer_offsets-XX directories if they have not been accessed for one day?
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, November 29, 2017 19:41
To: users@kafka.apache.org
Subject: Re: Too many open files in kafka 0.9
There is KAFKA-3317 which is still open.
Have you seen this?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOLOGIES) wrote:
We have a cluster with 3 brokers and Kafka 0.9.0.1. One week ago, we decided to
adjust log.retention.hours from 10 days to 2 days. We restarted the cluster and
it was OK. But on one broker we get more and more data every day, and two days
later it crashed with the message “too many open files”. lsof …
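The truncated lsof reference above points the right way; a quick sketch of that check (the pgrep pattern assumes the broker runs the usual kafka.Kafka main class):

BROKER_PID=$(pgrep -f kafka.Kafka | head -n1)
lsof -p "$BROKER_PID" | wc -l                  # descriptors the broker holds right now
grep 'open files' /proc/"$BROKER_PID"/limits   # the limit it is actually running with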
I’ve seen OS-level network settings help mitigate some of the “Too many open
files” issue as well. Try changing the following items on the OS so that used
network connections close as quickly as possible, keeping file-handle use down:
sysctl -w …
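The exact sysctl keys were cut off in the archive; tuning of this kind usually looks something like the following (illustrative values, not the poster's):

sysctl -w net.ipv4.tcp_fin_timeout=30       # release FIN-WAIT-2 sockets sooner
sysctl -w net.ipv4.tcp_tw_reuse=1           # let new outbound connections reuse TIME_WAIT sockets
sysctl -w net.ipv4.tcp_keepalive_time=600   # probe idle connections sooner so dead peers get reaped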
You need to up your OS open file limits, something like this should work:
# /etc/security/limits.conf
* - nofile 65536
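One caveat worth adding to this advice: limits.conf entries only apply to new login sessions, and a broker launched by an init system can bypass them entirely, so confirm the running process actually picked the limit up (the PID lookup is an assumption about the setup):

grep 'open files' /proc/$(pgrep -f kafka.Kafka | head -n1)/limits   # soft and hard limits in force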
On Fri, May 12, 2017 at 6:34 PM, Yang Cui wrote:
Our Kafka cluster has been broken down by the problem “java.io.IOException: Too
many open files” three times in 3 weeks.
We encountered this problem on both the 0.9.0.1 and 0.10.2.1 versions.
The error is like:
java.io.IOException: Too many open files
at …
5-node Kafka cluster: bare metal Ubuntu 14.04.x LTS boxes with 64GB RAM, 8
cores, and 960GB SSDs; a single node in the cluster is filling its logs with
the following:
[2016-09-12 09:34:49,522] ERROR Error while accepting connection
(kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at …
…to start figuring out what's open and why.
-Steve
On Jul 31, 2016, at 4:14 PM, Kessiler Rodrigues wrote:
> I’m still experiencing this issue…
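Steve's suggestion is truncated in the archive; a sketch of that kind of inspection, grouping the broker's descriptors by type to see whether log segments or sockets dominate (PID lookup assumed):

BROKER_PID=$(pgrep -f kafka.Kafka | head -n1)
lsof -nP -p "$BROKER_PID" | awk '{print $5}' | sort | uniq -c | sort -rn
# a large REG count points at log segments/indexes; large IPv4/sock counts point at connections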
Maybe you are exhausting your sockets, not file handles for some reason?
From: Kessiler Rodrigues [kessi...@callinize.com]
Sent: 31 July 2016 22:14
To: users@kafka.apache.org
Subject: Re: Too Many Open Files
I’m still experiencing this issue…
Here are the kafka logs.
[2016-07-31 20:10:35,658] ERROR Error while accepting connection
(kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at …
We normally don't see more than maybe 4000 per broker and most clusters have a
lot fewer, so consider adding brokers and spreading partitions around a bit.
Gwen
On Fri, Jul 29, 2016 at 12:00 PM, Kessiler Rodrigues wrote:
Hi guys,
I have been experiencing some issues on Kafka, where it's throwing too many
open files.
I have around 6k topics, with 5 partitions each.
My cluster is made up of 6 brokers. All of them are running Ubuntu 16 and the
file limit settings are:
`cat /proc/sys/fs/file-max`
200…
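Rough arithmetic on those numbers shows why the partition count matters (assuming an even spread, replication factor 1, and one active segment per partition; none of this is stated in the thread):

# 6000 topics * 5 partitions / 6 brokers = 5000 partitions per broker
# each active segment holds at least a .log and a .index descriptor open:
# 5000 * 2 = 10000 descriptors for log files alone, before a single socket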
…in /etc/sysctl.conf
Gwen
On Thu, Jan 15, 2015 at 12:30 PM, Sa Li wrote:
Hi, all
We are testing our production Kafka and getting this error:
[2015-01-15 19:03:45,057] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at …
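Gwen's exact line was cut off above; a typical /etc/sysctl.conf entry of that kind looks like this (the value is an illustrative assumption):

# /etc/sysctl.conf
fs.file-max = 200000
sysctl -p   # apply without rebooting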
…at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:352)
at FdTest$ClientThread.run(FdTest.java:108)
But all I have to do is sleep for a bit on the client, and then retry again.
However, 4K does seem like a magic number, since that seems to be the number
that the Kafka broker machine can handle before it gives me the “Too Many Open
Files” error and eventually crashes.
Paul Lung
On 7/8/14, 9:29 PM, "Jun Rao" wrote:
>Does your test program run as the same user as the Kafka broker?
>
>Thanks,
>
>Jun
>
>On Tue, Jul 8, …
…Kafka broker, I’m really confused as to why I’m seeing this error. Is there
some internal Kafka limit that I don’t know about?
Paul Lung
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
at …
…-l /proc//fd'
On Tue, Jun 24, 2014 at 10:18 PM, Lung, Paul wrote:
Hi All,
I just upgraded my cluster from 0.8.1 to 0.8.1.1. I’m seeing the following
error messages on the same 3 brokers once in a while:
[2014-06-24 21:43:44,711] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
at kafka.network.Acceptor.accept(SocketServer.scala:200)
at …
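The '-l /proc//fd' suggestion above lost its PID in the archive; the usual form of that check is something like (PID lookup assumed):

ls -l /proc/$(pgrep -f kafka.Kafka | head -n1)/fd | wc -l   # count the broker's open descriptors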
…at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(Unknown Source)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(Unknown Source)
at kafka.utils.Utils$.inLock(Unknown Source)
at kafka.server.AbstractFetcherThread.processFetchRequest(Unknown Source)
at kafka.server.AbstractFetcherThread.doWork(Unknown Source)
at kafka.utils.ShutdownableThread.run(Unknown Source)
Caused by: java.io.FileNotFoundException:
/disk1/kafka-logs/perf1-4/00010558.index (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
…getting higher.
Best regards,
Nicolas Berthet
-Original Message-
From: Mark [mailto:static.void@gmail.com]
Sent: Saturday, September 28, 2013 12:35 AM
To: users@kafka.apache.org
Subject: Re: Too many open files
No, this is all within …
…closer to the solution to my issue.
Are you also experiencing the issue in a cross data center context?
Best regards,
Nicolas Berthet
-Original Message-
From: Mark [mailto:static.void@gmail.com]
Sent: Friday, September 27, 2013 6:08 AM
To: users@kafka.apache.org
Subject: Re: Too many open files
What OS settings did you change? How high is …
…observation for the time being.
Note that, for clients in the same datacenter, we didn't see this issue; the
socket count matches on both ends.
Nicolas Berthet
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Thursday, September 26, 2013 12:39 PM
To: users@kafka.apache.org
Subject: Re: Too many open files
If a client is gone, the broker should automatically close those broken
sockets. Are you using a hardware load balancer?
Thanks,
Jun
On Wed, Sep 25, 2013 at 4:48 PM, Mark wrote:
FYI if I kill all producers I don't see the number of open files drop. I still
see all the ESTABLISHED connections.
Is there a broker setting to automatically kill any inactive TCP connections?
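Later brokers did add exactly such a setting, connections.max.idle.ms (it arrived around 0.8.2, a version worth verifying; the 0.7.2 cluster discussed here predates it):

# server.properties
connections.max.idle.ms=600000   # broker closes connections idle longer than 10 minutes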
Any other ideas?
On Sep 25, 2013, at 9:06 AM, Jun Rao wrote:
We haven't seen any socket leaks with the java producer. If you have lots
of unexplained socket connections in established mode, one possible cause
is that the client created new producer instances, but didn't close the old
ones.
Thanks,
Jun
On Wed, Sep 25, 2013 at 6:08 AM, Mark wrote:
No. We are using the kafka-rb ruby gem producer.
https://github.com/acrosa/kafka-rb
Now that you asked that question I need to ask. Is there a problem with the
java producer?
Sent from my iPhone
> On Sep 24, 2013, at 9:01 PM, Jun Rao wrote:
Are you using the java producer client?
Thanks,
Jun
On Tue, Sep 24, 2013 at 5:33 PM, Mark wrote:
Our 0.7.2 Kafka cluster keeps crashing with:
2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in acceptor
java.io.IOException: Too many open files
The obvious fix is to bump up the number of open files, but I'm wondering if
there is a leak on the Kafka side and/or our application…
If you do netstat, what hosts are those connections for and what state are
those connections in?
Thanks,
Jun
On Thu, Aug 1, 2013 at 9:04 AM, Nandigam, Sujitha wrote:
Hi,
In the producer I was continuously getting this exception:
java.net.SocketException: Too many open files
even though I added the below line to /etc/security/limits.conf:
… nofile 983040
ERROR Producer connection to localhost:9093 unsuccessful
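Jun's earlier netstat suggestion, as a concrete sketch (the broker port 9092 is an assumption; this thread also mentions 9093):

netstat -tn | awk '{print $6}' | sort | uniq -c | sort -rn   # connection counts by TCP state
netstat -tn | grep ':9092' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn   # connections per peer host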