Too many open files in kafka 0.9
There is KAFKA-3317 which is still open.
Have you seen this ?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOL
difference with the other brokers, so is it safe to remove these
__consumer_offsets-XX directories if they have not been accessed for a day?
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, November 29, 2017 7:41 PM
To: users@kafka.apache.org
Subject: Re: Too many open
There is KAFKA-3317 which is still open.
Have you seen this ?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOLOGIES) wrote:
> We have a cluster w
What does the output of:
lsof -p <pid>
show on that specific node?
-Jaikiran
On Monday 12 September 2016 10:03 PM, Michael Sparr wrote:
5-node Kafka cluster, bare metal, Ubuntu 14.04.x LTS with 64GB RAM, 8-core,
960GB SSD boxes, and a single node in the cluster is filling logs with the following:
[20
What does the output of:
lsof -p <pid>
show?
-Jaikiran
On Monday 12 September 2016 10:03 PM, Michael Sparr wrote:
5-node Kafka cluster, bare metal, Ubuntu 14.04.x LTS with 64GB RAM, 8-core,
960GB SSD boxes, and a single node in the cluster is filling logs with the following:
[2016-09-12 09:34:49,522]
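To turn that lsof listing into a number rather than a wall of output, it can simply be counted; a sketch, assuming the broker was started with the usual kafka.Kafka main class so its PID can be found with pgrep:

  KAFKA_PID=$(pgrep -f kafka.Kafka)    # assumption: standard start script / main class
  lsof -p "$KAFKA_PID" | wc -l         # roughly the number of open fds (plus one header line)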
What are the producers/consumers for the Kafka cluster?
Remember that it's not just files but also sockets that add to the count.
I had seen issues when we had a network switch problem and had Storm consumers.
The switch would cause issues in connectivity between Kafka brokers, zookeepers
and clie
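To see whether sockets or regular files dominate, the same lsof output can be grouped by its TYPE column (REG for files, IPv4/IPv6 for sockets); a sketch, reusing the $KAFKA_PID placeholder from above:

  lsof -p "$KAFKA_PID" | awk 'NR>1 {print $5}' | sort | uniq -c | sort -rn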
Did you verify that the process has the correct limit applied?
cat /proc/<pid>/limits
--Scott Thibault
On Sun, Jul 31, 2016 at 4:14 PM, Kessiler Rodrigues
wrote:
> I’m still experiencing this issue…
>
> Here are the kafka logs.
>
> [2016-07-31 20:10:35,658] ERROR Error while accepting connection
>
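To confirm what Scott suggests, the limit the running broker actually got can be read straight from proc; a sketch, with the PID as a placeholder:

  grep 'Max open files' /proc/<pid>/limits   # shows the soft and hard limits applied to that process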
Hey guys
I got a solution for this. The kafka process wasn’t getting the limits config
because I was running it under supervisor.
I changed it and right now I’m using systemd to put kafka up and running!
On systemd services you can set up your FD limit using a property called
“LimitNOFILE”.
Th
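A minimal sketch of that systemd approach, assuming a unit file such as /etc/systemd/system/kafka.service (the unit name and the limit value are illustrative, not from the original post):

  [Service]
  LimitNOFILE=128000

  # then reload and restart so the new limit is picked up
  systemctl daemon-reload
  systemctl restart kafka

Unlike a ulimit set in a shell profile, a limit set this way applies no matter which user or wrapper launches the broker.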
I agree with Steve. We had a similar problem where we set the ulimit to a
certain value but it was getting overridden.
It only worked when we set the ulimit after logging in as root. You might
want to give that a try if you have not done so already.
- Anirudh
On Mon, Aug 1, 2016 at 1:19 PM, Steve
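If the broker is started from a login shell rather than systemd, the per-user limit can instead be raised in /etc/security/limits.conf; a sketch, assuming the broker runs as a user named kafka (user name and values are illustrative):

  kafka  soft  nofile  128000
  kafka  hard  nofile  128000

This is applied by pam_limits at login, which fits the observation above that the value only stuck after logging in as root; processes started by supervisor or an init job that bypasses PAM will not pick it up.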
Can you run lsof -p (pid) for whatever the pid is for your Kafka process?
For the fd limits you've set, I don't think subtlety is required: if there are a
million-ish lines in the output, the fd limit you set is where you think it is,
and if it's a lot lower than that, the limit isn't being applied.
Gwen,
Is there any particular reason why "inactive" (no consumers or producers
for a topic) files need to be open?
Chris
--
Learn microservices - http://learnmicroservices.io
Microservices application platform http://eventuate.io
On Fri, Jul 29, 2016 at 6:33 PM, Gwen Shapira wrote:
> woah,
Maybe you are exhausting your sockets, not file handles for some reason?
From: Kessiler Rodrigues [kessi...@callinize.com]
Sent: 31 July 2016 22:14
To: users@kafka.apache.org
Subject: Re: Too Many Open Files
I’m still experiencing this issue…
Here are
I’m still experiencing this issue…
Here are the kafka logs.
[2016-07-31 20:10:35,658] ERROR Error while accepting connection
(kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at
sun.nio.ch.ServerSocketC
I have changed it a bit.
I have 10 brokers and 20k topics with 1 partition each.
I looked at kafka’s log dir and I only have 3318 files.
I’m doing some tests to see how many topics/partitions I can have, but it is
throwing “too many open files” once it hits 15k topics..
Any thoughts?
> On Jul
woah, it looks like you have 15,000 replicas per broker?
You can go into the directory you configured for kafka's log.dir and
see how many files you have there. Depending on your segment size and
retention policy, you could have hundreds of files per partition
there...
Make sure you have at least
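To see how much of the fd count is segment and index files alone, counting the files under the configured log.dirs gives a quick answer; a sketch, with the path being an assumption:

  find /var/lib/kafka/logs -type f \( -name '*.log' -o -name '*.index' \) | wc -l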
Hi Jun,
That was the problem. It was actually the Ubuntu upstart job overwriting
the limit. Thank you very much for your help.
Paul Lung
On 7/9/14, 1:58 PM, "Jun Rao" wrote:
>Is it possible your container wrapper somehow overrides the file handler
>limit?
>
>Thanks,
>
>Jun
>
>
>On Wed, Jul 9,
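For anyone else hitting this with upstart: the job file itself has to raise the limit, otherwise whatever was set in the shell or in limits.conf is ignored for the daemon. A sketch of the relevant stanza, e.g. in /etc/init/kafka.conf (file name and values are illustrative):

  limit nofile 128000 128000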
Is it possible your container wrapper somehow overrides the file handler
limit?
Thanks,
Jun
On Wed, Jul 9, 2014 at 9:59 AM, Lung, Paul wrote:
> Yup. In fact, I just ran the test program again while the Kafka broker is
> still running, using the same user of course. I was able to get up to 10K
I don't know if that is your problem, but I had this output when my brokers
couldn't talk to each other...
The zookeeper entries were using the FQDN but my brokers didn't know the FQDN of
the other brokers...
If you look at your brokers' info in zk (get /brokers/ids/#ID_OF_BROKER) can
you ping/connect to
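The host a broker registered can be checked with the zookeeper shell that ships with Kafka; a sketch, with the connect string and broker id as placeholders:

  bin/zookeeper-shell.sh zk-host:2181
  get /brokers/ids/0     # typed at the shell prompt; the JSON it returns includes the registered host

If that host is an FQDN the other brokers cannot resolve or reach, this is the connectivity problem described above.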
I have the same problem. I didn't dig deeper, but I saw this happen when I
launched kafka in daemon mode. I found that daemon mode just launches kafka
with nohup. Not quite clear why this happens.
On Wed, Jul 9, 2014 at 9:59 AM, Lung, Paul wrote:
> Yup. In fact, I just ran the test program again wh
Yup. In fact, I just ran the test program again while the Kafka broker is
still running, using the same user of course. I was able to get up to 10K
connections with the test program. The test program uses the same java NIO
library that the broker does. So the machine is capable of handling that
man
Does your test program run as the same user as Kafka broker?
Thanks,
Jun
On Tue, Jul 8, 2014 at 1:42 PM, Lung, Paul wrote:
> Hi Guys,
>
> I’m seeing the following errors from the 0.8.1.1 broker. This occurs most
> often on the Controller machine. Then the controller process crashes, and
> the
Hit the send button too fast. I verified the number of open file
descriptors from the broker by using “sudo lsof -p <pid>”, and by using “sudo ls
-l /proc/<pid>/fd | wc -l”.
Paul
On 7/8/14, 1:42 PM, "Lung, Paul" wrote:
>Hi Guys,
>
>I’m seeing the following errors from the 0.8.1.1 broker. This occurs most
Thanks, Jun.
On Sat, Nov 2, 2013 at 8:31 PM, Jun Rao wrote:
> The # of required open file handlers is # client socket connections + # log
> segment and index files.
>
> Thanks,
>
> Jun
>
>
> On Fri, Nov 1, 2013 at 10:28 PM, Kane Kane wrote:
>
>> I had only 1 topic with 45 partitions replicated a
The # of required open file handlers is # client socket connections + # log
segment and index files.
Thanks,
Jun
On Fri, Nov 1, 2013 at 10:28 PM, Kane Kane wrote:
> I had only 1 topic with 45 partitions replicated across 3 brokers.
> After several hours of uploading some data to kafka 1 broke
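A rough worked example of that formula against the 20k-topic setup mentioned earlier in the thread: with 20,000 partitions and at least one .log plus one .index file per partition,

  20,000 partitions x 2 files = 40,000 descriptors

before counting a single client or inter-broker socket, so a default limit of 1024 or 4096 is exhausted well before 15k topics are reached.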
g the issue in a cross data center context?
>
> Best regards,
>
> Nicolas Berthet
>
>
> -Original Message-
> From: Mark [mailto:static.void....@gmail.com]
> Sent: Friday, September 27, 2013 6:08 AM
> To: users@kafka.apache.org
> Subject: Re: Too many open files
>
getting higher.
Best regards,
Nicolas Berthet
-Original Message-
From: Mark [mailto:static.void@gmail.com]
Sent: Saturday, September 28, 2013 12:35 AM
To: users@kafka.apache.org
Subject: Re: Too many open files
No, this is all within
s,
>
> Nicolas Berthet
>
>
> -Original Message-
> From: Mark [mailto:static.void@gmail.com]
> Sent: Friday, September 27, 2013 6:08 AM
> To: users@kafka.apache.org
> Subject: Re: Too many open files
>
> What OS settings did you change? How high is
ser to the solution to my issue.
Are you also experiencing the issue in a cross data center context?
Best regards,
Nicolas Berthet
-Original Message-
From: Mark [mailto:static.void@gmail.com]
Sent: Friday, September 27, 2013 6:08 AM
To: users@kafka.apache.org
Subject: Re: Too
observation for the time being.
>
> Note that, for clients in the same datacenter, we didn't see this issue, the
> socket count matches on both ends.
>
> Nicolas Berthet
>
> -Original Message-
> From: Jun Rao [mailto:jun...@gmail.com]
> Sent: Thursday, S
servation for the time being.
>>
>> Note that, for clients in the same datacenter, we didn't see this issue,
>> the socket count matches on both ends.
>>
>> Nicolas Berthet
>>
>> -Original Message-
>> From: Jun Rao [mailto:jun...@gmail.com]
to:jun...@gmail.com]
> Sent: Thursday, September 26, 2013 12:39 PM
> To: users@kafka.apache.org
> Subject: Re: Too many open files
>
> If a client is gone, the broker should automatically close those broken
> sockets. Are you using a hardware load balancer?
>
> Thanks,
>
Jun Rao [mailto:jun...@gmail.com]
Sent: Thursday, September 26, 2013 12:39 PM
To: users@kafka.apache.org
Subject: Re: Too many open files
If a client is gone, the broker should automatically close those broken
sockets. Are you using a hardware load balancer?
Thanks,
Jun
On Wed, Sep 25, 2013 at
If a client is gone, the broker should automatically close those broken
sockets. Are you using a hardware load balancer?
Thanks,
Jun
On Wed, Sep 25, 2013 at 4:48 PM, Mark wrote:
> FYI if I kill all producers I don't see the number of open files drop. I
> still see all the ESTABLISHED connecti
FYI if I kill all producers I don't see the number of open files drop. I still
see all the ESTABLISHED connections.
Is there a broker setting to automatically kill any inactive TCP connections?
On Sep 25, 2013, at 4:30 PM, Mark wrote:
> Any other ideas?
>
> On Sep 25, 2013, at 9:06 AM, Jun R
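One way to see whether it really is sockets piling up is to count ESTABLISHED connections on the broker port; a sketch, with 9092 as the assumed listener port:

  netstat -tn | grep ':9092' | grep -c ESTABLISHED

If that number keeps growing after producers are killed, the leak is on the socket side rather than in log segment files.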
Any other ideas?
On Sep 25, 2013, at 9:06 AM, Jun Rao wrote:
> We haven't seen any socket leaks with the java producer. If you have lots
> of unexplained socket connections in established mode, one possible cause
> is that the client created new producer instances, but didn't close the old
> one
We haven't seen any socket leaks with the java producer. If you have lots
of unexplained socket connections in established mode, one possible cause
is that the client created new producer instances, but didn't close the old
ones.
Thanks,
Jun
On Wed, Sep 25, 2013 at 6:08 AM, Mark wrote:
> No.
No. We are using the kafka-rb ruby gem producer.
https://github.com/acrosa/kafka-rb
Now that you asked that question, I need to ask: is there a problem with the
java producer?
Sent from my iPhone
> On Sep 24, 2013, at 9:01 PM, Jun Rao wrote:
>
> Are you using the java producer client?
>
> Th
Are you using the java producer client?
Thanks,
Jun
On Tue, Sep 24, 2013 at 5:33 PM, Mark wrote:
> Our 0.7.2 Kafka cluster keeps crashing with:
>
> 2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in
> acceptor
> java.io.IOException: Too many open
>
> The obvious fix i