RE: Too many open files in kafka 0.9

2017-12-07 Thread REYMOND Jean-max (BPCE-IT - SYNCHRONE TECHNOLOGIES)
There is KAFKA-3317 which is still open. Have you seen this? http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+ On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE TECHNOL

RE: Too many open files in kafka 0.9

2017-11-30 Thread REYMOND Jean-max (BPCE-IT - SYNCHRONE TECHNOLOGIES)
difference with the other brokers, so is it safe to remove these __consumer_offsets-XX directories if they have not been accessed for one day? -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Wednesday, November 29, 2017 19:41 To: users@kafka.apache.org Subject: Re: Too many open

Re: Too many open files in kafka 0.9

2017-11-29 Thread Ted Yu
There is KAFKA-3317 which is still open. Have you seen this? http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+ On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE TECHNOLOGIES) wrote: > We have a cluster w

Re: Too many open files

2016-09-14 Thread Jaikiran Pai
What does the output of: lsof -p <pid> show on that specific node? -Jaikiran On Monday 12 September 2016 10:03 PM, Michael Sparr wrote: 5-node Kafka cluster, bare metal, Ubuntu 14.04.x LTS with 64GB RAM, 8-core, 960GB SSD boxes and a single node in the cluster is filling logs with the following: [20

Re: Too Many Open Files

2016-08-01 Thread Thakrar, Jayesh
What are the producers/consumers for the Kafka cluster? Remember that it's not just files but also sockets that add to the count. I had seen issues when we had a network switch problem and had Storm consumers. The switch would cause issues in connectivity between Kafka brokers, zookeepers and clie

Re: Too Many Open Files

2016-08-01 Thread Scott Thibault
Did you verify that the process has the correct limit applied? cat /proc/<pid>/limits --Scott Thibault On Sun, Jul 31, 2016 at 4:14 PM, Kessiler Rodrigues wrote: > I’m still experiencing this issue… > > Here are the kafka logs. > > [2016-07-31 20:10:35,658] ERROR Error while accepting connection >

Re: Too Many Open Files

2016-08-01 Thread Kessiler Rodrigues
Hey guys, I got a solution for this. The kafka process wasn’t getting the limits config because I was running it under supervisor. I changed it and right now I’m using systemd to put kafka up and running! On systemd services you can set up your FD limit using a property called “LimitNOFILE”. Th

Re: Too Many Open Files

2016-08-01 Thread Anirudh P
I agree with Steve. We had a similar problem where we set the ulimit to a certain value but it was getting overridden. It only worked when we set the ulimit after logging in as root. You might want to give that a try if you have not done so already - Anirudh On Mon, Aug 1, 2016 at 1:19 PM, Steve

Re: Too Many Open Files

2016-08-01 Thread Steve Miller
Can you run lsof -p (pid) for whatever the pid is for your Kafka process? For the fd limits you've set, I don't think subtlety is required: if there are a million-ish lines in the output, the fd limit you set is where you think it is, and if there are a lot fewer than that, the limit isn't being applied

Re: Too Many Open Files

2016-07-31 Thread Chris Richardson
Gwen, Is there any particular reason why "inactive" (no consumers or producers for a topic) files need to be open? Chris -- Learn microservices - http://learnmicroservices.io Microservices application platform http://eventuate.io On Fri, Jul 29, 2016 at 6:33 PM, Gwen Shapira wrote: > woah,

RE: Too Many Open Files

2016-07-31 Thread Krzysztof Nawara
Maybe you are exhausting your sockets, not file handles for some reason? From: Kessiler Rodrigues [kessi...@callinize.com] Sent: 31 July 2016 22:14 To: users@kafka.apache.org Subject: Re: Too Many Open Files I’m still experiencing this issue… Here are

Re: Too Many Open Files

2016-07-31 Thread Kessiler Rodrigues
I’m still experiencing this issue… Here are the kafka logs. [2016-07-31 20:10:35,658] ERROR Error while accepting connection (kafka.network.Acceptor) java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketC

Re: Too Many Open Files

2016-07-30 Thread Kessiler Rodrigues
I have changed it a bit. I have 10 brokers and 20k topics with 1 partition each. I looked at kafka’s log dir and I only have 3318 files. I’m doing some tests to see how many topics/partitions I can have, but it starts throwing too many open files once it hits 15k topics.. Any thoughts? > On Jul

Re: Too Many Open Files

2016-07-29 Thread Gwen Shapira
woah, it looks like you have 15,000 replicas per broker? You can go into the directory you configured for kafka's log.dir and see how many files you have there. Depending on your segment size and retention policy, you could have hundreds of files per partition there... Make sure you have at least

Re: Too Many Open Files Broker Error

2014-07-10 Thread Lung, Paul
Hi Jun, That was the problem. It was actually the Ubuntu upstart job overwriting the limit. Thank you very much for your help. Paul Lung On 7/9/14, 1:58 PM, "Jun Rao" wrote: >Is it possible your container wrapper somehow overrides the file handler >limit? > >Thanks, > >Jun > > >On Wed, Jul 9,

Re: Too Many Open Files Broker Error

2014-07-09 Thread Jun Rao
Is it possible your container wrapper somehow overrides the file handler limit? Thanks, Jun On Wed, Jul 9, 2014 at 9:59 AM, Lung, Paul wrote: > Yup. In fact, I just ran the test program again while the Kafka broker is > still running, using the same user of course. I was able to get up to 10K

Re: Too Many Open Files Broker Error

2014-07-09 Thread François Langelier
I don't know if that is your problem, but I had this output when my brokers couldn't talk to each other... ZooKeeper was using the FQDN but my brokers didn't know the FQDN of the other brokers... If you look at your brokers' info in zk (get /brokers/ids/#ID_OF_BROKER) can you ping/connect to

Re: Too Many Open Files Broker Error

2014-07-09 Thread hsy...@gmail.com
I have the same problem. I didn't dig deeper, but I saw this happen when I launched kafka in daemon mode. I found that daemon mode just launches kafka with nohup. Not quite clear why this happens. On Wed, Jul 9, 2014 at 9:59 AM, Lung, Paul wrote: > Yup. In fact, I just ran the test program again wh

Re: Too Many Open Files Broker Error

2014-07-09 Thread Lung, Paul
Yup. In fact, I just ran the test program again while the Kafka broker is still running, using the same user of course. I was able to get up to 10K connections with the test program. The test program uses the same java NIO library that the broker does. So the machine is capable of handling that man

Re: Too Many Open Files Broker Error

2014-07-08 Thread Jun Rao
Does your test program run as the same user as the Kafka broker? Thanks, Jun On Tue, Jul 8, 2014 at 1:42 PM, Lung, Paul wrote: > Hi Guys, > > I’m seeing the following errors from the 0.8.1.1 broker. This occurs most > often on the Controller machine. Then the controller process crashes, and > the

Re: Too Many Open Files Broker Error

2014-07-08 Thread Lung, Paul
Hit the send button too fast. I verified the number of open file descriptors from the broker by using “sudo lsof -p <pid>”, and by using “sudo ls -l /proc/<pid>/fd | wc -l”. Paul On 7/8/14, 1:42 PM, "Lung, Paul" wrote: >Hi Guys, > >I’m seeing the following errors from the 0.8.1.1 broker. This occurs most

Re: too many open files - broker died

2013-11-02 Thread Kane Kane
Thanks, Jun. On Sat, Nov 2, 2013 at 8:31 PM, Jun Rao wrote: > The # of required open file handlers is # client socket connections + # log > segment and index files. > > Thanks, > > Jun > > > On Fri, Nov 1, 2013 at 10:28 PM, Kane Kane wrote: > >> I had only 1 topic with 45 partitions replicated a

Re: too many open files - broker died

2013-11-02 Thread Jun Rao
The # of required open file handlers is # client socket connections + # log segment and index files. Thanks, Jun On Fri, Nov 1, 2013 at 10:28 PM, Kane Kane wrote: > I had only 1 topic with 45 partitions replicated across 3 brokers. > After several hours of uploading some data to kafka 1 broke

Re: Too many open files

2013-10-04 Thread Florian Weingarten
g the issue in a cross data center context? > > Best regards, > > Nicolas Berthet > > > -Original Message- > From: Mark [mailto:static.void....@gmail.com] > Sent: Friday, September 27, 2013 6:08 AM > To: users@kafka.apache.org > Subject: Re: Too many open files >

RE: Too many open files

2013-10-04 Thread Nicolas Berthet
getting higher. Best regards, Nicolas Berthet -Original Message- From: Mark [mailto:static.void@gmail.com] Sent: Saturday, September 28, 2013 12:35 AM To: users@kafka.apache.org Subject: Re: Too many open files No, this is all within

Re: Too many open files

2013-09-27 Thread Mark
s, > > Nicolas Berthet > > > -Original Message- > From: Mark [mailto:static.void@gmail.com] > Sent: Friday, September 27, 2013 6:08 AM > To: users@kafka.apache.org > Subject: Re: Too many open files > > What OS settings did you change? How high is

RE: Too many open files

2013-09-26 Thread Nicolas Berthet
ser to the solution to my issue. Are you also experiencing the issue in a cross data center context? Best regards, Nicolas Berthet -Original Message- From: Mark [mailto:static.void@gmail.com] Sent: Friday, September 27, 2013 6:08 AM To: users@kafka.apache.org Subject: Re: Too

Re: Too many open files

2013-09-26 Thread Mark
observation for the time being. > > Note that, for clients in the same datacenter, we didn't see this issue, the > socket count matches on both ends. > > Nicolas Berthet > > -Original Message- > From: Jun Rao [mailto:jun...@gmail.com] > Sent: Thursday, S

Re: Too many open files

2013-09-26 Thread Mark
servation for the time being. >> >> Note that, for clients in the same datacenter, we didn't see this issue, >> the socket count matches on both ends. >> >> Nicolas Berthet >> >> -Original Message- >> From: Jun Rao [mailto:jun...@gmail.com]

Re: Too many open files

2013-09-26 Thread Jun Rao
to:jun...@gmail.com] > Sent: Thursday, September 26, 2013 12:39 PM > To: users@kafka.apache.org > Subject: Re: Too many open files > > If a client is gone, the broker should automatically close those broken > sockets. Are you using a hardware load balancer? > > Thanks, >

RE: Too many open files

2013-09-25 Thread Nicolas Berthet
Jun Rao [mailto:jun...@gmail.com] Sent: Thursday, September 26, 2013 12:39 PM To: users@kafka.apache.org Subject: Re: Too many open files If a client is gone, the broker should automatically close those broken sockets. Are you using a hardware load balancer? Thanks, Jun On Wed, Sep 25, 2013 at

Re: Too many open files

2013-09-25 Thread Jun Rao
If a client is gone, the broker should automatically close those broken sockets. Are you using a hardware load balancer? Thanks, Jun On Wed, Sep 25, 2013 at 4:48 PM, Mark wrote: > FYI if I kill all producers I don't see the number of open files drop. I > still see all the ESTABLISHED connecti

Re: Too many open files

2013-09-25 Thread Mark
FYI if I kill all producers I don't see the number of open files drop. I still see all the ESTABLISHED connections. Is there a broker setting to automatically kill any inactive TCP connections? On Sep 25, 2013, at 4:30 PM, Mark wrote: > Any other ideas? > > On Sep 25, 2013, at 9:06 AM, Jun R

Re: Too many open files

2013-09-25 Thread Mark
Any other ideas? On Sep 25, 2013, at 9:06 AM, Jun Rao wrote: > We haven't seen any socket leaks with the java producer. If you have lots > of unexplained socket connections in established mode, one possible cause > is that the client created new producer instances, but didn't close the old > one

Re: Too many open files

2013-09-25 Thread Jun Rao
We haven't seen any socket leaks with the java producer. If you have lots of unexplained socket connections in established mode, one possible cause is that the client created new producer instances, but didn't close the old ones. Thanks, Jun On Wed, Sep 25, 2013 at 6:08 AM, Mark wrote: > No.

Re: Too many open files

2013-09-25 Thread Mark
No. We are using the kafka-rb ruby gem producer. https://github.com/acrosa/kafka-rb Now that you've asked that question, I need to ask: is there a problem with the java producer? Sent from my iPhone > On Sep 24, 2013, at 9:01 PM, Jun Rao wrote: > > Are you using the java producer client? > > Th

Re: Too many open files

2013-09-24 Thread Jun Rao
Are you using the java producer client? Thanks, Jun On Tue, Sep 24, 2013 at 5:33 PM, Mark wrote: > Our 0.7.2 Kafka cluster keeps crashing with: > > 2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in > acceptor > java.io.IOException: Too many open > > The obvious fix i