Re: Too many open files

2018-01-22 Thread Jeff Jirsa
> From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay > Mihaylov > Sent: Monday, January 22, 2018 11:47 AM > To: user@cassandra.apache.org > Subject: Re: Too many open files > > You can increase system open files, > also if you compact, open files

RE: Too many open files

2018-01-22 Thread Andreou, Arys (Nokia - GR/Athens)
a global session object or to create it and shut it down for every request? From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay Mihaylov Sent: Monday, January 22, 2018 11:47 AM To: user@cassandra.apache.org Subject: Re: Too many open files You can increase system open

Re: Too many open files

2018-01-22 Thread Nikolay Mihaylov
k connection in the calculation (everything > in *nix is a file). If it makes you feel better my laptop > has 40k open files for Chrome.. > > On Sun, Jan 21, 2018 at 11:59 PM, Andreou, Arys (Nokia - GR/Athens) < > arys.andr...@nokia.com> wrote: > >> Hi, >> >

Re: Too many open files

2018-01-22 Thread Dor Laor
2018 at 11:59 PM, Andreou, Arys (Nokia - GR/Athens) < arys.andr...@nokia.com> wrote: > Hi, > > > > I keep getting a “Last error: Too many open files” followed by a list of > node IPs. > > The output of “lsof -n|grep java|wc -l” is about 674970 on each node. >

Too many open files

2018-01-22 Thread Andreou, Arys (Nokia - GR/Athens)
Hi, I keep getting a "Last error: Too many open files" followed by a list of node IPs. The output of "lsof -n|grep java|wc -l" is about 674970 on each node. What is a normal number of open files? Thank you.

Re: Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread 郝加来
many connection ? 郝加来 From: Jason Lewis Date: 2015-11-07 10:38 To: user@cassandra.apache.org Subject: Re: Too many open files Cassandra 2.1.11.872 cat /proc/5980/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
> On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng >> wrote: >> >>> Is your compaction progressing as expected? If not, this may cause an >>> excessive number of tiny db files. Had a node refuse to start recently >>> because of this, had to temporarily

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Sebastian Estevez
, 2015 at 12:49 PM, Bryan Cheng > wrote: > >> Is your compaction progressing as expected? If not, this may cause an >> excessive number of tiny db files. Had a node refuse to start recently >> because of this, had to temporarily remove limits on that process. >> >

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Branton Davis
mpaction progressing as expected? If not, this may cause an > excessive number of tiny db files. Had a node refuse to start recently > because of this, had to temporarily remove limits on that process. > > On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis > wrote: > >> I'm getti

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Bryan Cheng
Is your compaction progressing as expected? If not, this may cause an excessive number of tiny db files. Had a node refuse to start recently because of this, had to temporarily remove limits on that process. On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis wrote: > I'm getting too many op

Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
I'm getting too many open files errors and I'm wondering what the cause may be. lsof -n | grep java show 1.4M files ~90k are inodes ~70k are pipes ~500k are cassandra services in /usr ~700K are the data files. What might be causing so many files to be open? jas

Re: too many open files

2014-08-09 Thread Andrew
Yes, that was the problem—I actually knew better, but had briefly overlooked this when I was doing some refactoring.  I am not the OP (although he himself realized his mistake). If you follow the thread, I was explaining that the Datastax Java driver allowed me to basically open a signific

Re: too many open files

2014-08-09 Thread Jonathan Haddad
It really doesn't need to be this complicated. You only need 1 session per application. It's thread safe and manages the connection pool for you. http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Session.html On Sat, Aug 9, 2014 at 1:29 PM, Kevin Burton wrote: > Another idea
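The "one Session per application" advice above can be sketched schematically. This is a hedged illustration of the pattern, not the actual DataStax driver API: `SessionHolder`, `FakeSession`, and `sessionsOpened` are hypothetical names standing in for a real `Session` and its connection pool.

```java
// Schematic sketch of "one Session per application" (names hypothetical,
// not the real DataStax driver API). FakeSession stands in for a driver
// Session whose construction opens a connection pool (and thus FDs).
public class SessionHolder {
    static int sessionsOpened = 0;           // how many "pools" were opened

    static class FakeSession {               // stand-in for a driver Session
        FakeSession() { sessionsOpened++; }
    }

    // One shared, lazily created session for the whole application.
    private static volatile FakeSession shared;

    static FakeSession getSession() {
        if (shared == null) {
            synchronized (SessionHolder.class) {
                if (shared == null) shared = new FakeSession();
            }
        }
        return shared;
    }

    public static void main(String[] args) {
        // Simulate 10,000 "requests": every request reuses the same session,
        // so only one underlying pool is ever opened -- unlike calling
        // cluster.connect() per request, which leaks descriptors.
        for (int i = 0; i < 10_000; i++) getSession();
        System.out.println("sessions opened: " + sessionsOpened);
    }
}
```

With the real driver the equivalent is: build the Cluster once at startup, call connect() once, and share the resulting Session across threads, since the Javadoc linked above documents it as thread-safe.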

Re: too many open files

2014-08-09 Thread Kevin Burton
Another idea to detect this is when the number of open sessions exceeds the number of threads. On Aug 9, 2014 10:59 AM, "Andrew" wrote: > I just had a generator that (in the incorrect way) had a cluster as a > member variable, and would call .connect() repeatedly. I _thought_, > incorrectly, tha

Re: too many open files

2014-08-09 Thread Marcelo Elias Del Valle
know Linux opens a FD for each connection received, and honestly I still don't know much about the details of this. When I got a "too many open files" error it took a good while to think about checking the connections. I think the documentation could point out this fact; it would help other

Re: too many open files

2014-08-09 Thread Andrew
I just had a generator that (in the incorrect way) had a cluster as a member variable, and would call .connect() repeatedly.  I _thought_, incorrectly, that the Session was thread unsafe, and so I should request a separate Session each time—obviously wrong in hindsight. There was no special lo

Re: too many open files

2014-08-09 Thread Andrew
Tyler, I’ll see if I can reproduce this on a local instance, but just in case, the error was basically—instead of storing the session in my connection factory, I stored a cluster and called “connect” each time I requested a Session.  I had defined a max/min number of connections for the connect

Re: too many open files

2014-08-09 Thread Jack Krupansky
From: Marcelo Elias Del Valle Sent: Saturday, August 9, 2014 12:41 AM To: user@cassandra.apache.org Subject: Re: too many open files Indeed, that was my mistake, that was exactly what we were doing in the code. []s 2014-08-09 0:56 GMT-03:00 Brian Zhang : For cassandra driver,session is just

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
Del Valle < >> marc...@s1mbi0se.com.br> wrote: >> >>> Hi, >>> >>> I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having >>> "too many open files" exceptions when I try to perform a large number of >>> operations in my

Re: too many open files

2014-08-08 Thread Brian Zhang
> Thank you, > Andrey > > > On Fri, Aug 8, 2014 at 12:35 PM, Marcelo Elias Del Valle > wrote: > Hi, > > I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having "too > many open files" exceptions when I try to perform a large nu

Re: too many open files

2014-08-08 Thread J. Ryan Earl
arc...@s1mbi0se.com.br> wrote: > >> Hi, >> >> I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having "too >> many open files" exceptions when I try to perform a large number of >> operations in my 10 node cluster. >> >> I saw th

Re: too many open files

2014-08-08 Thread Tyler Hobbs
On Fri, Aug 8, 2014 at 5:52 PM, Redmumba wrote: > Just to chime in, I also ran into this issue when I was migrating to the > Datastax client. Instead of reusing the session, I was opening a new > session each time. For some reason, even though I was still closing the > session on the client side,

Re: too many open files

2014-08-08 Thread Redmumba
I am not sure what I could do to solve the problem. >> >> Any hint about how to solve it? >> >> My client is written in python and uses Cassandra Python Driver. Here are >> the exceptions I am having in the client: >> [s1log] 2014-08-08 12:16:09,631 - cassandra.po

Re: too many open files

2014-08-08 Thread Andrey Ilinykh
e.com.br> wrote: > Hi, > > I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having "too > many open files" exceptions when I try to perform a large number of > operations in my 10 node cluster. > > I saw the documentation > http://www.datas

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
distribution ups the file handle limit to 10. That >>> number's hard to exceed. >>> >>> >>> >>> On Fri, Aug 8, 2014 at 1:35 PM, Marcelo Elias Del Valle < >>> marc...@s1mbi0se.com.br> wrote: >>> >>>> Hi, >&

Re: too many open files

2014-08-08 Thread Kevin Burton
>> >> On Fri, Aug 8, 2014 at 1:35 PM, Marcelo Elias Del Valle < >> marc...@s1mbi0se.com.br> wrote: >> >>> Hi, >>> >>> I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having >>> "too many open files" exceptions w

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
celo Elias Del Valle < > marc...@s1mbi0se.com.br> wrote: > >> Hi, >> >> I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having "too >> many open files" exceptions when I try to perform a large number of >> operations in my 10 node cluster. >

Re: too many open files

2014-08-08 Thread Shane Hansen
n Debian Wheezy, and I am having "too > many open files" exceptions when I try to perform a large number of > operations in my 10 node cluster. > > I saw the documentation > http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFi

too many open files

2014-08-08 Thread Marcelo Elias Del Valle
Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having "too many open files" exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troub

Re: Too Many Open Files (sockets) - VNodes - Map/Reduce Job

2014-06-04 Thread Michael Shuler
, We are running ElasticMapReduce Jobs from Amazon on a 25 nodes Cassandra cluster (with VNodes). Since we have increased the size of the cluster we are facing a too many open files (due to sockets) exception when creating the splits. Does anyone have an idea? Thanks, Here is the stacktrace: 14

Re: Getting into Too many open files issues

2013-11-20 Thread J. Ryan Earl
cation. > > Writes are doing well, but when it comes to reads I have observed that > Cassandra is getting into too many open files issues. When I check the logs > it is not able to open the Cassandra data files any more because of the file > descriptor limits. > > Can someone su

Re: Getting into Too many open files issues

2013-11-11 Thread Aaron Morton
Thursday, November 07, 2013 4:22 AM > To: user@cassandra.apache.org > Subject: RE: Getting into Too many open files issues > > Hi Murthy, > > 32768 is a bit low (I know datastax docs recommend this). But our production > env is now running on 1kk, or you can even pu

RE: Getting into Too many open files issues

2013-11-07 Thread Arindam Barua
, November 07, 2013 4:22 AM To: user@cassandra.apache.org Subject: RE: Getting into Too many open files issues Hi Murthy, 32768 is a bit low (I know datastax docs recommend this). But our production env is now running on 1kk, or you can even put it on unlimited. Pieter From: Murthy Chelankuri

RE: Getting into Too many open files issues

2013-11-07 Thread Pieter Callewaert
: Getting into Too many open files issues Thanks Pieter for the quick reply. I have downloaded the tarball and have changed limits.conf as per the documentation, like below:
* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard

Re: Getting into Too many open files issues

2013-11-07 Thread Murthy Chelankuri
However, with the 2.0.x I had to raise it to 1 000 000 because 100 000 was > too low. > > > > Kind regards, > > Pieter Callewaert > > > > *From:* Murthy Chelankuri [mailto:kmurt...@gmail.com] > *Sent:* Thursday, 7 November 2013 12:15 > *To:* user@cassandra

RE: Getting into Too many open files issues

2013-11-07 Thread Pieter Callewaert
@cassandra.apache.org Subject: Getting into Too many open files issues I have been experimenting with the latest Cassandra version for storing huge data in our application. Writes are doing well, but when it comes to reads I have observed that Cassandra is getting into too many open files issues. When I check the logs it is not

Getting into Too many open files issues

2013-11-07 Thread Murthy Chelankuri
I have been experimenting with the latest Cassandra version for storing huge data in our application. Writes are doing well, but when it comes to reads I have observed that Cassandra is getting into too many open files issues. When I check the logs it is not able to open the Cassandra data files any more before

Re: Too many open files with Cassandra 1.2.11

2013-10-31 Thread Aaron Morton
Oleg Dulin wrote: > Got this error: > > WARN [Thread-8] 2013-10-29 02:58:24,565 CustomTThreadPoolServer.java (line > 122) Transport error occurred during acceptance of message. >2 org.apache.thrift.transport.TTransportException: > java.net.SocketException: Too many open f

Re: Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Jon Haddad
es when I do some > stress testing (5 selects spread over multiple threads) > -Unexpected exception in the selector loop. Seems not related to > the Too many open files, it just happens. > -It’s not socket related. > -Using Oracle Java(TM) SE Ru

RE: Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Pieter Callewaert
Investigated a bit more:
-I can reproduce it, happened already on several nodes when I do some stress testing (5 selects spread over multiple threads)
-Unexpected exception in the selector loop. Seems not related to the Too many open files, it just ha

Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Pieter Callewaert
Hi, I've noticed some nodes in our cluster are dying after some period of time. WARN [New I/O server boss #17] 2013-10-29 12:22:20,725 Slf4JLogger.java (line 76) Failed to accept a connection. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(N

Too many open files with Cassandra 1.2.11

2013-10-29 Thread Oleg Dulin
Got this error: WARN [Thread-8] 2013-10-29 02:58:24,565 CustomTThreadPoolServer.java (line 122) Transport error occurred during acceptance of message. 2 org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files 3 at

Re: too many open files

2013-07-15 Thread Hiller, Dean
I believe "too many open files" is really too many open file descriptors, so you may want to check the number of open sockets as well to see if you hit the open file descriptor limit. Sockets open a descriptor and count toward the limit, I believe… I am quite rusty in this and this is from my bad
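Dean's point above (sockets count toward the descriptor limit) can be checked from inside the JVM itself. A sketch under the assumption of a HotSpot JVM on a Unix-like OS, where the platform MXBean implements com.sun.management.UnixOperatingSystemMXBean; the class and method names FdCheck/fdCounts are mine:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Reads the process's current and maximum file-descriptor counts
// (sockets included) via the HotSpot-specific Unix OS MXBean.
public class FdCheck {
    public static long[] fdCounts() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                (com.sun.management.UnixOperatingSystemMXBean) os;
            return new long[] { unix.getOpenFileDescriptorCount(),
                                unix.getMaxFileDescriptorCount() };
        }
        return new long[] { -1, -1 };  // not available on this platform/JVM
    }

    public static void main(String[] args) {
        long[] c = fdCounts();
        System.out.println("open fds: " + c[0] + " / limit: " + c[1]);
    }
}
```

Comparing the two numbers periodically (or alerting when the ratio climbs) catches both leaked sockets and runaway sstable handles before the "Too many open files" errors start.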

Re: too many open files

2013-07-15 Thread Brian Tarbox
each of my 6 CFs. ERROR [ReadStage:62384] 2013-07-14 18:04:26,062 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[ReadStage:62384,5,main] java.io.IOError: java.io.FileNotFoundException: /tmp_vol/cassandra/data/dev_a/portfoliodao/dev_a-portfoliodao-hf-166-Data.db (Too many

Re: too many open files

2013-07-15 Thread Michał Michalski
It doesn't tell you anything if a file ends with "ic-###", except pointing out the SSTable version it uses ("ic" in this case). Files related to a secondary index contain something like this in the filename: -., while "regular" CFs do not contain any dots except the one just before the file exte

Re: too many open files

2013-07-15 Thread Paul Ingalls
Also, looking through the log, it appears a lot of the files end with ic- which I assume is associated with a secondary index I have on the table. Are secondary indexes really expensive from a file descriptor standpoint? That particular table uses the default compaction scheme... On Jul 1

Re: too many open files

2013-07-15 Thread Paul Ingalls
I have one table that is using leveled. It was set to 10MB, I will try changing it to 256MB. Is there a good way to merge the existing sstables? On Jul 14, 2013, at 5:32 PM, Jonathan Haddad wrote: > Are you using leveled compaction? If so, what do you have the file size set > at? If you're

Re: too many open files

2013-07-14 Thread Jonathan Haddad
Are you using leveled compaction? If so, what do you have the file size set at? If you're using the defaults, you'll have a ton of really small files. I believe Albert Tobey recommended using 256MB for the table sstable_size_in_mb to avoid this problem. On Sun, Jul 14, 2013 at 5:10 PM, Paul In
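Jonathan's suggestion above would translate to something like the following CQL; this is a sketch in which the keyspace and table names (my_ks.my_table) are placeholders:

```sql
-- Raise the target SSTable size for leveled compaction so the node holds
-- far fewer, larger files. Keyspace/table names are hypothetical.
ALTER TABLE my_ks.my_table
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};
```

Existing small sstables are not rewritten immediately; they get merged into larger ones gradually as compaction proceeds after the change.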

too many open files

2013-07-14 Thread Paul Ingalls
I'm running into a problem where instances of my cluster are hitting over 450K open files. Is this normal for a 4 node 1.2.6 cluster with replication factor of 3 and about 50GB of data on each node? I can push the file descriptor limit up, but I plan on having a much larger load so I'm wonderi

RE: Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Desimpel, Ignace
[mailto:jeremy.hanna1...@gmail.com] Sent: Thursday, 27 June 2013 15:36 To: user@cassandra.apache.org Subject: Re: Too many open files and stopped compaction with many pending compaction tasks Are you on SSDs? On 27 Jun 2013, at 14:24, "Desimpel, Ignace" wrote: > On a test with 3 cass

Re: Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Jeremy Hanna
start querying (using thrift), I get a ’too many open files’ error > on the machine with pending compaction tasks. > > Limits.conf setting for nofile is 65536 > Using ‘lsof’ and ‘wc -l’ I get a count of 59577 files for Cassandra. > Total count of keyspace files on disk : 20464. >

Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Desimpel, Ignace
(via jmx). compaction_throughput_mb_per_sec is 0. Concurrent_compactors is 3. multithreaded_compaction = false. No other load on these machines. And when I start querying (using thrift), I get a 'too many open files' error on the machine with pending compaction tasks. Limits.conf s

Re: Too Many Open files error

2012-12-20 Thread aaron morton
> > On Thu, Dec 20, 2012 at 1:44 PM, Andrey Ilinykh wrote: > This bug is fixed in 1.1.5 > > Andrey > > > On Thu, Dec 20, 2012 at 12:01 AM, santi kumar wrote: > While running the nodetool repair, we are running into FileNotFoundException > with too many open files

Re: Too Many Open files error

2012-12-20 Thread Andrey Ilinykh
fixed in 1.1.5 >> >> Andrey >> >> >> On Thu, Dec 20, 2012 at 12:01 AM, santi kumar wrote: >> >>> While running the nodetool repair , we are running into >>> FileNotFoundException with too many open files error. We increased the >>> ulimit

Re: Too Many Open files error

2012-12-20 Thread santi kumar
, 2012 at 12:01 AM, santi kumar wrote: > >> While running the nodetool repair , we are running into >> FileNotFoundException with too many open files error. We increased the >> ulimit value to 32768, and still we have seen this issue. >> >> The number of files

Re: Too Many Open files error

2012-12-20 Thread Andrey Ilinykh
This bug is fixed in 1.1.5 Andrey On Thu, Dec 20, 2012 at 12:01 AM, santi kumar wrote: > While running the nodetool repair , we are running into > FileNotFoundException with too many open files error. We increased the > ulimit value to 32768, and still we have seen this issue. >

Too Many Open files error

2012-12-20 Thread santi kumar
While running nodetool repair, we are running into a FileNotFoundException with a too many open files error. We increased the ulimit value to 32768, and still we have seen this issue. The number of files in the data directory is around 29500+. If we further increase the ulimit, would it

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-19 Thread Thorsten von Eicken
ver which hit a wall >> yesterday: >> >> ERROR [CompactionExecutor:2918] 2012-01-12 20:37:06,327 >> AbstractCassandraDaemon.java (line 133) Fatal exception in thread >> Thread[CompactionExecutor:2918,1,main] java.io.IOError: >> java.io.FileNotFoundExc

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-18 Thread Janne Jalkanen
AbstractCassandraDaemon.java (line 133) Fatal exception in thread > Thread[CompactionExecutor:2918,1,main] java.io.IOError: > java.io.FileNotFoundException: > /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many > open files in system) > > After that it stopped wor

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-18 Thread Sylvain Lebresne
d[CompactionExecutor:2918,1,main] java.io.IOError: > java.io.FileNotFoundException: > /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many > open files in system) > > After that it stopped working and just sat there with this error > (understandable). I did an lso

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-17 Thread dir dir
; yesterday: > > ERROR [CompactionExecutor:2918] 2012-01-12 20:37:06,327 > AbstractCassandraDaemon.java (line 133) Fatal exception in thread > Thread[CompactionExecutor:2918,1,main] java.io.IOError: > java.io.FileNotFoundException: > /mnt/ebs/data/rslog_production/req_word_idx-hc-45

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-15 Thread aaron morton
actionExecutor:2918] 2012-01-12 20:37:06,327 > AbstractCassandraDaemon.java (line 133) Fatal exception in thread > Thread[CompactionExecutor:2918,1,main] java.io.IOError: > java.io.FileNotFoundException: > /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many > ope

cassandra hit a wall: Too many open files (98567!)

2012-01-13 Thread Thorsten von Eicken
t/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system) After that it stopped working and just sat there with this error (understandable). I did an lsof and saw that it had 98567 open files, yikes! An ls in the data directory shows 234011 files. After restarting it

Re: Too many open files

2011-07-27 Thread Adil
hows the socket number is very few. > > WARN [main] 2011-07-27 16:14:04,872 CustomTThreadPoolServer.java (line 104) > Transport error occurred during acceptance of message. > > org.apache.thrift.transport.TTransportException: java.net.SocketException: >

Re: Too many open files

2011-07-27 Thread Peter Schuller
> What does the following error mean? One of my cassandra servers print this > error, and nodetool shows the state of the server is down. Netstat result > shows the socket number is very few. The operating system enforced limits have been hit, so Cassandra is unable to create additional file descr

Too many open files

2011-07-27 Thread Donna Li
during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java: 124) at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java

Re: Too many open files during Repair operation

2011-07-19 Thread Attila Babo
If you are using Linux, especially Ubuntu, check the linked document below. This is my favorite: "Using sudo has side effects in terms of open file limits. On Ubuntu they’ll be reset to 1024, no matter what’s set in /etc/security/limits.conf" http://wiki.basho.com/Open-Files-Limit.html /Attila

Re: Too many open files during Repair operation

2011-07-19 Thread Sameer Farooqui
I'm guessing you've seen this already? http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files Check out the # of File Descriptors opened with the "lsof -n | grep java" command. On Tue, Jul 19, 2011 at 8:30 AM, cber

Too many open files during Repair operation

2011-07-19 Thread cbert...@libero.it
Hi all. In production we want to run nodetool repair but each time we do it we get the too many open files error. We've increased the number of available FD for Cassandra till 8192 but still we get the same error after few seconds. Should I increase it more? WARN [Thread-7] 2011-07-19

Re: too many open files - maybe a fd leak in indexslicequeries

2011-04-05 Thread Jonathan Ellis
> To: user@cassandra.apache.org > Cc: Roland Gude; Juergen Link; Johannes Hoerle > Subject: Re: too many open files - maybe a fd leak in indexslicequeries > > Index queries (ColumnFamilyStore.scan) don't do any low-level i/o > themselves, they go through CFS.getColumnFamily, which i

AW: too many open files - maybe a fd leak in indexslicequeries

2011-04-02 Thread Roland Gude
Message- From: Jonathan Ellis [mailto:jbel...@gmail.com] Sent: Friday, 1 April 2011 06:07 To: user@cassandra.apache.org Cc: Roland Gude; Juergen Link; Johannes Hoerle Subject: Re: too many open files - maybe a fd leak in indexslicequeries Index queries (ColumnFamilyStore.scan) don'

Re: too many open files - maybe a fd leak in indexslicequeries

2011-03-31 Thread Jonathan Ellis
Index queries (ColumnFamilyStore.scan) don't do any low-level i/o themselves, they go through CFS.getColumnFamily, which is what normal row fetches also go through. So if there is a leak there it's unlikely to be specific to indexes. What is your open-file limit (remember that sockets count towar

too many open files - maybe a fd leak in indexslicequeries

2011-03-31 Thread Roland Gude
I experience something that looks exactly like https://issues.apache.org/jira/browse/CASSANDRA-1178 on Cassandra 0.7.3 when using index slice queries (lots of them), crashing multiple nodes and rendering the cluster useless. But I have no clue where to look if index queries still leak FDs. Does any

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Kani
my hector client to insert 5.000.000 rows but after a couple of > >> > hours, > >> > the following Exception occurs : > >> > > >> > WARN [main] 2010-12-15 16:38:53,335 CustomTThreadPoolServer.java > (line >

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Germán Kondolf
> the following Exception occurs : >> > >> >  WARN [main] 2010-12-15 16:38:53,335 CustomTThreadPoolServer.java (line >> > 104) >> > Transport error occurred during acceptance of message. >> > org

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Nate McCall
lServer.java (line >> > 104) >> > Transport error occurred during acceptance of message. >> > org.apache.thrift.transport.TTransportException: >> > java.net.SocketException: >> > Too many open files >> > at >> > >> > org.apache.thri

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Amin Sakka, Novapost
ort error occurred during acceptance of message. > > org.apache.thrift.transport.TTransportException: > java.net.SocketException: > > Too many open files > > at > > > org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124) > > at > > > org.apache.cas

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Ryan King
ft.transport.TTransportException: java.net.SocketException: > Too many open files > at > org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124) > at > org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(TCustomServerSocket.java:67) > at > org.a

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Jake Luciani
s to "unlimited". > Now, I get exactly the same exception after 3.50 rows : > > *CustomTThreadPoolServer.java (line 104) Transport error occurred during > acceptance of message.* > *org.apache.thrift.transport.TTransportException: > java.net.SocketException: Too many

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Germán Kondolf
the amount of the allowed file descriptors to "unlimited". > Now, I get exactly the same exception after 3.50 rows : > > *CustomTThreadPoolServer.java (line 104) Transport error occurred during > acceptance of message.* > *org.apache.thrift.transport.TTransportException: >

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Amin Sakka, Novapost
tException: java.net.SocketException: Too many open files. What worries me is this / by zero exception when I try to restart Cassandra! At least, I want to back up the 3.50 rows so I can then continue my insertion; is there a way to do this? Exception encountered during startup. java.lang.ArithmeticException: / b

Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-15 Thread Amin Sakka, Novapost
3,335 CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:12

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-15 Thread Jake Luciani
http://www.riptano.com/docs/0.6/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files On Wed, Dec 15, 2010 at 11:13 AM, Amin Sakka, Novapost < amin.sa...@novapost.fr> wrote: > *Hello,* > *I'm using cassandra 0.7.0 rc1, a single node configura

Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-15 Thread Amin Sakka, Novapost
3,335 CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:12

Re: too many open files 0.7.0 beta1

2010-08-25 Thread Aaron Morton
6, 2010 at 2:05 PM, Aaron Morton <aa...@thelastpickle.com> wrote: Under 0.7.0 beta1 I am seeing Cassandra run out of file handles... Caused by: java.io.FileNotFoundException: /local1/junkbox/cassandra/data/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open f

Re: too many open files 0.7.0 beta1

2010-08-25 Thread Dan Washusen
cassandra/data/ > junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open files) > at java.io.RandomAccessFile.open(Native Method) > at java.io.RandomAccessFile.(RandomAccessFile.java:212) > at java.io.RandomAccessFile.(RandomAccess

too many open files 0.7.0 beta1

2010-08-25 Thread Aaron Morton
Under 0.7.0 beta1 I am seeing Cassandra run out of file handles... Caused by: java.io.FileNotFoundException: /local1/junkbox/cassandra/data/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open files)        at java.io.RandomAccessFile.open(Native Method)        at java.io.RandomAccessFile

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jorge Barrios
Each of my top-level functions was allocating a Hector client connection at the top, and releasing it when returning. The problem arose when a top-level function had to call another top-level function, which led to the same thread allocating two connections. Hector was not releasing one of them eve

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread shimi
do you mean that you don't release the connection back to the pool? On 2010 7 14 20:51, "Jorge Barrios" wrote: Thomas, I had a similar problem a few weeks back. I changed my code to make sure that each thread only creates and uses one Hector connection. It seems that client sockets are not being

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jorge Barrios
Peter Schuller wrote: > > [snip] > > I'm not sure that is the case. > > > > When the server gets into the unrecoverable state, the repeating > exceptions > > are indeed "SocketException: Too many open files". > [snip] > > Although this is unque

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Peter Schuller
> [snip] > I'm not sure that is the case. > > When the server gets into the unrecoverable state, the repeating exceptions > are indeed "SocketException: Too many open files". [snip] > Although this is unquestionably a network error,  I don't think it is >

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jonathan Ellis
>> >> > > Thanks for the suggestion.  I gave it a whirl, but no go.  The file handles > in use stayed at around 500 for the first 30M or so mutates, then within > 4 seconds they jumped to about 800, stayed there for about 30 seconds, > then within 5 seconds went ove