Re: Too many open files

2018-01-22 Thread Jeff Jirsa
> From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay Mihaylov > Sent: Monday, January 22, 2018 11:47 AM > To: user@cassandra.apache.org > Subject: Re: Too many open files > You can increase system open files, also if you compact, open files

RE: Too many open files

2018-01-22 Thread Andreou, Arys (Nokia - GR/Athens)
a global session object or to create it and shut it down for every request? From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay Mihaylov Sent: Monday, January 22, 2018 11:47 AM To: user@cassandra.apache.org Subject: Re: Too many open files You can increase system open

Re: Too many open files

2018-01-22 Thread Nikolay Mihaylov
You can increase system open files, also if you compact, open files will go down. On Mon, Jan 22, 2018 at 10:19 AM, Dor Laor wrote: > It's a high number, your compaction may run behind and thus > many small sstables exist. However, you're also taking the > number of network connection in the cal
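
A minimal sketch of raising the per-user limit persistently (the cassandra user name and the 100000 value are assumptions; some distributions ship their own limits files):

    # /etc/security/limits.conf -- raise the open-file limit for the Cassandra user
    cassandra  soft  nofile  100000
    cassandra  hard  nofile  100000
    # Restart the Cassandra process so the new limit takes effect.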

Re: Too many open files

2018-01-22 Thread Dor Laor
It's a high number, your compaction may run behind and thus many small sstables exist. However, you're also taking the number of network connection in the calculation (everything in *nix is a file). If it makes you feel better my laptop has 40k open files for Chrome.. On Sun, Jan 21, 2018 at 11:59
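
One way to see how much of such a count is sockets rather than SSTables is to group lsof output by its TYPE column (a sketch; the pgrep pattern assumes the standard CassandraDaemon main class):

    # Count open descriptors by type: REG = regular files, IPv4/IPv6 = sockets
    lsof -n -p "$(pgrep -f CassandraDaemon)" | awk 'NR>1 {print $5}' | sort | uniq -c | sort -rn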

Re: Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread 郝加来
many connections? 郝加来 From: Jason Lewis Date: 2015-11-07 10:38 To: user@cassandra.apache.org Subject: Re: Too many open files Cassandra 2.1.11.872 cat /proc/5980/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
cat /proc/5980/limits
Limit            Soft Limit   Hard Limit   Units
Max cpu time     unlimited    unlimited    seconds
Max file size    unlimited    unlimited    bytes
Max data size    unlimited    unlimi

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Sebastian Estevez
You probably need to configure ulimits correctly. What does this give you? /proc/<pid>/limits All the best, Sebastián Estévez Solutions Architect
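
For example, checking the running process rather than your shell (the pgrep lookup is an assumption about the process name):

    # Inspect the limits of the running Cassandra process, not of your shell
    grep 'open files' "/proc/$(pgrep -f CassandraDaemon)/limits"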

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Branton Davis
We recently went down the rabbit hole of trying to understand the output of lsof. lsof -n has a lot of duplicates (files opened by multiple threads). Use 'lsof -p $PID' or 'lsof -u cassandra' instead. On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng wrote: > Is your compaction progressing as expect
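
A quick comparison of the three counts (a sketch; assumes the node runs as the cassandra user):

    lsof -n | grep java | wc -l                    # inflated: one entry per thread
    lsof -p "$(pgrep -f CassandraDaemon)" | wc -l  # deduplicated, per process
    lsof -u cassandra | wc -l                      # everything owned by the user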

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Bryan Cheng
Is your compaction progressing as expected? If not, this may cause an excessive number of tiny db files. Had a node refuse to start recently because of this, had to temporarily remove limits on that process. On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis wrote: > I'm getting too many open files er
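
One way to check whether compaction is keeping up (nodetool commands; output formats vary by version):

    nodetool compactionstats                    # pending tasks should trend toward zero
    nodetool cfstats | grep -i 'sstable count'  # per-table SSTable counts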

Re: too many open files

2014-08-09 Thread Andrew
Yes, that was the problem—I actually knew better, but had briefly overlooked this when I was doing some refactoring. I am not the OP (although he himself realized his mistake). If you follow the thread, I was explaining that the Datastax Java driver allowed me to basically open a signific

Re: too many open files

2014-08-09 Thread Jonathan Haddad
It really doesn't need to be this complicated. You only need 1 session per application. It's thread safe and manages the connection pool for you. http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Session.html On Sat, Aug 9, 2014 at 1:29 PM, Kevin Burton wrote: > Another idea
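
A minimal sketch of that pattern with the DataStax Java driver 2.0 (the contact point and keyspace name are placeholders):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public final class CassandraConnector {
        // One Cluster and one Session for the whole application. Session is
        // thread-safe and manages its own connection pool internally.
        private static final Cluster cluster =
                Cluster.builder().addContactPoint("127.0.0.1").build();
        private static final Session session = cluster.connect("my_keyspace");

        public static Session getSession() {
            return session; // reuse this; never call cluster.connect() per request
        }
    }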

Re: too many open files

2014-08-09 Thread Kevin Burton
Another idea to detect this is when the number of open sessions exceeds the number of threads. On Aug 9, 2014 10:59 AM, "Andrew" wrote: > I just had a generator that (in the incorrect way) had a cluster as a > member variable, and would call .connect() repeatedly. I _thought_, > incorrectly, tha

Re: too many open files

2014-08-09 Thread Marcelo Elias Del Valle
y > > From: Marcelo Elias Del Valle > Sent: Saturday, August 9, 2014 12:41 AM > To: user@cassandra.apache.org > Subject: Re: too many open files > > Indeed, that was my mistake, that was exactly what we were doing in the code. > []s > 2014-08-09

Re: too many open files

2014-08-09 Thread Andrew
I just had a generator that (in the incorrect way) had a cluster as a member variable, and would call .connect() repeatedly. I _thought_, incorrectly, that the Session was not thread-safe, and so I should request a separate Session each time—obviously wrong in hindsight. There was no special lo

Re: too many open files

2014-08-09 Thread Andrew
Tyler, I’ll see if I can reproduce this on a local instance, but just in case, the error was basically—instead of storing the session in my connection factory, I stored a cluster and called “connect” each time I requested a Session.  I had defined a max/min number of connections for the connect

Re: too many open files

2014-08-09 Thread Jack Krupansky
From: Marcelo Elias Del Valle Sent: Saturday, August 9, 2014 12:41 AM To: user@cassandra.apache.org Subject: Re: too many open files Indeed, that was my mistake, that was exactly what we were doing in the code. []s 2014-08-09 0:56 GMT-03:00 Brian Zhang : For cassandra driver,session is just

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
Indeed, that was my mistake, that was exactly what we were doing in the code. []s 2014-08-09 0:56 GMT-03:00 Brian Zhang : > For cassandra driver, session is just like database connection pool, it > maybe contains many tcp connections, if you create a new session every > time, more and more tcp conne

Re: too many open files

2014-08-08 Thread Brian Zhang
For the Cassandra driver, a session is just like a database connection pool: it may contain many TCP connections. If you create a new session every time, more and more TCP connections will be created, until you surpass the max file descriptor limit of the OS. You should create one session and use it repeatedly; ses

Re: too many open files

2014-08-08 Thread J. Ryan Earl
Yes, definitely look how many open files are actual file handles versus networks sockets. We found a file handle leak in 2.0 but it was patched in 2.0.3 or .5 I think. A million open files is way too high. On Fri, Aug 8, 2014 at 5:19 PM, Andrey Ilinykh wrote: > You may have this problem if yo

Re: too many open files

2014-08-08 Thread Tyler Hobbs
On Fri, Aug 8, 2014 at 5:52 PM, Redmumba wrote: > Just to chime in, I also ran into this issue when I was migrating to the > Datastax client. Instead of reusing the session, I was opening a new > session each time. For some reason, even though I was still closing the > session on the client side,

Re: too many open files

2014-08-08 Thread Redmumba
Just to chime in, I also ran into this issue when I was migrating to the Datastax client. Instead of reusing the session, I was opening a new session each time. For some reason, even though I was still closing the session on the client side, I was getting the same error. Plus, the only way I could

Re: too many open files

2014-08-08 Thread Andrey Ilinykh
You may have this problem if your client doesn't reuse the connection but opens a new one every time. So, run netstat and check the number of established connections. This number should not be big. Thank you, Andrey On Fri, Aug 8, 2014 at 12:35 PM, Marcelo Elias Del Valle < marc...@s1mbi0se.com.br>
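
For example (the port is an assumption: 9042 for the native protocol, 9160 for Thrift):

    # Count established client connections to the native-protocol port
    netstat -tan | grep ':9042' | grep ESTABLISHED | wc -l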

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
I just solved the issue; it was the Cassandra process which was opening too many fds, indeed, but the problem was the amount of client connections being opened. It was opening more connections than needed on the client's side. Thanks for the help. []s 2014-08-08 17:17 GMT-03:00 Kevin Burton : > You ma

Re: too many open files

2014-08-08 Thread Kevin Burton
You may want to look at the actual filenames. You might have an app leaving them open. Also, remember, sockets use FDs so they are in the list too. On Fri, Aug 8, 2014 at 1:13 PM, Marcelo Elias Del Valle < marc...@s1mbi0se.com.br> wrote: > I am using datastax community, the packaged versio

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
I am using datastax community, the packaged version for Debian. I am also using the last version of opscenter and datastax-agent. However, I just listed open files here: sudo lsof -n | grep java | wc -l 1096599 It seems the limit has been exceeded. Should I just increase it? Or could it be a memory leak? Be

Re: too many open files

2014-08-08 Thread Shane Hansen
Are you using Apache or DataStax Cassandra? The DataStax distribution ups the file handle limit to 100,000. That number's hard to exceed. On Fri, Aug 8, 2014 at 1:35 PM, Marcelo Elias Del Valle < marc...@s1mbi0se.com.br> wrote: > Hi, > > I am using Cassandra 2.0.9 running on Debian Wheezy, and

Re: Too Many Open Files (sockets) - VNodes - Map/Reduce Job

2014-06-04 Thread Michael Shuler
(this is probably a better question for the user list - cc/reply-to set) Allow more files to be open :) http://www.datastax.com/documentation/cassandra/1.2/cassandra/install/installRecommendSettings.html -- Kind regards, Michael On 06/04/2014 12:15 PM, Florian Dambrine wrote: Hi every body,

Re: Too many open files with Cassandra 1.2.11

2013-10-31 Thread Aaron Morton
What’s in /etc/security/limits.conf ? and just for fun what does lsof -n | grep java | wc -l say ? Cheers - Aaron Morton New Zealand @aaronmorton Co-Founder & Principal Consultant Apache Cassandra Consulting http://www.thelastpickle.com On 30/10/2013, at 12:21 am, Oleg Dulin

Re: Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Jon Haddad
In general, my understanding is that memory mapped files use a lot of open file handles. We raise all our DBs to unlimited open files. On Oct 29, 2013, at 8:30 AM, Pieter Callewaert wrote: > Investigated a bit more: > > -I can reproduce it, happened already on several nodes when I

RE: Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Pieter Callewaert
Investigated a bit more: - I can reproduce it; it happens already on several nodes when I do some stress testing (5 selects spread over multiple threads). - Unexpected exception in the selector loop. Seems not related to the Too many open files; it just happens. - It'

Re: too many open files

2013-07-15 Thread Hiller, Dean
ndra.apache.org> Date: Monday, July 15, 2013 7:16 AM To: user@cassandra.apache.org Subject: Re: too many open files Odd that this discussion happens now as I'm also getting this error. I get a

Re: too many open files

2013-07-15 Thread Brian Tarbox
Odd that this discussion happens now as I'm also getting this error. I get a burst of error messages and then the system continues...with no apparent ill effect. I can't tell what the system was doing at the time... here is the stack. BTW Opscenter says I only have 4 or 5 SSTables in each of my 6

Re: too many open files

2013-07-15 Thread Michał Michalski
It doesn't tell you anything if the file name ends with "ic-###", except pointing out the SSTable version it uses ("ic" in this case). Files related to a secondary index contain something like this in the filename: -., while "regular" CFs do not contain any dots except the one just before the file exte

Re: too many open files

2013-07-15 Thread Paul Ingalls
Also, looking through the log, it appears a lot of the files end with ic- which I assume is associated with a secondary index I have on the table. Are secondary indexes really expensive from a file descriptor standpoint? That particular table uses the default compaction scheme... On Jul 1

Re: too many open files

2013-07-15 Thread Paul Ingalls
I have one table that is using leveled. It was set to 10MB, I will try changing it to 256MB. Is there a good way to merge the existing sstables? On Jul 14, 2013, at 5:32 PM, Jonathan Haddad wrote: > Are you using leveled compaction? If so, what do you have the file size set > at? If you're

Re: too many open files

2013-07-14 Thread Jonathan Haddad
Are you using leveled compaction? If so, what do you have the file size set at? If you're using the defaults, you'll have a ton of really small files. I believe Albert Tobey recommended using 256MB for the table sstable_size_in_mb to avoid this problem. On Sun, Jul 14, 2013 at 5:10 PM, Paul In
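
With CQL3 this can be changed per table; a sketch with a hypothetical table name (existing SSTables are rewritten gradually as compaction proceeds):

    ALTER TABLE my_keyspace.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 256};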

RE: Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Desimpel, Ignace
[mailto:jeremy.hanna1...@gmail.com] Sent: Thursday, 27 June 2013 15:36 To: user@cassandra.apache.org Subject: Re: Too many open files and stopped compaction with many pending compaction tasks Are you on SSDs? On 27 Jun 2013, at 14:24, "Desimpel, Ignace" wrote: > On a test with 3 cass

Re: Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Jeremy Hanna
Are you on SSDs? On 27 Jun 2013, at 14:24, "Desimpel, Ignace" wrote: > On a test with 3 cassandra servers version 1.2.5 with replication factor 1 > and leveled compaction, I did a store last night and I did not see any > problem with Cassandra. On all 3 machines the compaction is stopped alread

Re: Too Many Open files error

2012-12-20 Thread aaron morton
> The number of files in the data directory is around 29500+. If you are using Levelled Compaction it is probably easier to set the ulimit to unlimited. Cheers - Aaron Morton Freelance Cassandra Developer New Zealand @aaronmorton http://www.thelastpickle.com On 21/12/2012, at

Re: Too Many Open files error

2012-12-20 Thread Andrey Ilinykh
On Thu, Dec 20, 2012 at 1:17 AM, santi kumar wrote: > Can you please give more details about this bug? bug id or something > https://issues.apache.org/jira/browse/CASSANDRA-4571 > > Now if I want to upgrade, is there any specific process or best practices. > migration from 1.1.4 to 1.1.5 is stra

Re: Too Many Open files error

2012-12-20 Thread santi kumar
Can you please give more details about this bug? bug id or something Now if I want to upgrade, is there any specific process or best practices. Thanks Santi On Thu, Dec 20, 2012 at 1:44 PM, Andrey Ilinykh wrote: > This bug is fixed in 1.1.5 > > Andrey > > > On Thu, Dec 20, 2012 at 12:01 AM,

Re: Too Many Open files error

2012-12-20 Thread Andrey Ilinykh
This bug is fixed in 1.1.5 Andrey On Thu, Dec 20, 2012 at 12:01 AM, santi kumar wrote: > While running the nodetool repair, we are running into > FileNotFoundException with too many open files error. We increased the > ulimit value to 32768, and still we have seen this issue. > > The number o

Re: Too many open files

2011-07-27 Thread Adil
You should take a look at this: http://www.datastax.com/docs/0.7/troubleshooting/index @dil 2011/7/27 Donna Li > All: > > What does the following error mean? One of my cassandra servers prints this > error, and nodetool shows the state of the server is down. Netstat result > shows the socket

Re: Too many open files

2011-07-27 Thread Peter Schuller
> What does the following error mean? One of my cassandra servers print this > error, and nodetool shows the state of the server is down. Netstat result > shows the socket number is very few. The operating system enforced limits have been hit, so Cassandra is unable to create additional file descr

Re: Too many open files during Repair operation

2011-07-19 Thread Attila Babo
If you are using Linux, especially Ubuntu, check the linked document below. This is my favorite: "Using sudo has side effects in terms of open file limits. On Ubuntu they’ll be reset to 1024, no matter what’s set in /etc/security/limits.conf" http://wiki.basho.com/Open-Files-Limit.html /Attila

Re: Too many open files during Repair operation

2011-07-19 Thread Sameer Farooqui
I'm guessing you've seen this already? http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files Check out the # of file descriptors opened with the "lsof -n | grep java" command. On Tue, Jul 19, 2011 at 8:30 AM, cbert...@libero.it wrote:

Re: too many open files - maybe a fd leak in indexslicequeries

2011-04-05 Thread Jonathan Ellis
> To: user@cassandra.apache.org > Cc: Roland Gude; Juergen Link; Johannes Hoerle > Subject: Re: too many open files - maybe a fd leak in indexslicequeries > > Index queries (ColumnFamilyStore.scan) don't do any low-level i/o > themselves, they go through CFS.getColumnFamily, which i

Re: too many open files - maybe a fd leak in indexslicequeries

2011-03-31 Thread Jonathan Ellis
Index queries (ColumnFamilyStore.scan) don't do any low-level i/o themselves, they go through CFS.getColumnFamily, which is what normal row fetches also go through. So if there is a leak there it's unlikely to be specific to indexes. What is your open-file limit (remember that sockets count towar

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Kani
Ya, that happens when some operation throws a timeout or any other sort of error (connection refused, etc). There is fallback logic that will try to discover all the nodes within the Cluster (not only the ones you configured) in order to reach the cluster and execute the operation. Have y

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Germán Kondolf
Indeed Hector has a connection pool behind it; I think it uses 50 connections per node. But it also uses a node to discover the others. I assume that, as I saw connections from my app to nodes that I didn't configure in Hector. So, you may check the fds at the OS level to see if there is a bottleneck ther

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Nate McCall
You probably want to switch to using mutator#addInsertion for some number of iterations (start with 1000 and adjust as needed), then calling execute(). This will be much more efficient. On Thu, Dec 16, 2010 at 11:39 AM, Amin Sakka, Novapost wrote: > > I'm using a unique client instance (using Hec
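
A sketch of that batching pattern with Hector (the keyspace, column family, and data shape are illustrative):

    import java.util.Map;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public final class BatchWriter {
        // Insert many columns under one row key, flushing every 1000 mutations
        // instead of paying one network round trip per insert.
        static void writeBatched(Keyspace keyspace, String rowKey,
                                 Map<String, String> columns) {
            Mutator<String> mutator =
                    HFactory.createMutator(keyspace, StringSerializer.get());
            int batched = 0;
            for (Map.Entry<String, String> e : columns.entrySet()) {
                mutator.addInsertion(rowKey, "MyColumnFamily",
                        HFactory.createStringColumn(e.getKey(), e.getValue()));
                if (++batched % 1000 == 0) {
                    mutator.execute(); // flush a batch of 1000 insertions
                }
            }
            mutator.execute(); // flush the remainder
        }
    }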

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Amin Sakka, Novapost
I'm using a unique client instance (using Hector) and a unique connection to cassandra. For each insertion I'm using a new mutator and then I release it. I have 473 sstable "Data.db" files; the average size of each is 30 MB. 2010/12/16 Ryan King > Are you creating a new connection for each row you

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Ryan King
Are you creating a new connection for each row you insert (and if so are you closing it)? -ryan On Wed, Dec 15, 2010 at 8:13 AM, Amin Sakka, Novapost wrote: > Hello, > I'm using cassandra 0.7.0 rc1, a single node configuration, replication > factor 1, random partitioner, 2 GB heap size. > I ran

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Jake Luciani
how many sstable "Data.db" files do you see in your system and how big are they? Also, how big are the rows you are inserting? On Thu, Dec 16, 2010 at 7:59 AM, Amin Sakka, Novapost < amin.sa...@novapost.fr> wrote: > > I increased the amount of the allowed file descriptors to "unlimted". > Now,

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Germán Kondolf
Be careful with the unlimited value on ulimit, you could end up with an unresponsive server... I mean, you could not even connect via ssh if you don't have enough handles. On Thu, Dec 16, 2010 at 9:59 AM, Amin Sakka, Novapost < amin.sa...@novapost.fr> wrote: > > I increased the amount of the allow

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Amin Sakka, Novapost
I increased the amount of allowed file descriptors to "unlimited". Now, I get exactly the same exception after 3.50 rows: CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketExcept

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-15 Thread Jake Luciani
http://www.riptano.com/docs/0.6/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files On Wed, Dec 15, 2010 at 11:13 AM, Amin Sakka, Novapost < amin.sa...@novapost.fr> wrote: > Hello, > I'm using cassandra 0.7.0 rc1, a single node configuration, replication > factor

Re: too many open files 0.7.0 beta1

2010-08-25 Thread Aaron Morton
That looks like it. I've pushed the limits up to 65k and turned down the testing for now. Otherwise machines were dropping like flies. Thanks. Aaron On 26 Aug 2010, at 04:16 PM, Dan Washusen wrote: Maybe you're seeing this: https://issues.apache.org/jira/browse/CASSANDRA-1416 On Thu, Aug 26, 2010 at

Re: too many open files 0.7.0 beta1

2010-08-25 Thread Dan Washusen
Maybe you're seeing this: https://issues.apache.org/jira/browse/CASSANDRA-1416 On Thu, Aug 26, 2010 at 2:05 PM, Aaron Morton wrote: > Under 0.7.0 beta1 am seeing cassandra run out of files handles... > > Caused by: java.io.FileNotFoundException: /local1/junkbox/cassandra/data/ > junkbox.wetafx.co

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jorge Barrios
Each of my top-level functions was allocating a Hector client connection at the top, and releasing it when returning. The problem arose when a top-level function had to call another top-level function, which led to the same thread allocating two connections. Hector was not releasing one of them eve

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread shimi
Do you mean that you don't release the connection back to the pool? On 2010 7 14 20:51, "Jorge Barrios" wrote: Thomas, I had a similar problem a few weeks back. I changed my code to make sure that each thread only creates and uses one Hector connection. It seems that client sockets are not being

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jorge Barrios
Thomas, I had a similar problem a few weeks back. I changed my code to make sure that each thread only creates and uses one Hector connection. It seems that client sockets are not being released properly, but I didn't have the time to dig into it. Jorge On Wed, Jul 14, 2010 at 8:28 AM, Peter Schu

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Peter Schuller
> [snip] > I'm not sure that is the case. > > When the server gets into the unrecoverable state, the repeating exceptions > are indeed "SocketException: Too many open files". [snip] > Although this is unquestionably a network error, I don't think it is actually a > network problem per se, as the

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jonathan Ellis
SocketException means this is coming from the network, not the sstables. Knowing the full error message would be nice, but just about any problem on that end should be fixed by adding connection pooling to your client. (moving to user@) On Wed, Jul 14, 2010 at 5:09 AM, Thomas Downing wrote: > On