Too many connections?

郝加来

From: Jason Lewis
Date: 2015-11-07 10:38
To: user@cassandra.apache.org
Subject: Re: Too many open files Cassandra 2.1.11.872
cat /proc/5980/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             2063522              2063522              processes
Max open files            100000               100000               files
Max locked memory         unlimited            unlimited            bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       2063522              2063522              signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us




On Fri, Nov 6, 2015 at 4:01 PM, Sebastian Estevez 
<sebastian.este...@datastax.com> wrote:

You probably need to configure ulimits correctly.


What does this give you?

cat /proc/<cassandra PID>/limits
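
If the limits look too low, raising them for the cassandra user usually fixes this. A minimal sketch, assuming the process runs as user cassandra under pam_limits and was started via the standard CassandraDaemon main class (paths and values vary by install):

# Find the Cassandra PID and inspect the limits it actually inherited
PID=$(pgrep -f CassandraDaemon)
cat /proc/$PID/limits

# Raise the limits persistently, then restart Cassandra
cat <<'EOF' | sudo tee /etc/security/limits.d/cassandra.conf
cassandra - nofile  100000
cassandra - nproc   32768
EOF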


All the best,



Sebastián Estévez
Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com

DataStax is the fastest, most scalable distributed database technology, 
delivering Apache Cassandra to the world’s most innovative enterprises. 
DataStax is built to be agile, always-on, and predictably scalable to any size. 
With more than 500 customers in 45 countries, DataStax is the database 
technology and transactional backbone of choice for the world’s most innovative 
companies such as Netflix, Adobe, Intuit, and eBay. 


On Fri, Nov 6, 2015 at 1:56 PM, Branton Davis <branton.da...@spanning.com> 
wrote:

We recently went down the rabbit hole of trying to understand the output of 
lsof.  lsof -n has a lot of duplicates (files opened by multiple threads).  Use 
'lsof -p $PID' or 'lsof -u cassandra' instead.
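
For example (a sketch; using CassandraDaemon as the pgrep pattern is an assumption about how the process was started):

# Inflated: plain lsof repeats entries across threads/mappings
lsof -n | grep java | wc -l

# Scoped to the one process
lsof -n -p $(pgrep -f CassandraDaemon) | wc -l

# Actual descriptor count, straight from the kernel
ls /proc/$(pgrep -f CassandraDaemon)/fd | wc -l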


On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng <br...@blockcypher.com> wrote:

Is your compaction progressing as expected? If not, this may cause an excessive 
number of tiny db files. We had a node refuse to start recently because of this 
and had to temporarily remove the limits on that process.
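
A quick way to check, using tools that ship with Cassandra (the 'SSTable count' label matches 2.1-era cfstats output):

# Pending compactions; a large, growing backlog means compaction is behind
nodetool compactionstats

# SSTable counts per table; thousands of tiny SSTables point the same way
nodetool cfstats | grep -i 'SSTable count'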


On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis <jle...@packetnexus.com> wrote:

I'm getting "too many open files" errors and I'm wondering what the
cause may be.

lsof -n | grep java shows 1.4M files

~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700k are the data files.
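
For reference, a rough way to reproduce a breakdown like the above (a sketch; the TYPE column position and the /var/lib/cassandra data path are assumptions that vary by lsof version and install):

lsof -n | grep java > /tmp/lsof.java
wc -l < /tmp/lsof.java                               # total entries
awk '{print $5}' /tmp/lsof.java | sort | uniq -c     # entries by TYPE (REG, FIFO, a_inode, ...)
grep -c '/usr' /tmp/lsof.java                        # binaries, jars, libraries
grep -c '/var/lib/cassandra' /tmp/lsof.java          # data files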

What might be causing so many files to be open?

jas


