I figured it out. Another process on that machine was leaking threads.
All is well!
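For anyone who hits the same thing: counting threads per process made the
leaker obvious. A quick check (assumes GNU ps on Linux; nlwp is the
per-process thread count):

ps -eo pid,comm,nlwp --sort=-nlwp | head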
Thanks guys!
Oleg
On 2013-12-16 13:48:39 +0000, Maciej Miklas said:
cassandra-env.sh has the option
JVM_OPTS="$JVM_OPTS -Xss180k"
It will give this error if you start Cassandra with Java 7, so increase
the value or remove the option.
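For example, in cassandra-env.sh (256k is just an illustrative value, not
an official recommendation; pick whatever suits your workload):

JVM_OPTS="$JVM_OPTS -Xss256k"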
Regards,
Maciej
On Mon, Dec 16, 2013 at 2:37 PM, srmore <comom...@gmail.com> wrote:
What is your thread stack size (-Xss)? Try increasing it; that could
help. Sometimes the limitation is imposed by the host provider (e.g.
Amazon EC2).
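If you are on Linux, you can also sanity-check this by comparing the
per-user process limit against the number of threads already running
(on Linux each thread counts against that limit):

ulimit -u
ps -eLf | wc -l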
Thanks,
Sandeep
On Mon, Dec 16, 2013 at 6:53 AM, Oleg Dulin <oleg.du...@gmail.com> wrote:
Hi guys!
I believe my limits settings are correct. Here is the output of "ulimit -a":
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1547135
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 32768
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
However, I just had a couple of Cassandra nodes go down over the
weekend for no apparent reason with the following error:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:691)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1017)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
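If it happens again I will grab the live thread count of the Cassandra
process to compare against the limits above. A sketch of what I have in
mind (assumes Linux, and that "pgrep -f CassandraDaemon" matches the
node's JVM):

grep Threads /proc/$(pgrep -f CassandraDaemon)/status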
Any input is greatly appreciated.
--
Regards,
Oleg Dulin
http://www.olegdulin.com