Hi,
I am using Apache Cassandra version:
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
I am running a 5-node cluster and recently added one node to the cluster.
The cluster is running with the G1 garbage collector and a 16 GB -Xmx heap.
The cluster also has one materialized view.
Hi all,
I got an OutOfMemoryError during the startup process, as below.
I have 3 questions about the error.
1. Why did the Cassandra I built myself cause OutOfMemory errors?
OutOfMemory errors happened during startup on some (not all) nodes running
Cassandra 2.2.8, which I got from GitHub and built myself.
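For reference, the build itself is just ant over a release tag; a minimal sketch (assuming the official GitHub mirror and the cassandra-2.2.8 tag):

    git clone https://github.com/apache/cassandra.git
    cd cassandra
    git checkout cassandra-2.2.8
    ant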
INFO 19:51:59 Initializing notifications_v1.notifications_tray
INFO 19:51:59 Initializing
notifications_v1.notifications_tray.notifications_tray_event_id
"""
The instances spin for a long time then throw an OutOfMemoryError.
I don't need to save this table, but I do need to save other keyspaces. Is
there any way I can get these nodes operational again?
Glad to help :P
From: Mikhail Strebkov [mailto:streb...@gmail.com]
Sent: 10 December 2015 22:35
To: user@cassandra.apache.org
Subject: Re: Unable to start one Cassandra node: OutOfMemoryError
Steve, thanks a ton! Removing compactions_in_progress helped! Now the node is
running again.
p.s
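For anyone who hits this later, the workaround discussed here amounts to something like the following (a sketch only: the data path is an assumption, so check data_file_directories in cassandra.yaml, and note the table directory usually carries an ID suffix):

    # stop the node first
    sudo service cassandra stop
    # move the unfinished-compaction system table out of the way
    mkdir -p /var/lib/cassandra/backup
    mv /var/lib/cassandra/data/system/compactions_in_progress-* /var/lib/cassandra/backup/
    # start the node again
    sudo service cassandra start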
On Tue, Dec 15, 2015 at 4:41 PM, Jack Krupansky
wrote:
> Can a core Cassandra committer verify if removing the compactions_in_progress
> folder is indeed the desired and recommended solution to this problem, or
> whether it might in fact be a bug that this workaround is needed at all?
> Thanks!
>
...if you didn't stop cassandra with a drain command and wait for the
compactions to finish.
Last time we hit it was due to testing HA, when we force-killed an
entire cluster.
Steve
From: Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
Sent: 10 December 2015 02:49
To: user@cassandra.apache.org
Subject: Re: Unable to start one Cassandra node: OutOfMemoryError
8G is probably too small for a G1 heap. Raise your heap or try CMS instead.
71% of your heap is collections; it may be a weird data model quirk, but try
CMS first.
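For later readers, a minimal sketch of what "raise your heap or try CMS" can look like in cassandra-env.sh (flags vary by version; the values here are illustrative assumptions, not recommendations):

    MAX_HEAP_SIZE="16G"
    HEAP_NEWSIZE="1600M"   # only relevant to CMS; often sized ~100MB per core
    # use CMS instead of G1
    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"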
...9, 2015 at 5:26 PM
To: "user@cassandra.apache.org"
Subject: Unable to start one Cassandra node: OutOfMemoryError
Hi everyone,
While upgrading our 5-machine cluster from DSE version 4.7.1 (Cassandra
2.1.8) to DSE version 4.8.2 (Cassandra 2.1.11), one of the nodes can't
start with OutOfMemoryError.
We're using HotSpot 64-Bit Server VM/1.8.0_45 and the G1 garbage collector
with an 8 GiB heap.
Average no
Hi all -
I had a nasty streak of OOMs earlier today (several on one node, and a
single OOM on one other node). I've downloaded a few of the hprof files
for local analysis. In each case, there is a single ReadStage thread with
a huge (> 7.5GB) org.apache.cassandra.db.ArrayBackedSortedColumns
instance
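In case it helps anyone doing the same kind of analysis: a heap dump like this can be inspected locally with the JDK's jhat (or Eclipse MAT); a minimal sketch, with the dump file name as a placeholder:

    # give jhat enough heap to load the dump, then browse http://localhost:7000
    jhat -J-mx12g java_pid12345.hprof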
I figured it out. Another process on that machine was leaking threads.
All is well!
Thanks guys!
Oleg
On 2013-12-16 13:48:39, Maciej Miklas said:
the cassandra-env.sh has the option
JVM_OPTS="$JVM_OPTS -Xss180k"
it will give this error if you start cassandra with java 7. So increase
the value, or remove the option.
Try using jstack to see if there are a lot of threads there.
Are you using vNodes and Hadoop?
https://issues.apache.org/jira/browse/CASSANDRA-6169
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelas
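For a quick thread count without a full analysis, something like this works (the pid is a placeholder):

    # rough count of Java threads in a stack dump
    jstack 12345 | grep -c 'java.lang.Thread.State'
    # or read the count straight from the kernel (Linux)
    grep Threads /proc/12345/status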
the cassandra-env.sh has the option
JVM_OPTS="$JVM_OPTS -Xss180k"
it will give this error if you start cassandra with java 7. So increase the
value, or remove the option.
Regards,
Maciej
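A minimal sketch of that fix in cassandra-env.sh (256k is an illustrative value; Java 7 on 64-bit needs a larger minimum thread stack than the old 180k setting):

    JVM_OPTS="$JVM_OPTS -Xss256k"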
On Mon, Dec 16, 2013 at 2:37 PM, srmore wrote:
> What is your thread stack size (-Xss)? Try increasing it; that
What is your thread stack size (-Xss)? Try increasing it; that could
help. Sometimes the limitation is imposed by the host provider (e.g.
Amazon EC2, etc.).
Thanks,
Sandeep
On Mon, Dec 16, 2013 at 6:53 AM, Oleg Dulin wrote:
> Hi guys!
>
> I believe my limits settings are correct. Here is the o
Hi guys!
I believe my limits settings are correct. Here is the output of "ulimit -a":
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i)
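For comparison, the limits typically recommended for Cassandra in /etc/security/limits.conf look something like this (values are illustrative, not from this thread; check the documentation for your version):

    cassandra  -  memlock  unlimited
    cassandra  -  nofile   100000
    cassandra  -  nproc    32768
    cassandra  -  as       unlimited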
Hi All
We are using Cassandra 1.2.4 and are seeing the following error loading a
pretty small amount of data.
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:76)
at
org.apache.cassandra.utils.o
...filed a bug with a patch on the issue.
On 24.02.12 23:14, Feng Qu wrote:
Hello,
We have a 6-node ring running 0.8.6 on RHEL 6.1. The first node also
runs OpsCenter community. This node has crashed a few times recently with
"OutOfMemoryError: Java heap space" while several compaction
What does the heap dump show is using the memory?
On Fri, Feb 24, 2012 at 3:14 PM, Feng Qu wrote:
> Hello,
>
> We have a 6-node ring running 0.8.6 on RHEL 6.1. The first node also runs
> OpsCenter community. This node has crashed a few times recently with
> "OutOfMemoryError: Ja
Hello,
We have a 6-node ring running 0.8.6 on RHEL 6.1. The first node also runs
OpsCenter community. This node has crashed a few times recently with
"OutOfMemoryError: Java heap space" while several compactions on a few 200-300 GB
SSTables were running. We are using an 8 GB Java heap on a host with 96 GB RAM.
Step 0: don't use raw Thrift, use one of the clients from
http://wiki.apache.org/cassandra/ClientOptions06
On Thu, Nov 18, 2010 at 10:49 AM, Nick Reeves wrote:
> I was trying to get Cassandra 0.6.8 (latest stable release) going for the
> first time and my attempts at getting the example code to r
> Looking at the thrift code, it is allocating arrays based on lengths read off
> the wire, without adequate validation of the length. This allows client
> errors to crash the server :(
This is fixed with the latest cassandra and current versions of thrift. I
don't remember whether it was a thrift bug i
I was trying to get Cassandra 0.6.8 (latest stable release) going for
the first time, and my attempts at getting the example code to run caused
Cassandra to die with an OutOfMemoryError just by connecting
cassandra-cli to cassandra.
java.lang.OutOfMemoryError: Java heap space
at
org.apac
It's set in the build file:
But I'm not sure if you're using the build file or not. It kind of
sounds like you are not.
Gary.
On Thu, Jun 3, 2010 at 11:24, Lev Stesin wrote:
> Gary,
>
> Is there a directive to set it? Or should I modify the cassandra
> script itself? Thanks.
>
> Lev.
>
> On
Gary,
Is there a directive to set it? Or should I modify the cassandra
script itself? Thanks.
Lev.
On Thu, Jun 3, 2010 at 10:48 AM, Gary Dusbabek wrote:
> Are you running "ant test"? It defaults to setting memory to 1G. If
> you're running them outside of ant, you'll need to set max memory
>
Are you running "ant test"? It defaults to setting memory to 1G. If
you're running them outside of ant, you'll need to set max memory
manually.
Gary.
On Thu, Jun 3, 2010 at 10:35, Lev Stesin wrote:
> Hi,
>
> I am getting OOM during load tests:
>
> java.lang.OutOfMemoryError: Java heap space
>
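If you do run a test outside ant, the heap has to be set by hand; a sketch, where the classpath and test class are placeholder assumptions:

    java -Xmx1G -cp 'build/classes:build/test/classes:lib/*' \
        org.junit.runner.JUnitCore org.apache.cassandra.SomeTest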
Hi,
I am getting OOM during load tests:
java.lang.OutOfMemoryError: Java heap space
at java.util.HashSet.<init>(HashSet.java:125)
at
com.google.common.collect.Sets.newHashSetWithExpectedSize(Sets.java:181)
at
com.google.common.collect.HashMultimap.createCollection(HashMultimap