Hi Ben,
We're using the akka persistence layer which doesn't give me much scope for
remodelling data.
So, on the assumption that the guys who wrote the persistence layer knew what
they were doing, I followed your suggestion to increase RAM (still only to a
miserly 8gig, which the startup scrip
I should add - there is probably an option (c) of fiddling with a bunch of
tuning parameters to try to nurse things through with your current config,
but I'm not sure that's useful unless you really need to make the current
setup work for some reason.
On Sun, 12 Jun 2016 at 15:23 Ben Slater wrote
Hi Tobin,
4G RAM is a pretty small machine to be using to run Cassandra. As I
mentioned, 8G of heap is the normal recommendation for a production machine
which means you need at least 14-16G total (and can get performance benefit
from more).
I agree disk space doesn’t look to really be an issue h
Hi Ben,
I think the degraded mode is caused by one or both of these...
• WARN [main] 2016-06-10 14:23:01,690 StartupChecks.java:118 -
jemalloc shared library could not be preloaded to speed up memory allocations
• WARN [main] 2016-06-10 14:23:01,691 StartupChecks.java:150 - JMX
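For the jemalloc warning, a hedged sketch of one common fix: install the jemalloc library so the startup script can find and preload it, or preload it yourself. The package name and library path below are assumptions for a Debian/Ubuntu-style system of that era; adjust for your distro.

```shell
# Install jemalloc so Cassandra's startup script can preload it
# (libjemalloc1 is an assumed Debian/Ubuntu package name).
sudo apt-get install -y libjemalloc1

# Alternatively, preload it explicitly before starting Cassandra;
# this exact library path is a guess and may differ on your system.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
```

Either way, restart Cassandra afterwards and check that the StartupChecks warning is gone from system.log.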
The short-term fix is probably to try increasing heap space (in
cassandra-env.sh). 8GB is the most standard but more may help in some
circumstances.
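For reference, a minimal sketch of the relevant cassandra-env.sh settings. The values are illustrative, not a tuned recommendation; the HEAP_NEWSIZE figure assumes the common rule of thumb of roughly 100MB per CPU core (8 cores assumed here).

```shell
# conf/cassandra-env.sh -- set both together, or leave both commented
# to let the script auto-calculate from system RAM.
MAX_HEAP_SIZE="8G"    # leave the rest of RAM for the OS page cache
HEAP_NEWSIZE="800M"   # ~100MB per core (assumption: 8 cores)
```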
That said, your logs are pointing to a number of other issues which won’t
be helping and probably need to be fixed for long-term stability:
- swap en
...@gmail.com]
Sent: Monday, April 28, 2014 5:34 PM
To: user@cassandra.apache.org
Subject: Re: java.lang.OutOfMemoryError: Java heap space
Yes, they are virtual machines, but we are using KVM. Are there any solutions
for this issue, or should we use physical machines?
On Mon, Apr 28, 2014 at 10:38
Are they virtual machines? The last time I had this issue it was because of
VMware "ballooning".
If not, what versions of Cassandra and Java are you using?
On Mon, Apr 28, 2014 at 6:30 PM, Gary Zhao wrote:
BTW, the CPU usage on this node is pretty high, but data size is pretty
small.
PID USERNAME THR PRI NICE SIZE RES SHR STATE TIME CPU COMMAND
28674 cassandr 89 250 9451M 8970M 525M sleep 32.1H 329% java
UN 8.92 GB 256 35.4% c2d9d02e-bdb3-47cb-af1b-eabc2eeb503b rac
http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors
On Wed, Nov 24, 2010 at 4:38 PM, zangds wrote:
> Hi,
> I'm using apache-cassandra-0.7.0-beta3. When I did some insertions into
> Cassandra, I got errors that stop Cassandra from working. Can anyone have a look
> on
> take effect.
>
> -Original Message-
> From: Benjamin Black [mailto:b...@b3k.us]
> Sent: Monday, June 14, 2010 7:46 PM
> To: user@cassandra.apache.org
> Subject: Re: java.lang.OutOfMemoryError: Java heap space
>
> My guess: you are outrunning your disk I/O. Each o
user@cassandra.apache.org
Subject: Re: java.lang.OutOfMemoryError: Java heap space
My guess: you are outrunning your disk I/O. Each of those 5MB rows
gets written to the commitlog, and the memtable is flushed when it
hits the configured limit, which you've probably left at 128MB. Every
25 rows or so you are getting a memtable flushed to disk. Until these
things complete, they ar
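That flush-rate estimate can be checked with quick arithmetic, using the 5MB row size from the original report and the 128MB memtable threshold assumed above:

```shell
# Sketch: why a 128MB memtable limit flushes roughly every 25 rows
# of 5MB each (both figures come from the thread).
row_mb=5
memtable_limit_mb=128
rows_per_flush=$((memtable_limit_mb / row_mb))
echo "memtable flushes roughly every ${rows_per_flush} rows"
```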
Sorry, the record size should be 5KB, not 5MB, coz 4KB is still OK. I will
try Benjamin's suggestion.
-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Tuesday, June 15, 2010 8:09 AM
To: user@cassandra.apache.org
Subject: Re: java.lang.OutOfMemoryError: Java
if you are reading 500MB per thrift request from each of 3 threads,
then yes, simple arithmetic indicates that 1GB heap is not enough.
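Jonathan's arithmetic, spelled out (the request size and thread count are taken from the thread; the 1GB heap is the figure he mentions):

```shell
# Sketch: concurrent read volume vs. available heap.
threads=3
mb_per_request=500     # 500MB read per thrift request, per thread
heap_mb=1024           # 1GB heap
needed_mb=$((threads * mb_per_request))
echo "concurrent reads need ~${needed_mb}MB against a ${heap_mb}MB heap"
```

Since 1500MB of in-flight read data cannot fit in a 1024MB heap, an OutOfMemoryError is the expected outcome.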
On Mon, Jun 14, 2010 at 6:13 PM, Caribbean410 wrote:
> Hi,
>
> I wrote 200k records to the db, each record 5MB. I get this error when I use
> 3 threads (each threa