O/S buffer cache, because writes to disk pass through the buffer cache first.
From: Aaron Ploetz
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, June 2, 2020 at 9:38 AM
To: "user@cassandra.apache.org"
Subject: Re: Cassandra crashes when using offheap_objects for
memtable_allocation_type
primary key ((partition_key, clustering_key))
Also, this primary key definition does not define a partitioning key and a
clustering key. It defines a *composite* partition key.
If you want it to instantiate both a partition and clustering key, get rid
of one set of parens.
primary key (partition_key, clustering_key)
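To illustrate the difference (table and column names here are hypothetical, not from the thread):

```sql
-- Composite partition key: a and b together form the partition key,
-- and there is no clustering key.
CREATE TABLE t1 (a text, b text, v int, PRIMARY KEY ((a, b)));

-- Partition key plus clustering key: a alone is the partition key,
-- and b orders rows within each partition.
CREATE TABLE t2 (a text, b text, v int, PRIMARY KEY (a, b));
```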
I would try running it with memtable_offheap_space_in_mb at the default for
sure, but definitely lower than 8GB. With 32GB of RAM, you're already
allocating half of that for your heap, and then halving the remainder for
off-heap memtables. What's left may not be enough for the OS, etc.
I just changed these properties to increase flushed file size (decrease number
of compactions):
memtable_allocation_type from heap_buffers to offheap_objects
memtable_offheap_space_in_mb: from default (2048) to 8192
Using default values for the other memtable/compaction/commitlog configurations.
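For reference, the change described above corresponds to a cassandra.yaml fragment along these lines (a sketch of just the two settings named in this thread, not a full config):

```yaml
# memtable contents stored as native off-heap objects instead of on-heap buffers
memtable_allocation_type: offheap_objects
# raised from the 2048 MB default; note the caution elsewhere in this thread
# about leaving enough RAM for the OS
memtable_offheap_space_in_mb: 8192
```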
text,
category text,
hash text,
info map,
creationtimestamp bigint,
lastupdatedtimestamp bigint,
PRIMARY KEY ( (id) )
);

CREATE INDEX ON message ( hash );
-
Cassandra crashes when I load data using sstableloader. The load happens
correctly, but it seems that Cassandra crashes when it is trying to build the
index on a table with a lot of data.
I have two questions.
1. Is there any better way to clone a keyspace?
2. How can I optimize
Metaspace used 43488K, capacity 45128K, committed 45696K, reserved 1089536K
class space used 5798K, capacity 6098K, committed 6272K, reserved 1048576K
}
[Times: user=5.48 sys=0.54, real=3.48 secs]
From: kurt greaves
Date: Tuesday, August 22, 2017 at 5:40 PM
To: User
Subject: Re: Cassandra crashes
So the reason for the large number of prepared statements is the nature of the
application.
One of the periodic jobs does lookups with a partial key (a key prefix, not
filtered queries) for thousands of rows.
Hence the large number of prepared statements.
Almost all of the queries, once execut
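One common way to keep the prepared-statement count bounded in a situation like this is to cache statements by query text, so each distinct query is prepared once no matter how many times it runs. A minimal sketch (the `FakeSession` below is a stand-in for a driver session; the `prepare`/`execute` names mirror the usual driver shape but are assumptions, not the thread's actual code):

```python
class FakeSession:
    """Stand-in for a driver session; counts server-side prepares."""
    def __init__(self):
        self.prepare_count = 0

    def prepare(self, query):
        self.prepare_count += 1
        return ("prepared", query)

    def execute(self, stmt, params):
        return (stmt, params)


class PreparedCache:
    """Prepare each distinct query text once and reuse the statement."""
    def __init__(self, session):
        self.session = session
        self._cache = {}

    def execute(self, query, params):
        stmt = self._cache.get(query)
        if stmt is None:
            stmt = self.session.prepare(query)  # happens once per query text
            self._cache[query] = stmt
        return self.session.execute(stmt, params)


session = FakeSession()
cache = PreparedCache(session)
for i in range(1000):
    cache.execute("SELECT * FROM t WHERE k = ?", (i,))
print(session.prepare_count)  # prints 1: a thousand executions, one prepare
```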
It sounds like Cassandra is being killed by the OOM killer. Can you check
dmesg to see if this is the case? It sounds a bit absurd with 256G of memory,
but it could be a config problem.
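Checking for oom-killer activity amounts to scanning the kernel log for a few telltale phrases. A sketch of that check (the sample text below is fabricated to show the typical shape of oom-killer output, not a real log from this thread):

```python
import re

# Phrases the kernel logs when the oom-killer fires.
OOM_RE = re.compile(r"out of memory|oom-killer|killed process", re.IGNORECASE)

def oom_lines(log_text, needle="java"):
    """Return log lines that look like OOM-killer activity involving `needle`."""
    return [line for line in log_text.splitlines()
            if OOM_RE.search(line) and needle in line]

sample = """\
[12345.678] Out of memory: Kill process 4242 (java) score 901 or sacrifice child
[12345.679] Killed process 4242 (java) total-vm:260046848kB, anon-rss:251658240kB
[12400.000] systemd[1]: Started Daily apt upgrade.
"""
print(len(oom_lines(sample)))  # prints 2
```

In practice you would feed this the output of `dmesg` or the contents of /var/log/messages rather than a sample string.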
On 08/22/2017 05:39 PM, Thakrar, Jayesh wrote:
Surbhi and Fay,
I agree we have plenty of RAM to spare.
Hi
At the very beginning of system.log there is a
INFO [CompactionExecutor:487] 2017-08-21 23:21:01,684
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate
0.0
AND speculative_retry = '99PERCENTILE';
From: "Fay Hou [Storage Service] "
Date: Tuesday, August 22, 2017 at 10:52 AM
To: "Thakrar, Jayesh"
Cc: "user@cassandra.apache.org" , Surbhi Gupta
Subject: Re: Cassandra crashes
What kind of compaction
(attached cassandra-gc.log.*)
Thanks for the quick replies!
Jayesh
*From: *Surbhi Gupta
*Date: *Tuesday, August 22, 2017 at 10:19 AM
*To: *"Thakrar, Jayesh" , "
user@cassandra.apache.org"
*Subject: *Re: Cassandra crashes
16GB heap is too small for G1GC. Try at least 32GB of heap size.
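If you do raise the heap, that is a cassandra-env.sh change along these lines (values illustrative only; whether 32GB is right depends on the workload, as the rest of this thread shows):

```shell
# cassandra-env.sh excerpt (illustrative values, not a recommendation)
MAX_HEAP_SIZE="32G"
# With G1GC, leave HEAP_NEWSIZE/eden unset; a later reply in this thread
# notes you typically don't set the eden space when using G1.
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
```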
On Tue, Aug 22, 2017 at 7:58 AM Fay Hou [Storage Service] <
fay...@coupang.com> wrote:
> What errors do you see?
> 16gb of 256 GB . Heap is too small. I would give heap at least 160gb.
>
>
> On Aug 22, 2017 7:42 AM, "Thakrar, Jayesh"
Hi All,
We are somewhat new users to Cassandra 3.10 on Linux and wanted to ping the
user group for their experiences.
Our usage profile is batch
You typically don't want to set the eden space when you're using G1
--
Jeff Jirsa
Hi All,
We are somewhat new users to Cassandra 3.10 on Linux and wanted to ping the
user group for their experiences.
Our usage profile is batch jobs that load millions of rows to Cassandra every
hour.
And there are similar periodic batch jobs that read millions of rows and do
some processing.
It could be the Linux kernel killing Cassandra because of memory usage. When
this happens, nothing is logged in Cassandra. Check the system
logs (/var/log/messages) and look for a message saying "Out of Memory" ... "kill
process" ...
On Mon, Jun 8, 2015 at 1:37 PM, Paulo Motta
wrote:
try checking your system logs (generally /var/log/syslog) to check if the
cassandra process was killed by the OS oom-killer
2015-06-06 15:39 GMT-03:00 Brian Sam-Bodden :
Berk,
1 GB is not enough to run C*; the minimum memory we use on Digital Ocean
is 4GB.
Cheers,
Brian
http://integrallis.com
On Sat, Jun 6, 2015 at 10:50 AM, wrote:
Hi all,
I've installed Cassandra on a test server hosted on Digital Ocean. The server
has 1GB RAM and is running a single Docker container alongside C*. Somehow,
every night, the Cassandra instance crashes. The annoying part is that I cannot
see anything wrong in the log files, so I can't tell
Hi John,
On 10.09.2013, at 01:06, John Sanda wrote:
> Check your file limits -
> http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=#cassandra/troubleshooting/trblshootInsufficientResources_r.html
Did that already - without success.
Meanwhile
Check your file limits -
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=#cassandra/troubleshooting/trblshootInsufficientResources_r.html
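To see what the limits actually are for the running process, a quick check along these lines can help (a sketch using Python's stdlib `resource` module, Unix-only; the 100000 threshold is a commonly cited recommendation, not a figure from the linked page):

```python
import resource

# Soft/hard limits on open file descriptors for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-file limit: soft=%s hard=%s" % (soft, hard))

# Cassandra holds many sstable and socket descriptors; a low soft limit is a
# common cause of "insufficient resources" failures under load.
if soft < 100000:
    print("soft limit may be too low for a loaded Cassandra node")
```

For the actual Cassandra process you would inspect /proc/<pid>/limits rather than the limits of your own shell.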
On Friday, September 6, 2013, Jan Algermissen wrote:
On 06.09.2013, at 17:07, Jan Algermissen wrote:
>
> On 06.09.2013, at 13:12, Alex Major wrote:
On 06.09.2013, at 13:12, Alex Major wrote:
Have you changed the appropriate config settings so that Cassandra will run
with only 2GB RAM? You shouldn't find the nodes go down.
Check out this blog post
http://www.opensourceconnections.com/2013/08/31/building-the-perfect-cassandra-test-environment/,
it outlines the configuration settings needed.
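A minimal sketch of what such a low-memory setup looks like in cassandra-env.sh (values are illustrative and are not taken from the blog post above):

```shell
# cassandra-env.sh excerpt for a small test node (illustrative values only)
MAX_HEAP_SIZE="1G"     # cap the JVM heap well below the node's 2GB of RAM
HEAP_NEWSIZE="256M"    # young-generation size paired with MAX_HEAP_SIZE (CMS-era setting)
```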
Hi,
I have set up C* in a very limited environment: 3 VMs at DigitalOcean with 2GB
RAM and 40GB SSDs, so my expectations about overall performance are low.
The keyspace uses a replication factor of 2.
I am loading 1.5 million rows (each with 60 columns of a mix of numbers and
small texts; 300,000 wide rows
Hi,
I'm sorry for the lack of information.
I'm using 0.6.3.
The move consisted of moving the data dir and the commitlog dir.
But I have now removed them and let the system bootstrap from the ring.
I know I'm lacking in information here, but I thought it needed to be
mentioned here that this could happen.
Pieter
> I've moved my cassandra to another machine, started it up again, but got
> this error
Which version of Cassandra exactly? (So that one can look at matching
source code)
Also, were you running the exact same version of Cassandra on both
servers (i.e., both the "source" and the "destination")?
Hi,
I've moved my cassandra to another machine, started it up again, but got
this error
INFO 22:06:28,931 Replaying
/var/lib/cassandra/commitlog/CommitLog-1279609619367.log,
/var/lib/cassandra/commitlog/CommitLog-1279805020866.log,
/var/lib/cassandra/commitlog/CommitLog-1279840051243.log
INFO