O/S buffer cache,
because to write to disk you pass through the buffer cache first.
From: Aaron Ploetz
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, June 2, 2020 at 9:38 AM
To: "user@cassandra.apache.org"
Subject: Re: Cassandra crashes when using offheap_objects for memtables
primary key ((partition_key, clustering_key))
Also, this primary key definition does not define a partitioning key and a
clustering key. It defines a *composite* partition key.
If you want it to define both a partition key and a clustering key, remove
the inner set of parens:
primary key (partition_key, clustering_key)
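The difference is easiest to see side by side; this is a sketch with hypothetical column names:

```sql
-- Double parens: BOTH columns form a composite partition key;
-- there is no clustering key at all.
CREATE TABLE t1 (
    partition_key  text,
    clustering_key text,
    value          text,
    PRIMARY KEY ((partition_key, clustering_key))
);

-- Single parens: the first column is the partition key,
-- the second is a clustering key within each partition.
CREATE TABLE t2 (
    partition_key  text,
    clustering_key text,
    value          text,
    PRIMARY KEY (partition_key, clustering_key)
);
```

With t1 you must supply both columns to read a row; with t2 you can query a whole partition by `partition_key` alone and get rows ordered by `clustering_key`.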
I would try running it with memtable_offheap_space_in_mb at the default for
sure, but definitely lower than 8GB. With 32GB of RAM, you're already
allocating half of that for your heap, and then halving the remainder for
off heap memtables. What's left may not be enough for the OS, etc. Giving
so
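As a sketch, these are the cassandra.yaml knobs under discussion (the value shown is illustrative, not a recommendation):

```yaml
# cassandra.yaml -- offheap memtable settings from this thread
memtable_allocation_type: offheap_objects

# Leaving this unset uses the default (a fraction of the heap size),
# which is what the advice above suggests trying first,
# rather than a fixed 8192 (8GB):
# memtable_offheap_space_in_mb: 2048
```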
What’s the cardinality of hash?
Do they have the same schema? If so you may be able to take a snapshot and
hardlink it in / refresh instead of sstableloader. Alternatively you could drop
the index from the destination keyspace and add it back in after the load
finishes.
How big are the sstabl
What does the “hash” data look like?
Rahul
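The snapshot/hardlink/refresh approach suggested above could look roughly like this sketch; the keyspace, table, and data paths are hypothetical and vary by install:

```
# 1. Snapshot the source keyspace
nodetool snapshot -t clone_src keyspace1

# 2. Hardlink the snapshot SSTables into the destination table's
#    data directory (same schema assumed on both sides; the glob
#    must resolve to a single table directory)
ln /var/lib/cassandra/data/keyspace1/mytable-*/snapshots/clone_src/* \
   /var/lib/cassandra/data/keyspace2/mytable-*/

# 3. Pick up the newly linked SSTables without a restart
nodetool refresh keyspace2 mytable
```

Hardlinks avoid copying the data, but this only works when source and destination live on the same filesystem and node.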
On Jul 24, 2018, 11:30 AM -0400, Arpan Khandelwal , wrote:
> I need to clone data from one keyspace to another keyspace.
> We do it by taking snapshot of keyspace1 and restoring in keyspace2 using
> sstableloader.
>
> Suppose we have the following table with ind
Metaspace       used 43488K, capacity 45128K, committed 45696K, reserved 1089536K
  class space   used 5798K, capacity 6098K, committed 6272K, reserved 1048576K
}
[Times: user=5.48 sys=0.54, real=3.48 secs]
From: kurt greaves
Date: Tuesday, August 22, 2017 at 5:40 PM
To: User
Subject: Re: Cassandra crashes
So the reason for the large number of prepared statements is the nature of
the application.
One of the periodic jobs does lookups with a partial key (a key prefix, not
filtered queries) for thousands of rows.
Hence the large number of prepared statements.
Almost all of the queries, once execut
sounds like Cassandra is being killed by the oom killer. can you check
dmesg to see if this is the case? sounds a bit absurd with 256g of memory
but could be a config problem.
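Checking for the oom-killer comes down to a grep like the one below; the log file here is a fabricated sample purely to show the matching pattern (real entries appear in `dmesg` or /var/log/messages):

```shell
# Fabricated sample of what oom-killer entries typically look like
cat > /tmp/sample_dmesg.txt <<'EOF'
Out of memory: Kill process 12345 (java) score 902 or sacrifice child
Killed process 12345 (java) total-vm:270000000kB, anon-rss:260000000kB
EOF

# On a real host you would run:
#   dmesg -T | grep -Ei 'out of memory|killed process'
grep -Ei 'out of memory|killed process' /tmp/sample_dmesg.txt
```

If the killed process is the Cassandra JVM, the fix is a memory-budget problem (heap + offheap + OS), not a Cassandra bug.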
On 08/22/2017 05:39 PM, Thakrar, Jayesh wrote:
Surbhi and Fay,
I agree we have plenty of RAM to spare.
Hi
At the very beginning of system.log there is a
INFO [CompactionExecutor:487] 2017-08-21 23:21:01,684
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate
0.0
AND speculative_retry = '99PERCENTILE';
From: "Fay Hou [Storage Service] "
Date: Tuesday, August 22, 2017 at 10:52 AM
To: "Thakrar, Jayesh"
Cc: "user@cassandra.apache.org" , Surbhi Gupta
Subject: Re: Cassandra crashes
What kind of compaction
ed cassandra-gc.log.*)
Thanks for the quick replies!
Jayesh
*From: *Surbhi Gupta
*Date: *Tuesday, August 22, 2017 at 10:19 AM
*To: *"Thakrar, Jayesh" , "user@cassandra.apache.org"
*Subject: *Re: Cassandra crashes
16GB heap is too small for G1GC. Try at least 32GB of heap size.
On Tue, Aug 22, 2017 at 7:58 AM Fay Hou [Storage Service] <
fay...@coupang.com> wrote:
> What errors do you see?
> 16GB out of 256GB. The heap is too small. I would give the heap at least 160GB.
>
>
> On Aug 22, 2017 7:42 AM, "Thakrar, Jayes
What errors do you see?
16GB out of 256GB. The heap is too small. I would give the heap at least 160GB.
On Aug 22, 2017 7:42 AM, "Thakrar, Jayesh"
wrote:
Hi All,
We are somewhat new users to Cassandra 3.10 on Linux and wanted to ping the
user group for their experiences.
Our usage profile is batch
You typically don't want to set the eden space when you're using G1
--
Jeff Jirsa
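In jvm.options terms (Cassandra 3.x) the point above amounts to something like this sketch; the heap size is just the 16GB from this thread, not a recommendation:

```
### G1 settings -- illustrative sketch
-Xms16G
-Xmx16G
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500

### Deliberately NOT set: -Xmn / -XX:NewSize / -XX:MaxNewSize.
### Pinning the young generation defeats G1's adaptive sizing,
### which resizes eden on its own to meet the pause-time goal.
```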
> On Aug 22, 2017, at 7:42 AM, Thakrar, Jayesh
> wrote:
>
> Hi All,
>
> We are somewhat new users to Cassandra 3.10 on Linux and wanted to ping the
> user group for their experiences.
>
> Our usage profile
It could be the linux kernel killing Cassandra b/c of memory usage. When
this happens, nothing is logged in Cassandra. Check the system
logs: /var/log/messages Look for a message saying "Out of Memory"... "kill
process"...
On Mon, Jun 8, 2015 at 1:37 PM, Paulo Motta
wrote:
> try checking your s
try checking your system logs (generally /var/log/syslog) to check if the
cassandra process was killed by the OS oom-killer
2015-06-06 15:39 GMT-03:00 Brian Sam-Bodden :
> Berk,
> 1 GB is not enough to run C*; the minimum memory we use on Digital
> Ocean is 4GB.
>
> Cheers,
> Brian
> http://in
Berk,
1 GB is not enough to run C*; the minimum memory we use on Digital Ocean
is 4GB.
Cheers,
Brian
http://integrallis.com
On Sat, Jun 6, 2015 at 10:50 AM, wrote:
> Hi all,
>
> I've installed Cassandra on a test server hosted on Digital Ocean. The
> server has 1GB RAM, and is running a sing
Hi John,
On 10.09.2013, at 01:06, John Sanda wrote:
> Check your file limits -
> http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=#cassandra/troubleshooting/trblshootInsufficientResources_r.html
Did that already - without success.
Meanwhil
Check your file limits -
http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=#cassandra/troubleshooting/trblshootInsufficientResources_r.html
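Those limits usually end up in /etc/security/limits.conf; a sketch along the lines of the commonly published guidance (verify the exact values against the linked page for your version):

```
# /etc/security/limits.conf -- typical Cassandra limits; check with
# `ulimit -a` as the user that runs Cassandra
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
cassandra  -  nproc    32768
cassandra  -  as       unlimited
```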
On Friday, September 6, 2013, Jan Algermissen wrote:
>
> On 06.09.2013, at 13:12, Alex Major wrote:
>
On 06.09.2013, at 17:07, Jan Algermissen wrote:
>
> On 06.09.2013, at 13:12, Alex Major wrote:
>
>> Have you changed the appropriate config settings so that Cassandra will run
>> with only 2GB RAM? You shouldn't find the nodes go down.
>>
>> Check out this blog post
>> http://www.opensourc
On 06.09.2013, at 13:12, Alex Major wrote:
> Have you changed the appropriate config settings so that Cassandra will run
> with only 2GB RAM? You shouldn't find the nodes go down.
>
> Check out this blog post
> http://www.opensourceconnections.com/2013/08/31/building-the-perfect-cassandra-tes
Have you changed the appropriate config settings so that Cassandra will run
with only 2GB RAM? You shouldn't find the nodes go down.
Check out this blog post
http://www.opensourceconnections.com/2013/08/31/building-the-perfect-cassandra-test-environment/,
it outlines the configuration settings nee
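For a 2GB box the key knobs are the heap settings in cassandra-env.sh; a minimal sketch, with hypothetical values in the spirit of that post:

```shell
# cassandra-env.sh -- low-memory test settings (illustrative values only);
# setting both explicitly stops Cassandra from auto-sizing the heap
# from total RAM.
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
```

Reducing concurrent_reads/concurrent_writes and memtable sizes in cassandra.yaml is typically also needed at this scale.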
> I'm sorry for the lack of information
> I'm using 0.6.3.
> The move was moving the data dir and the commitlog dir
> But i now removed them and let the system bootstrap from the ring.
> I know I'm lacking information here, but I thought it needed to be
> mentioned that this could happen.
Hi,
I'm sorry for the lack of information
I'm using 0.6.3.
The move was moving the data dir and the commitlog dir
But i now removed them and let the system bootstrap from the ring.
I know I'm lacking information here, but I thought it needed to be
mentioned that this could happen.
Pieter
> I've moved my cassandra to another machine, started it up again, but got
> this error
Which version of Cassandra exactly? (So that one can look at matching
source code)
Also, were you running the exact same version of Cassandra on both
servers (i.e., both the "source" and the "destination")?