> My gut feel: Maybe, if the slowness/timeouts reported by the OP are
> intermixed with periods of activity to indicate compacting full gc.

But even then, after taking a single full GC the behavior should
disappear, since there should be no left-overs from the smaller columns
causing fragmentation.
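One way to confirm the compacting-full-GC theory is to enable GC logging
(assuming the 0.7 layout, this goes in conf/cassandra-env.sh; the log path
is just an example):

    JVM_OPTS="$JVM_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/cassandra/gc.log"

A node stuck in this state shows "Full GC" lines repeating every few seconds
with little memory reclaimed between them.
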
> Could this be related as well to
> https://issues.apache.org/jira/browse/CASSANDRA-2463?

My gut feel: Maybe, if the slowness/timeouts reported by the OP are
intermixed with periods of activity to indicate compacting full gc.

OP: Check if cassandra is going into 100% (not less, not more) CPU usage.
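A quick way to check that from outside the process is to poll the GC MBeans
over JMX; a minimal sketch, assuming Cassandra 0.7's default JMX port of 8080
(later versions moved to 7199) and a Java 7+ client:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class GcWatch {
        public static void main(String[] args) throws Exception {
            // Cassandra 0.7 exposes JMX on port 8080 by default (assumption).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                List<GarbageCollectorMXBean> gcs = ManagementFactory
                        .getPlatformMXBeans(conn, GarbageCollectorMXBean.class);
                long prev = 0;
                for (int i = 0; i < 60; i++) {
                    long total = 0;
                    for (GarbageCollectorMXBean gc : gcs) {
                        total += gc.getCollectionTime(); // cumulative ms per collector
                    }
                    // If this delta approaches the 5000 ms sleep interval, the
                    // node is spending essentially all of its time in GC.
                    System.out.println("GC ms in last 5s: " + (total - prev));
                    prev = total;
                    Thread.sleep(5000);
                }
            } finally {
                jmxc.close();
            }
        }
    }

If the printed delta stays close to the 5000 ms interval, the node is doing
little besides collecting, which matches the 100%-CPU signature described above.
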
>> ... with very different data sizes.
>>
>> Also, you probably know this, but when setting your memory usage ceiling or
>> heap size, make sure to leave a few hundred MBs for GC.
>> ________________________________
>> From: Shu Zhang [szh...@mediosystems.com]
> ... in Cassandra, to guard against OOM, you must configure nodes such that
> the max memory usage on each node, that is, the max size all your caches
> and memtables can potentially grow to, is less than your heap size.

From: Shu Zhang [szh...@mediosystems.com]
Sent: Monday, April 25, 2011 12:55 PM
To: user@cassandra.apache.org
Subject: RE: OOM on heavy write load

How large are your rows? binary_memtable_throughput_in_mb only tracks the size
of the data, but there is an overhead associated with each row on the order of
magnitude of ...
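For a rough sense of the arithmetic (this is my recollection of the 0.7-era
DataStax sizing guideline, so treat the constants as approximate):

    heap needed ~= memtable_throughput_in_mb * 3 * (number of hot CFs)
                   + 1 GB + internal caches

With the OP's single CF at 64 MB, that is 64 * 3 + 1024 ~= 1.2 GB before
counting the million cached keys, already uncomfortably close to a 2 GB heap
under sustained writes.
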
From: Nikolay Kovshov [nkovs...@yandex.ru]
Sent: Monday, April 25, 2011 5:21 AM
To: user@cassandra.apache.org
Subject: Re: OOM on heavy write load

I assume if I turn off swap it will just die earlier, no? What is the
mechanism of dying?

From the link you provided:

# Row cache is too large, or is caching large rows
My row_cache is 0.

# The memtable sizes are too large for the amount of heap allocated to the JVM
Is my memtable size too large?

(0) turn off swap
(1)
http://www.datastax.com/docs/0.7/troubleshooting/index#nodes-are-dying-with-oom-errors
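(On Linux, turning swap off means swapoff -a for the running system, plus
removing the swap entries from /etc/fstab so it stays off after a reboot. As
to the mechanism: with swap enabled the heap gets paged out and every GC turns
into disk I/O, so the node limps along with huge pauses and timeouts instead
of failing fast with an OutOfMemoryError.)
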
On Fri, Apr 22, 2011 at 8:00 AM, Nikolay Kovshov wrote:
> I am using Cassandra 0.7.0 with following settings
>
> binary_memtable_throughput_in_mb: 64
> in_memory_compaction_limit_in_mb: 64

I am using Cassandra 0.7.0 with the following settings:

binary_memtable_throughput_in_mb: 64
in_memory_compaction_limit_in_mb: 64
keys_cached: 1 million
rows_cached: 0
RAM for Cassandra: 2 GB

I run a very simple test:

1 node with 4 HDDs (1 HDD - commitlog and caches, 3 HDDs - data)
1 KS => 1 CF => 1 Column ...
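For anyone trying to reproduce this, here is the kind of writer loop I read
the test as; a minimal sketch against the 0.7 Thrift API (the keyspace/CF
names, value size, and connection settings are my assumptions, not the OP's):

    import java.nio.ByteBuffer;
    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.Column;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class WriteLoad {
        public static void main(String[] args) throws Exception {
            // 0.7 speaks framed Thrift on port 9160 by default.
            TTransport tr = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(tr));
            tr.open();
            client.set_keyspace("KS1");                    // hypothetical KS name
            ColumnParent parent = new ColumnParent("CF1"); // hypothetical CF name
            ByteBuffer colName = ByteBuffer.wrap("col".getBytes("UTF-8"));
            for (long i = 0; i < 100000000L; i++) {
                ByteBuffer key = ByteBuffer.wrap(Long.toString(i).getBytes("UTF-8"));
                // one small column per row; the 128-byte value size is a guess
                Column col = new Column(colName,
                        ByteBuffer.wrap(new byte[128]),
                        System.currentTimeMillis() * 1000); // usec timestamps, 0.7 convention
                client.insert(key, parent, col, ConsistencyLevel.ONE);
            }
            tr.close();
        }
    }

With rows this small, the fixed per-row and per-column object overhead on the
heap dwarfs the serialized bytes that binary_memtable_throughput_in_mb counts,
which is exactly Shu's point above.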