Re: OOM on heavy write load

2011-04-28 Thread Peter Schuller
> My gut feel: Maybe, if the slowness/timeouts reported by the OP are intermixed with periods of activity to indicate compacting full gc.

But even then, after taking a single full GC the behavior should disappear, since there should be no left-overs from the smaller columns causing fragmentation

Re: OOM on heavy write load

2011-04-28 Thread Peter Schuller
> Could this be related as well to https://issues.apache.org/jira/browse/CASSANDRA-2463?

My gut feel: Maybe, if the slowness/timeouts reported by the OP are intermixed with periods of activity to indicate compacting full gc. OP: Check if cassandra is going into 100% (not less, not more) CPU usage

Re: OOM on heavy write load

2011-04-28 Thread Thibaut Britz
> e exercise with very different data sizes.
>
> Also, you probably know this, but when setting your memory usage ceiling or heap size, make sure to leave a few hundred MBs for GC.
>
> From: Shu Zhang [szh...@med

Re: OOM on heavy write load

2011-04-27 Thread Aaron Morton
>> with very different data sizes.
>>
>> Also, you probably know this, but when setting your memory usage ceiling or heap size, make sure to leave a few hundred MBs for GC.
>>
>> From: Shu Zhang [szh...@mediosystems.c

Re: OOM on heavy write load

2011-04-27 Thread Nikolay Kоvshov
> Cassandra to guard against OOM, you must configure nodes such that the max memory usage on each node, that is the max size all your caches and memtables can potentially grow to, is less than your heap size.
>
> From: Nikolay Kоvshov [nkovs...@

RE: OOM on heavy write load

2011-04-25 Thread Shu Zhang
Zhang [szh...@mediosystems.com]
Sent: Monday, April 25, 2011 12:55 PM
To: user@cassandra.apache.org
Subject: RE: OOM on heavy write load

How large are your rows? binary_memtable_throughput_in_mb only tracks size of data, but there is an overhead associated with each row on the order of magnitude of
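Shu Zhang's point can be made concrete with a back-of-envelope calculation. The sketch below assumes a hypothetical per-row overhead figure for illustration; it is not a measured Cassandra constant, only a way to see how quickly untracked overhead can outgrow the data it accompanies.

```python
# Back-of-envelope heap estimate illustrating why a memtable threshold
# that counts only raw column data can badly understate real heap usage:
# each row also carries JVM object/bookkeeping overhead on top.
# The per-row overhead below is an assumed placeholder, not a measured value.

def estimated_memtable_heap_mb(data_mb, row_count, per_row_overhead_bytes=1024):
    """Raw data size plus an assumed per-row bookkeeping overhead."""
    overhead_mb = row_count * per_row_overhead_bytes / (1024 * 1024)
    return data_mb + overhead_mb

# 64 MB of tiny rows, e.g. one million rows of ~64 bytes each:
# the assumed overhead (~976 MB) dwarfs the data itself.
print(estimated_memtable_heap_mb(64, 1_000_000))  # 1040.5625
```

With many small rows, the threshold-tracked 64 MB corresponds to a gigabyte-scale real footprint under this assumption, which is the failure mode the message describes.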

RE: OOM on heavy write load

2011-04-25 Thread Shu Zhang
grow to, is less than your heap size.

From: Nikolay Kоvshov [nkovs...@yandex.ru]
Sent: Monday, April 25, 2011 5:21 AM
To: user@cassandra.apache.org
Subject: Re: OOM on heavy write load

I assume if I turn off swap it will just die earlier, no? What is the mechanism of dying?

Re: OOM on heavy write load

2011-04-25 Thread Nikolay Kоvshov
I assume if I turn off swap it will just die earlier, no? What is the mechanism of dying?

From the link you provided:

# Row cache is too large, or is caching large rows
my row_cache is 0

# The memtable sizes are too large for the amount of heap allocated to the JVM
Is my memtable size too large

Re: OOM on heavy write load

2011-04-22 Thread Jonathan Ellis
(0) turn off swap
(1) http://www.datastax.com/docs/0.7/troubleshooting/index#nodes-are-dying-with-oom-errors

On Fri, Apr 22, 2011 at 8:00 AM, Nikolay Kоvshov wrote:
> I am using Cassandra 0.7.0 with the following settings
>
> binary_memtable_throughput_in_mb: 64
> in_memory_compaction_limit_in_mb: 64

OOM on heavy write load

2011-04-22 Thread Nikolay Kоvshov
I am using Cassandra 0.7.0 with the following settings:

binary_memtable_throughput_in_mb: 64
in_memory_compaction_limit_in_mb: 64
keys_cached: 1 million
rows_cached: 0
RAM for Cassandra: 2 GB

I run a very simple test: 1 node with 4 HDDs (1 HDD for commitlog and caches, 3 HDDs for data), 1 KS => 1 CF => 1 Column
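Shu Zhang's sizing rule from earlier in the thread (the worst-case total of caches plus memtables must stay under the heap) can be checked against these numbers. The in-flight memtable count and per-key cache cost below are illustrative assumptions, not measured Cassandra values:

```python
# Apply the thread's sizing rule to the reported configuration: the
# worst-case total of caches plus memtables should fit under the heap.
# The multipliers below are assumptions for illustration only.

HEAP_MB = 2048                  # "RAM for Cassandra 2 GB"
MEMTABLE_THRESHOLD_MB = 64      # binary_memtable_throughput_in_mb
MEMTABLES_IN_FLIGHT = 2         # active memtable + one being flushed (assumption)
KEYS_CACHED = 1_000_000         # keys_cached
KEY_CACHE_ENTRY_BYTES = 100     # rough per-entry cost (assumption)

memtables_mb = MEMTABLE_THRESHOLD_MB * MEMTABLES_IN_FLIGHT
key_cache_mb = KEYS_CACHED * KEY_CACHE_ENTRY_BYTES / (1024 * 1024)
worst_case_mb = memtables_mb + key_cache_mb

print(f"worst case ~{worst_case_mb:.0f} MB of {HEAP_MB} MB heap")
```

Under these assumptions the configured ceilings fit comfortably in a 2 GB heap, which is why the thread turns to untracked per-row overhead as the likelier cause of the OOM.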