Added the key to the in_memory_compaction_limit threshold log:

            logger.info(String.format("Compacting large row %s (%d bytes) incrementally",
                                      FBUtilities.bytesToHex(rows.get(0).getKey().key), rowSize));
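
For context, here is a rough standalone sketch of the decision that log line
sits in: rows at or above the in-memory compaction limit get the message above
(now including the hex key) and take the incremental path instead of being
materialized in memory. The class, method, and threshold value below are
illustrative only, not the actual compaction code.

    // Illustrative stand-in for the compaction-time size check; the real
    // logic lives in Cassandra's compaction code and uses its own row,
    // key, and logger types.
    public class LargeRowLogSketch
    {
        // assumed threshold: the in-memory compaction limit, in bytes
        private static final long IN_MEMORY_COMPACTION_LIMIT_BYTES = 64L * 1024 * 1024;

        // hex-encode a key the same way the log line above does
        static String bytesToHex(byte[] key)
        {
            StringBuilder sb = new StringBuilder(key.length * 2);
            for (byte b : key)
                sb.append(String.format("%02x", b & 0xff));
            return sb.toString();
        }

        // Returns true when the row is too large to compact in memory,
        // logging the offending key so it can be tracked down later.
        static boolean shouldCompactIncrementally(byte[] firstKey, long rowSize)
        {
            if (rowSize < IN_MEMORY_COMPACTION_LIMIT_BYTES)
                return false;

            // same message format as the patch, with the key included
            // (stdout here; the real code goes through logger.info)
            System.out.println(String.format("Compacting large row %s (%d bytes) incrementally",
                                             bytesToHex(firstKey), rowSize));
            return true;
        }

        public static void main(String[] args)
        {
            byte[] key = "user:12345".getBytes();
            shouldCompactIncrementally(key, 128L * 1024 * 1024); // 128 MB row -> warned
        }
    }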

On Wed, Aug 11, 2010 at 4:11 PM, Edward Capriolo <edlinuxg...@gmail.com> wrote:
> Hello all,
>
> I recently posted on list about a situation where two of the nodes in
> my 16-node cluster were garbage collecting and OOMing. I was able to
> raise my Xmx from 9 GB to 11 GB and watch what happened: rather than
> the usual sawtooth memory pattern, the heap would sawtooth around
> 4 GB and then shoot up like a rocket.
>
> After digging around I noticed the JMX row stats on that node showed
> a max compacted row size of 128 MB, while the mean row size was
> 2000 bytes.
>
> At the time I was unaware of the setting that warns about large rows
> during compaction. Unfortunately the default (512 MB) is too high for
> me, since I have been using the row cache.
>
> When something gets this key, extreme memory pressure is put on the
> system to move it in and out of the row cache.
>
> I was able to lower this setting to 10 MB and got nice warnings
> printed showing me the offending keys. I do not know how this got
> there. My guess is that null is getting encoded into the key, and
> that key becomes the graveyard for bad data.
>
> Until the row cache can handle large rows better, I find it
> imperative to keep the setting and the warnings, since writing a
> program to range scan all the data to find one big key is very
> intensive.
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
