Hi,

yes, the heap size is set to 2GB on all nodes. Without any activity, the heap 
usage is less than 1GB. Does this include the bloom filters?

From the logs I can see that at the beginning of the test the GC is able to 
free enough memory to bring the heap usage down to 1GB or less. Then, however, 
a lot of ParNew collections kick in, and over time the full collections free 
less and less memory while more ReadStage tasks are pending. Here are some 
excerpts:

INFO 09:12:59,425 GC for ConcurrentMarkSweep: 2218 ms for 1 collections, 1313717752 used; max is 2120679424

INFO 09:13:23,461 GC for ParNew: 408 ms for 2 collections, 1618154920 used; max is 2120679424
INFO 09:13:24,463 GC for ParNew: 543 ms for 1 collections, 1754222312 used; max is 2120679424
INFO 09:13:26,318 GC for ConcurrentMarkSweep: 1332 ms for 1 collections, 732093216 used; max is 2120679424
INFO 09:13:26,318 Pool Name                    Active   Pending   Blocked
INFO 09:13:26,319 ReadStage                         1         1         0

INFO 09:13:44,490 GC for ParNew: 479 ms for 2 collections, 1748301688 used; max is 2120679424
INFO 09:13:48,314 GC for ConcurrentMarkSweep: 3596 ms for 1 collections, 1818290728 used; max is 2120679424
INFO 09:13:48,314 Pool Name                    Active   Pending   Blocked
INFO 09:13:48,315 ReadStage                         5         5         0

WARN 09:13:48,351 Heap is 0.8574095204688514 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO 09:13:48,352 Unable to reduce heap usage since there are no dirty column families
INFO 09:13:53,277 GC for ConcurrentMarkSweep: 4530 ms for 1 collections, 2013473672 used; max is 2120679424
INFO 09:13:53,277 Pool Name                    Active   Pending   Blocked
INFO 09:13:53,277 ReadStage                         7         7         0

This was with multiple concurrent reads from the same row. 

I'm not doing any writes right now, so changing the memtable size won't have 
any effect.
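
As a sanity check on the bloom filter numbers, here is a rough sketch of how I'd 
estimate their size, using the standard formula bits = -n * ln(p) / (ln 2)^2. The 
key count and false-positive rate below are placeholders for illustration, not our 
actual values, and this is the generic formula rather than Cassandra's exact 
implementation:

// Rough bloom filter size estimate (generic formula, not Cassandra's exact code path)
public class BloomFilterEstimate {

    // Estimated size in bytes for n keys at false-positive rate p:
    // bits = n * -ln(p) / (ln 2)^2
    static long estimateBytes(long n, double p) {
        double bitsPerKey = -Math.log(p) / (Math.log(2) * Math.log(2));
        return (long) Math.ceil(n * bitsPerKey / 8.0);
    }

    public static void main(String[] args) {
        long keys = 1000000000L;  // placeholder key count, not measured
        double fpRate = 0.01;     // placeholder false-positive rate
        System.out.printf("estimated bloom filter size: ~%.2f GB%n",
                estimateBytes(keys, fpRate) / (1024.0 * 1024 * 1024));
    }
}

Plugging in our real per-node key counts should show whether the bloom filters 
alone account for most of the 2GB heap, as you suggested.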

Cheers,
Günter

On 09.11.2011, at 23:02, Peter Schuller wrote:

> Ah, you have two CF:s. And my mistake was that I accidentally treated
> bits as bytes ;)
> 
> My calc is that the bloom filter sizes per node for you should be
> about 1.8-1.9 GB. If you haven't touched heap size, IIRC the default
> is still going to be 2GB for your 4 GB machine (not sure, please
> confirm if it matters). That might be consistent with what you're
> seeing; BF:s taking almost all of available heap size, so pretty easy
> to cause OOM:s by throwing traffic on it.
> 
> -- 
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)

--  
Dipl.-Inform. Günter Ladwig

Karlsruhe Institute of Technology (KIT)
Institute AIFB

Englerstraße 11 (Building 11.40, Room 250)
76131 Karlsruhe, Germany
Phone: +49 721 608-47946
Email: guenter.lad...@kit.edu
Web: www.aifb.kit.edu

KIT – University of the State of Baden-Württemberg and National Large-scale 
Research Center of the Helmholtz Association
