Both caches involve several objects per entry (What do we want? Packed objects. When do we want them? Now!). The "size" is an estimate of the off-heap values only, not the total size nor the number of entries.

An acceptable size will depend on your data and access patterns. In one case we had a cluster that, at 512 MB, would go into a GC death spiral despite plenty of free heap (presumably just due to the sheer number of objects), while empirically the same cluster runs smoothly at 384 MB.

Your caches appear to be on the larger side; I suggest trying smaller values and only increasing them when doing so produces measurable, sustained gains.
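
If you want a concrete starting point, something along these lines in
cassandra.yaml would be a reasonable first experiment (the values below
are only illustrative, not tuned recommendations; measure GC behaviour
before and after each change):

key_cache_size_in_mb: 100
row_cache_size_in_mb: 256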

On 11/05/2013 04:04 AM, Jiri Horky wrote:
Hi there,

we are seeing extensive memory allocation leading to quite long and
frequent GC pauses when using the row cache. This is on a Cassandra 2.0.0
cluster with the JNA 4.0 library and the following settings:

key_cache_size_in_mb: 300
key_cache_save_period: 14400
row_cache_size_in_mb: 1024
row_cache_save_period: 14400
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32

-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms10G -Xmx10G
-Xmn1024M -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/data2/cassandra-work/instance-1/cassandra-1383566283-pid1893.hprof
-Xss180k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB -XX:+UseCondCardMark

We have disabled the row cache on one node to see the difference. Please
see the attached plots from VisualVM; I think the effect is quite
visible. I have also taken 10x "jmap -histo", 5 s apart, on an affected
server and plotted the result, attached as well.
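
(Roughly, the histograms were taken with something like the following,
where $CASSANDRA_PID stands for the Cassandra process ID:)

for i in $(seq 1 10); do
    # class histogram of the whole heap, one file per snapshot
    jmap -histo "$CASSANDRA_PID" > jmap-histo.$i
    sleep 5
done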

I have taken a heap dump of the application when the heap size was 10 GB;
most of the memory was unreachable, which was expected. The majority was
used by 55-59M instances of the HeapByteBuffer, byte[] and
org.apache.cassandra.db.Column classes. I also include a list of inbound
references to the HeapByteBuffer objects, from which it should be visible
where they are being allocated. This was acquired using Eclipse MAT.

Here is a comparison of GC times with the row cache enabled and disabled:

prg01 - row cache enabled
       - uptime 20h45m
       - ConcurrentMarkSweep - 11494686 ms
       - ParNew - 14690885 ms
       - time spent in GC: 35%
prg02 - row cache disabled
       - uptime 23h45m
       - ConcurrentMarkSweep - 251 ms
       - ParNew - 230791 ms
       - time spent in GC: 0.27%
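
(The percentages are presumably just total GC time divided by uptime:
for prg01, (11494686 + 14690885) ms ≈ 26186 s out of 20h45m = 74700 s,
i.e. about 35%; for prg02, (251 + 230791) ms ≈ 231 s out of 85500 s,
i.e. about 0.27%.)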

I would be grateful for any hints. Please let me know if you need any
further information. For now, we are going to disable the row cache.

Regards
Jiri Horky

