Wei, I'm using the off-heap (serialised) row cache and fronting the entire thing with memcached in the middle layer (to keep the most actively requested rows from pressuring the Cassandra heap). As for how much the pointers to the off-heap memory will take up: time will tell (it should be easy to calculate, though). It's been working out great.
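(For illustration only, not from the thread: a minimal sketch of what a cache-aside read path through a memcached middle layer could look like, assuming the spymemcached client; readRowFromCassandra is a hypothetical placeholder for whatever Cassandra client you use, and the TTL is an arbitrary example value.)

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class RowCacheAside {
    // TTL for cached rows; tune to your workload (illustrative value)
    private static final int TTL_SECONDS = 300;

    private final MemcachedClient memcached;

    public RowCacheAside(String memcachedHost) throws IOException {
        this.memcached = new MemcachedClient(new InetSocketAddress(memcachedHost, 11211));
    }

    /** Read path: check memcached first, fall back to Cassandra on a miss. */
    public byte[] readRow(String rowKey) {
        byte[] cached = (byte[]) memcached.get(rowKey);
        if (cached != null) {
            return cached;                           // hot row served without touching Cassandra
        }
        byte[] row = readRowFromCassandra(rowKey);   // placeholder for your Cassandra client call
        if (row != null) {
            memcached.set(rowKey, TTL_SECONDS, row); // populate the middle-layer cache
        }
        return row;
    }

    // Placeholder: issue the read with whatever client you use (Hector, Astyanax, ...)
    private byte[] readRowFromCassandra(String rowKey) {
        throw new UnsupportedOperationException("wire up your Cassandra client here");
    }
}
```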
Andras Szerdahelyi
Solutions Architect, IgnitionOne | 1831 Diegem, E.Mommaertslaan 20A
M: +32 493 05 50 88 | Skype: sandrew84

On 19 Nov 2012, at 22:23, Wei Zhu <wz1...@yahoo.com> wrote:

Just curious, Andras: how can you manage such a big row cache (10-15GB currently)? By the usual recommendation the row cache should be around 10% of your heap, so is your heap over 100GB? The largest DataStax recommends is 8GB, and it seems to be a hardcoded limit in cassandra-env.sh ("# calculate 1/4 ram and cap to 8192MB"). Does your GC hold up with such a big heap? In my experience, a full GC can take over 20 seconds on a heap that size.
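(A rough worked example of the sizing arithmetic behind the quoted question, assuming the "1/4 of RAM capped at 8192MB" rule quoted from cassandra-env.sh and the ~10%-of-heap row cache rule of thumb; the class and numbers are illustrative only. The off-heap serialised cache in the reply above avoids this bind, since the serialised rows don't live on the heap.)

```java
public class HeapSizing {
    /** Simplified version of the calculation quoted from cassandra-env.sh: 1/4 of RAM, capped at 8192 MB. */
    static long defaultMaxHeapMb(long systemMemoryMb) {
        return Math.min(systemMemoryMb / 4, 8192);
    }

    /** Heap implied by the "row cache <= ~10% of heap" rule of thumb. */
    static long impliedHeapMbForRowCache(long rowCacheMb) {
        return rowCacheMb * 10;
    }

    public static void main(String[] args) {
        long rowCacheMb = 15 * 1024; // 15 GB row cache, as in the thread
        System.out.println("implied heap (MB): " + impliedHeapMbForRowCache(rowCacheMb)); // 153600, i.e. ~150 GB
        System.out.println("default cap (MB):  " + defaultMaxHeapMb(64 * 1024));          // 8192, for a 64 GB box
    }
}
```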