Twitter engineers reported a similar experience [1] (slide 32). They managed to reduce memory usage by 45% with a cache provider backed by Memcached. Lately I've been worrying a lot about the memory bloat of Java objects. On 64-bit servers, have you tried the JVM option -XX:+UseCompressedOops? This presentation [2] made me even more worried. Please let us know about any progress in your experience. :-)
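
To put rough numbers on it, here is a back-of-envelope sketch of how a few GB of column data can balloon once every column becomes a Java object with its own header and uncompressed references on a 64-bit JVM. The per-object costs and the entry layout below are my own assumptions for illustration, not taken from the Cassandra source:

// Rough estimate of per-column heap overhead in an on-heap cache.
// Sizes are approximate 64-bit HotSpot costs WITHOUT compressed oops;
// the entry/wrapper layout is hypothetical, not Cassandra's actual classes.
public class CacheOverheadEstimate {

    static final int OBJECT_HEADER = 16;  // mark word + class pointer
    static final int REFERENCE     = 8;   // uncompressed oop
    static final int ARRAY_HEADER  = 24;  // header + length field, padded

    public static void main(String[] args) {
        long columnsPerRow = 70000;
        long rows = 2500;

        // Hypothetical cost of one cached column: a map entry holding a
        // small wrapper object plus two short byte[] payloads (name, value).
        long perColumn =
                OBJECT_HEADER + 3 * REFERENCE     // map entry: key, value, next
              + OBJECT_HEADER + 2 * REFERENCE     // column wrapper object
              + 2 * (ARRAY_HEADER + 16);          // two ~16-byte byte[] payloads

        long totalBytes = perColumn * columnsPerRow * rows;
        System.out.printf("~%.1f GB of heap just for object overhead%n",
                totalBytes / (1024.0 * 1024 * 1024));
    }
}

With those assumed numbers the overhead alone lands in the tens of GB, which is the same order of magnitude as the 25 GB you are seeing; compressed oops or an off-heap/Memcached-backed cache cuts most of the per-reference and per-header cost.
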
[1] http://www.scribd.com/doc/59830692/Cassandra-at-Twitter
[2] http://www.cs.virginia.edu/kim/publicity/pldi09tutorials/memory-efficient-java-tutorial.pdf

--
Bruno Leonardo Gonçalves

On Thu, Jan 12, 2012 at 22:07, Todd Burruss <bburr...@expedia.com> wrote:
> I'm using ConcurrentLinkedHashCacheProvider and my data on disk is about
> 4gb, but the RAM used by the cache is around 25gb. I have 70k columns per
> row, and only about 2500 rows, so a lot more columns than rows. Has there
> been any discussion or JIRAs about reducing the size of the cache? I can
> understand the overhead for column names, etc., but the ratio seems a bit
> distorted.
>
> I'm tracing through the code, so any pointers to help me understand are
> appreciated.
>
> thx