Interesting. I'm not sure what to do with that information, but interesting. :)
2012/1/16 Todd Burruss :
> I did a little more digging and a lot of the "overhead" I see in the cache
> is from the usage of ByteBuffer. Each ByteBuffer takes 48 bytes,
> regardless of the data it represents. so for a single IColumn stored in
> the cache, 96 bytes (one for name, one for value) are for ByteBuffer's
> needs.
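Todd's numbers make it easy to sanity-check how much of the cache footprint could be ByteBuffer bookkeeping alone. A back-of-the-envelope sketch (the 48-byte shallow size and the row/column counts are from this thread; the class and method names are mine):

```java
// Rough estimate of ByteBuffer bookkeeping overhead for the cache in
// this thread: 2,500 rows x 70,000 columns, each IColumn holding two
// ByteBuffers (name + value) at ~48 bytes of shallow size apiece.
public class ByteBufferOverhead {
    static final long BYTES_PER_BUFFER = 48;   // shallow ByteBuffer size on a 64-bit JVM (per this thread)
    static final long BUFFERS_PER_COLUMN = 2;  // one for the column name, one for the value

    static long overheadBytes(long rows, long columnsPerRow) {
        return rows * columnsPerRow * BUFFERS_PER_COLUMN * BYTES_PER_BUFFER;
    }

    public static void main(String[] args) {
        long bytes = overheadBytes(2_500, 70_000);
        System.out.printf("ByteBuffer overhead alone: %.1f GiB%n",
                          bytes / (1024.0 * 1024 * 1024));
        // prints: ByteBuffer overhead alone: 15.6 GiB
    }
}
```

That is roughly 15.6 GiB of the ~25 GB cache attributable to ByteBuffer headers before any actual column data, which is consistent with the overhead Todd describes.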
The serializing cache is basically optimal. Your problem is really
that row cache is not designed for wide rows at all. See
https://issues.apache.org/jira/browse/CASSANDRA-1956
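For anyone wanting to try the serializing cache, it is selected per column family. A sketch, assuming the 1.0-era cassandra-cli attribute names (`rows_cached`, `row_cache_provider`) and a hypothetical column family `Users`:

```
update column family Users
  with rows_cached = 10000
  and row_cache_provider = 'SerializingCacheProvider';
```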
On Thu, Jan 12, 2012 at 10:46 PM, Todd Burruss wrote:
> after looking through the code it seems fairly straight forward to create
> some different cache providers and try some things.
>
> has anyone tried ehcache w/o persistence? I see this JIRA
> https://issues.apache.org/jira/browse/CASSANDRA-1945 but the main
> complaint was the disk serialization, which I d
Received: Thursday, 12 Jan 2012, 6:18pm
To: dev@cassandra.apache.org [dev@cassandra.apache.org]
Subject: Re: Cache Row Size
8x is pretty normal for JVM and bookkeeping overhead with the CLHCP.
The SerializedCacheProvider is the default in 1.0 and is much lighter-weight.
On Thu, Jan 12, 2012 at 6:07 PM, Todd Burruss wrote:
> I'm using ConcurrentLinkedHashCacheProvider and my data on disk is about 4gb,
> but the RAM used by the cache is around 25gb.
thx for the info. I'm a bit leery of memcached (or any out-of-process
cache) because of coherency issues:
https://issues.apache.org/jira/browse/CASSANDRA-2701
On 1/12/12 5:50 PM, "Bruno Leonardo Gonçalves" wrote:
> Twitter engineers reported a similar experience [1] (slide 32). They
> managed to reduce memory usage by 45% with a cache provider backed by
> Memcached. Lately I've been worrying a lot about the bloat of Java
> objects. On 64-bit servers, have you tried using the JVM option
> -XX:+UseCompressedOops? This presen
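If anyone wants to try Bruno's suggestion, the flag goes in conf/cassandra-env.sh. Note that HotSpot JVMs from 6u23 onward enable compressed oops by default for heaps under ~32 GB, so a heap this size still qualifies; setting it explicitly is harmless. A sketch:

```
# conf/cassandra-env.sh
# Compressed oops shrink object references from 8 to 4 bytes on 64-bit
# JVMs with heaps under ~32 GB (default on modern JVMs, explicit here).
JVM_OPTS="$JVM_OPTS -XX:+UseCompressedOops"
```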
I'm using ConcurrentLinkedHashCacheProvider and my data on disk is about 4gb,
but the RAM used by the cache is around 25gb. I have 70k columns per row, and
only about 2500 rows – so a lot more columns than rows. has there been any
discussion or JIRAs discussing reducing the size of the cache?