I can't emphasise enough the importance of testing row caching against your workload for sustained periods and comparing the results to simply leveraging the filesystem cache and/or SSDs.

That said, the default off-heap cache can work for structures that don't mutate frequently and whose rows are not very wide, so that the in-and-out-of-heap serialization overhead is minimised (I've seen the off-heap cache slow a system down because of serialization costs).

The on-heap cache can update in place, which is nice for more frequently changing structures, and for larger structures, because it dodges the off-heap cache's serialization overhead. One problem I've experienced with the on-heap cache is the working set exceeding the allocated space, resulting in GC pressure from sustained thrash/evictions.
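For concreteness, a minimal sketch of the knobs involved, using the
cassandra.yaml option names from the 1.1/1.2 era; the values here are
purely illustrative, not recommendations:

    # cassandra.yaml (illustrative values)
    row_cache_size_in_mb: 512    # 0 disables the row cache
    row_cache_save_period: 0     # seconds between saving the cache to disk

    # off-heap, serializing provider (the default):
    row_cache_provider: SerializingCacheProvider
    # on-heap, update-in-place provider:
    # row_cache_provider: ConcurrentLinkedHashCacheProvider

When soak-testing, compare the row cache hit rate reported by nodetool
info against the same run with row_cache_size_in_mb set to 0.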

Neither cache seems suitable for wide-row + slicing use cases, e.g. time series data or CQL tables whose compound keys create wide rows under the hood.
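To make that concrete, a hypothetical CQL3 time series table (all names
invented for illustration); every event for a given sensor_id lands in
the same storage row, one column per event, so the underlying row grows
without bound:

    -- hypothetical schema, for illustration only
    CREATE TABLE sensor_readings (
        sensor_id  text,
        event_time timestamp,
        value      double,
        PRIMARY KEY (sensor_id, event_time)  -- sensor_id is the partition key
    );

A slice query only wants a narrow range of event_time columns, but the
row cache has to pull in and retain the entire row to serve it.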

Bill


On 2013/08/23 17:30, Robert Coli wrote:
On Thu, Aug 22, 2013 at 7:53 PM, Faraaz Sareshwala
<fsareshw...@quantcast.com> wrote:

    According to the datastax documentation [1], there are two types of
    row cache providers:

...

    The off-heap row cache provider does indeed invalidate rows. We're
    going to look into using the ConcurrentLinkedHashCacheProvider. Time
    to read some source code! :)


Thanks for the follow-up... I'm used to thinking of the
ConcurrentLinkedHashCacheProvider as "the row cache" and forgot that
SerializingCacheProvider might have different invalidation behavior.
Invalidating the whole row on write seems highly likely to reduce the
overall performance of such a row cache. :)

The criteria for use of the row cache mentioned up-thread remain relevant.
In most cases, you probably don't actually want to use the row cache,
especially if you're using ConcurrentLinkedHashCacheProvider and
creating long-lived, on-heap objects.
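If you do end up with the row cache enabled and want to back it off for
a particular table, the per-table caching property (Cassandra 1.1+,
values all / keys_only / rows_only / none) is the knob; e.g., against
the hypothetical table up-thread:

    -- drop back to key caching only for one table
    ALTER TABLE sensor_readings WITH caching = 'keys_only';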

=Rob
