I am not sure if there is a ticket on this, but I have always thought the
row cache should not bother caching an entry bigger than n columns.
Murmurs of a slice cache might help as well.
On Saturday, December 10, 2011, Peter Schuller wrote:
> [...]
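A minimal sketch of that idea, assuming a plain map-backed cache; the class
and method names below are illustrative only, not Cassandra's actual cache
provider API:

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only -- not Cassandra's provider API.
    // A row cache that refuses to store rows wider than maxColumns,
    // so very wide rows never occupy the heap at all.
    public class ColumnCappedRowCache
    {
        private final ConcurrentHashMap<String, List<byte[]>> rows =
                new ConcurrentHashMap<String, List<byte[]>>();
        private final int maxColumns;

        public ColumnCappedRowCache(int maxColumns)
        {
            this.maxColumns = maxColumns;
        }

        public void put(String key, List<byte[]> columns)
        {
            // skip entries bigger than n columns, per the suggestion above
            if (columns.size() <= maxColumns)
                rows.put(key, columns);
        }

        public List<byte[]> get(String key)
        {
            return rows.get(key); // a miss falls through to the sstables
        }
    }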
No i/o? No sstable counts going up in cfhistograms?
Is the heap so full you're experiencing GC pressure that way?
On Fri, Nov 18, 2011 at 3:46 PM, Todd Burruss wrote:
> [...]
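For reference, the per-read sstable counts mentioned above come from
nodetool's cfhistograms command; a typical invocation (keyspace and column
family names are placeholders) looks like:

    nodetool -h localhost cfhistograms MyKeyspace MyColumnFamily

The SSTables column in its output is the number of sstables touched per
read; if it climbs while the cache is on, reads are spilling past the cache.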
After re-reading my post, what I meant to say is that I switched from the
Serializing cache provider to the ConcurrentLinkedHash cache provider and
then saw better performance, but still far worse than no caching at all:
- no caching at all : 25-30ms
- with Serializing provider : 1300+ms
- with ConcurrentLinkedHash provider : 500ms
On Fri, Nov 18, 2011 at 1:46 PM, Todd Burruss wrote:
> [...]
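For reference, the row cache provider being compared here is a
per-column-family attribute in 1.0; switching it from cassandra-cli looks
roughly like this ("MyCF" is a placeholder, and SerializingCacheProvider is
the off-heap default):

    update column family MyCF
        with row_cache_provider = 'ConcurrentLinkedHashCacheProvider';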
Ok, I figured something like that. Switching to
ConcurrentLinkedHashCacheProvider, I see it is a lot better, but still,
instead of the 25-30ms response times I enjoyed with no caching, I'm
seeing 500ms at a 100% hit rate on the cache. No old gen pressure at all,
just ParNew going crazy.
More info on my us…
On Fri, Nov 18, 2011 at 9:42 AM, Sylvain Lebresne wrote:
> [...]
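One way to watch the ParNew behaviour described above is to sample the
collector directly; either of the following works (the pid is whatever the
Cassandra process is, and the flags are the usual HotSpot ones, set e.g.
via cassandra-env.sh):

    # sample heap and GC utilisation once a second
    jstat -gcutil <cassandra-pid> 1000

    # or log every collection with timings
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps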
I'm using cassandra 1.0. Been doing some testing on using cass's cache. When
I turn it on (using the CLI) I see ParNew jump from 3-4ms to 200-300ms. This
really screws with response times, which jump from ~25-30ms to 1300+ms. I've
increased new gen and that helps, but still this is surprising…
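For anyone reproducing this: turning the row cache on from cassandra-cli and
growing the young generation look roughly like the following ("MyCF" and the
sizes are placeholders; HEAP_NEWSIZE lives in conf/cassandra-env.sh and maps
to the JVM's -Xmn):

    # cassandra-cli: enable the row cache for one column family
    update column family MyCF with rows_cached = 100000;

    # conf/cassandra-env.sh: a larger new generation
    HEAP_NEWSIZE="800M"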