The reason individual rows are "hard" is the same reason most things with
Cassandra caching and consistency are "hard" - a clustering / row may not
change, but it may be deleted by a range delete that deletes it and many
other clusterings / rows, which makes maintaining correctness of an
individual row cache not that different from maintenance of the data around
it.
Hello All,
Wondering if anyone has tried to modify the row-cache API to use both the
partition key and the clustering keys to convert the row-cache, which is
really a partition cache today, into a true row-cache? This might help with
broader adoption of row-cache for use-cases with large
Is there a JMX property somewhere that I could monitor to see how old the
oldest row cache item is?
I want to see how much churn there is.
Thanks in advance,
John...
So we agree that the row cache is storing only N rows from the beginning of the
partition. So if only the last row in a partition is read, then it probably
doesn’t get cached, assuming there are more than N rows in the partition?
I may be wrong, but what I’ve read and used in the past assumes that the
“first” N rows are cached and the clustering key design is how I change what N
rows are put into memory. Looking at the code, it seems that’s the case.
The language of the comment basically says that it holds in cache what
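For reference, the per-table knob being discussed here is set like this in CQL (2.1+ syntax; the keyspace/table names are hypothetical):

```sql
-- Cache the first 100 rows of each partition, in clustering order.
ALTER TABLE ks.events
    WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};
```

Which rows count as "first" follows the table's CLUSTERING ORDER BY, so the clustering key design decides which N rows end up in memory.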
Anyone?
> On 4 Mar 2018, at 20:45, Hannu Kröger wrote:
Hello,
I am trying to verify and understand fully the functionality of row cache in
Cassandra.
I have been using mainly two different sources for information:
https://github.com/apache/cassandra/blob/0db88242c66d3a7193a9ad836f9a515b3ac7f9fa/src/java/org/apache/cassandra/db
Thanks All.
------------------ Original ------------------
From: "Steinmaurer, Thomas"
Date: Wed, Sep 20, 2017 1:38
To: "user@cassandra.apache.org"
Subject: RE: Row Cache hit issue
Hi,
additionally, with saved (key) caches, we had some sort of
To: cassandra
Subject: Re: Row Cache hit issue
Hi Peng,
C* periodically saves the cache to disk, to solve the cold start problem. If
row_cache_save_period=0, it means C* does not save the cache to disk. But the
cache is still working, if it's enabled in the table schema; the cache will just
be empty after a restart.
And we are using C* 2.1.18.
-- Original --
From: "我自己的邮箱" <2535...@qq.com>
Date: Wed, Sep 20, 2017 11:27 AM
To: "user";
Subject: Row Cache hit issue
Dear All,
The default is row_cache_save_period=0; it looks like the Row Cache does not
work in this situation?
But we can still see row cache hits.
Row Cache : entries 202787, size 100 MB, capacity 100 MB, 3095293
hits, 6796801 requests, 0.455 recent hit rate, 0 save period in seconds
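As a sanity check, the 0.455 figure matches the cumulative hits/requests ratio in the same nodetool line (a quick back-of-the-envelope in Python):

```python
# Numbers copied from the "Row Cache" stats line above.
hits = 3_095_293
requests = 6_796_801

hit_rate = hits / requests
print(round(hit_rate, 3))  # 0.455
```

So the "recent hit rate" reported here happens to equal the overall ratio; the cache is clearly serving reads even with save period 0.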
I see. Thanks, Arvydas!
In terms of eviction policy in the row cache, does a write operation
invalidate only the row(s) which are going to be modified, or the whole
partition? In older versions of Cassandra, I believe the whole partition
gets invalidated even if only one row is modified. Is that still the case?
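The partition-level invalidation being asked about can be sketched like this (a toy illustration of the behaviour, not Cassandra source code):

```python
class PartitionCache:
    """Toy row cache keyed by partition: a write to any row evicts the whole partition."""

    def __init__(self):
        self._cache = {}  # partition_key -> {clustering_key: row}

    def cache_partition(self, pk, rows):
        self._cache[pk] = dict(rows)

    def read(self, pk, ck):
        part = self._cache.get(pk)
        return None if part is None else part.get(ck)

    def write(self, pk, ck, value):
        # Coarse invalidation: drop every cached row of the partition,
        # not just the one being modified.
        self._cache.pop(pk, None)


cache = PartitionCache()
cache.cache_partition("p1", {"r1": "a", "r2": "b"})
cache.write("p1", "r1", "a2")   # modify one row...
print(cache.read("p1", "r2"))   # ...the untouched row is evicted too -> None
```

The design trade-off: coarse invalidation is cheap and always safe (range deletes, partition deletes), at the cost of throwing away still-valid rows.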
Thanks, Matija! That was insightful.
I don't really have a use case in particular, however, what I'm trying to
do is to figure out how the Cassandra performance can be leveraged by using
different caching mechanisms, such as row cache, key cache, partition
summary etc. Of course, it
Hi,
In 99% of use cases, Cassandra's row cache is not something you should look
into. Leveraging the page cache yields good results and, if accounted for, can
provide a performance increase on the read side.
I'm not a fan of the default row cache implementation and its invalidation
mechanism
Hi,
I'm new to Cassandra and trying to get a better understanding of how the
row cache can be tuned to optimize performance.
I came across this article:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsConfiguringCaches.html
And it suggests not to even touch
If I remember correctly row cache caches only N rows from the beginning of the
partition. N being some configurable number.
See this link which is suggesting that:
http://www.datastax.com/dev/blog/row-caching-in-cassandra-2-1
Br,
Hannu
> On 4 Oct 2016, at 1.32, Edward Capriolo wr
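The "only N rows from the beginning of the partition" behaviour can be sketched like this (toy illustration, not Cassandra code; rows are assumed to be in clustering order):

```python
ROWS_PER_PARTITION = 3  # the configurable N

# A partition's rows in clustering order (keys/values are made up).
partition = [(f"ck{i:02d}", f"value{i}") for i in range(10)]

# Only the head of the partition is kept in the cache.
cached = dict(partition[:ROWS_PER_PARTITION])

print("ck00" in cached)  # True  -> reads near the head of the partition can hit
print("ck09" in cached)  # False -> a read of the last row always misses
```

This is why the access pattern matters so much: if your hot rows do not sort to the front of the partition, the row cache cannot help them.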
> Do note, though, that setting rows_per_partition to ALL can be very very
> very dangerous
Seems like it’s probably worth opening a jira issue to track it (either to
confirm it’s a bug, or to be able to better explain if/that it’s working as
intended – the row cache is probably missing because trace indicates the read
isn’t cacheable, but I suspect it should be cacheable).
Which version of Cassandra are you running (I can tell it’s newer than 2.1, but
exact version would be useful)?
From: Abhinav Solan
Reply-To: "user@cassandra.apache.org"
Date: Monday, October 3, 2016 at 11:35 AM
To: "user@cassandra.apache.org"
Subject: Re: Row cach
assertTrace("Preparing statement").then("Row cache
hit").then("Request complete");
This would be a pretty awesome way to verify things without mock/mockito.
On Mon, Oct 3, 2016 at 2:35 PM, Abhinav Solan
wrote:
> Hi, can anyone please help me with this
>
> Than
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
Have set up the C* nodes with
row_cache_size_in_mb: 1024
row_cache_save_period: 14400
and I am making this query
select svc_pt_id, meas_type_id, read_time, value FROM
cts_svc_pt_latest_int_read where svc_pt_id
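For context, the two settings quoted above are node-level knobs in cassandra.yaml (values copied from the message; the table itself must also enable row caching via its caching clause):

```yaml
# cassandra.yaml -- per-node row cache settings
row_cache_size_in_mb: 1024     # 0 (the default) disables the row cache entirely
row_cache_save_period: 14400   # seconds between periodic saves of cache keys to disk
```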
Hi,
I am having to tune a legacy app to use row caching (the why is unimportant). I
know Thrift is EOL etc.. However, I have to do it.
I am unable to work out what the values to set on the column family are now
with the changes in Caching (i.e. rows_per_partition). Previously you would set
the
The row cache saves partition data off-heap, which means that every cache
hit requires copying/deserializing the cached partition into the heap, and
the more rows per partition you cache, the longer it will take. Which is why
it's currently not a good idea to cache too many rows per partition (unles
Hi,
With two different column families, when I do a read, a row cache hit is almost
15x costlier with larger rows (1 rows per partition), in
comparison to a partition with only 100 rows.
The difference between the two column families is that one has 100 rows per
partition, the other 1 rows per partition. Schema
On Mon, Jan 19, 2015 at 11:57 PM, nitin padalia
wrote:
Hi,
If I've enabled the row cache for some column family, when I request some
row which is not from the beginning of the partition, then Cassandra
doesn't populate the row cache.
Why is that so? For older versions I think it was because we're saying
it's caching the complete merged partition
There's caches and there's caches. I submit that, thus far, the usage of
the term "cache" in this conversation has not been specific enough to
enhance understanding.
I continue to assert, in a very limited scope, that 6GB of row cache in
Ca
Apparently Apple is using Cassandra as a massive multi-DC cache, as per
their announcement during the summit, but probably DSE with in-memory
enabled option. Would love to hear about similar use cases.
On Fri, Sep 12, 2014 at 12:20 PM, Ken Hancock
wrote:
+1 for Redis.
It's really nice, good primitives, and then you can do some really cool
stuff chaining multiple atomic operations to create larger atomics through
the lua scripting.
On Thu, Sep 11, 2014 at 12:26 PM, Robert Coli wrote:
On Thu, Sep 11, 2014 at 8:30 AM, Danny Chan wrote:
> What are you referring to when you say memory store?
>
> RAM disk? memcached?
>
In 2014, probably Redis?
=Rob
Rob Coli strikes again, you're Doing It Wrong, and he's right :D
Using Cassandra as a distributed cache is a bad idea, seriously. Putting
6GB into row cache is another one.
On Tue, Sep 9, 2014 at 9:21 PM, Robert Coli wrote:
> On Tue, Sep 9, 2014 at 12:10 PM, Danny Chan wrote:
On Tue, Sep 9, 2014 at 12:10 PM, Danny Chan wrote:
> Is there a method to quickly load a large dataset into the row cache?
> I use row caching as I want the entire dataset to be in memory.
>
You're doing it wrong. Use a memory store.
=Rob
Hello all,
Is there a method to quickly load a large dataset into the row cache?
I use row caching as I want the entire dataset to be in memory.
I'm running a Cassandra-1.2 database server with a dataset of 555
records (6GB size) and a row cache of 6GB. Key caching is disabled and
I am
On Jul 1, 2014, at 10:40 PM, Kevin Burton wrote:
WOW.. so based on your advice, and a test, I disabled the row cache for the
table.
The query was instantly 20x faster.
so this is definitely an anti-pattern.. I suspect cassandra just tries to
read the entire physical row into memory, and since my physical row is
rather big.. ha. Well that
one thing I failed to mention.. .is that this is going into a
"bucket" and while it's a logical row, the physical row is like 500MB …
according to compaction logs.
is the ENTIRE physical row going into the cache as one unit? That's
definitely going to be a problem in this model. 500MB is a big atomic unit.
Yes, the row cache is a row cache. It caches what the storage engine calls
rows, which CQL calls "partitions." [1] Rows have to be assembled from all
of th
I'm really perplexed on this one.. I think this must be a bug or some
misconfiguration somewhere.
I'm fetching ONE row which is one cell in my table.
The row is in the row cache, sitting in memory.
SELECT sequence from content where bucket=98 AND sequence =
140348149405742;
And look at the trace below… it's taking 1000ms to go to the row
Heya!
I’ve been observing some strange and worrying behaviour all this week with row
cache hits taking hundreds of milliseconds.
Cassandra 1.2.15, Datastax CQL driver 1.0.4.
EC2 m1.xlarge instances
RF=3, N=4
vnodes in use
key cache: 200M
row cache: 200M
row_cache_provider
on't see any evidence that writes end up in the
> cache--that it takes at least one read to get it into the cache. I also
> realize that, assuming I don't cause SSTable writes due to sheer quantity,
> that the data would be in memory anyway.
>
> Has anyone done anything simi
Perhaps I should clarify my question. Is this possible / how might I
accomplish this with cassandra?
Wayne
I found a lot of documentation about the read path for key and row caches, but
I haven't found anything in regard to the write path. My app has the need to
record a large quantity of very short lived temporal data that will expire
within seconds and only have a small percentage of the rows acce
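Data that should "expire within seconds" is normally written with a TTL; a hypothetical example (note that TTL controls expiry only — both key and row caches are populated on the read path, not on write):

```sql
-- Hypothetical table; the row disappears 30 seconds after the write.
INSERT INTO ks.temporal_data (id, payload)
VALUES ('sensor-1', 'reading') USING TTL 30;
```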
Thank you everyone for your input.
My dataset is ~100G of size with 1 or 2 read intensive column families. The
cluster has plenty of RAM. I'll start off small with 4G of row cache and
monitor the success rate.
Katriel
On Thu, Jan 23, 2014 at 9:17 PM, Robert Coli wrote:
> On Wed, Jan
On Wed, Jan 22, 2014 at 11:13 PM, Katriel Traum wrote:
Many people have had bad e
My experience has been that the row cache is much more effective.
However, reasonable row cache sizes are so small relative to RAM that I
don't see it as a significant trade-off unless it's in a very memory
constrained environment. If you want to enable the row cache (a big if)
yo
Our experience is that you want to have all your very hot data fit in the row
cache (assuming you don’t have very large rows), and leave the rest for the OS.
Unfortunately, it completely depends on your access patterns and data what is
the right size for the cache - zero makes sense for a lot
Hello list,
I was wondering if anyone has any pointers or some advice regarding using the
row cache vs leaving it up to the OS buffer cache.
I run cassandra 1.1 and 1.2 with JNA, so off-heap row cache is an option.
Any input appreciated.
Katriel
I agree. We've had similar experience.
Sent from my iPhone
On Sep 7, 2013, at 6:05 PM, Edward Capriolo wrote:
I have found the row cache to be more trouble than benefit.
The term fool's gold comes to mind.
Using the key cache and leaving more free main memory seems stable and does not
have as many complications.
On Wednesday, September 4, 2013, S C wrote:
Thank you all for your valuable comments and information.
-SC
> Date: Tue, 3 Sep 2013 12:01:59 -0400
> From: chris.burrou...@gmail.com
> To: user@cassandra.apache.org
> CC: fsareshw...@quantcast.com
> Subject: Re: row cache
>
> On 09/01/2013 03:06 PM, Faraaz Sareshwala wr
On 09/01/2013 03:06 PM, Faraaz Sareshwala wrote:
Yes, that is correct.
The SerializingCacheProvider stores row cache contents off heap. I believe you
need JNA enabled for this though. Someone please correct me if I am wrong here.
The ConcurrentLinkedHashCacheProvider stores row cache contents on the java heap
itself.
Each cache provider has
It is my understanding that the row cache is in memory (not on disk). It could
live on heap or in native memory depending on the cache provider? Is that right?
-SC
> Date: Fri, 23 Aug 2013 18:58:07 +0100
> From: b...@dehora.net
> To: user@cassandra.apache.org
> Subject: Re: row
9/12 = .75
It's a rate, not a percentage.
On Sat, Aug 31, 2013 at 2:21 PM, Sávio Teles
wrote:
I'm running one Cassandra node -version 1.2.6- and I enabled the row
cache with 1GB.
But looking at the Cassandra metrics on JConsole, Row Cache Requests are
very low after a high number of queries (about 12 requests).
RowCache metrics:
Capacity: 1GB
Entries: 3
HitRate:
data or CQL tables whose compound keys create wide rows under the
hood.
Bill
On 2013/08/23 17:30, Robert Coli wrote:
On Thu, Aug 22, 2013 at 7:53 PM, Faraaz Sareshwala <
fsareshw...@quantcast.com> wrote:
> According to the datastax documentation [1], there are two types of row
> cache providers:
>
...
> The off-heap row cache provider does indeed invalidate rows. We're going
After a bit of searching, I think I've found the answer I've been looking for.
I guess I didn't search hard enough before sending out this email. Thank you
all for the responses.
According to the datastax documentation [1], there are two types of row cache
providers:
row
If you are using off-heap memory for row cache, "all writes invalidate the
entire row" should be correct.
Boris
On Fri, Aug 23, 2013 at 8:32 AM, Robert Coli wrote:
> On Wed, Aug 14, 2013 at 10:56 PM, Faraaz Sareshwala <
> fsareshw...@quantcast.com> wrote:
On Wed, Aug 14, 2013 at 10:56 PM, Faraaz Sareshwala <
fsareshw...@quantcast.com> wrote:
>
>- All writes invalidate the entire row (updates thrown out the cached
>row)
>
> This is not correct. Writes are added to the row, if it is in the row
cache. If it's not in
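The two behaviours debated above — write-through versus invalidate-on-write — can be contrasted with a toy sketch (my own illustration; the class names only hint at the providers discussed in this thread, this is not Cassandra source):

```python
class WriteThroughCache:
    """On-heap style: a write updates the cached copy in place."""

    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        if key in self.rows:
            self.rows[key] = value   # cached row stays, freshly updated


class InvalidatingCache:
    """Off-heap style: a write simply drops the cached copy."""

    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows.pop(key, None)     # next read must repopulate from disk


wt, inv = WriteThroughCache(), InvalidatingCache()
wt.rows["k"] = "old"
inv.rows["k"] = "old"
wt.write("k", "new")
inv.write("k", "new")
print(wt.rows.get("k"))   # new  -> still cached, updated
print(inv.rows.get("k"))  # None -> evicted, must be re-read
```

Invalidation is simpler and avoids deserializing the off-heap copy on every write, at the price of a cold read after each update.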
At the Cassandra 2013 conference, Axel Liljencrantz from Spotify discussed
various cassandra gotchas in his talk on "How Not to Use Cassandra." One of the
sections of his talk was on the row cache. If you weren't at the talk, or don't
remember it, the video is up on youtube
> No, I didn't. I used the nodetool setcachecapacity and didn't restart the
> node.
ok.
> I find them huge, and it just happened on the node on which I had enabled row
> cache. I just enabled it on the .164 node from 10:45 to 10:48 and the heap
> size doubled from 3.5GB to 7GB (out of 8, which induced memory pressure).
> About GC, all the collections increased a lot compared to the other nodes
> with row caching disabled.
What version are you using?
Sounds like you have configured it correctly. Did you restart the node after
changing the row_cache_size_in_mb ?
The changes in GC activity are not huge and may not be due to cache activity.
Have they continued after you enabled the row cache?
What is the output
I have the same problem!
2013/3/11 Alain RODRIGUEZ
I can add that I have JNA correctly loaded, from the logs: "JNA mlockall
successful"
2013/3/11 Alain RODRIGUEZ
Any clue on this ?
Row cache well configured could avoid us a lot of disk read, and IO
is definitely our bottleneck... If someone could explain why the row cache
has so much impact on my JVM and how to avoid it, it would be appreciated
:).
2013/3/8 Alain RODRIGUEZ
Hi,
We have some issues achieving a high read throughput. I wanted to alleviate
things by turning the row cache ON.
I set the row cache to 200 on one node and enabled caching 'ALL' on the 3
most-read CFs. Here is the effect this operation had on my JVM:
http://img692.imageshack.us/i
I tried to run with tracing, but it says 'Scanned 0 rows and matched 0'.
I found existing issue on this bug
https://issues.apache.org/jira/browse/CASSANDRA-4973
I made a d-test for reproducing it and attached to the ticket.
Alexei
On 2 February 2013 23:00, aaron morton wrote:
> Can you run the s
Can you run the select in cqlsh with tracing enabled (see the cqlsh online
help)?
If you can replicate it, then please raise a ticket on
https://issues.apache.org/jira/browse/CASSANDRA and update the email thread.
Thanks
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aa
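The tracing workflow suggested above looks like this in cqlsh (table and predicate are placeholders):

```sql
TRACING ON;
SELECT * FROM ks.my_table WHERE pk = 'some-key';
-- cqlsh now prints a trace; a row-cache read shows an event such as "Row cache hit"
TRACING OFF;
```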
Hello,
I've found a combination that doesn't work:
A column family that has a secondary index and caching='ALL', with
data in two datacenters; when I do a restart of the nodes, my
secondary index queries start returning 0 rows.
It happens when the amount of data goes over a certain threshold, so I
The first thing I look for with timeouts like that is a flush storm causing
blocking in the write path (due to the internal "switch lock").
Take a look in the logs, for a number of messages such as "enqueuing CF…" and
"writing cf..". Look for a pattern of enqueuing cf messages that occur
immed
Does anyone see anything wrong in these settings? Anything to account for a 8s
timeout during a counter increment?
Thanks,
André
On 31/12/2012, at 14:35, André Cruz wrote:
On Dec 29, 2012, at 8:53 PM, Mohit Anchlia wrote:
> Can you post gc settings? Also check logs and see what it says
These are the relevant JVM settings:
-home /usr/lib/jvm/j2re1.6-oracle/bin/../
-ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities
-XX:ThreadPriorityP
Can you post gc settings? Also check logs and see what it says
Also post how many writes and reads along with avg row size
Sent from my iPhone
On Dec 29, 2012, at 12:28 PM, rohit bhatia wrote:
> i assume u mean 8 seconds and not 8ms..
> thats pretty huge to be caused by gc. Is there lot of lo
i assume u mean 8 seconds and not 8ms..
that's pretty huge to be caused by gc. Is there a lot of load on your servers?
You might also need to check for memory contention.
Regarding GC, since it's parnew, all u can really do is increase heap and
young gen size, or modify the tenuring rate. But that can't be
On 29/12/2012, at 16:59, rohit bhatia wrote:
> Reads during a write still occur during a counter increment with CL ONE, but
> that latency is not counted in the request latency for the write. Your local
> node write latency of 45 microseconds is pretty quick. what is your timeout
> and the wri
issues and we could trace the timeouts to parnew gc collections which
were quite frequent. You might just want to take a look there too.
On Sat, Dec 29, 2012 at 4:44 PM, André Cruz wrote:
> Hello.
>
> I recently was having some timeout issues while updating counters and
> turned
Hello.
I recently was having some timeout issues while updating counters and turned on
row cache for that particular CF. This is its stats:
Column Family: UserQuotas
SSTable count: 3
Space used (live): 2687239
Space used (total
Got it. Thanks again, Aaron.
-- Y.
On Tue, Dec 4, 2012 at 3:07 PM, aaron morton wrote:
> Does this mean we should not enable row caches until we are absolutely sure
> about what's hot (I think there is a reason why row caches are disabled by
> default) ?
Yes and Yes.
Row cache takes memory and CPU, unless you know you are getting a benefit from
it leave it off.
Hi Aaron,
Thank you,and your explanation makes sense. At the time, I thought having
1GB of row cache on each node was plenty enough, because there was an
aggregated 6GB cache, but you are right, with each row in 10's of MBs, some
of the nodes can go into a constant load and evict cycle and