Reply-To: "user@cassandra.apache.org"
Date: Thursday, April 4, 2013 5:49 AM
To: "user@cassandra.apache.org"
Subject: Re: Cassandra freezes

> to handle both the data and GC. Increasing the number of CPU cores made
> everything run smoothly at the same load.
>
>
> 2013/3/21 Andras Szerdahelyi
>
>> Neat!
>> Thanks.
>>
>> From: Sylvain Lebresne
>> Reply-To: "user@cassandra.apache.org"
>> Date: Thursday 21 March 2013 10:10
>> To: "user@cassandra.apache.org"
>> Subject: Re: Cassandra freezes
>
> Prior to 1.2 the index summaries were not saved on disk, and were thus
> computed on startup while the sstable was loaded. In 1.2 they now are saved
> on disk to make startup faster
> (https://issues.apache.org/jira/browse/CASSANDRA-2392). That being said, if
> the index_interval value used by a summary s
OK, I took a look at the source code and for now it seems to me that we
both are partially right ( ;-) ), but changing index_interval does NOT
require rebuilding SSTables:
Yes, the index sample file can be persisted (see
io/sstable/IndexSummary.java, serialize/deserialize methods +
io/sstable/SST
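To make the point above concrete, here is an illustrative sketch in Python (not Cassandra's actual Java code) of what an index summary fundamentally is: a sample of every index_interval-th entry of the full on-disk primary index. Because it is derived data, it can be regenerated at a new interval from the index alone, without rewriting the sstables themselves.

```python
# Illustrative sketch only; names and structures here are assumptions,
# not Cassandra internals. The summary samples every interval-th entry
# of the full index, so it can be rebuilt at a different interval
# without touching the sstable data files.

def build_summary(index_keys, interval):
    """Sample every interval-th key (with its position) from the full index."""
    return [(pos, index_keys[pos]) for pos in range(0, len(index_keys), interval)]

full_index = [f"key{i:05d}" for i in range(1024)]

summary_128 = build_summary(full_index, 128)  # denser sample, more memory
summary_512 = build_summary(full_index, 512)  # 4x fewer entries in memory

print(len(summary_128))  # 8
print(len(summary_512))  # 2
```

The memory saving reported later in the thread (128 → 512) follows directly from this: a quarter as many sampled entries held on the heap.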
I cannot find the reference that notes having to run upgradesstables when
you change this. I really hope such complex assumptions are not forming in
my head on their own, and there actually exists some kind of reliable
reference that clears this up :-) but,
# index_interval controls the samp
About index_interval:
1) you have to rebuild sstables (not an issue if you are evaluating or
doing test writes, etc.; not so much in production)
Are you sure of this? As I understand indexes, it's not required, because
this parameter defines the interval of the in-memory index sample, which is
crea
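The read-path side of that tradeoff can be sketched as well. This is a hedged, simplified Python illustration (again, not Cassandra's code): a lookup binary-searches the sparse in-memory sample for the nearest preceding sampled key, then scans at most index_interval entries of the full index. A larger interval therefore means less memory but a longer worst-case scan per lookup.

```python
# Simplified sketch: lookup via a sparse summary. Names are illustrative.
import bisect

def build_summary(index_keys, interval):
    return [(pos, index_keys[pos]) for pos in range(0, len(index_keys), interval)]

def locate(index_keys, summary, target):
    """Find target: binary-search the summary, then scan forward in the index."""
    sampled_keys = [k for _, k in summary]
    i = max(bisect.bisect_right(sampled_keys, target) - 1, 0)
    start = summary[i][0]  # scan begins at the nearest preceding sample
    for pos in range(start, len(index_keys)):
        if index_keys[pos] == target:
            return pos
    return -1

index = [f"key{i:05d}" for i in range(1024)]
summary = build_summary(index, 512)
print(locate(index, summary, "key00700"))  # 700
```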
Yup, we are rolling it out slowly. In production, we have 2 nodes out of 6
switched already and so far have no website degradation at all. We have
narrow rows as well, and as the comment says in the props file, "if you
have narrow rows going to 512 sometimes has no impact on performance", and
in our ca
2. Upping index_interval from 128 to 512 (this seemed to reduce our memory
usage significantly!!!)
I'd be very careful with that as a one-stop improvement solution for two
reasons AFAIK
1) you have to rebuild sstables (not an issue if you are evaluating or
doing test writes, etc., not so much in pr
Also, look at the cassandra logs. I bet you see the typical… "blah blah is
at 0.85, doing memory cleanup", which is not exactly GC but Cassandra memory
management… and of course, you have GC on top of that.
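That "at 0.85, doing memory cleanup" behaviour can be sketched roughly as threshold-driven flushing. The sketch below is a hedged illustration of the idea only; the names, structures, and message format are invented for this example, not Cassandra internals: when heap usage crosses a configured fraction, the largest memtable is flushed to reclaim memory, independently of the JVM's own GC.

```python
# Illustrative sketch of threshold-triggered memtable flushing.
# All names here are hypothetical; this is not Cassandra's code.
FLUSH_THRESHOLD = 0.85  # fraction of max heap that triggers a cleanup

def maybe_flush(used_bytes, max_bytes, memtables):
    """Flush (drop) the largest memtable when heap usage crosses the threshold."""
    usage = used_bytes / max_bytes
    if usage > FLUSH_THRESHOLD and memtables:
        largest = max(memtables, key=lambda m: m["size"])
        memtables.remove(largest)
        return f"Heap is {usage:.2f} full, flushing {largest['name']}"
    return None

tables = [{"name": "users", "size": 64}, {"name": "events", "size": 256}]
print(maybe_flush(880, 1000, tables))  # Heap is 0.88 full, flushing events
```

The practical point stands either way: this cleanup churn shows up in the logs separately from GC pauses, so both need to be read together.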
If you need to get your memory down, there are multiple ways
1. Switching size tiered compact
What is in your Cassandra log right before and after that freeze?
-Tupshin
On Mar 20, 2013 8:06 AM, "Joel Samuelsson"
wrote:
> Hello,
>
> I've been trying to load test a one-node Cassandra cluster. When I add
> lots of data, the Cassandra node freezes for 4-5 minutes, during which
> neither reads
I'd say GC. Please fill in form CASS-FREEZE-001 below and get back to us
:-) ( sorry )
How big is your JVM heap? How many CPUs?
Garbage collection taking long? (look for log lines from GCInspector)
Running out of heap? ("heap is .. full" log lines)
Any tasks backing up / being dropped? (
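For the GC item on that checklist, a small helper can scan a system.log for GCInspector lines and flag long pauses. The sample line below follows the general shape of GCInspector output, but treat the exact log format as an assumption and adjust the regex for your Cassandra version.

```python
# Scan log text for GCInspector-style pause reports above a threshold.
# The log line format here is an assumption, not a guaranteed format.
import re

SAMPLE_LOG = """\
INFO [ScheduledTasks:1] GCInspector.java GC for ConcurrentMarkSweep: 2412 ms for 1 collections
INFO [ScheduledTasks:1] GCInspector.java GC for ParNew: 112 ms for 3 collections
"""

PAUSE_RE = re.compile(r"GC for (\w+): (\d+) ms")

def long_pauses(log_text, threshold_ms=1000):
    """Return (collector, pause_ms) pairs exceeding threshold_ms."""
    return [(c, int(ms)) for c, ms in PAUSE_RE.findall(log_text)
            if int(ms) > threshold_ms]

print(long_pauses(SAMPLE_LOG))  # [('ConcurrentMarkSweep', 2412)]
```

Multi-second ConcurrentMarkSweep pauses lining up with the freeze windows would point squarely at GC, per the diagnosis above.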
Unfortunately, the previous AMI we used to provision the 7.5 version is no
longer available. More unfortunately, the two test nodes we spun up in each
AZ did not get Nehalem architectures, so the only things I can say for
certain after running Mike's test 10x on each test node are:
1) I could not r
That's interesting. For us, the 7.5 version of libc was causing problems.
Either way, I'm looking forward to hearing about anything you find.
Mike
On Thu, Jan 13, 2011 at 11:47 PM, Erik Onnen wrote:
> Too similar to be a coincidence I'd say:
>
> Good node (old AZ): 2.11.1-0ubuntu7.5
> Bad node (new AZ): 2.11.1-0ubuntu7.6
You beat me to the punch with the test program. I was working on something
similar to test it out and got sidetracked.
I'll try the test app tomorrow and verify the versions of t
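The two package versions quoted above differ only in the Ubuntu patch level, which is the whole basis of the "too similar to be a coincidence" suspicion. A quick way to confirm that programmatically (the version-string format is assumed from the emails):

```python
# Split a Debian/Ubuntu-style package version into upstream and patch parts.
# Format assumed from the versions quoted in the thread.
def split_pkg_version(v):
    upstream, ubuntu_patch = v.split("-0ubuntu")
    return upstream, ubuntu_patch

good = split_pkg_version("2.11.1-0ubuntu7.5")  # good node (old AZ)
bad = split_pkg_version("2.11.1-0ubuntu7.6")   # bad node (new AZ)

print(good[0] == bad[0])  # True: same upstream glibc 2.11.1
print(good[1], bad[1])    # 7.5 7.6: only the packaging patch level differs
```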
Erik, the scenario you're describing is almost identical to what we've been
experiencing. Sounds like you've been pulling your hair out too! You're also
running the same distro and kernel as us. And we also run without swap.
Which begs the question... what version of libc6 are you running!? Here's
Forgot one critical point: we run with zero swap on all of these hosts.
May or may not be related but I thought I'd recount a similar experience we
had in EC2 in hopes it helps someone else.
As background, we had been running several servers in a 0.6.8 ring with no
Cassandra issues (some EC2 issues, but none related to Cassandra) on
multiple EC2 XL instances in a sing