I already ran into this kind of trouble during a repair a month ago. I
seem to be the only one having these problems, so I guess something is
wrong either in the configuration of my nodes or in my data that makes
them go wrong after a restart/repair.
I am planning to try deploying an EC2 cluster with data
I've not heard of anything like that in the recent versions. There were some
issues in the early 0.8 releases:
https://github.com/apache/cassandra/blob/trunk/NEWS.txt#L383
If you are on a recent version, can you please create a JIRA ticket at
https://issues.apache.org/jira/browse/CASSANDRA describing what you are seeing?
"not sure what you mean by
And after restarting the second one I have lost all the consistency of
my data. All my statistics since September are totally false now in
production
Can you give some examples?"
After restarting my 2 nodes (one after the other), all my counters
have become wrong. The c
Not sure what you mean by
> And after restarting the second one I have lost all the consistency of
> my data. All my statistics since September are totally false now in
> production
Can you give some examples?
Counters are not idempotent, so if the client app retries TimedOut requests you
can get over counts.
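A minimal sketch of how that happens, assuming a CQL counter column family
(the table and column names here are made up for illustration):

  -- first attempt: the increment reaches a replica, but the client only
  -- sees a TimedOutException, so it cannot know whether it was applied
  UPDATE page_counts SET hits = hits + 1 WHERE page_id = 'home';

  -- the client retries the same statement and the increment is applied again
  UPDATE page_counts SET hits = hits + 1 WHERE page_id = 'home';

  -- result: hits has grown by 2 even though only one event happened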
Hi Aaron.
I wanted to try the new config. After doing a rolling restart, all my
counters are wrong, with false values. I stopped my servers with the
following:
nodetool -h localhost disablegossip
nodetool -h localhost disablethrift
nodetool -h localhost drain
kill cassandra sigterm (15) via ht
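For reference, the shutdown sequence above as a small script, with the intent
of each step spelled out (the way the PID is looked up is an assumption):

  # stop participating in gossip and stop accepting new client connections
  nodetool -h localhost disablegossip
  nodetool -h localhost disablethrift

  # flush all memtables so the node restarts with an empty commit log
  nodetool -h localhost drain

  # finally send SIGTERM (15) to the Cassandra JVM
  kill -15 $(pgrep -f CassandraDaemon)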
> What is the benefit of having more memory? I mean, I don't
> understand why having 1, 2, 4, 8 or 16 GB of memory is so different.
Less frequent and less aggressive garbage collection frees up CPU resources to
run the database.
Less memory results in frequent and aggressive (i.e. stop-the-world) garbage
collection, which takes those resources away from serving requests.
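If you want to see how long those pauses actually are, one way (assuming the
stock conf/cassandra-env.sh layout, where JVM flags are appended to JVM_OPTS)
is to turn on GC logging:

  # print details and timestamps for every collection, plus the time the
  # application threads were actually stopped
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
  JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
  JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"

The log path is an assumption; point -Xloggc wherever you keep Cassandra logs.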
Using c1.medium, we are currently able to deliver the service.
What is the benefit of having more memory? I mean, I don't
understand why having 1, 2, 4, 8 or 16 GB of memory is so different.
In my mind, Cassandra will fill the heap and, from then on, start to flush
and compact to avoid OOMing and
> 1 - I got this kind of message quite often (let's say every 30 seconds):
You are running out of memory. Depending on the size of your schema and the
workload you will want to start with 4 or 8 GB machines, but most people get
the best results with 16GB.
On AWS the common setup is to use m1.xlarge instances.
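If you do move to bigger machines, the heap is sized in conf/cassandra-env.sh.
A minimal sketch for, say, an 8 GB instance (the exact numbers are assumptions,
not a recommendation from this thread):

  # give Cassandra roughly half the RAM, and keep the young generation small
  MAX_HEAP_SIZE="4G"
  HEAP_NEWSIZE="400M"

By default the script computes these from the machine's RAM and core count;
setting them explicitly just makes the choice visible.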
Hi,
I'm using a 2 node cluster in production (2 EC2 c1.medium, CL.ONE, RF
= 2, using RP).
1 - I got this kind of message quite often (let's say every 30 seconds):
WARN [ScheduledTasks:1] 2012-05-15 15:44:53,083 GCInspector.java (line
145) Heap is 0.8081418550931491 full. You may need to reduce
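That warning comes from the emergency memory-pressure thresholds in
cassandra.yaml; a sketch of the relevant settings with their usual stock values
(these defaults are version-dependent, so treat them as assumptions):

  # flush the largest memtables when the heap is this full after a full GC
  flush_largest_memtables_at: 0.75
  # shrink the key/row caches when the heap is this full
  reduce_cache_sizes_at: 0.85
  reduce_cache_capacity_to: 0.6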