Hi Paulo,
I just completed a migration from 1.1.10 to 1.2.10 and it was surprisingly
painless.
The course of action that I took:
1) describe cluster - make sure all nodes are on the same schema
2) shut off all maintenance tasks, i.e. make sure no scheduled repair is
going to kick off in the middle
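Assuming `nodetool` is on the PATH, the two checks above can be scripted roughly as follows; this is a sketch only, and with DRY_RUN=1 (the default here) it just prints each command instead of running it:

```shell
#!/bin/sh
# Pre-upgrade checklist sketch. DRY_RUN=1 (default) only echoes commands.
DRY_RUN=${DRY_RUN:-1}

run() {
  echo "+ $*"
  [ "$DRY_RUN" = "1" ] || "$@"
}

# 1) Verify every node reports the same schema version.
run nodetool describecluster

# 2) Make sure no scheduled repair kicks off mid-upgrade, e.g. by
#    reviewing (and then disabling) the cron entries that trigger it.
run crontab -l
```

Set DRY_RUN=0 on a real node to actually execute the commands.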
I second these questions: we've been looking into changing some of our CFs
to use leveled compaction as well. If anybody here has the wisdom to answer
them, it would be a wonderful help.
Thanks
Charles
On Wed, Feb 13, 2013 at 7:50 AM, Mike wrote:
> Hello,
>
> I'm investigating the transition of
settle" or some such.
Thanks
Charles
On Fri, Sep 28, 2012 at 7:20 AM, Charles Brophy wrote:
> Odd indeed.
>
> 1) It is observable after the compactions are through and the system has
> "settled"
> 2) We're using SizeTiered strategy
> 3) CentOS 6 & Oracle
>
> Do you have some read latency numbers from cfstats ?
> Also, could you take a look at cfhistograms ?
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 26/09
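For reference, the latency checks Aaron suggests map onto these nodetool invocations; the keyspace and column family names below are placeholders, and the script only echoes the commands unless DRY_RUN=0:

```shell
#!/bin/sh
# Read-latency diagnostics sketch; "MyKeyspace"/"MyCF" are placeholder names.
KEYSPACE=${KEYSPACE:-MyKeyspace}
CF=${CF:-MyCF}
DRY_RUN=${DRY_RUN:-1}

run() {
  echo "+ $*"
  [ "$DRY_RUN" = "1" ] || "$@"
}

# Per-CF read latency, SSTable counts, pending tasks, etc.
run nodetool cfstats

# Distribution of read latencies and SSTables touched per read.
run nodetool cfhistograms "$KEYSPACE" "$CF"
```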
There are settings in cassandra.yaml that will _gradually_ reduce the
available cache to zero if you are under constant memory pressure:
# Set to 1.0 to disable.
reduce_cache_sizes_at: *
reduce_cache_capacity_to: *
My experience is that the cache size will not return to the configured size
unt
Hey guys,
I've begun to notice that read operations take a performance nose-dive
after a standard (full) repair of a fairly large column family: ~11 million
records. Interestingly, I've then noticed that read performance returns to
normal after a full scrub of the column family. Is it possible tha
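The scrub workaround described above would be run per node, roughly as below; keyspace and CF names are placeholders, scrub rewrites every SSTable of the CF (so expect heavy I/O), and the sketch only echoes commands unless DRY_RUN=0:

```shell
#!/bin/sh
# Scrub sketch for one column family; "MyKeyspace"/"MyCF" are placeholders.
KEYSPACE=${KEYSPACE:-MyKeyspace}
CF=${CF:-MyCF}
DRY_RUN=${DRY_RUN:-1}

run() {
  echo "+ $*"
  [ "$DRY_RUN" = "1" ] || "$@"
}

# Take a snapshot first: scrub rewrites SSTables on disk.
run nodetool snapshot "$KEYSPACE"

# Rebuild the CF's SSTables; repeat on each node in turn.
run nodetool scrub "$KEYSPACE" "$CF"
```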
I have a very reliable repro case on our cluster involving nodetool repair.
I posted a summary in a comment on the issue. Let me know if more details
are needed.
Charles
On Fri, Sep 7, 2012 at 8:35 AM, Sylvain Lebresne wrote:
> > Is there a way to fix this error ? What is its impact on my data ?
Hi guys,
Cassandra: 1.1.1
Cluster size: 6 nodes, even token distribution, random partitioner
JVM 1.6.0_31
kernel: 2.6.32-71.el6.x86_64
24 cores, 96 GB RAM each
We're seeing something pretty distressing in our cluster. When a node is
brought down using "nodetool drain" and then brought back up, some of our
coun
Hi guys,
We're running a three-node cluster of Cassandra 1.1 servers (originally
1.0.7), and immediately after the upgrade the error logs of all three
servers began filling up with the following message:
ERROR [ReplicateOnWriteStage:177] 2012-05-31 08:17:02,236
CounterContext.java (line 381) invali