>
> Did you try what it says to do first? "You need to restart this node
> with -Dcassandra.renew_counter_id=true to fix."
>
Yes, I did, and it still logged that error upon restarting.
I'm loath to remove the SSTable because every single repair I run on any node
ends up streaming data due to out-of-sync nodes.
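
If it really comes down to removing it, here is roughly what I understand the
procedure to be. The data directory, keyspace and column family names below
are only placeholders/guesses for my 0.8 setup, so please correct me if I have
it wrong:

  # with the node stopped, move the affected CF's SSTables out of the way
  # (assumes the default data_file_directories from cassandra.yaml)
  mv /var/lib/cassandra/data/MyKeyspace/MyCounterCF-* /path/to/backup/

  # make sure the flag is picked up on restart, e.g. in conf/cassandra-env.sh:
  #   JVM_OPTS="$JVM_OPTS -Dcassandra.renew_counter_id=true"

  # once the node is back up, repair so the replicas re-stream the data
  nodetool -h localhost repair MyKeyspace

Does that match what's intended?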

P
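
PS: on the question of which column family it is, my assumption is that the
"Compacting [SSTableReader(path='...')]" line logged just before the exception
by the same CompactionExecutor thread lists the SSTables being compacted, and
that those file names start with the column family name. So something along
these lines should narrow it down (default log location assumed):

  grep 'CompactionExecutor:6' /var/log/cassandra/system.log | grep Compacting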

>
> On Sun, Aug 14, 2011 at 12:28 PM, Philippe <watche...@gmail.com> wrote:
> > Hi, I'm getting the following error at startup on one of the nodes of my
> > 3-node cluster with RF=3.
> > I have 6 keyspaces, each with 10 column families containing supercolumns
> > that hold only counter columns.
> > Looking at
> > http://www.datastax.com/dev/blog/whats-new-in-cassandra-0-8-part-2-counters
> > I see that I am supposed to "remove all data for that column family".
> > Does looking at the previous line for the same thread tell me which column
> > family this is happening to?
> > How do I "remove the data" on that node?
> > Thanks
> > ERROR [CompactionExecutor:6] 2011-08-14 19:02:55,117
> > AbstractCassandraDaemon.java (line 134) Fatal exception in thread
> > Thread[CompactionExecutor:6,1,main]
> > java.lang.RuntimeException: Merged counter shard with a count != 0 (likely
> > due to #2968). You need to restart this node with
> > -Dcassandra.renew_counter_id=true to fix.
> >         at org.apache.cassandra.db.context.CounterContext.removeOldShards(CounterContext.java:633)
> >         at org.apache.cassandra.db.CounterColumn.removeOldShards(CounterColumn.java:237)
> >         at org.apache.cassandra.db.CounterColumn.removeOldShards(CounterColumn.java:273)
> >         at org.apache.cassandra.db.compaction.PrecompactedRow.removeDeletedAndOldShards(PrecompactedRow.java:67)
> >         at org.apache.cassandra.db.compaction.PrecompactedRow.removeDeletedAndOldShards(PrecompactedRow.java:60)
> >         at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:75)
> >         at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:140)
> >         at org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:123)
> >         at org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:43)
> >         at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:74)
> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> >         at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
> >         at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
> >         at org.apache.cassandra.db.compaction.CompactionManager.doCompactionWithoutSizeEstimation(CompactionManager.java:569)
> >         at org.apache.cassandra.db.compaction.CompactionManager.doCompaction(CompactionManager.java:506)
> >         at org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:141)
> >         at org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:107)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >         at java.lang.Thread.run(Thread.java:662)
> >
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>
