This looks a bit like https://issues.apache.org/jira/browse/CASSANDRA-3579, but that
was fixed in 1.0.7.

Is this still an issue? Are you able to reproduce the fault?

Cheers


-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 27/04/2012, at 6:56 PM, Patrik Modesto wrote:

> Hi,
> 
> I've got a 4-node cluster running Cassandra 1.0.9. There is a rfTest3 keyspace
> with RF=3 and one CF with two secondary indexes. I'm importing data
> into this CF using a Hadoop MapReduce job; each row has fewer than 10
> columns. From JMX:
> MaxRowSize:  1597
> MeanRowSize: 369
> 
> And there are some tens of millions of rows.
> 
> It's write-heavy usage and there is heavy pressure on each node, with
> quite a few dropped mutations on each node. After ~12 hours of
> inserting I see these assertion exceptions on 3 out of 4 nodes:
> 
> ERROR 06:25:40,124 Fatal exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:388)
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:256)
>        at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:84)
>        at org.apache.cassandra.db.HintedHandOffManager$3.runMayThrow(HintedHandOffManager.java:437)
>        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950
>        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:384)
>        ... 7 more
> Caused by: java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950
>        at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124)
>        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
>        at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:161)
>        at org.apache.cassandra.db.compaction.CompactionManager$7.call(CompactionManager.java:380)
>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        ... 3 more
> 
> 
> A few lines regarding hints from the output.log:
> 
> INFO 06:21:26,202 Compacting large row system/HintsColumnFamily:70000000000000000000000000000000 (1712834057 bytes) incrementally
> INFO 06:22:52,610 Compacting large row system/HintsColumnFamily:10000000000000000000000000000000 (2616073981 bytes) incrementally
> INFO 06:22:59,111 flushing high-traffic column family CFS(Keyspace='system', ColumnFamily='HintsColumnFamily') (estimated 305147360 bytes)
> INFO 06:22:59,813 Enqueuing flush of Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops)
> INFO 06:22:59,814 Writing Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops)
