It appears we have several unserializable or unreadable rows. These were not 
fixed even after running a "scrub" on all nodes, even though the scrub seemed 
to complete successfully.
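For what it's worth, the scrub was invoked per node roughly as below (exact 
arguments from memory; keyspace "DFS" and column family "main" are the ones 
that show up in the logs further down):

    nodetool -h <node> scrub DFS main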

I'm trying to fix these by doing a "repair", but these exceptions are thrown 
exactly while the repair is running. Has anyone run into this issue? What's 
the best way to fix it?
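The repair itself is nothing special, roughly this (keyspace name as in the 
logs below):

    nodetool -h <node> repair DFS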

I was thinking that flushing and reloading the data with a move (reusing the 
same token) might be a way to get out of this.
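Spelled out, the rough sequence I had in mind is something like the following 
(untested; <node> is a placeholder, and I'm not sure re-bootstrapping with the 
same initial_token is even the right way to "reuse" the token):

    nodetool -h <node> flush DFS          # make sure everything is on disk first
    nodetool -h <node> decommission       # stream this node's ranges to the other replicas
    # then clear the data directory, set initial_token in cassandra.yaml back to
    # this node's original token, and restart so it re-bootstraps that range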


This exception is seen multiple times for different keys during a repair:

ERROR [CompactionExecutor:1] 2011-04-10 14:05:55,528 PrecompactedRow.java (line 82) Skipping row DecoratedKey(58054163627659284217684165071269705317, 64396663313763662d383432622d343439652d623761312d643164663936333738306565) in /var/lib/cassandra/data/DFS/main-f-232-Data.db
java.io.EOFException
        at java.io.RandomAccessFile.readFully(RandomAccessFile.java:383)
        at java.io.RandomAccessFile.readFully(RandomAccessFile.java:361)
        at org.apache.cassandra.io.util.BufferedRandomAccessFile.readBytes(BufferedRandomAccessFile.java:268)
        at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:310)
        at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:267)
        at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:94)
        at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:35)
        at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:129)
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:176)
        at org.apache.cassandra.io.PrecompactedRow.<init>(PrecompactedRow.java:78)
        at org.apache.cassandra.io.CompactionIterator.getCompactedRow(CompactionIterator.java:139)
        at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:108)
        at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:43)
        at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:73)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
        at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
        at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
        at org.apache.cassandra.db.CompactionManager.doValidationCompaction(CompactionManager.java:803)
        at org.apache.cassandra.db.CompactionManager.access$800(CompactionManager.java:56)
        at org.apache.cassandra.db.CompactionManager$6.call(CompactionManager.java:358)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


This WARN also comes up often during a repair. Not sure if it's related to 
this problem:

 WARN [ScheduledTasks:1] 2011-04-10 14:10:24,991 GCInspector.java (line 149) Heap is 0.8675910480028087 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2011-04-10 14:10:24,992 StorageService.java (line 2206) Flushing ColumnFamilyStore(table='DFS', columnFamily='main') to relieve memory pressure
 INFO [ScheduledTasks:1] 2011-04-10 14:10:24,992 ColumnFamilyStore.java (line 695) switching in a fresh Memtable for main at CommitLogContext(file='/var/lib/cassandra/commitlog/CommitLog-1302435708131.log', position=28257053)
