AFAIK the node will not announce itself in the ring until the log replay is 
complete, so it will not get the schema update until after log replay. If 
possible I'd avoid making the schema change until you have solved this problem.

My theory on the OOM during log replay is that high-speed inserts are a good 
way of finding out whether the maximum memory required by the schema is too big 
to fit in the JVM. How big is the max JVM Heap Size, and do you have a lot of CFs?
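If you do need to raise the heap, a sketch of where that lives (assuming the stock 0.7 layout with conf/cassandra-env.sh; the 4G figure is only an example, not a recommendation):

```shell
# conf/cassandra-env.sh -- temporarily pin the heap higher than the
# auto-calculated value so log replay has room to flush every CF.
MAX_HEAP_SIZE="4G"    # example value; size to your box
HEAP_NEWSIZE="400M"   # example value; usually ~100MB per CPU core
```

Remember to set it back (or comment it out again) once the node is healthy, so you return to the calculated defaults.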

The simple solution is to either (temporarily) increase the JVM Heap Size or 
move the commit log files aside so that the server processes only one at a 
time. The JVM option -Dcassandra.join_ring=false will stop the node from 
joining the cluster and stop other nodes from sending requests to it until you 
have sorted it out. 
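A sketch of the "one segment at a time" shuffle, run here against a throwaway sandbox with fake segment files so you can see the shape of it. In real life COMMITLOG_DIR is the commitlog_directory from cassandra.yaml (e.g. /var/lib/cassandra/commitlog), and the placeholder comment is where you start the node, let it replay and flush, and stop it again:

```shell
#!/bin/sh
# Sandbox demo of replaying commit log segments one at a time.
SANDBOX=$(mktemp -d)
COMMITLOG_DIR="$SANDBOX/commitlog"      # stand-in for commitlog_directory
HOLDING_DIR="$SANDBOX/commitlog-held"   # anywhere outside the commit log dir
mkdir -p "$COMMITLOG_DIR" "$HOLDING_DIR"

# Fake segments standing in for the real CommitLog-*.log files.
for i in 1 2 3; do : > "$COMMITLOG_DIR/CommitLog-$i.log"; done

# Step 1: set every segment aside so a restart has nothing to replay.
mv "$COMMITLOG_DIR"/CommitLog-*.log "$HOLDING_DIR"/

# Step 2: bring segments back one at a time, replaying between each.
for seg in "$HOLDING_DIR"/CommitLog-*.log; do
    mv "$seg" "$COMMITLOG_DIR"/
    # <placeholder: start Cassandra, wait for replay and flush, stop it>
    echo "replayed $(basename "$seg")"
done
```

Starting the node with -Dcassandra.join_ring=false between each step keeps client and inter-node traffic away while you work through the backlog.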

Hope that helps. 
  
 
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 21 Jun 2011, at 10:24, Gabriel Ki wrote:

> Hi,
> 
> Cassandra: 0.7.6-2
> I was restarting a node and ran into OOM while replaying the commit log.  I 
> am not able to bring the node up again.
> 
> DEBUG 15:11:43,501 forceFlush requested but everything is clean      
> <--------  For this I don't know what to do.
> java.lang.OutOfMemoryError: Java heap space
>     at 
> org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:123)
>     at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:395)
>     at 
> org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:76)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.createFlushWriter(ColumnFamilyStore.java:2238)
>     at org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:166)
>     at org.apache.cassandra.db.Memtable.access$000(Memtable.java:49)
>     at org.apache.cassandra.db.Memtable$1.runMayThrow(Memtable.java:189)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
> 
> Any help will be appreciated.   
> 
> If I update the schema while a node is down, the new schema is loaded before 
> the flushing when the node is brought up again, correct?  
> 
> Thanks,
> -gabe
