On Dec 22, 2010, at 16:20, Peter Schuller wrote:

>> And the data could be more evenly balanced, obviously. However, the node 
>> fails to start up due to the lack of disk space (instead of starting up 
>> and refusing further writes, it appears to try to process the [6.6G!] commit 
>> logs). So I can no longer perform any actions on it, like re-balancing the 
>> ring or reading old data from it and rotating it somewhere else. So, what to 
>> do now?
> 
> So even given deletion of obsolete sstables on start-up, it goes out
> of disk just from the commit log replay of only 6 gig? Sounds like
> you're very, very full.

Answer:

$ time cassandra -f
 INFO 16:30:09,486 Heap size: 2143158272/2143158272
log4j:ERROR Failed to flush writer,
java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
        at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:347)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:73)
        at org.apache.cassandra.thrift.CassandraDaemon.setup(CassandraDaemon.java:55)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:216)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134)
 INFO 16:30:09,495 JNA not found. Native methods will be disabled.
 INFO 16:30:09,504 Loading settings from file:/home/dev/cassandra.git/conf/cassandra.yaml
 INFO 16:30:09,774 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
 INFO 16:30:09,849 Creating new commitlog segment /var/lib/cassandra/commitlog/CommitLog-1293031809849.log
ERROR 16:30:09,853 Exception encountered during startup.
java.io.IOError: java.io.IOException: No space left on device
        at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:59)
        at org.apache.cassandra.db.commitlog.CommitLog.<init>(CommitLog.java:113)
        at org.apache.cassandra.db.commitlog.CommitLog.<clinit>(CommitLog.java:83)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:347)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:76)
        at org.apache.cassandra.thrift.CassandraDaemon.setup(CassandraDaemon.java:55)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:216)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134)
Caused by: java.io.IOException: No space left on device
        at java.io.FileOutputStream.write(Native Method)
        at java.io.DataOutputStream.writeInt(DataOutputStream.java:180)
        at org.apache.cassandra.db.commitlog.CommitLogHeader$CommitLogHeaderSerializer.serialize(CommitLogHeader.java:157)
        at org.apache.cassandra.db.commitlog.CommitLogHeader.writeCommitLogHeader(CommitLogHeader.java:121)
        at org.apache.cassandra.db.commitlog.CommitLogSegment.writeHeader(CommitLogSegment.java:70)
        at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:55)
        ... 7 more
Exception encountered during startup.
java.io.IOError: java.io.IOException: No space left on device
        at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:59)
        at org.apache.cassandra.db.commitlog.CommitLog.<init>(CommitLog.java:113)
        at org.apache.cassandra.db.commitlog.CommitLog.<clinit>(CommitLog.java:83)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:347)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:76)
        at org.apache.cassandra.thrift.CassandraDaemon.setup(CassandraDaemon.java:55)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:216)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134)
Caused by: java.io.IOException: No space left on device
        at java.io.FileOutputStream.write(Native Method)
        at java.io.DataOutputStream.writeInt(DataOutputStream.java:180)
        at org.apache.cassandra.db.commitlog.CommitLogHeader$CommitLogHeaderSerializer.serialize(CommitLogHeader.java:157)
        at org.apache.cassandra.db.commitlog.CommitLogHeader.writeCommitLogHeader(CommitLogHeader.java:121)
        at org.apache.cassandra.db.commitlog.CommitLogSegment.writeHeader(CommitLogSegment.java:70)
        at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:55)
        ... 7 more

real    0m1.210s
user    0m0.600s
sys     0m0.100s

So it's instantly dead. It fails while creating a new commitlog segment during 
static initialization (CommitLog.<clinit> in the trace above), so it never even 
gets to the point of deleting any potentially old data. sstable2json dies 
instantly as well!

Actually, about 5% of the volume is still free, but that space is ext3's reserved 
blocks and only writable by root. So the cassandra user literally cannot write a 
single byte to that volume anymore.
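
As a stopgap (assuming the commitlog/data volume is something like /dev/sdb1 - 
substitute the real device), the reserved-block percentage can be lowered 
temporarily so the cassandra user gets a little headroom back:

$ tune2fs -l /dev/sdb1 | grep -i 'reserved block'   # show the default 5% root reservation (run as root)
$ tune2fs -m 1 /dev/sdb1                            # hand most of that back to ordinary users

(and set it back to 5 once the node has been cleaned up). It only buys a few 
gigabytes, but that may be enough to get the node to start at all.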

> Some potential options may be:
> 
> * Replace the node completely with a new one with sufficient disk /
> another token location (but carefully, by adding the new node first).

That may not always be an option :)
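
When it is an option, my (possibly wrong) understanding of the dance with 
0.7-era nodetool is roughly: bring up the replacement with its own initial_token 
and auto_bootstrap enabled in cassandra.yaml, let it finish bootstrapping, and 
then drop the dead node's token from any live node:

$ nodetool -h <some-live-node> removetoken <token-of-the-full-node>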

> It strikes me that dealing with out-of-disk conditions is probably a
> good topic of an in-depth wiki page for what to do in various cases,
> depending on usage. The above three suggested options may or may not

Yes! :)

>> BTW what precisely does the Owns column mean?
> 
> The percentage of the token space owned by the node.

Precisely meaning what? :) On my ring of 5 machines, 3 own about 1/3 each and 2 
own only 5% - and one of those two contains 1/3 more data than the two largest in 
the cluster; it's actually the one that ran out of disk space.
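
For what it's worth, the way I read it, with RandomPartitioner each node's "Owns" 
is just the fraction of the 0..2^127 token space between its predecessor's token 
and its own. With made-up tokens placed at 1/3 and 2/3 of the ring, for example:

$ echo 'scale=2; 100 * (113427455640312821154458202477256070485 - 56713727820156410577229101238628035242) / 2^127' | bc
33.33

So it says nothing about how much actual data ends up on a node - which I guess 
is exactly my confusion.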
