That sounds like too many SSTables. 

Out of interest, were you using multithreaded compaction? Just wondering about 
this: 
https://issues.apache.org/jira/browse/CASSANDRA-3711
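
If you're not sure, it's the multithreaded_compaction setting in cassandra.yaml 
(off by default). A quick way to check, assuming the usual package install path 
for the yaml:

    # path is an assumption; adjust to wherever your cassandra.yaml lives
    grep multithreaded_compaction /etc/cassandra/cassandra.yaml
    # multithreaded_compaction: true   <- anything other than false means it's on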

Can you set the file handle limit to unlimited? 
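A minimal sketch, assuming the process runs as a "cassandra" user and you're 
using pam_limits; if the kernel won't accept "unlimited" for nofile, a large 
number such as 1000000 does the job in practice:

    # /etc/security/limits.conf -- "cassandra" user name is an assumption
    cassandra  soft  nofile  unlimited
    cassandra  hard  nofile  unlimited

    # verify after restarting the daemon (assumes a single Cassandra process):
    grep 'open files' /proc/$(pgrep -f CassandraDaemon)/limits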

Can you provide some more info on what you see in the data dir, in case it is a 
bug in leveled compaction? 
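
Something along these lines would help (path taken from your error message; the 
CF name is just the one you mentioned, adjust for your other CFs):

    cd /mnt/ebs/data/rslog_production
    ls | wc -l                            # total files in the data dir
    ls req_word_idx-* | wc -l             # files for the req_word_idx CF
    ls -lh req_word_idx-*-Data.db | head  # sizes of a few data files
    cat req_word_idx.json                 # the leveled manifest, if named <cf>.json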

Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 14/01/2012, at 8:01 AM, Thorsten von Eicken wrote:

> I'm running a single node cassandra 1.0.6 server which hit a wall yesterday:
> 
> ERROR [CompactionExecutor:2918] 2012-01-12 20:37:06,327
> AbstractCassandraDaemon.java (line 133) Fatal exception in thread
> Thread[CompactionExecutor:2918,1,main] java.io.IOError:
> java.io.FileNotFoundException:
> /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many
> open files in system)
> 
> After that it stopped working and just sat there with this error
> (understandable). I did an lsof and saw that it had 98567 open files,
> yikes! An ls in the data directory shows 234011 files. After restarting
> it spent about 5 hours compacting, then quieted down. About 173k files
> left in the data directory. I'm using leveled compaction (with compression). I
> looked into the json of the two large CFs and gen 0 is empty, most
> sstables are gen 3 & 4. I have a total of about 150GB of data
> (compressed). Almost all the SSTables are around 3MB in size. Aren't
> they supposed to get 10x bigger at higher gens?
> 
> This situation can't be healthy, can it? Suggestions?
