Thanks, Mark. Yes, 1024 is the current limit; I haven't yet raised it to the 
recommended production settings.

But I am wondering why Cassandra needs to keep 3000+ commit log segment 
files open.
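
For reference, the arithmetic roughly matches the data volume: at the default 
32MB commit log segment size,

  3938 segments * 32 MB/segment = 126,016 MB, i.e. about 123 GB

which is close to the ~120GB I have ingested, so it looks as if old segments 
are not being recycled or deleted.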

Regards,
Bhaskar



On Tuesday, 8 July 2014 1:50 PM, Mark Reddy <mark.re...@boxever.com> wrote:

Hi Bhaskar,

Can you check your limits using 'ulimit -a'? The default is 1024, which needs 
to be increased if you have not done so already.

Here you will find a list of recommended production settings: 
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html
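
For example, on Linux these usually go in /etc/security/limits.conf (or a file 
under /etc/security/limits.d/) for the user running Cassandra. A rough sketch 
of what that page recommends; check the link for the exact values for your 
platform and install type:

  cassandra - memlock unlimited
  cassandra - nofile  100000
  cassandra - nproc   32768
  cassandra - as      unlimited

A new login session (or a restart of the node) is needed for the new limits to 
take effect.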


Mark


On Tue, Jul 8, 2014 at 5:30 AM, Bhaskar Singhal <bhaskarsing...@yahoo.com> 
wrote:

>Hi,
>
>I am using Cassandra 2.0.7 (with default settings and a 16GB heap, on a 
>quad-core Ubuntu server with 32GB RAM) and am trying to ingest 1MB values 
>using cassandra-stress. It works fine for a while (~1600 seconds), but after 
>ingesting around 120GB of data I start getting the following error:
>
>Operation [70668] retried 10 times - error inserting key 0070668 
>((TTransportException): java.net.SocketException: Broken pipe)
>
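>For reference, the stress invocation is along these lines (legacy stress tool 
>shipped with the 2.0 tarball; exact flags from memory, -S is the value size 
>in bytes, num-keys elided):
>
>  tools/bin/cassandra-stress -o INSERT -n <num-keys> -c 1 -S 1048576
>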
>The Cassandra server is still running, but in system.log I see the errors 
>below.
>
>ERROR [COMMIT-LOG-ALLOCATOR] 2014-07-07 22:39:23,617 CassandraDaemon.java (line 198) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
>java.lang.NoClassDefFoundError: org/apache/cassandra/db/commitlog/CommitLog$4
>        at org.apache.cassandra.db.commitlog.CommitLog.handleCommitError(CommitLog.java:374)
>        at org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:116)
>        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>        at java.lang.Thread.run(Thread.java:744)
>Caused by: java.lang.ClassNotFoundException: org.apache.cassandra.db.commitlog.CommitLog$4
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>        ... 4 more
>Caused by: java.io.FileNotFoundException: /path/2.0.7/cassandra/build/classes/main/org/apache/cassandra/db/commitlog/CommitLog$4.class (Too many open files)
>        at java.io.FileInputStream.open(Native Method)
>        at java.io.FileInputStream.<init>(FileInputStream.java:146)
>        at sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1086)
>        at sun.misc.Resource.cachedInputStream(Resource.java:77)
>        at sun.misc.Resource.getByteBuffer(Resource.java:160)
>        at java.net.URLClassLoader.defineClass(URLClassLoader.java:436)
>        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>        ... 10 more
>ERROR [FlushWriter:7] 2014-07-07 22:39:24,924 CassandraDaemon.java (line 198) Exception in thread Thread[FlushWriter:7,5,main]
>FSWriteError in /cassandra/data4/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-593-Filter.db
>        at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:475)
>        at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:212)
>        at org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:301)
>        at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:417)
>        at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:350)
>        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:744)
>Caused by: java.io.FileNotFoundException: /cassandra/data4/Keyspace1/Standard1/Keyspace1-Standard1-tmp-jb-593-Filter.db (Too many open files)
>        at java.io.FileOutputStream.open(Native Method)
>        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>        at java.io.FileOutputStream.<init>(FileOutputStream.java:110)
>        at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:466)
>        ... 9 more
>
>According to lsof, the Cassandra server process has around 9685 files open; 
>there are 3938 commit log segments in /cassandra/commitlog, and around 572 
>commit log segments were deleted during the course of the test.
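>
>For reference, a rough way to count how many of those descriptors are commit 
>log segments (the pid lookup may differ on your setup):
>
>  lsof -p $(pgrep -f CassandraDaemon) | grep -c commitlog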
>
>I am wondering what is causing Cassandra to open so many files. Is flushing 
>too slow, or is it something else?
>
>I tried increasing the flush writers, but that didn't help. 
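>
>For reference, the knob lives in cassandra.yaml (2.0 name; the value below is 
>just what I tried, not a recommendation):
>
>  memtable_flush_writers: 4   # defaults to the number of data directories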
>
>Regards,
>Bhaskar
>
>CREATE KEYSPACE "Keyspace1" WITH replication = {
>  'class': 'SimpleStrategy',
>  'replication_factor': '1'
>};
>
>CREATE TABLE "Standard1" (
>  key blob,
>  "C0" blob,
>  PRIMARY KEY (key)
>) WITH COMPACT STORAGE AND
>  bloom_filter_fp_chance=0.010000 AND
>  caching='KEYS_ONLY' AND
>  comment='' AND
>  dclocal_read_repair_chance=0.000000 AND
>  gc_grace_seconds=864000 AND
>  index_interval=128 AND
>  read_repair_chance=0.100000 AND
>  replicate_on_write='true' AND
>  populate_io_cache_on_flush='false' AND
>  default_time_to_live=0 AND
>  speculative_retry='NONE' AND
>  memtable_flush_period_in_ms=0 AND
>  compaction={'class': 'SizeTieredCompactionStrategy'} AND
>  compression={};
>