Hi folks,

I am running into what appears to be a file handle leak in trunk during 
indexing.  I haven't isolated the triggering event yet, although the 
indexing runs for more than an hour before the failure appears.  The 
system is Ubuntu, with a per-process limit of 1024 open files.  This is 
on a trunk checkout from about 3 hours ago.

The exception I start seeing is:

     [java] Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.io.FileNotFoundException: /root/solr-dym/solr-dym/solr_home_v2/nose/data/index/_5l.fdx (Too many open files)
     [java]     at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:471)
     [java]     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:435)
     [java] Caused by: java.io.FileNotFoundException: /root/solr-dym/solr-dym/solr_home_v2/nose/data/index/_5l.fdx (Too many open files)
     [java]     at java.io.RandomAccessFile.open(Native Method)
     [java]     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
     [java]     at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:69)
     [java]     at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:90)
     [java]     at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:91)
     [java]     at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:78)
     [java]     at org.apache.lucene.index.FieldsReader.<init>(FieldsReader.java:104)
     [java]     at org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:243)
     [java]     at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:538)
     [java]     at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:635)
     [java]     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3976)
     [java]     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3628)
     [java]     at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:339)
     [java]     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:407)
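
From the trace, the merge thread looks like the victim rather than the 
culprit: it fails while opening doc stores for a SegmentReader pulled 
from IndexWriter's ReaderPool, but the descriptors could have been 
leaked anywhere.  To see when the count starts climbing, I'm planning 
to drop something like the following into the indexing JVM.  This is 
just a sketch (the class name and sample interval are mine, and it is 
Linux-specific since it reads /proc/self/fd):

import java.io.File;

// Hypothetical diagnostic, Linux-only: periodically logs this JVM's
// open descriptor count so the leak can be correlated with indexing
// activity in the logs.
public class FdWatcher implements Runnable {
    public void run() {
        File fdDir = new File("/proc/self/fd");
        while (true) {
            String[] fds = fdDir.list();   // one entry per open descriptor
            System.err.println(System.currentTimeMillis()
                + " open fds: " + (fds == null ? -1 : fds.length));
            try {
                Thread.sleep(10000);       // sample every 10 seconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void start() {
        Thread t = new Thread(new FdWatcher(), "fd-watcher");
        t.setDaemon(true);                 // don't block JVM shutdown
        t.start();
    }
}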


Here's the ulimit -a output:

r...@duck6:~/solr-dym/solr-dym# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
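
(For what it's worth, 1024 open files is just the stock soft limit; I 
could raise it with ulimit -n before starting the JVM, or via 
/etc/security/limits.conf, but that would only postpone the failure if 
descriptors are genuinely leaking.)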

The actual number of files in the index directory at this point is 
relatively low, well under the 1024-descriptor cap, which is why I 
suspect a leak rather than an index that legitimately needs that many 
files open:

r...@duck6:~/solr-dym/solr-dym/solr_home_v2/nose/data/index# ls -1 | wc
    179     179    1532
r...@duck6:~/solr-dym/solr-dym/solr_home_v2/nose/data/index#
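
If this really is a leak, the same index files ought to be held open 
many times over.  Here is a rough sketch of what I plan to run next, to 
see which paths dominate the descriptor table (again hypothetical and 
Linux-only; the class name and output format are mine):

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical diagnostic, Linux-only: counts how many open descriptors
// point at each file.  A leaked index file shows up with a big count.
// Run as the same user (or root) as the target process:
//   java OpenFileHistogram <pid>     (defaults to the current JVM)
public class OpenFileHistogram {
    public static void main(String[] args) throws IOException {
        String pid = args.length > 0 ? args[0] : "self";
        File[] fds = new File("/proc/" + pid + "/fd").listFiles();
        if (fds == null) {
            System.err.println("cannot read /proc/" + pid + "/fd");
            return;
        }
        Map<String,Integer> counts = new HashMap<String,Integer>();
        for (File fd : fds) {
            String target;
            try {
                // getCanonicalPath() follows the /proc symlink to the
                // real target; pipes and sockets come out as odd
                // pseudo-names, which is harmless here.
                target = fd.getCanonicalPath();
            } catch (IOException e) {
                continue;  // descriptor closed while we were looking
            }
            Integer n = counts.get(target);
            counts.put(target, n == null ? 1 : n + 1);
        }
        for (Map.Entry<String,Integer> e : counts.entrySet()) {
            if (e.getValue() > 1) {
                System.out.println(e.getValue() + "  " + e.getKey());
            }
        }
    }
}

If a handful of segment files rack up large counts, that would point at 
readers not being closed rather than at a genuinely descriptor-hungry 
index.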

Anyone willing to work with me to narrow down the problem?

Thanks,
Karl

