Yep, you guys are correct, I'm supporting a slightly older version of our product based on Lucene 3. In my previous email I forgot to mention that I also bumped the maximum number of open file handles per process up to 16k, which had been working well. Here's the ulimit -a output from our server:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63611
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 16000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63611
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Are there any other possible resource limitations for the MMapDirectory that I might have missed? Thanks again for your help!

On Nov 7, 2014, at 2:24 PM, Uwe Schindler <u...@thetaphi.de> wrote:

> Hi,
>
>> That error can also be thrown when the number of open files exceeds the
>> given limit. "OutOfMemory" should really have been named
>> "OutOfResources".
>
> This was changed already. Lucene no longer prints OOM (it removes the OOM
> from the stack trace). It also adds useful information. So I think the version
> of Lucene that produced this exception is older (before 4.9):
> https://issues.apache.org/jira/browse/LUCENE-5673
>
> Uwe
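
One OS-level limit that ulimit does not show, and which MMapDirectory can run into on Linux, is the kernel's per-process cap on memory mappings (vm.max_map_count, which usually defaults to 65530). A minimal check sketch, assuming a Linux host and a hypothetical JVM pid of 12345:

    # kernel cap on memory-mapped regions per process (typically 65530 by default)
    sysctl vm.max_map_count

    # how many mappings the JVM currently holds (12345 is a hypothetical pid)
    wc -l < /proc/12345/maps

If the mapping count is anywhere near that cap, raising vm.max_map_count (via sysctl -w, or persistently in /etc/sysctl.conf) would be the next thing to try.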