bq: Our server runs many hundreds (soon to be thousands) of indexes simultaneously
This is actually kind of scary. How do you expect to fit "many thousands"
of indexes into memory? Raising per-process virtual memory to unlimited
still doesn't handle the amount of RAM the Solr process needs. It holds
things like caches (top-level and per-segment), sort lists, all that.

How many G of indexes are we talking here? Raw index size on disk isn't a
great guide to RAM requirements, but I'm just trying to get a handle on
the scale you're at. You're not, for instance, going to handle
terabyte-scale indexes on a single machine satisfactorily IMO.

If your usage pattern is that a user signs on, works with their index for
a while, then signs off, you might get some joy out of the LotsOfCores
option (a rough sketch of the configuration is at the end of this mail).
That said, this option has NOT been validated on cloud setups, where I
expect it'll have problems.

FWIW,
Erick

On Fri, Nov 7, 2014 at 2:24 PM, Uwe Schindler <u...@thetaphi.de> wrote:
> Hi,
>
>> That error can also be thrown when the number of open files exceeds the
>> given limit. "OutOfMemory" should really have been named
>> "OutOfResources".
>
> This was changed already. Lucene no longer prints OOM (it removes the OOM
> from the stack trace) and adds useful information instead. So I think the
> version of Lucene that produced this exception is older (before 4.9):
> https://issues.apache.org/jira/browse/LUCENE-5673
>
> Uwe
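P.S. In case it helps, a minimal sketch of what a LotsOfCores-style setup
can look like on a standalone (non-cloud) Solr 4.x install using core
discovery. The core name and the cache size below are made-up examples;
check the solr.xml and core.properties docs for your exact version:

    # <solr_home>/usercore123/core.properties (one file per user core;
    # "usercore123" is just an example name)
    name=usercore123
    # allow this core to be unloaded when the transient cache fills up
    transient=true
    # don't load the core at startup; load it lazily on first request
    loadOnStartup=false

    <!-- solr.xml: cap how many transient cores stay loaded at once -->
    <solr>
      <int name="transientCacheSize">50</int>
    </solr>

With that in place, cores marked transient get aged out (LRU-style) once
more than transientCacheSize of them are loaded, which is what makes the
"user signs on, works a while, signs off" pattern workable.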