I’ll try bumping up the per-process file max and see if that fixes it. Thanks
for all your help and suggestions guys!
-Brian
On Nov 7, 2014, at 5:00 PM, Toke Eskildsen wrote:
> Brian Call [brian.c...@soterawireless.com] wrote:
>> Yep, you guys are correct, I’m supporting a slightly older version of our
>> product based on Lucene 3. [...]
Brian Call [brian.c...@soterawireless.com] wrote:
> Yep, you guys are correct, I’m supporting a slightly older version of our
> product based on Lucene 3.
> In my previous email I forgot to mention that I also bumped up the maximum
> allowable file handles per process to 16k, which had been working well.
Brian:
Forget what I wrote about LotsOfCores then, it was introduced in 4.2.
Erick
On Fri, Nov 7, 2014 at 4:39 PM, Brian Call wrote:
> Half of those indexes max out at about 1.3G, the other half will always stay
> very small < 5m total. We keep an index for “raw” data and another index for
> events and “trended” data.
Half of those indexes max out at about 1.3G, the other half will always stay
very small < 5m total. We keep an index for “raw” data and another index for
events and “trended” data. Possible design changes may make this number go up
to 4-5G per index, but definitely no more than that.
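For scale, a rough back-of-envelope on the data volume these sizes imply. The index count below is an assumption for illustration, not a figure from this thread:

```shell
# Hypothetical count; the thread only says "many hundreds" of simultaneous indexes.
INDEXES=500
AVG_MB=1300   # ~1.3 GB per large index, per the sizes above
echo "approx $(( INDEXES * AVG_MB / 1024 )) GB of index data to map"
# → approx 634 GB of index data to map
```

With an mmap-based directory, all of that ends up mapped into the process's virtual address space, which is why the limits discussed below matter.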
Yep, you guys are correct, I’m supporting a slightly older version of our
product based on Lucene 3. In my previous email I forgot to mention that I also
bumped up the maximum allowable file handles per process to 16k, which had been
working well. Here’s the ulimit -a output from our server:
[...]
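Since the ulimit output above was cut off in the archive, here is a sketch of checking the current limits and persisting the 16k file-handle limit mentioned; the service username and config path are assumptions about a typical Linux setup:

```shell
# Current soft and hard open-file limits for this shell
ulimit -Sn
ulimit -Hn
# To persist a 16k limit for the service user (assumed name "solr"),
# add to /etc/security/limits.conf as root:
#   solr  soft  nofile  16384
#   solr  hard  nofile  16384
```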
bq: Our server runs many hundreds (soon to be thousands) of indexes
simultaneously
This is actually kind of scary. How do you expect to fit "many thousands" of
indexes into memory? Raising per-process virtual memory to unlimited still
doesn't address the amount of RAM the Solr process needs.
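One way to see this point in practice is to compare the JVM's virtual size (mostly mmap'd index files) with its resident RAM. The process-lookup pattern below is a guess at the setup, not taken from this thread:

```shell
# Virtual vs. resident memory for a running Solr JVM (lookup pattern is illustrative)
PID=$(pgrep -f start.jar | head -n1)
ps -o vsz=,rss= -p "$PID" |
  awk '{printf "virtual: %.1f GB, resident: %.1f GB\n", $1/1048576, $2/1048576}'
```

A huge virtual size with modest resident use is normal for mmap'd indexes; it is the resident side that has to fit in RAM.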
Hi,
> That error can also be thrown when the number of open files exceeds the
> given limit. "OutOfMemory" should really have been named
> "OutOfResources".
This was changed already. Lucene no longer rethrows the OOM directly (it
removes the OOM from the stack trace) and adds useful information instead.
Brian Call [brian.c...@soterawireless.com] wrote:
[Hundreds of indexes]
> ...
>         at java.lang.Thread.run(Thread.java:724)
> Caused by: java.lang.OutOfMemoryError: Map failed
>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.OutOfMemoryError: Map failed
        at sun.nio.ch.FileChannelImpl.map0(Native Method)
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
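A note on the trace above: "Map failed" thrown from FileChannelImpl.map means the mmap call itself failed, commonly because the process hit the Linux per-process cap on memory-mapped regions rather than running out of heap. A sketch of checking and raising that cap (the 262144 value is a common choice, not a figure from this thread):

```shell
# Per-process cap on memory-mapped regions (Linux)
cat /proc/sys/vm/max_map_count
# Count mappings held by a running process (replace <pid>):
#   wc -l < /proc/<pid>/maps
# Raise the cap (root; persist via /etc/sysctl.conf):
#   sysctl -w vm.max_map_count=262144
```

With hundreds of mmap'd indexes, each contributing several mappings per segment file, the default cap of 65530 can be exhausted well before physical RAM is.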