On 9/7/2021 4:00 AM, HariBabu kuruva wrote:
> We are getting OOM errors in the solr logs for only specific solr stores.
> And in the solr logs we see the below error. Could the OOM error be
> because of the below error?
There are precisely two ways to deal with OOME. One is to increase the
size of the resource that has been depleted. The other is to change
something so less of that resource is required. Very frequently it is
not possible to accomplish the second option. Increasing the resource
is very often the only solution.
Note that it might not actually be memory that is being depleted. Java
throws OOME for several different resource exhaustion scenarios. Some
examples of things that might run out before memory are max processes
per user or max open files.
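If you suspect one of those, check the limits for the user that runs Solr.
This is only a sketch for Linux-style systems -- the commands, files, and
sensible values depend on your OS, and the "solr" user name and 65000
values below are just placeholders:

    # In a shell running as the Solr user:
    ulimit -u    # max user processes (Java threads count against this)
    ulimit -n    # max open files

    # On many Linux distributions these can be raised persistently in
    # /etc/security/limits.conf with lines like:
    #   solr  soft  nproc   65000
    #   solr  hard  nproc   65000
    #   solr  soft  nofile  65000
    #   solr  hard  nofile  65000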
You haven't shown us the OOME error, so we cannot advise you about what
you need to do. Assuming that it is actually memory that is depleted...
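For illustration, the first line of the exception normally names the
resource that ran out. The exact wording varies by JVM, but typical
examples look like this:

    java.lang.OutOfMemoryError: Java heap space
    java.lang.OutOfMemoryError: GC overhead limit exceeded
    java.lang.OutOfMemoryError: unable to create new native thread

The first two point at the heap; the last one usually means a
process/thread limit was hit, not that memory was exhausted.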
Out of the box, Solr's max heap defaults to 512MB. This is VERY small
and almost every user will need to increase it. We made the default
heap small so that Solr would start on just about any hardware without
changing the config.
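As a sketch of how to raise it -- the 4g value here is only an example,
not a recommendation, since the right size depends on your index and
query load:

    # One-off, on the command line:
    bin/solr start -m 4g

    # Or permanently, in solr.in.sh:
    SOLR_HEAP="4g"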
It is very unlikely that the place in the code where the OOME occurred
will reveal anything useful. We just want to see it so we can see the
message logged at the beginning. Also, any other errors you are seeing
are likely unrelated to the OOME.
If you're running Solr on a non-Windows system, the bin/solr script
starts Solr with a Java option that causes Solr to kill itself when OOME
occurs. It does this to protect itself -- Java program operation after
OOME is completely unpredictable and in the case of Solr/Lucene, could
corrupt the index. We haven't yet done this for Windows.
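If I remember right, the option that bin/solr adds looks something like
the following. The port and paths here are only placeholders, and the
exact form varies by Solr version, so check your own startup arguments:

    -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"

That runs the oom_solr.sh script shipped in Solr's bin directory, which
logs the event and then kills the Solr process.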
Thanks,
Shawn