On 1/10/22 1:49 AM, 123456780sss wrote:
Recently, we started getting errors of "out of memory, cannot create native thread",


Even a small install can easily start enough threads to cause problems for an OS that is configured with defaults.  Most operating systems default to a limit of 1024 processes/threads per user.  I have run into this before, and those systems weren't even running in cloud mode, which will probably create more threads than standalone mode.
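
If you want to see what limits the running Solr process actually has, something like the following should show them.  Replace <pid> with Solr's actual process id -- you can get it from "ps -ef | grep solr":

# Show the process/thread and open-file limits in effect for the Solr process
cat /proc/<pid>/limits | grep -E 'processes|open files'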

You need to allow the user that is running Solr to have more processes/threads and more open files.  On Linux you can add lines like the following to /etc/security/limits.conf:

solr hard nofile 8192
solr soft nofile 8192
solr hard nproc 8192
solr soft nproc 8192

Or if your system has the /etc/security/limits.d directory you could create /etc/security/limits.d/solr.conf and place the above in that file.
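
Once that is in place, you can check that a new session for the solr user actually picks up the new values.  Something like this should work, assuming the solr user has a usable login shell (if it doesn't, add "-s /bin/bash" to su):

# Start a fresh login session as the solr user and print its limits
su - solr -c 'ulimit -u; ulimit -n'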

I would not expect a reboot to be necessary after increasing the limits on Linux, but it might be a good idea.

If you're not on Linux, I do not know how to increase the limits.

Note that if you're running on anything other than Windows, recent Solr versions start with an option that will kill the Solr process if an OutOfMemoryError is encountered.  This is done because program operation is completely unpredictable after OOME.  Anything might happen, including index corruption.
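
If I remember correctly, the bin/solr script on *nix does this by passing a JVM option roughly like the one below.  The install path, port, and log directory here are just examples from a typical package install, so yours may differ:

# Run Solr's OOM script (which kills the process) when OutOfMemoryError is thrown
-XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"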

I see in a later message that you have added replicas.  This will mean that more threads are created.  SolrCloud has built-in load balancing -- unless you are sending queries to a specific core (not collection) and include distrib=false on the URL, there is no guarantee that the query will be handled by the machine that receives the request.  It may get forwarded to another machine, and that forwarding requires an extra thread.  If your collection is sharded, you do not want to use distrib=false, or the query will only cover one shard.
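
To make that distinction concrete, here is roughly what the two kinds of requests look like.  The host, collection, and core names are made up, so substitute your own:

# Collection-level query -- any node in the cluster may end up handling it
curl "http://localhost:8983/solr/mycollection/select?q=*:*"

# Core-level query with distrib=false -- handled only by that one core,
# so on a sharded collection it returns results from just that shard
curl "http://localhost:8983/solr/mycollection_shard1_replica_n1/select?q=*:*&distrib=false"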

Thanks,
Shawn
