We've also experienced exactly the same issue after upgrading from 9.6.1
to 9.7.0. We tried setting
-Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false after
reading a post from David Smiley and following it through to a project on
GitHub, but that did nothing.
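In case it helps anyone reproduce: a minimal sketch of how the flag can
be passed, assuming the stock solr.in.sh startup script (the exact path
and filename vary by install):

    # solr.in.sh - append the Lucene flag to the JVM options Solr starts with
    # (opts out of the MemorySegment-based MMapDirectory in Lucene 9.x)
    SOLR_OPTS="$SOLR_OPTS -Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false"

Restart the Solr nodes afterwards; the property is only read at JVM
startup.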
Sorry for such a long post.
We have a 4-node SolrCloud running Solr 8.11.1. There are 2 nodes in one
AWS region and 2 nodes in another region. All nodes are in peered VPCs.
All communication between the nodes uses direct IP addresses (no DNS).
One node in each region holds replicas of multiple coll
I'm late to the dance, but FWIW we also experienced similar swap-like
issues when we upgraded from CentOS 7.6 to CentOS 7.9 (this was Solr 8.3) -
some of the Solr nodes would end up reading from disk like crazy, and query
response times would suffer accordingly. At one point we had 1/2 the no
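(For anyone chasing something similar: a quick sketch of how the paging
behavior can be watched while queries run - standard Linux tools, nothing
Solr-specific:

    # Watch swap-in/swap-out (si/so) and block I/O (bi/bo) every 5 seconds
    vmstat 5
    # Check how aggressively the kernel prefers swap; 60 is the usual default
    cat /proc/sys/vm/swappiness

If si/so stay at zero while bi is high, the nodes are re-reading index
files from disk rather than actually swapping.)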
On Mon, Nov 8, 2021 at 1:07 PM Elaine Cario wrote:
We've been trying to figure out ways to "migrate" existing SolrClouds to
another ZK ensemble which will be built on different infrastructure than
the current ensemble. Also, ZK will be upgraded from 3.4.13 (old ensemble)
to 3.6.3 (new ensemble). We're running Solr 8.10.1.
One option we are exper
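For context, a minimal sketch of one piece of such a migration - copying
configsets out of the old ensemble and into the new one with the bin/solr
zk helpers (the configset name and ZK hosts below are made up, and this
moves configs only, not collection state):

    # Pull a configset down from the old 3.4.13 ensemble...
    bin/solr zk downconfig -n myconfig -d /tmp/myconfig -z old-zk1:2181,old-zk2:2181,old-zk3:2181
    # ...then push it up to the new 3.6.3 ensemble
    bin/solr zk upconfig -n myconfig -d /tmp/myconfig -z new-zk1:2181,new-zk2:2181,new-zk3:2181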