Hi Shawn,

I don't think it was heap size; we have assigned only 8 GB of heap. The number I mentioned was labelled "Mapped Total Capacity" in the Grafana dashboard. There is a separate heap section where I can see heap usage against the 8 GB limit. That said, GC count and GC time for this server were high as well.
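In case it helps, this is roughly how I am cross-checking that number against Solr's metrics API rather than relying only on the Grafana panel (the endpoint and the group/prefix parameters are standard; the exact metric names in the comment are from memory, so they may differ slightly by Solr version):

  # JVM buffer-pool metrics for this node; the mapped-buffer entries
  # (something like buffers.mapped.Count / buffers.mapped.TotalCapacity)
  # should correspond to the "Mapped Total Capacity" panel in Grafana
  curl "http://localhost:8983/solr/admin/metrics?group=jvm&prefix=buffers&wt=json"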
I suspect it is something related to the MMapDirectory implementation used for the index directory, though I could be wrong.

These two searchers have been there since 11 AM, and after every commit two more searchers are opened again. Right now these are the two searchers:

- Searcher@479c8248[im-search-03-08-22_shard2_replica_p19] main
- Searcher@6f3bd5b7[im-search-03-08-22_shard2_replica_p19] main

Sharing the cache configuration as well. Total documents in this replica are 18 million (25 million maxDoc, 7 million deleted docs).

  <filterCache class="solr.CaffeineCache" size="1000" initialSize="300" autowarmCount="100" />
  <queryResultCache class="solr.CaffeineCache" size="30000" initialSize="1000" autowarmCount="100" />
  <documentCache class="solr.CaffeineCache" size="25000" initialSize="512" autowarmCount="512" />

On Fri, 7 Oct 2022 at 6:32 PM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 10/7/22 06:23, Satya Nand wrote:
> > Upon checking in the Solr Grafana Dashboard it was found that *Mapped
> > Total Capacity (JVM Metrics -> Buffer size section)* for this particular
> > node was approx double that of other servers, 54 GB vs 28 GB.
> >
> > Further checking in *CORE (Plugin/Stats)* for this particular server,
> > there were two searchers registered for this core, something like this
>
> Usually when there are multiple searchers, it's because there is an
> existing searcher handling queries and at least one new searcher that is
> being warmed as a replacement. When the new searcher is fully warmed,
> the existing searcher will shut down as soon as all queries that are
> using it are complete.
>
> 28GB of heap memory being assigned to the searcher seems extremely
> excessive. Can you share the cache configuration in solrconfig.xml and
> the max doc count in the core?
>
> Thanks,
> Shawn
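PS: The two registered searchers listed above are taken from the core's Plugin/Stats page; the same information can be pulled from the MBeans endpoint. This is roughly the call I used (host and core name are ours, a sketch only, adjust as needed):

  # CORE-category MBeans for this replica; the searcher entries show the
  # registered searchers along with their stats (openedAt, warmupTime, etc.)
  curl "http://localhost:8983/solr/im-search-03-08-22_shard2_replica_p19/admin/mbeans?cat=CORE&stats=true&wt=json"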