[
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16413869#comment-16413869
]
Andrzej Bialecki commented on SOLR-11882:
------------------------------------------
Here's the setup that I used to test and verify this issue:
* created {{core0/conf .. core9/conf}} dirs under {{server/solr/}} and copied
the {{_default}} configset to each of the conf dirs.
* created a {{core.properties}} file in each {{core0 .. core9}} dir, containing
a single line: {{transient=true}}
* modified {{server/solr/solr.xml}} to contain {{<int
name="transientCacheSize">2</int>}} under the {{solr}} element
* ran {{bin/solr start}} and issued a simple query request to each of the
cores to force its loading (and its subsequent unloading from the small
cache); see the SolrJ sketch below
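In case it helps reproduce this, the query step can be scripted - a minimal
SolrJ sketch, assuming Solr is running on {{localhost:8983}} and the cores are
named {{core0 .. core9}} (plain curl requests against {{/solr/coreN/select}}
work just as well):
{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class TouchCores {
  public static void main(String[] args) throws Exception {
    // Query each transient core once so it gets loaded and, once the
    // transientCacheSize=2 limit is exceeded, the oldest core is closed again.
    for (int i = 0; i < 10; i++) {
      String url = "http://localhost:8983/solr/core" + i;
      try (HttpSolrClient client = new HttpSolrClient.Builder(url).build()) {
        client.query(new SolrQuery("*:*"));
      }
    }
  }
}
{code}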
After attaching a profiler I was able to verify that indeed 10 instances of
SolrCore exist, all strongly referenced, and forcing GC doesn't affect this.
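For context, this is roughly the shape of the leak - a simplified sketch of how
a gauge lambda pins its core (using the Dropwizard {{MetricRegistry}} that
backs the metrics API; the names below are illustrative, not the actual Solr
registration code):
{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

class LeakSketch {
  // Stand-in for SolrCore, just to keep the sketch self-contained.
  interface SolrCoreLike { int getNumDocs(); }

  static void register(MetricRegistry registry, SolrCoreLike core) {
    // The lambda captures 'core' strongly. As long as the registry (kept per
    // core in SolrMetricManager#registries) holds this Gauge, the closed core
    // remains reachable and cannot be garbage collected.
    registry.register("SEARCHER.searcher.numDocs", (Gauge<Integer>) core::getNumDocs);
  }
}
{code}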
I attached a possible patch - it associates each Gauge with the SolrInfoBean
that registered it, and then unregisters the gauge instances that correspond
to the bean being closed (whether it's a SolrCore or another plugin).
There are a few things that I don't like about this patch, though: I used
{{WeakReference}} to tell the JVM that it can garbage collect the lambdas as
soon as their parent object is unreferenced, and I had to explicitly call
unregistration in {{SolrCoreMetricManager.close()}}. Neither of these worked on
its own, although I think the unregistration step should have - only when both
were used could I see that the references to old transient cores were actually
being released. So there's likely still some other factor at play here... but
at least the patch can be used as a workaround.
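To make the approach concrete, here's a rough sketch of the idea - not the
patch itself; {{GaugeWrapper}} and {{unregisterGauges}} are illustrative names,
and the actual change goes through SolrMetricManager/SolrInfoBean:
{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import java.lang.ref.WeakReference;

class TaggedGaugeSketch {

  /** Wraps a gauge: remembers which bean registered it, holds the lambda only weakly. */
  static class GaugeWrapper<T> implements Gauge<T> {
    final String tag;                       // identifies the owning SolrInfoBean
    final WeakReference<Gauge<T>> delegate; // lets the lambda (and the core it captures) be collected

    GaugeWrapper(String tag, Gauge<T> delegate) {
      this.tag = tag;
      this.delegate = new WeakReference<>(delegate);
    }

    @Override
    public T getValue() {
      Gauge<T> g = delegate.get();
      return g != null ? g.getValue() : null;
    }
  }

  /** Called from the bean's close() (e.g. SolrCoreMetricManager.close()) to drop its gauges. */
  static void unregisterGauges(MetricRegistry registry, String beanTag) {
    registry.removeMatching((name, metric) ->
        metric instanceof GaugeWrapper && beanTag.equals(((GaugeWrapper<?>) metric).tag));
  }
}
{code}
The point is that removal is keyed by the owning bean, so closing a transient
core drops exactly its gauges and leaves the rest of the registry untouched.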
> SolrMetric registries retain references to SolrCores when closed
> ----------------------------------------------------------------
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Components: metrics, Server
> Affects Versions: 7.1
> Reporter: Eros Taborelli
> Assignee: Erick Erickson
> Priority: Major
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch,
> SOLR-11882.patch, create-cores.zip, solr-dump-full_Leak_Suspects.zip,
> solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundreds of thousands),
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide:
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no
> documents inside, the heap consumption went through the roof despite having
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed
> until a core is fully unloaded.
> Restarting the JVM lists all cores in the admin UI, but doesn't populate the
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size
> = 512m) and made a report (attached) using Eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager
> should be removed, in the same fashion that the reporters for the core are
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully
> unload a transient core when it is closed due to the cache size limit.
> *Note:*
> The documentation mentions throughout that unused cores will be unloaded, but
> this is misleading, as the cores are never fully unloaded.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]