Try it with 100 and see if it runs out of heap. If it does not run out, then
the size of reRankDocs is the cause.
You can increase the heap if you want to, but if the re-ranker is moving a
document 1,000 places in the result list, I would look seriously at improving
the base relevance. You might in…
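For concreteness, a rough sketch of the test being suggested here, run against a local node; the collection, fields, model name, and query below are placeholders rather than details from this thread:

    curl http://localhost:8983/solr/mycollection/select \
      --data-urlencode 'q=example user query' \
      --data-urlencode 'defType=edismax' \
      --data-urlencode 'qf=title^2 body' \
      --data-urlencode 'rq={!ltr model=myLtrModel reRankDocs=100 efi.user_query="example user query"}' \
      --data-urlencode 'fl=id,score' \
      --data-urlencode 'rows=10'

Replaying the same query log with reRankDocs=100 versus 1000 while watching heap and GC activity should show whether the size of the re-rank window is what drives the heap usage.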
Hi Wunder,
The base ranker takes care of matching and ranking docs based on qf, pf2,
and pf3; the LTR re-ranker looks at a bunch of user-behavior fields/features
such as date (recency), popularity, favorited, and saves, and hence re-ranking
the top 1k documents gives better quality than re-ranking only the top 100.
Thanks,
Rajani
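For readers who have not used the LTR contrib: features like these live in a feature store that the model references, and each one is computed for every re-ranked candidate at query time. A hedged sketch of how such behavior features are typically declared; the field names (publish_date, popularity, saves) and the collection are placeholders, not the actual schema behind this thread:

    curl -XPUT http://localhost:8983/solr/mycollection/schema/feature-store \
      -H 'Content-type:application/json' --data-binary '[
        {"name": "originalScore", "class": "org.apache.solr.ltr.feature.OriginalScoreFeature", "params": {}},
        {"name": "recency", "class": "org.apache.solr.ltr.feature.SolrFeature",
         "params": {"q": "{!func}recip(ms(NOW,publish_date),3.16e-11,1,1)"}},
        {"name": "popularity", "class": "org.apache.solr.ltr.feature.FieldValueFeature",
         "params": {"field": "popularity"}},
        {"name": "saves", "class": "org.apache.solr.ltr.feature.FieldValueFeature",
         "params": {"field": "saves"}}
      ]'

Because every feature is evaluated for each of the reRankDocs candidates on every query, the per-query work (and the short-lived objects it allocates) scales roughly with reRankDocs times the number of features, which is the trade-off behind re-ranking 1k versus 100.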
On Thu…
reRankDocs is set to 1000. I would try with a lower number, like 100. If the
best match is not in the top 100 documents, something is wrong with the base
relevance algorithm.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
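A quick way to run that check, assuming you know which document ought to be the best match for a given query: issue the base query with no rq parameter and see whether that document shows up in the first 100 rows. Collection and field names below are placeholders:

    # Base relevance only, no re-ranker (rq omitted)
    curl http://localhost:8983/solr/mycollection/select \
      --data-urlencode 'q=example user query' \
      --data-urlencode 'defType=edismax' \
      --data-urlencode 'qf=title^2 body' \
      --data-urlencode 'pf2=title body' \
      --data-urlencode 'fl=id,score' \
      --data-urlencode 'rows=100'

If the right document is regularly outside that window, no reRankDocs setting will be both cheap and effective, and the edismax weights are the better place to spend effort.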
> On Jan 4, 2024, at 9:28 AM, rajani m wrote:
Thank you Shawn, that was very helpful. I have tried the G1HeapRegionSize
setting: I set it to 32m (-XX:G1HeapRegionSize=32m) and replayed the same
query logs, but it didn't help; it reproduced the same OOM error.
I was able to capture the heap dump when the heap was almost full and have
the heap ana…
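For reference, GC flags like that are usually injected via solr.in.sh; a sketch with illustrative values only (the heap-dump flags are just a convenient way to capture the next OOM automatically instead of racing to catch it by hand):

    # solr.in.sh
    GC_TUNE="-XX:+UseG1GC -XX:G1HeapRegionSize=32m -XX:+ParallelRefProcEnabled"
    SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/solr/dumps"

Note that a larger G1 region size mainly helps when individual allocations are big enough to be treated as humongous objects; if the heap is instead filling with many small, short-lived objects proportional to reRankDocs and the feature count, tuning the re-rank window is more likely to help than region size.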
Hi Solr Users and Devs,
A Solr query with LTR as a re-ranker is suddenly using the full heap and
triggering stop-the-world (STW) pauses. Could you please take a look and let
me know your thoughts? What is causing this? The STW pauses are putting nodes
in an unhealthy state, causing nodes to restart and bringing the enti…
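For anyone hitting something similar, the evidence that usually pins this down is GC behavior during the load plus a heap dump taken near the peak; a minimal sketch, where the pid and paths are placeholders:

    # Sample GC utilization every 5 seconds while replaying the query log
    jstat -gcutil <solr-pid> 5s
    # Capture a heap dump before the node is restarted
    jmap -dump:live,format=b,file=/tmp/solr-heap.hprof <solr-pid>

In the dump, the dominator tree generally makes it clear whether the retained memory belongs to the re-ranker (feature and weight objects), to caches, or to something else entirely.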