Thanks, Deepak. We will try doing this.
Still, I am wondering what led to such a large increase in response time from
Solr 6.5 to Solr 8.7, keeping everything the same.
We are seeing an increase of 100-150 ms.
On Wed, Aug 11, 2021 at 11:55 AM Deepak Goel wrote:
> If I were you, then I would s
You will have to elaborate a bit on: "keeping everything same"
Deepak
"The greatness of a nation can be judged by the way its animals are treated
- Mahatma Gandhi"
+91 73500 12833
deic...@gmail.com
Facebook: https://www.facebook.com/deicool
LinkedIn: www.linkedin.com/in/deicool
"Plant a Tree,
Hi Deepak,
These are the things we have kept the same:
- Heap size
- Index size (almost the same)
- Schema
- Solr config
We have made only one change: earlier we were using the synonym_edismax parser. As
this parser is not available for Solr 8.7, we replaced it with edismax plus the
Synonym Graph Filter Factory to handle multi-word synonyms.
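For reference, a minimal sketch of what that replacement looks like on our side
(the field type name and synonyms file below are illustrative, not our exact schema):
the synonyms move into the query-time analyzer, and the request handler simply uses
defType=edismax.

  <fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- query-time only, so multi-word synonyms are expanded as a graph -->
      <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
              ignoreCase="true" expand="true"/>
    </analyzer>
  </fieldType>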
Also, on Solr 8.7 the differ
Hello,
Our organization has implemented Solr 8.9.0 for a production use case. We have
standardized on Prometheus for metrics collection and storage. We export
metrics from our Solr cluster by deploying the public Solr image for version
8.9.0 to an EC2 instance and using Docker to run the export
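Roughly, the exporter is started from that image like this (the ZooKeeper address,
port, and paths below are placeholders rather than our real values; the exporter
ships with the image under contrib/prometheus-exporter):

  # run the bundled Prometheus exporter from the official solr:8.9.0 image
  docker run -d -p 9854:9854 solr:8.9.0 \
    /opt/solr/contrib/prometheus-exporter/bin/solr-exporter \
    -p 9854 \
    -z zk1:2181/solr \
    -f /opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml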
Good day,
and first of all, it's a pleasure to join you all. At my workplace an interesting
new dilemma has come up that we have been researching and discussing for the last
few days, and I thought this list would be a good place to extend our research on
the topic.
Let's head to the heart of this. Some day
It happens because you use -z zk-url to connect to Solr.
When you do that, the prometheus-exporter assumes that it is connecting to a
SolrCloud environment and will collect the metrics from all nodes.
Since you have started 3 prometheus-exporters, each one of them will
collect all metrics from the cluste
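If you do want one exporter per node, a rough sketch (hostname, port, and paths
are placeholders) is to replace -z with the standalone base-URL option so each
exporter only scrapes its local node:

  # one exporter per node, each scraping only its local Solr
  /opt/solr/contrib/prometheus-exporter/bin/solr-exporter \
    -p 9854 \
    -b http://localhost:8983/solr \
    -f /opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml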
On 8/10/2021 11:17 PM, Satya Nand wrote:
Thanks for explaining it so well. We will work on reducing the filter
cache size and auto warm count.
Though I have one question.
If your configured 4000 entry filterCache were to actually fill up, it
would require nearly 51 billion bytes, and t
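The arithmetic behind that figure, assuming the roughly 101 million document count
mentioned elsewhere in this thread and one bit per document per cached filter:

  101,000,000 docs / 8 bits per byte ≈ 12.6 MB per filterCache entry
  12.6 MB x 4000 entries             ≈ 50.5 GB, i.e. nearly 51 billion bytes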
Hi Shawn,
Please find the images.
Filter cache stats:
https://drive.google.com/file/d/19MHEzi9m3KS4s-M86BKFiwmnGkMh3DGx/view?usp=sharing
Heap stats:
https://drive.google.com/file/d/1Q62ea-nFh9UjbcVcBJ39AECWym6nk2Yg/view?usp=sharing
I'm curious whether the 101 million document count is for one
On 8/11/2021 6:04 AM, Satya Nand wrote:
Filter cache stats:
https://drive.google.com/file/d/19MHEzi9m3KS4s-M86BKFiwmnGkMh3DGx/view?usp=sharing
This shows the current size as 3912, almost full.
There is an alternate format for filterCache entries that just lists
the IDs of the matching doc
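As a rough comparison (the per-ID cost below assumes 4-byte int doc IDs; the exact
cutover point between the two formats is an implementation detail I'm not quoting
here):

  bitset entry : maxDoc / 8 bytes, ~12.6 MB for ~101 million docs, regardless of hit count
  doc-ID list  : ~4 bytes per matching doc, e.g. ~4 MB for a filter matching 1 million docs

So the list format only saves memory for filters that match a small fraction of the index.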
Mathieu,
We have changed our Prometheus configuration to scrape only from one pod in the
cluster, but we still see the error given below. Is there anything else we can
try?
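For reference, the scrape side now looks roughly like this (the job name and target
are placeholders, not our exact configuration):

  # prometheus.yml excerpt
  scrape_configs:
    - job_name: 'solr'
      static_configs:
        - targets: ['solr-exporter-0:9854']   # a single exporter instead of all three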
On 2021/08/11 08:58:34, Mathieu Marie wrote:
> It happens because you use -z zk-url to connect to Solr.
> When you d
I hope you have success with TRAs!
You can delete some number of collections from the rear of the chain, but
you must first update the TRA to exclude these collections. This is
tested:
https://github.com/apache/solr/blob/f6c4f8a755603c3049e48eaf9511041252f2dbad/solr/core/src/test/org/apache/solr/
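As a very rough sketch of the manual route (the host, alias, and collection names
are made up, and the supported way to update the alias itself is the one exercised
in the linked test, not this outline):

  # 1) update the TRA so it no longer lists the old collection (see the linked test)
  # 2) only then drop the collection itself via the Collections API
  curl "http://localhost:8983/solr/admin/collections?action=DELETE&name=myalias_2020-01-01"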
Thanks, Shawn,
This makes sense. Filter queries with high hit counts could be the trigger for the
out-of-memory errors; that would explain why it is so infrequent.
We will revisit our filter queries and further try reducing the filter cache size
(a rough sketch of that change is below).
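A minimal sketch of the reduced cache settings in solrconfig.xml (the numbers are
placeholders we still need to tune, not values suggested in this thread):

  <!-- smaller filterCache with autowarming turned off; sizes are illustrative only -->
  <filterCache class="solr.CaffeineCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>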
One question though:
> There is an alternate format for filterCache entries