Thanks for the input, Ilan.
On Thu, Aug 3, 2023 at 5:25 PM Ilan Ginzburg wrote:
> I don't think adding shards (even from 1 to 2) is the solution.
> You need enough replicas so all your nodes share the load, but with such
> small shards you likely don't need more than 1.
> If your nodes are sa
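
For context, spreading query load by adding replicas is a Collections API call. A minimal sketch using Python's requests library, assuming SolrCloud; the collection, shard, and node names are placeholders, not taken from this thread:

import requests

# Placeholder values -- substitute your own collection, shard, and node names.
SOLR = "http://localhost:8983/solr"
params = {
    "action": "ADDREPLICA",
    "collection": "mycollection",
    "shard": "shard1",
    # "node" is optional; omit it and Solr picks a node for the new replica.
    "node": "node2.example.com:8983_solr",
    "wt": "json",
}
resp = requests.get(f"{SOLR}/admin/collections", params=params, timeout=120)
resp.raise_for_status()
print(resp.json())
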
Thank you, uyilmaz, for the detailed explanation.
On Thu, Aug 3, 2023 at 6:21 PM ufuk yılmaz wrote:
> My two cents: it took me some time to understand when to add shards or
> replicas when I first started using Solr.
>
> Speed of a single isolated query when the system is idle -VS- total throughput
>
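
To make that trade-off concrete, here is a minimal sketch (Python requests; collection names and configset are placeholders, not taken from the thread) of creating the same index two ways: more shards tends to help the latency of one isolated query, while more replicas tends to help total throughput under concurrent load:

import requests

SOLR = "http://localhost:8983/solr"

def create_collection(name, num_shards, replication_factor):
    # Collections API CREATE; "myconfig" is a placeholder configset name.
    params = {
        "action": "CREATE",
        "name": name,
        "numShards": num_shards,
        "replicationFactor": replication_factor,
        "collection.configName": "myconfig",
        "wt": "json",
    }
    r = requests.get(f"{SOLR}/admin/collections", params=params, timeout=300)
    r.raise_for_status()
    return r.json()

# Two shards, one copy each: a single query fans out and finishes sooner.
create_collection("docs_sharded", num_shards=2, replication_factor=1)

# One shard, two replicas: each query hits one core, but more queries run in parallel.
create_collection("docs_replicated", num_shards=1, replication_factor=2)
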
Hi everyone,
Using: Solr 8.11.2 on RHEL 9
Currently using "solr.NRTCachingDirectoryFactory" for a collection. The
collection has grown big in size, but I don't want to add more RAM to the
machine (AWS); I can increase IOPS and throughput for the data volume.
Was thinking of using "solr.NIOFSDirectoryFactory".
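
For reference, the directory implementation is chosen in solrconfig.xml. A minimal sketch of the change being considered, assuming the stock config where the class falls back from the solr.directoryFactory system property:

<!-- solrconfig.xml: use plain NIO file access instead of the NRT-caching wrapper -->
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NIOFSDirectoryFactory}"/>
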
Also, here is the debug output for the fq workaround I mentioned.
This debug output is not big.
"debug":{
"rawquerystring":"*:*",
"querystring":"*:*",
"parsedquery":"+(+MatchAllDocsQuery(*:*)) ()",
"parsedquery_toString":"+(+*:*) ()",
"json":{"params":{
"q":"*:*
On 8/3/23 22:45, Ayana Joby wrote:
Hello Team,
We are using the following configuration for the Japanese language, but synonym
search is not working with this configuration for Japanese.
Only one of your attachments made it through to the list. But I have
seen them in Jira.
In Jira, I men
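
The attached configuration isn't visible in this excerpt, so purely as a generic sketch: Japanese synonym search commonly fails when the synonyms file isn't parsed with the same tokenizer as the field, which is what the tokenizerFactory parameter of SynonymGraphFilterFactory addresses. The collection, field type name, and synonyms file below are placeholders, added via the Schema API with Python requests:

import requests

SOLR = "http://localhost:8983/solr/mycollection"  # placeholder collection
field_type = {
    "add-field-type": {
        "name": "text_ja_syn",  # placeholder field type name
        "class": "solr.TextField",
        "indexAnalyzer": {
            "tokenizer": {"class": "solr.JapaneseTokenizerFactory", "mode": "search"},
            "filters": [{"class": "solr.LowerCaseFilterFactory"}],
        },
        "queryAnalyzer": {
            "tokenizer": {"class": "solr.JapaneseTokenizerFactory", "mode": "search"},
            "filters": [
                {
                    "class": "solr.SynonymGraphFilterFactory",
                    "synonyms": "synonyms_ja.txt",  # placeholder file in the configset
                    "ignoreCase": "true",
                    # Parse the synonyms file with the Japanese tokenizer as well.
                    "tokenizerFactory": "solr.JapaneseTokenizerFactory",
                },
                {"class": "solr.LowerCaseFilterFactory"},
            ],
        },
    }
}
resp = requests.post(f"{SOLR}/schema", json=field_type, timeout=60)
resp.raise_for_status()
print(resp.json())
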
On 8/4/23 09:56, Jayesh Shende wrote:
Using: Solr 8.11.2 on RHEL 9
Currently using "solr.NRTCachingDirectoryFactory" for a collection. The
collection has grown big in size, but I don't want to add more RAM to the
machine (AWS); I can increase IOPS and throughput for the data volume.
Was thinking of using "solr.NIOFSDirectoryFactory".
Hi Shawn,
Thanks for responding so quickly.
The server box is shared by multiple Solr nodes, each with more
than 100 GB of disk usage (~2-4 replicas of different collections on one
Solr).
The NRTCachingDirectoryFactory is trying to cache as many segments as
possible in memory, but
I was in a similar situation; our index was way too big compared to the RAM on
the nodes. I was seeing constant 100% disk reads, query timeouts, and dead nodes
because the default directory factory (NRTCaching) was trying to cache a
different part of the index in memory for every other request but
On 8/4/23 11:43, Jayesh Shende wrote:
The NRTCachingDirectoryFactory is trying to cache as many segments as
possible in memory, but the queries are for different collections and
are varying (few repeated query terms), so I think these cached
segments are not actually very useful here,