Hi,
The facet counts change as the value of facet.limit is increased or decreased.
This is happening with grouped faceting.
I noticed that when facet.limit is set to -1 or a sufficiently high number, the
grouped facet counts are correct.
Kindly help me understand and resolve this issue so that I can get the correct counts.
Hello
I've previously (SolR 9.1) been using this approach inside my own test docker
image, which boils down to:
echo "clientPortAddress=0.0.0.0" >> server/solr/zoo.cfg
./bin/solr start -f -c
This allows other containers to connect to the zookeeper and
update/query/stream from the SolR during integration tests.
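In case it helps anyone trying the same setup, here is a rough SolrJ sketch of how
another container could then reach that embedded zookeeper. It assumes the embedded
ZK listens on the default Solr port + 1000 (i.e. 9983), and "solr-test" and
"mycollection" are placeholder names; I have not verified this exact snippet
against 9.3:

import java.util.List;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudHttp2SolrClient;

public class EmbeddedZkSmokeTest {
    public static void main(String[] args) throws Exception {
        // "solr-test" is a placeholder container hostname; 9983 is the
        // embedded zookeeper port (Solr port 8983 + 1000).
        try (CloudHttp2SolrClient client =
                new CloudHttp2SolrClient.Builder(List.of("solr-test:9983"), Optional.empty())
                        .build()) {
            // Query through the cluster state resolved via zookeeper.
            long hits = client.query("mycollection", new SolrQuery("*:*"))
                    .getResults().getNumFound();
            System.out.println("numFound=" + hits);
        }
    }
}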
Sorry I forgot to attach the log
From: Morten Bøgeskov
Sent: Wednesday, August 23, 2023 12:01
To: users@solr.apache.org
Subject: SolR 9.3.0 embedded zookeeper on wildcard address
I’d suggest taking a look at the overrequest and overrefine parameters,
especially if you are running SolrCloud and the collection in question has multiple shards.
Sent from Mail for Windows
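(For reference, in the JSON Facet API those are per-facet options. A rough SolrJ
sketch of what I mean, with made-up collection/field names; I have not checked how
this interacts with grouped faceting:)

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JsonFacetOverrequestExample {
    // Sketch only: "books" and "cat" are placeholder collection/field names.
    static QueryResponse facetWithOverrequest(SolrClient client)
            throws SolrServerException, IOException {
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(0);
        // refine:true makes Solr re-check candidate buckets against all shards;
        // overrequest/overrefine widen how many candidate terms each shard
        // contributes before and during that refinement.
        q.set("json.facet",
            "{ cats: { type: terms, field: cat, limit: 10,"
            + " refine: true, overrequest: 100, overrefine: 50 } }");
        return client.query("books", q);
    }
}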
From: Modassar Ather
Sent: Wednesday, August 23, 2023 11:36 AM
To: users@solr.apache.org
Subject: Effect of facet.limit
Thanks for your response.
The Solr cluster is deployed on 48 shards, and all the documents of a group
are on one shard only.
Can you please help me understand the reason behind this behaviour of
faceting?
Thanks,
Modassar
On Wed, 23 Aug 2023 at 3:55 PM, ufuk yılmaz
wrote:
> I’d suggest to take
Perhaps you are affected by this:
https://issues.apache.org/jira/browse/SOLR-16926, which was recently fixed but
not yet released?
Jan
> On 23 Aug 2023, at 12:18, Morten Bøgeskov wrote:
>
> Sorry I forgot to attach the log
>
>
> From: Morten Bøgeskov
> Sent: Wednesday, August 23, 2023 12:01
Most likely - yes... Thank you Jan.
Now why didn't I see that when I searched the bug reports? (And why can't I
find it, when I have it in an adjacent tab?)
From: Jan Høydahl
Sent: Wednesday, August 23, 2023 13:46
To: users@solr.apache.org
Subject: Re: SolR 9.3
Thanks Tim & Walter.
I have managed to get it working with shingles and edge ngram. Initially it
did bring up a lot of false positives, but I managed to mitigate that by tweaking
the parameters and also by splitting this into a separate copy field
with a lower boost than a normal match.
On Wed, Aug 16,
Cool - for now I'll either revert to HttpSolrClient or use a single client
(depending on what I have to refactor).
My only concern with a shared client is that if one caller closes it
"accidentally", I don't see an easy way to ask the client whether it was closed
so I can destroy it and create a new one.
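If it helps, one way to guard against an accidental close() is to hand consumers a
thin wrapper that delegates to the shared client but makes close() a no-op, so only
the owning code ever closes the real client. A rough sketch, assuming SolrJ 9.x
(the class name is made up):

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.common.util.NamedList;

/** Delegates every request to a shared SolrClient but ignores close(),
 *  so one consumer calling close() cannot break the others. */
public class NonClosingSolrClient extends SolrClient {
    private final SolrClient delegate;

    public NonClosingSolrClient(SolrClient delegate) {
        this.delegate = delegate;
    }

    @Override
    public NamedList<Object> request(SolrRequest<?> request, String collection)
            throws SolrServerException, IOException {
        return delegate.request(request, collection);
    }

    @Override
    public void close() {
        // Intentionally a no-op: only the owner of the shared delegate closes it.
    }
}

Consumers work against the wrapper; whoever created the shared client closes it
once at shutdown.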
I don’t exactly remember the mechanism, but roughly the main reason behind it is:
let's say you wanted the top 10 results. By default Solr considers, say, ~15 top
documents from each shard/replica as candidates for the top 10. What if the 16th
result from a couple of shards had many documents and would have made the overall
top 10 had it been fully counted?
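If I remember the defaults right, for classic field faceting the per-shard candidate
count is roughly facet.limit * facet.overrequest.ratio + facet.overrequest.count
(defaults 1.5 and 10), so facet.limit=10 asks each shard for about 25 candidates; a
term that falls below that cut-off on some shards never gets refined and its total
can come back too low. A rough sketch of widening the over-request (field names are
made up, and I'm not certain these knobs are honoured on the grouped-facet code path):

import org.apache.solr.client.solrj.SolrQuery;

public class GroupedFacetOverrequestExample {
    // Sketch: widen the per-shard candidate list for a grouped facet query.
    // "group_id" and "category" are placeholder field names.
    static SolrQuery buildQuery() {
        SolrQuery q = new SolrQuery("*:*");
        q.set("group", true);
        q.set("group.field", "group_id");
        q.set("group.facet", true);
        q.setFacet(true);
        q.addFacetField("category");
        q.set("facet.limit", 10);
        // Defaults give ~limit*1.5+10 candidates per shard; raising them
        // (or using facet.limit=-1) reduces the chance a heavy term is missed.
        q.set("facet.overrequest.count", 100);
        q.set("facet.overrequest.ratio", "3.0");
        return q;
    }
}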
I haven’t tried this, but one thought I had: could you deploy both your leader
and your follower with bin/solr start -c, but have them each run their own ZK? Then
you just configure your follower Solr to pull from your leader Solr like normal.
This might actually be worth a JIRA and some development.
We may implement something with incremental backup and restore; but in the
meantime, if we could resolve the permission problem with Leader/Follower
replication, that would expedite our upgrade to Solr 9.
In hopes that the discrepancy mentioned below is a clue, I would like to ask it
again (copy