Hi everyone,
I'm one of the developers behind the Freesound website (https://freesound.org, a sound sharing website). We use Solr as our search engine, and I'm currently experimenting with a new feature that I'd like to implement using Solr. In summary, we have a Solr index with one document per sound ...

Hello Frederic,

It sounds like a blockParent domain change; see:
https://solr.apache.org/guide/solr/latest/query-guide/json-faceting-domain-changes.html#block-join-domain-changes
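
For example (purely a sketch: the collection name, the doc_type flags and the field names below are placeholders, not your actual schema), a JSON request along these lines searches child documents but facets over their parent documents:

curl http://localhost:8983/solr/sounds/query -d '
{
  "query": "description_t:rain",
  "filter": "doc_type:childDoc",
  "facet": {
    "parent_tags": {
      "type": "terms",
      "field": "tag",
      "domain": { "blockParent": "doc_type:sound" }
    }
  }
}'

The blockParent domain change maps every matching child to its parent (the query given to blockParent must match all parent documents), so the terms facet is computed over the parents.
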
On Thu, Jan 25, 2024 at 12:15 PM Frederic Font Corbera <
frederic.f...@upf.edu> wrote:
> [...]

I am trying to upgrade SolrCloud from version 8.11.1 to 9.4.0. The Solr
service starts, but connections are being refused.
I see the error below in solr.log. Any pointers?
ERROR (updateExecutor-8-thread-2-processing-hostname:8988_solr
rc2_addresses_shard1_replica_n1 rc2_addresses shard1 core_node3)

This same question was asked yesterday on this list. The answer is to set
SOLR_JETTY_HOST=0.0.0.0 so that Solr binds to all network interfaces instead of only the local one.
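
For example, if you used the service installation script, the include file is usually /etc/default/solr.in.sh (adjust the path for your own layout):

# Solr 9.x binds to 127.0.0.1 by default; listen on all interfaces instead
SOLR_JETTY_HOST="0.0.0.0"

Restart the Solr service after changing it.
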
Jan
> On 25 Jan 2024, at 14:05, Gummadi, Ramesh wrote:
> [...]

On 1/24/24 01:27, uyil...@vivaldi.net.INVALID wrote:
Is there a general guideline for optimizing Solr for a very small number of
documents in the core and low memory? For example, say 2000 documents and
100 MB of memory. It often crashes with an OOM error under the default
configuration.
Are t ...

Hi Mikhail,
Thanks a lot for your quick response! I did not know about that, and it
seems to be exactly what I was looking for. I did some quick tests with the
JSON Facet API (previously I was using the non-JSON faceting method) and
it lets me query child documents but facet by parents, just ...

Probably you are talking about searching parents and then rolling over from
the parents to their children via
https://solr.apache.org/guide/solr/latest/query-guide/document-transformers.html#child-childdoctransformerfactory
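
For example (field names made up for illustration), a parent-level query can return each matching parent together with its nested children:

q=doc_type:sound AND tag:rain
fl=id,name,[child limit=5]

If your schema does not use the _nest_path_ field, the transformer also needs an explicit parent filter, e.g. [child parentFilter="doc_type:sound" limit=5].
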
On Thu, Jan 25, 2024 at 7:16 PM Frederic Font Corbera
wrote:
> [...]