since it's related to security, can I set it in a better way without
completely disabling it?
-ufuk yilmaz
We have lots of data in Solr (v8.4), which is divided into monthly
collections so that not every search has to run against the entire data
set. Each monthly collection is also split into 8 shards. That means
4 years = 48 monthly collections x 8 shards = 384 shards, which is
causing some problems.
Is it possible to merge mul
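For context, a rough SolrJ sketch of how one such monthly collection could
be created through the Collections API. The ZooKeeper address, collection
name, configset name, and replica count below are illustrative assumptions,
not details from the message:

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

import java.util.Collections;
import java.util.Optional;

public class CreateMonthlyCollection {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper ensemble address.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {

            // One collection per month, 8 shards each (as described above):
            // 4 years * 12 months = 48 collections, 48 * 8 = 384 shards in total.
            CollectionAdminRequest
                    .createCollection("data_2021_01", "myConfig", 8, 1) // name/config/replicas assumed
                    .process(client);
        }
    }
}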
I'm using Docker with the Bitnami Solr image and the official ZooKeeper
image. Starting up 3 ZooKeepers, pointing Solr to them, and starting the
Solr instances was very easy this way. In fact, I don't know how to start
a standalone Solr instance.
A small docker-compose example:
version: '2'
services:
  zoo-1:
Solr version is 8.4
I'm trying to use the export handler through SolrJ:
CloudSolrClient cloudSolrClient = ...
SolrQuery q = new SolrQuery();
q.setParam("q", "ts:[1612368422911 TO 1612370422911]");
q.setParam("sort", "ts asc");
q.setParam("fl", "ts");
q.setRequestHandler("/export");
cloudSolrCli
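The snippet above is cut off; for reference, one common way to consume the
/export handler from SolrJ is through the streaming classes rather than a
plain query. A minimal sketch, assuming a hypothetical ZooKeeper address
and collection name (neither is given in the message):

import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ExportExample {
    public static void main(String[] args) throws Exception {
        String zkHost = "zk1:2181,zk2:2181,zk3:2181"; // placeholder ZooKeeper ensemble
        String collection = "myCollection";           // placeholder collection name

        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("q", "ts:[1612368422911 TO 1612370422911]");
        params.set("sort", "ts asc");
        params.set("fl", "ts");
        params.set("qt", "/export"); // route the request to the export handler

        SolrClientCache clientCache = new SolrClientCache();
        StreamContext context = new StreamContext();
        context.setSolrClientCache(clientCache);

        CloudSolrStream stream = new CloudSolrStream(zkHost, collection, params);
        stream.setStreamContext(context);
        try {
            stream.open();
            while (true) {
                Tuple tuple = stream.read();
                if (tuple.EOF) {
                    break;
                }
                System.out.println(tuple.getLong("ts"));
            }
        } finally {
            stream.close();
            clientCache.close();
        }
    }
}

The /export handler streams the full sorted result set, so reading tuples
until EOF is the usual pattern.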
When you send an expression to an alias named "myAlias" that points to N
collections, each having M replicas, how does it work? Is the same
expression executed on all of the NxM machines at the same time? Or is a
random node selected from the NxM replicas' nodes? Or
something e
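To make the question concrete, here is a rough sketch of what sending an
expression to the alias could look like from SolrJ; the node URL and the
expression itself are made-up examples, not taken from the thread:

import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ExpressionToAlias {
    public static void main(String[] args) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        // Hypothetical expression; "myAlias" resolves to N collections x M replicas.
        params.set("expr",
            "search(myAlias, q=\"*:*\", fl=\"id\", sort=\"id asc\", qt=\"/export\")");
        params.set("qt", "/stream");

        // The expression is posted to the /stream handler of one node; how the work
        // is then spread across the NxM replicas is exactly what the question asks.
        SolrStream stream = new SolrStream("http://solr-1:8983/solr/myAlias", params);
        try {
            stream.open();
            Tuple tuple;
            while (!(tuple = stream.read()).EOF) {
                System.out.println(tuple.getString("id"));
            }
        } finally {
            stream.close();
        }
    }
}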
By the way, I'm already using SSD drives, as I believe I had mentioned in
my original question.
On 2021-12-23 22:23, Dmitri Maziuk wrote:
On 12/23/2021 11:42 AM, Walter Underwood wrote:
When you request 5000 rows, you are requesting 5000 disk reads times
the number
of fields requested. It is n
I have a problem with my SolrCloud cluster: when I request a few stored
fields, the disk read rate caps out at its maximum for a long period of
time, but when I request no fields, response times are consistently a
few seconds.
My cluster has 4 nodes. Total index size is 400GB per node. Each node
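For reference, a minimal SolrJ sketch of the two kinds of request being
compared: one asking for a few stored fields and one asking for no stored
fields (here approximated with fl=score). The collection name, field
names, and row count are assumptions, not details from the message:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

import java.util.Collections;
import java.util.Optional;

public class StoredFieldsComparison {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper address and collection name.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
            client.setDefaultCollection("myCollection");

            // Case 1: ask for a few stored fields -- every returned document needs
            // its stored-field data read from disk, which drives the disk reads.
            SolrQuery withFields = new SolrQuery("*:*");
            withFields.setRows(5000);                 // assumed row count
            withFields.setFields("fieldA", "fieldB"); // hypothetical stored fields

            // Case 2: ask for no stored fields (only the score), so no per-document
            // stored-field reads are needed.
            SolrQuery noFields = new SolrQuery("*:*");
            noFields.setRows(5000);
            noFields.setFields("score");

            long t0 = System.nanoTime();
            client.query(withFields);
            long t1 = System.nanoTime();
            client.query(noFields);
            long t2 = System.nanoTime();
            System.out.println("with stored fields:    " + (t1 - t0) / 1_000_000 + " ms");
            System.out.println("without stored fields: " + (t2 - t1) / 1_000_000 + " ms");
        }
    }
}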