I imagine if you could afford to shut down all the running Solr instances for a
small amount of time, you could shut them all down, make the auth changes, and
start them all back up at the same time.
--ufuk yilmaz
From: Jan Høydahl
Sent: Thursday, December 28, 2023 3:20
If your backend is in Java, SolrJ can do the same too. You instantiate a SolrJ
client using the ZooKeeper URL instead of a specific Solr node URL, and it
keeps itself in sync with the live-nodes list and forwards requests to whichever
node is alive.
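A minimal sketch of that pattern against the SolrJ 8.x API — the ZooKeeper addresses and collection name below are placeholders, not anything from this thread:

```java
import java.util.List;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ZkAwareClientSketch {
    public static void main(String[] args) throws Exception {
        // Build the client from the ZooKeeper ensemble, not a fixed node URL.
        // The client watches cluster state and routes requests to live replicas.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                List.of("zk1:2181", "zk2:2181", "zk3:2181"), // placeholder ZK hosts
                Optional.empty())                            // no ZK chroot
                .build()) {
            // Placeholder collection name; the client picks a live node itself.
            QueryResponse rsp = client.query("mycollection", new SolrQuery("*:*"));
            System.out.println("hits: " + rsp.getResults().getNumFound());
        }
    }
}
```

This needs solr-solrj on the classpath and a running SolrCloud cluster, so it is a sketch of the wiring rather than something runnable standalone.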
From: Rigolin, D
Hello Darren,
I had a very similar problem when running Solr on EKS kubernetes cluster. The
solution I found was to add a pre_stop shutdown hook to the kubernetes
deployment, which runs the command "/opt/solr/bin/solr stop -k solrrocks -p
8983" to gracefully stop Solr before the pod is killed.
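The hook described above can be sketched as a pod-spec fragment — container name, image tag, and grace period here are assumptions, only the stop command comes from the message:

```yaml
# Deployment pod-spec fragment (assumed names). The preStop hook lets Solr
# shut down cleanly and release its write locks before the container is killed.
spec:
  terminationGracePeriodSeconds: 60   # must exceed the time "solr stop" needs
  containers:
    - name: solr                      # assumed container name
      image: solr:9                   # assumed image
      lifecycle:
        preStop:
          exec:
            command: ["/opt/solr/bin/solr", "stop", "-k", "solrrocks", "-p", "8983"]
```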
very high availability.
-ufuk yilmaz
____
From: uyil...@vivaldi.net.INVALID
Sent: Thursday, January 18, 2024 7:08 PM
To: users@solr.apache.org
Subject: Re: SOLR data on ECS problem with write lock files
There's a way to produce, use and store models, but it only supports a fixed
format:
https://solr.apache.org/guide/8_5/stream-source-reference.html#model
https://solr.apache.org/guide/8_5/stream-source-reference.html#train
https://solr.apache.org/guide/8_5/stream-decorator-reference.html#classify
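Putting those three pieces together, a stored model can be applied with the classify decorator roughly like this — the collection and field names are made up for illustration:

```text
classify(
  model(models, id="myModel", cacheMillis=5000),         /* model stored in an assumed "models" collection */
  search(emails, q="*:*", fl="id,body", sort="id asc"),  /* assumed source collection and fields */
  field="body"
)
```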
Is there a general guideline to optimize Solr for a very small number of
documents in the core and low memory? For example, let's say 2000 documents and
100 MB of memory. It crashes often due to OOM errors with the default
configuration.
Are there places in the Solr config where we can look to mak
Hi! For uploading the scripts from inside the image, I didn't implement a
locking mechanism or assign an instance 🤔 Since if the configset was already
uploaded on the first ever run, on subsequent runs it would just log an
error (configset already exists) and continue with the rest of the proc
I also got this exception before, and in order to avoid reindexing TBs of data,
I had to resort to grouping via streaming expressions, which has its ups and
downs. If it's technically infeasible to substitute docValues for this purpose
(when useDocValuesAsStored:true), it would be nice if it was d
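Grouping via streaming expressions, as mentioned above, typically means a rollup over an export stream — a sketch with made-up collection and field names:

```text
rollup(
  /* /export requires docValues on every field in fl */
  search(products, q="*:*", fl="manu_id_s,price", sort="manu_id_s asc", qt="/export"),
  over="manu_id_s",   /* input must already be sorted by the over field(s) */
  count(*),
  avg(price)
)
```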
Hi,
Solr usually fills the heap with various caches, so I wouldn't worry much about
it consuming 90% of the heap unless you're getting OutOfMemory errors.
Pagination using the rows parameter is intended for when the row count is very
low and the page number is also small (e.g. rows=10, page=2). It's problematic
otherwise. Also, if your index is a bit large, invest in an architecture which
makes reindexing very easy, as you will probably need to change the schema and
reindex multiple times.
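For deep paging, the usual alternative to large rows/start values is Solr's cursorMark — a query-parameter sketch, assuming a uniqueKey field named id:

```text
First request:
  q=*:*&rows=100&sort=score desc,id asc&cursorMark=*
  (sort must include the uniqueKey as a tiebreaker; start must be 0 or absent)

Each response returns a nextCursorMark; pass it back as cursorMark for the
next page, and stop when nextCursorMark equals the cursorMark you sent.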
--ufuk yilmaz
From: Walter Underwood
Sent: Friday, April 5, 2024 8:59 PM
To: users@solr.apache.org