Hi Thomas,
As I understand it, there is the ConcurrentUpdateSolrClient for this. Am I
better off doing this myself, or should I just use the concurrent client?
Is it correct that I could get to the scenario you described by setting
withQueueSize to 200 (the default is 10) and setting "solr.cloud.client.st
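Roughly what I have in mind for the queue-size part, as a sketch against the
SolrJ 9.x API (the base URL, collection name and thread count are placeholders
I made up, and I've left out the truncated system property above):

    import org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class ConcurrentIndexer {
        public static void main(String[] args) throws Exception {
            String baseUrl = "http://localhost:8983/solr";  // placeholder

            try (Http2SolrClient http2 = new Http2SolrClient.Builder(baseUrl).build();
                 ConcurrentUpdateHttp2SolrClient client =
                     new ConcurrentUpdateHttp2SolrClient.Builder(baseUrl, http2)
                         .withQueueSize(200)   // buffer up to 200 updates client-side
                         .withThreadCount(4)   // background threads draining the queue
                         .build()) {

                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "example-1");
                client.add("my_collection", doc);  // queued, sent asynchronously
                client.blockUntilFinished();       // drain the queue before committing
                client.commit("my_collection");
            }
        }
    }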
Apache Flume did those kinds of things via its Solr sink; I'm not sure if it's
still supported, since it's a bit old.
—
> On May 13, 2025, at 21:22, Thomas Corthals wrote:
>
> Hi Dario,
>
> Regardless of this GOAWAY signal, I would advise buffering the documents on
> the client side instead of indexing them individually for every event.
Hi Dario,
Regardless of this GOAWAY signal, I would advise buffering the documents on
the client side instead of indexing them individually for every event.
You'll have to try out which numbers work for your specific document size
and timeliness expectations. I usually start with a buffer size of
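Something along these lines is what I mean, as a rough sketch (the batch size,
flush interval, base URL and collection name are made-up placeholders; it
assumes a plain Http2SolrClient and autoCommit configured on the Solr side):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    /** Buffers incoming documents and sends them to Solr in batches. */
    public class BatchingIndexer implements AutoCloseable {
        private final Http2SolrClient client;
        private final String collection;
        private final int maxBatchSize;   // e.g. 500 documents
        private final Duration maxWait;   // e.g. flush at least every 5 seconds
        private final List<SolrInputDocument> buffer = new ArrayList<>();
        private Instant lastFlush = Instant.now();

        public BatchingIndexer(String baseUrl, String collection,
                               int maxBatchSize, Duration maxWait) {
            this.client = new Http2SolrClient.Builder(baseUrl).build();
            this.collection = collection;
            this.maxBatchSize = maxBatchSize;
            this.maxWait = maxWait;
        }

        /** Called for every incoming event; only talks to Solr when a batch is due. */
        public synchronized void onEvent(SolrInputDocument doc) throws Exception {
            buffer.add(doc);
            boolean batchFull = buffer.size() >= maxBatchSize;
            boolean tooOld = Duration.between(lastFlush, Instant.now()).compareTo(maxWait) > 0;
            if (batchFull || tooOld) {
                flush();
            }
        }

        public synchronized void flush() throws Exception {
            if (buffer.isEmpty()) return;
            client.add(collection, new ArrayList<>(buffer));  // one request per batch
            buffer.clear();
            lastFlush = Instant.now();
            // Rely on autoCommit/autoSoftCommit rather than committing per batch.
        }

        @Override
        public synchronized void close() throws Exception {
            flush();
            client.close();
        }
    }

Note that this sketch only flushes when an event comes in; in practice you'd
also schedule a periodic flush so a quiet spell doesn't leave documents
sitting in the buffer.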
I’ve seen this signal in places other than Solr that use HTTP/2. From what
I’ve read, it’s something the client should expect and handle
appropriately (e.g. retry again later).
Let me know if someone knows better, as I’m just another victim. :)
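For what it's worth, this is roughly how I'd wrap an update call, as a sketch
(the attempt count and backoff are made up, and it simply assumes the GOAWAY
surfaces as a SolrServerException or IOException from SolrJ):

    import java.io.IOException;
    import java.util.Collection;

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.common.SolrInputDocument;

    public final class RetryingUpdates {

        /** Retries a batched add a few times with exponential backoff before giving up. */
        public static void addWithRetry(SolrClient client, String collection,
                                        Collection<SolrInputDocument> docs) throws Exception {
            int maxAttempts = 5;      // made-up numbers; tune for your setup
            long backoffMillis = 500;
            for (int attempt = 1; ; attempt++) {
                try {
                    client.add(collection, docs);
                    return;
                } catch (SolrServerException | IOException e) {
                    // Assumes the GOAWAY shows up as a transport-level exception; retry later.
                    if (attempt >= maxAttempts) {
                        throw e;
                    }
                    Thread.sleep(backoffMillis);
                    backoffMillis *= 2;
                }
            }
        }
    }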
—
> On May 13, 2025, at 18:04, dario.v...@coop
Dear Solr People,
On our system we have an indexer-service that indexes documents based on
incoming events. Sometimes a lot of these events happen at the same time (or at
least very close together in time). When that happens, our indexer receives
GOAWAY signals from Solr.
We're using the HttpJ