Hi

It is not surprising. If the cluster is forced to commit for every single 
doc, index segments pile up, triggering more merges, searcher warming, 
etc., all while new indexing requests wait in line. This exhausts server-side 
threads and hits the limit of 3000 that you are seeing.

So the first thing to fix is the autoCommit setting.
If that still triggers the "Max requests queued per destination" error, have 
a look at this bug https://issues.apache.org/jira/browse/SOLR-17240 and the 
workaround provided there.
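As a rough sketch, a commit configuration along these lines in solrconfig.xml 
follows the advice above. The 30-second values are illustrative only and 
should match your actual latency requirement:

```xml
<!-- Hard commit: flush to disk regularly, but do not open a new searcher -->
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:30000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: controls when new docs become visible to searches.
     No maxDocs here, so a single doc never forces a commit on its own. -->
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:30000}</maxTime>
</autoSoftCommit>
```

The key point is dropping the autoSoftCommit maxDocs=1 entirely and letting 
time-based commits do the work.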

Jan

> 9. okt. 2024 kl. 11:01 skrev Vincenzo D'Amore <v.dam...@gmail.com>:
> 
> Hi Jan, thanks for your suggestion.
> The solrcloud cluster where I found this configuration
> (autoSoftCommit.maxDocs:1) recently went down. All nodes were affected, the
> status of all collections and all replicas were down or recovering (without
> success).
> The only error found in all the SolrCloud nodes was
> "java.util.concurrent.RejectedExecutionException: Max requests queued per
> destination 3000 exceeded for HttpDestination".
> 
> On Mon, Oct 7, 2024 at 2:04 PM Jan Høydahl <jan....@cominvent.com> wrote:
> 
>> Rule of thumb is to commit as infrequently as possible and to batch ADD
>> requests instead of pushing one doc at a time. Also avoid the client
>> application doing explicit COMMIT calls to Solr. All this has a cost.
>> 
>> So if your requirement is an indexing latency of 30s, set autoCommit based
>> on time 30s, not any more frequent. Setting these limits too low will incur
>> a cost in that you must add more hardware to keep up.
>> 
>> Jan
>> 
>>> 7. okt. 2024 kl. 10:24 skrev Vincenzo D'Amore <v.dam...@gmail.com>:
>>> 
>>> Hi Jan, thanks for answering. This is the case, the collection has to be
>>> updated in real time, I'm just afraid that multiple updates could slow
>>> down the cluster.
>> 
>> 
> 
> -- 
> Vincenzo D'Amore
