[ https://issues.apache.org/jira/browse/SOLR-17150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842339#comment-17842339 ]

Andrzej Bialecki commented on SOLR-17150:
-----------------------------------------

After discussing this with other people, it looks like dynamic limits would be 
tricky to set properly: the interaction between occasional legitimate heavier 
query traffic, updates (which trigger a searcher re-open and a memory usage 
spike), and other factors could cause too many spurious failures.

Still, having support for a hard limit to prevent a total runaway that would 
result in an OOM seems useful. I'll prepare another patch that contains just the 
hard limit.
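
As an illustration only (not the actual patch), a minimal sketch of what such a 
hard limit could look like, assuming Lucene's {{QueryTimeout}} interface (a 
single {{shouldExit()}} method in recent versions) and per-thread allocation 
tracking via the JDK-specific {{com.sun.management.ThreadMXBean}}; the class 
name and constructor here are hypothetical:

{code:java}
import java.lang.management.ManagementFactory;

import org.apache.lucene.index.QueryTimeout;

// Hypothetical hard per-query allocation limit; not the actual SOLR-17150 patch.
public class HardMemQueryLimit implements QueryTimeout {

  private static final com.sun.management.ThreadMXBean THREAD_MX =
      (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();

  private final long threadId = Thread.currentThread().getId();
  // Bytes already attributed to this thread before the query started.
  private final long startAllocated = THREAD_MX.getThreadAllocatedBytes(threadId);
  // Hard cap, e.g. 2 GB as mentioned in the issue description.
  private final long limitBytes;

  public HardMemQueryLimit(long limitBytes) {
    this.limitBytes = limitBytes;
  }

  @Override
  public boolean shouldExit() {
    // Only counts allocation on the thread that created the limit, so work
    // fanned out to other threads would be undercounted.
    long allocatedByQuery = THREAD_MX.getThreadAllocatedBytes(threadId) - startAllocated;
    return allocatedByQuery > limitBytes;
  }
}
{code}

The instance would be created per request on the executing thread, so the delta 
roughly measures what that query has allocated since it started.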

> Create MemQueryLimit implementation
> -----------------------------------
>
>                 Key: SOLR-17150
>                 URL: https://issues.apache.org/jira/browse/SOLR-17150
>             Project: Solr
>          Issue Type: Sub-task
>          Components: Query Limits
>            Reporter: Andrzej Bialecki
>            Assignee: Andrzej Bialecki
>            Priority: Major
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> An implementation of {{QueryTimeout}} that terminates misbehaving queries 
> that allocate too much memory for their execution.
> This is a bit more complicated than {{CpuQueryLimits}} because the first time 
> a query is submitted it may legitimately allocate many sizeable objects 
> (caches, field values, etc.). So we want to catch and terminate queries that 
> either exceed any reasonable threshold (e.g. 2 GB) or significantly exceed a 
> time-weighted percentile of the recent queries.
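
For reference, a rough sketch of the dual-threshold idea described above (the 
dynamic part that, per the comment at the top, will be dropped from the patch), 
using an exponentially decayed average of recent per-query allocation in place 
of a true time-weighted percentile; all names here are illustrative assumptions:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical dual-threshold check: trip when a query exceeds a hard cap OR
// allocates far more than recent queries typically did.
public class DynamicMemThreshold {

  private final long hardLimitBytes;     // absolute ceiling, e.g. 2 GB
  private final double overshootFactor;  // e.g. 10x the recent typical query
  private final double decay;            // weight of the newest sample, e.g. 0.05

  // Exponentially decayed average of per-query allocation, stored as double bits.
  private final AtomicLong recentAvgBits = new AtomicLong(Double.doubleToLongBits(0.0));

  public DynamicMemThreshold(long hardLimitBytes, double overshootFactor, double decay) {
    this.hardLimitBytes = hardLimitBytes;
    this.overshootFactor = overshootFactor;
    this.decay = decay;
  }

  /** Record the total allocation of a completed query and fold it into the estimate. */
  public void recordQueryAllocation(long allocatedBytes) {
    recentAvgBits.updateAndGet(bits -> {
      double avg = Double.longBitsToDouble(bits);
      double updated = (avg == 0.0) ? allocatedBytes : (1 - decay) * avg + decay * allocatedBytes;
      return Double.doubleToLongBits(updated);
    });
  }

  /** True if a running query's allocation so far should trip the limit. */
  public boolean exceeds(long allocatedSoFar) {
    double recentAvg = Double.longBitsToDouble(recentAvgBits.get());
    boolean overHardLimit = allocatedSoFar > hardLimitBytes;
    boolean farAboveRecent = recentAvg > 0 && allocatedSoFar > recentAvg * overshootFactor;
    return overHardLimit || farAboveRecent;
  }
}
{code}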


