[ https://issues.apache.org/jira/browse/LUCENE-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16407861#comment-16407861 ]

Amir Hadadi commented on LUCENE-8213:
-------------------------------------

Indeed both are about trading throughput for latency.

However, there is a quantitative difference:

In parallel segment querying you would slice your index into e.g. 5 slices on 
each and every query (see the sketch further below).

Async caching would happen only when caching is needed, and even then only 
when the ratio between the caching cost and the cost of the lead query is 
large enough to justify async execution.

I would expect the number of additional async tasks triggered by async caching 
to be 100x smaller than the number of parallel segment querying tasks.
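
For reference, here is roughly what the parallel segment querying side of the 
comparison looks like: passing an executor to IndexSearcher makes it search 
the leaf slices concurrently, so the fork/join overhead is paid on every 
single query. This is only a sketch; the index path and pool size are 
placeholders.

{code:java}
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ParallelSegmentSearch {
  public static void main(String[] args) throws Exception {
    // Dedicated pool for intra-query (per-segment) parallelism.
    ExecutorService pool = Executors.newFixedThreadPool(5);
    try (DirectoryReader reader =
             DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
      // With an executor, IndexSearcher searches the leaf slices concurrently
      // on every query, whether the clauses are cached or not.
      IndexSearcher searcher = new IndexSearcher(reader, pool);
      TopDocs top = searcher.search(new MatchAllDocsQuery(), 10);
      System.out.println("hits: " + top.totalHits);
    } finally {
      pool.shutdown();
    }
  }
}
{code}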

Coupling these features together would mean that someone who is not willing to 
pay the overhead of parallel segment querying would not be able to use async 
caching.
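
To make the proposal a bit more concrete, here is a minimal, hypothetical 
sketch of the kind of decision described above. It is not the LRUQueryCache 
API; all names, the cost estimates and the threshold are illustrative only.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Hypothetical sketch, not Lucene code: offload expensive cache population
// to a small dedicated pool instead of blocking the querying thread.
public class AsyncCachingSketch {

  // Cache population is rare compared to query traffic, so a small pool suffices.
  private final ExecutorService cachingPool = Executors.newFixedThreadPool(1);

  // Offload only when building the cache entry is much more expensive than the
  // lead iterator the query would otherwise use (threshold is illustrative).
  private static final double OFFLOAD_COST_RATIO = 10.0;

  <T> T search(Supplier<T> leadQuery,       // cheap, selective path (e.g. doc values)
               long leadCost,               // estimated cost of the lead iterator
               long cacheBuildCost,         // estimated cost of building the cache entry
               boolean shouldCache,         // whatever the caching policy decides
               Runnable buildCacheEntry) {  // the work that is currently done inline
    if (shouldCache) {
      if ((double) cacheBuildCost / Math.max(1L, leadCost) >= OFFLOAD_COST_RATIO) {
        // Expensive relative to the query: build the entry in the background so
        // this query keeps its IndexOrDocValuesQuery-style latency.
        cachingPool.submit(buildCacheEntry);
      } else {
        // Cheap enough: cache inline, exactly as today.
        buildCacheEntry.run();
      }
    }
    // The query itself still runs on the calling thread with the lead iterator.
    return leadQuery.get();
  }
}
{code}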

> offload caching to a dedicated threadpool
> -----------------------------------------
>
>                 Key: LUCENE-8213
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8213
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/query/scoring
>    Affects Versions: 7.2.1
>            Reporter: Amir Hadadi
>            Priority: Minor
>              Labels: performance
>
> IndexOrDocValuesQuery allows combining non-selective range queries with a 
> selective lead iterator in an optimized way. However, at some point the range 
> query gets cached by a querying thread in LRUQueryCache, which negates the 
> optimization of IndexOrDocValuesQuery for that specific query.
> It would be nice to have a caching implementation that offloads the work to a 
> different thread pool, so that queries involving IndexOrDocValuesQuery would 
> have consistent performance characteristics.
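
For context, a typical use of IndexOrDocValuesQuery looks like the sketch 
below (field names and values are illustrative): the range is expressed both 
as a points query and as a slow doc-values query, and Lucene picks whichever 
fits how the clause ends up being consumed.

{code:java}
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class IndexOrDocValuesExample {
  // "timestamp" is assumed to be indexed both as a LongPoint and as a
  // SortedNumericDocValuesField; "user_id" is the selective clause.
  public static Query recentDocsForUser(String userId, long fromMillis, long toMillis) {
    Query pointsRange = LongPoint.newRangeQuery("timestamp", fromMillis, toMillis);
    Query dvRange =
        SortedNumericDocValuesField.newSlowRangeQuery("timestamp", fromMillis, toMillis);
    // Points query when the range leads the iteration, doc-values query when a
    // more selective clause (the term below) leads and the range only verifies.
    Query range = new IndexOrDocValuesQuery(pointsRange, dvRange);
    return new BooleanQuery.Builder()
        .add(new TermQuery(new Term("user_id", userId)), Occur.MUST) // selective lead
        .add(range, Occur.MUST)                                      // non-selective range
        .build();
  }
}
{code}

Once the range clause does get cached, though, LRUQueryCache builds the full 
DocIdSet on the querying thread, which is the latency spike described above.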


