Ideally, once the user created the JIRA issue, it would have been shared in this thread. Here it is: https://issues.apache.org/jira/browse/SOLR-17699 (fix for 9.9)
Nice response Hoss, particularly sharing the "filter" trick, which may be
useful here.

On Wed, Mar 19, 2025 at 2:18 PM Chris Hostetter <hossman_luc...@fucit.org> wrote:
>
> : We are experiencing high query times in Solr when using an fq filter that
> : combines an OR condition with frange. The response time significantly
> : increases compared to queries that do not use this combination.
> :
> : Query Example
> : fq={!cache=false tag=prm}field:value OR {!frange l=1 u=1 v=$funcQuery}
>
> The first thing I want to make sure you understand is that the {!...}
> syntax is prefix based, so {!cache=false tag=prm} applies to the *entire*
> "fq" param -- not just the "field:value" boolean clause -- I want to
> clarify that because the way you describe breaking the query down implies
> you think otherwise...
>
> : Observations
> : 1) When we use just {!frange l=1 u=1 v=$funcQuery}, the query executes
> : quickly [20ms].
>
> Details matter -- you say that "when we use just ..." the frange portion,
> it's quick -- but you're not clarifying *how* you use the frange portion
> by itself.
>
> If you mean a request with 'fq={!frange l=1 u=1 v=$funcQuery}' is quick,
> that's likely because it winds up being slow the first time, and
> then cached in the filterCache for very fast subsequent use.
>
> : Question
> : 1) Why does the OR operation with frange cause a significant increase in
> : query time?
>
> The way an frange query works is that it scans every document in the index
> to compute the function value, and then checks if it is in range.
>
> In general, when Lucene/Solr execute a two clause boolean "AND" query,
> the searcher can tell individual clauses to "skip ahead" based on the
> current match point from the other AND clause.
>
> So in the case of "X:Y AND {!frange...}" where X:Y only matches a small
> subset of the index, the frange doesn't have to be computed for every
> document in the index; it gets to skip ahead to the first match of "X:Y",
> and then skip ahead to the second match of "X:Y", etc...
>
> With "X:Y OR {!frange...}" it still has to compute the function for every
> document in the index ... and when you combine that with the "cache=false"
> (on the entire "fq"), that full-index scan can't be cached and re-used, so
> it is repeated on every request.
>
> : 2) Are there any optimizations or alternative query structures that could
> : improve performance?
>
> You can use the special (and slightly odd) "filter()" syntax in the
> default parser to say that a particular boolean clause should be cached as
> a non-scoring clause (and that cache hit will be re-used even when used in
> other boolean queries)...
>
> fq=X:Y OR filter({!frange...})
> fq=X:Z OR filter({!frange...}) // the frange will be a filterCache hit
>
>
> -Hoss
> http://www.lucidworks.com/
>
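To make that concrete, a sketch of how the original fq from this thread might be rewritten to use the filter() trick (field:value, tag=prm and $funcQuery are just the placeholders from the question; the actual function stays in whatever the request passes as the funcQuery param):

fq={!cache=false tag=prm}field:value OR filter({!frange l=1 u=1 v=$funcQuery})

The cache=false prefix should still apply to the whole fq, while the filter(...) clause is cached independently in the filterCache, so the full-index scan for the frange would only be paid the first time and re-used by later requests that contain the same filter(...) clause.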