High Query Time in Solr When Using OR with frange in fq

Hi Team,

We are experiencing high query times in Solr when using an fq filter that
combines an OR condition with frange. The response time increases
significantly compared to queries that do not use this combination.

Query example:

fq={!cache
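[For reference, a minimal SolrJ sketch of the kind of filter the subject line describes; the collection name, field names, and function are made up, since the original query is cut off above. frange has to evaluate its function per candidate document, and OR-ing it with a term clause means the filter cannot be answered from cached bitsets alone:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FrangeOrFilter {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("*:*");
            // Hypothetical reconstruction of the slow filter: a plain term
            // clause OR-ed with an frange clause via the _query_ hook.
            // frange evaluates sum(price_i,fee_i) per candidate document,
            // so this fq cannot be served purely from the filter cache.
            q.addFilterQuery(
                "status:active OR _query_:\"{!frange l=0 u=100}sum(price_i,fee_i)\"");
            QueryResponse rsp = solr.query(q);
            System.out.println("hits: " + rsp.getResults().getNumFound());
        }
    }
}

If the frange clause alone is the expensive side, the usual advice is cache=false plus cost=100 or higher so it runs as a post-filter over fewer documents; note, though, that post-filtering only applies to a top-level fq clause, not inside an OR like this one.]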
> tomasflo...@gmail.com wrote:
>
> > > Then, the long answer is that Apache Solr already implements approaches
> > > for 'early termination', such as Block Max WAND from Solr 8 (thanks,
> > > Lucene, for this:
> > > https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148045/), to optimise
> > > query time and 'skip un-worthy candidates'.
>
> Note that this is not used by d

Then, the long answer is that Apache Solr already implements approaches for
'early termination', such as Block Max WAND from Solr 8 (thanks, Lucene, for
this: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148045/), to optimise
query time and 'skip un-worthy candidates'.
For your se
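[As an illustration of the above, a sketch of how a client can opt into that early termination; minExactCount (added around Solr 8.6, if memory serves) relaxes the exact-hit-count guarantee so Block Max WAND can skip documents that cannot make the top N. The collection name and query are made up:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class BlockMaxWandExample {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("title:solr query performance");
            q.setRows(100);
            // The hit count only needs to be exact up to 100; beyond that,
            // Lucene's Block Max WAND may skip scoring documents that
            // cannot reach the top 100. Applies to score-sorted queries.
            q.set("minExactCount", 100);
            System.out.println(solr.query(q).getResults().getNumFound());
        }
    }
}]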
Please include your schema and some sample queries so we have specifics to go
on.
> On Feb 8, 2023, at 9:00 AM, Mike wrote:
>
> I have a standalone Solr server and an index of millions of documents.
> Some queries, e.g. for terms that occur in more than 1 million documents,
> take a long time.
> I only need the

I have a standalone Solr server and an index of millions of documents.
Some queries, e.g. for terms that occur in more than 1 million documents,
take a long time.
I only need the first 100 results; can I make Solr stop ranking and sorting
after the first 100 hits?
How can I limit the search time of sometimes more than 10
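[For what it's worth, a minimal SolrJ sketch of the two knobs usually pointed at for this: rows to cap the result count, and timeAllowed to cap the search time, at the price of possibly partial results. Collection name and query are made up:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BoundedQuery {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("body:some common term");
            q.setRows(100);            // only return the first 100 results
            q.setTimeAllowed(10_000);  // stop searching after ~10 seconds
            QueryResponse rsp = solr.query(q);
            // partialResults=true in the header means the limit was hit
            System.out.println(rsp.getHeader().get("partialResults"));
        }
    }
}]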
You can have 40+ million documents and half a terabyte of index and still not
need Spark or SolrCloud or sharding, and still get sub-second results. Don't
overthink it until it becomes a real issue.
> On Jan 29, 2023, at 1:53 PM, marc nicole wrote:
>
> Much appreciated.

Much appreciated.

On Sun, Jan 29, 2023 at 17:47, Andy Lester wrote:
> On Jan 29, 2023, at 4:45 AM, marc nicole wrote:
>
> Let's say you're right about the 200 rows being too few. From which row
> count would I see the difference reflected in the results as expected
> (Solr faster)?

It depends on how much data is in each record, but I'd think 10,000 - 100,000

Let's say you're right about the 200 rows being too few. From which row count
would I see the difference reflected in the results as expected (Solr
faster)?
On Sun, Jan 29, 2023 at 00:34, Jan Høydahl wrote:

For 200 values you need neither Spark nor Solr. A plain Java in-mem filter is
much simpler 😉
Sorry, you cannot benchmark like this. You have to select a real use case and
then select technology based on the requirements at hand. And to benchmark
you must use a realistic data set.
Jan Høydahl
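[For illustration, the kind of plain-Java in-memory filter Jan means above; a sketch only, with the file name, column index, and match value made up:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class InMemFilter {
    public static void main(String[] args) throws Exception {
        // Hypothetical CSV: ~200 rows, attribute to filter on in column 2.
        List<String> rows = Files.readAllLines(Path.of("data.csv"));
        long start = System.nanoTime();
        List<String> matches = rows.stream()
                .skip(1) // skip the header row
                .filter(r -> r.split(",")[2].equals("someValue"))
                .collect(Collectors.toList());
        System.out.printf("%d matches in %.2f ms%n",
                matches.size(), (System.nanoTime() - start) / 1e6);
    }
}

At this scale the whole dataset fits in memory, so a linear scan beats any index; that is the point of Jan's remark.]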
Hello guys,
I have been playing with Solr lately, and I tested it over a CSV file of
about 200 rows (that I indexed in Solr). I also read the file in Spark,
performed filtering over an attribute value, and computed the processing time
when the dataset is loaded from the file system vs. from Solr.
I find the