It would be nice to have more clarity regarding the problem you're trying
to solve. A few questions:
1. Why do you need to return so many search results at the same time? If
it's a typical search use case, could you not work with some manageable list
of documents, say 50/100? But I'm guessing this is not a typical search
that you're planning to support.
As far as I know there is no practical upper limit to the number of
documents, only a limit to the amount of memory available in your server
and client (plus network timeouts, etc.).
Deep paging slows down as you page deeper into the result set, so use
cursors in that case; otherwise just test until you hit an OOM.
> 1. Why do you need to return so many search results at the same time? If
> it's a typical search use case, could you not work with some manageable list
> of documents, say 50/100? But I'm guessing this is not a typical search
> that you're planning to support.
I’d just like to point out that Neh
My team is in the process of moving from Solr 6.6 to 8.11.1 and we have
noticed some weirdness (wrong parent docs in the results) when using the
{!parent} block-join query parser. We have multiple 'root' entities
configured in DIH and I'm wondering if this could be the cause or if
there is a bug at play w
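For anyone reproducing this, a minimal block-join parent query via SolrJ
looks roughly like the following; the doc_type discriminator and field names
are hypothetical stand-ins for the real schema:

    import org.apache.solr.client.solrj.SolrQuery;

    public class BlockJoinExample {
        public static SolrQuery parentsOfMatchingChildren() {
            // {!parent} returns the parents of the children matched by the inner
            // query. The "which" filter must match ALL parent docs and NO child
            // docs; with multiple root entities indexed via DIH, a "which" filter
            // that also matches docs from another root entity is a classic cause
            // of wrong parents appearing in the results.
            return new SolrQuery("{!parent which=\"doc_type:parent\"}child_field:foo");
        }
    }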
Hi Neha,
My shot in the dark:
Have you indexed any document containing that field?
Are you using dynamic fields? (exact field name should have priority over
dynamic fields, but just to double-check).
Can you show us your schema (at least the part related to that field definition)?
Cheers
Neha,
On 4/27/22 16:35, Neha Gupta wrote:
I have different cores with different numbers of documents.
1) Core 1: 227625 docs, each with approx. 10 string fields.
2) Core 2: approx. 3.5 million docs, each with 3 string fields.
We still have no idea about the size of t
Try searching for that field and/or returning that field in the response. I’ve
seen some issues with the schema browser not showing data that I know is in the
index. I think it is related to docValues, but I haven’t nailed it down.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
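Concretely, that check amounts to something like this in SolrJ (core and field
names are placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FieldCheck {
        public static void main(String[] args) throws Exception {
            HttpSolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

            // Match any doc that has a value for the field...
            SolrQuery q = new SolrQuery("myfield:[* TO *]");
            // ...and explicitly ask for the field back (the fl parameter).
            q.setFields("id", "myfield");
            QueryResponse rsp = client.query(q);
            System.out.println("docs with a value: " + rsp.getResults().getNumFound());
            client.close();
        }
    }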
Neha,
As Alessandro already mentioned, please share your schema if possible.
A wild guess: a field defined with indexed=true stored=false can give the
impression that a document is missing the field, because it is searchable but
never returned. Taking a look at the schema would help clarify that.
Thanks,
Rahul
On Thu,
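One way to check this without eyeballing schema.xml is the Schema API; via
SolrJ, roughly (core and field names are placeholders):

    import java.util.Map;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.schema.SchemaRequest;
    import org.apache.solr.client.solrj.response.schema.SchemaResponse;

    public class SchemaFieldCheck {
        public static void main(String[] args) throws Exception {
            HttpSolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

            // Fetch the live definition of a single field.
            SchemaRequest.Field req = new SchemaRequest.Field("myfield");
            SchemaResponse.FieldResponse rsp = req.process(client);
            Map<String, Object> field = rsp.getField();
            // Note: attributes left at their defaults may be absent from the map.
            System.out.println("stored = " + field.get("stored"));
            client.close();
        }
    }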
The original post had a screenshot from the schema browser showing StrField,
indexed=true, stored=true, omitTermFreqAndPositions=true, omitNorms=true,
sortMissingLast=true.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Apr 28, 2022, at 8:17 AM, Ra
Hopefully this is the appropriate forum for a ZooKeeper architecture question.
I have two servers, a primary server and a failover server. Right now my
ZooKeeper is on the primary server, so if it goes down Solr would not work.
Should I have a ZooKeeper on both the primary and failover servers? Shou
Ideally you should run ZooKeeper on three (small) servers separate from
Solr.
You should always have an odd number of ZK servers so that a quorum vote
cannot tie; a three-node ensemble stays available as long as any two nodes are up.
On Thu, Apr 28, 2022 at 2:26 PM Heller, George A III CTR (USA)
wrote:
> Hopefully this is the appropriate forum for a Zookeeper archite
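Once a three-node ensemble is running, point Solr clients at all three hosts so
that losing any single machine doesn't matter. A SolrJ sketch (hostnames and
collection name are hypothetical):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;

    public class ZkConnect {
        public static void main(String[] args) throws Exception {
            // All three ensemble members; the ZooKeeper service stays up as
            // long as a majority (2 of 3) of them is reachable.
            List<String> zkHosts = Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181");
            CloudSolrClient client =
                new CloudSolrClient.Builder(zkHosts, Optional.empty()).build();
            client.setDefaultCollection("mycollection");
            client.close();
        }
    }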
Hi Walter,
I already tried returning that field in the response but it was not present.
Thanks and Regards
Neha Gupta
On 28/04/2022 17:16, Walter Underwood wrote:
Try searching for that field and/or returning that fields. I’ve seen some
issues with the schema browser not showing data that I
Hello Alessandro,
I indexed a field with the same name, but in a different core.
I am not using dynamic fields, and the schema is as below.
Thanks
Neha Gupta
On 28/04/2022 16:37, Alessandro Benedetti wrote:
Hi Neha,
My shot in the dark:
Have you indexed any document containing that field?
Are you us
First of all, thanks to all who have replied to this question.
Just to make things clear, my use case is not a typical one, i.e. I am not
going to show the first 50 or 100 results.
My use case is to create a CSV file (matrix-like) depending on what the
user filters from the web application, and the resulting set can range
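For this kind of bulk export it may also be worth testing Solr's CSV response
writer (wt=csv), which formats the rows server-side instead of in your
application. A rough sketch using Java's built-in HTTP client, with URL, core,
and field names as placeholders (for truly unbounded result sets, a cursorMark
loop as shown earlier is the safer route):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class CsvExport {
        public static void main(String[] args) throws Exception {
            // fl picks the CSV columns, rows caps the result size.
            String url = "http://localhost:8983/solr/mycore/select"
                    + "?q=*:*&fl=id,field_a,field_b&rows=1000000&wt=csv";
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            // Stream the response body straight to disk instead of buffering it.
            client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("export.csv")));
        }
    }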
Hello,
I am using Solr 7.7.2. Is it possible to stop a long-running request?
Using the "timeAllowed" parameter would return partial results, but I want
the query to terminate outright and ideally throw an exception, so as to not
utilize additional resources.
Thanks,
Rahul
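For context, timeAllowed is set per request; in SolrJ that looks like the
following (the query and limit are made up). As noted above, it caps the
search but returns partial results rather than failing:

    import org.apache.solr.client.solrj.SolrQuery;

    public class TimeLimitedQuery {
        public static SolrQuery build() {
            SolrQuery q = new SolrQuery("body:*something*");
            // Stop searching after 5 seconds; Solr returns whatever it found so
            // far and sets partialResults=true in the response header.
            q.setTimeAllowed(5000);
            return q;
        }
    }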
Neha,
On 4/28/22 16:54, Neha Gupta wrote:
Just to make things clear, my use case is not a typical one, i.e. I am not
going to show the first 50 or 100 results.
My use case is to create a CSV file (matrix-like) depending on what the
user filters from the web application, and the resulting set can range
> Firing SolrRequest again and again and asking for results (maybe 10-100
> at a time) from the web application will increase the amount of time until
> the CSV file is done.
Even if your assumption were correct, is the export of a CSV file really such
a time-critical task? I don't think the gain would be so huge
It depends ;-)
If you are directly querying a single Solr node, then the additional memory
usage is (max_results * 4) bytes if not retrieving scores. It's just
a single int per document to keep track of the docids that matched the
query (so even 30 million matches is only about 120 MB). Documents are
"streamed" to the client... the actual stored fields
The 30+ million records I retrieved were always from a single standalone
Solr node, and yes, you can do that frequently and it doesn't have an impact
on the rest of the searches happening, assuming you have enough memory to
deal with it. There is nothing wrong with requesting every one of your
documents