> On 5 Sep 2023, at 16:17, Michael Gibney wrote:
>
>> Note: for a test search that retrieves only 10 documents, qtime is very low
>> (2 msec) but the full request time to get javabin or json data is very slow
>> (several seconds).
>
> Reading between the lines here: does "full request" return a larger
> number of documents? How many? Are you attempting t
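The gap Michael is probing comes from how Solr accounts for time: qtime covers query execution only, while reading stored fields and serializing the response body happens afterwards and shows up only in the overall request time. A toy illustration of that split (plain Python standing in for Solr internals; the field names and sizes are made up):

```python
# Illustrative toy only -- not Solr code. It mimics why Solr's reported
# qtime can be ~2 ms while the full request takes seconds: qtime covers
# query execution (finding matching doc ids); reading stored fields and
# serializing the response body (JSON here, javabin in SolrJ) happens
# after qtime is recorded.
import json
import time

# Hypothetical result set: 10 matching docs, each with 5000 stored fields.
docs = [{f"field_{i}": f"value_{i}" for i in range(5000)} for _ in range(10)]

t0 = time.perf_counter()
matches = list(range(10))                        # stands in for the search ("qtime")
t1 = time.perf_counter()
payload = json.dumps([docs[d] for d in matches])  # response writing
t2 = time.perf_counter()

print(f"'qtime': {(t1 - t0) * 1000:.3f} ms; "
      f"response writing: {(t2 - t1) * 1000:.3f} ms")
```

With SolrJ, the same gap can be observed directly as the difference between `QueryResponse.getQTime()` (server-reported query time) and `QueryResponse.getElapsedTime()` (client-observed wall-clock time).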
> On 4 Sep 2023, at 12:08, Ishan Chattopadhyaya wrote:
>
>> What we found is that it is manageable with 1000 added fields per
>> document, but it becomes unusable with 5000 added fields per document.
>
> Can you please elaborate on why it is unusable? Is it possible to reproduce
> outside of your setup? I can think of a solr-bench test, for example.
> Also, is the r
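To make the 1000-vs-5000-fields behaviour reproducible outside the original setup, one option is a small generator for synthetic documents with a configurable number of fields, which could feed a solr-bench run or a plain POST to `/update`. A minimal sketch — the `attr_*_s` dynamic-field pattern, doc count, and values are assumptions for illustration, not details from the original report:

```python
# Generate synthetic test documents with many "added" fields, e.g. to
# reproduce the reported slowdown outside the original environment.
import json

def make_doc(doc_id: int, num_fields: int) -> dict:
    doc = {"id": str(doc_id)}
    # "attr_{i}_s" is a hypothetical dynamic-field pattern; adjust to
    # whatever the schema under test actually defines.
    for i in range(num_fields):
        doc[f"attr_{i}_s"] = f"value_{i}"
    return doc

docs = [make_doc(n, 5000) for n in range(100)]
# This JSON array could be POSTed to /solr/<core>/update.
payload = json.dumps(docs)
print(len(docs), len(docs[0]))  # → 100 5001
```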
On 9/3/23 15:08, Ing. Andrea vettori wrote:
> Note: for a test search that retrieves only 10 documents, qtime is very low
> (2 msec) but the full request time to get javabin or json data is very slow
> (several seconds).

Lucene writes stored fields in a compressed format. Decompressing
thousands
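The point above can be sketched generically: Lucene keeps stored fields in compressed blocks (LZ4 by default, DEFLATE with the best-compression mode), so materializing a document means decompressing a whole block even when only a few of its fields are wanted. Here `zlib` stands in for Lucene's stored-fields codec — an illustration only, not Solr's actual code path:

```python
# Illustration: stored fields live in compressed blocks, so reading any
# single field first requires decompressing the whole block it sits in.
# zlib stands in for Lucene's stored-fields codec.
import json
import zlib

doc = {f"field_{i}": f"value_{i}" * 4 for i in range(5000)}
raw = json.dumps(doc).encode()
block = zlib.compress(raw)

# Even to fetch one field, the whole block is decompressed first.
restored = json.loads(zlib.decompress(block))
print(len(raw), len(block), restored["field_42"])
```

With 5000 stored fields per document, this per-document decompression and parsing cost is a plausible reason retrieval dominates the request time even when the search itself is fast.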
Hello,
We’ve been using Solr for our e-commerce platform for many years, and it has
always worked very well with over one million documents and a couple hundred
fields per document; we also do complex faceted searches and it works great.
Now we’re trying to use Solr for another project that builds on