> What we found is that it is manageable with 1000 added fields per
> document, but it becomes unusable with 5000 added fields per document.

Can you please elaborate on why it is unusable? Is it possible to reproduce
outside of your setup? I can think of a solr-bench test, for example.

Also, is the retrieval step slow, or are searches slow too? Can you try
CBOR (9.3) to see if that works better?
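
If you want to compare the two wire formats directly, something along these
lines should isolate the response-writing cost (the core name "products" is
hypothetical; wt=cbor needs 9.3+):

  # total wall-clock time for the same query, javabin vs CBOR
  curl -s -o /dev/null -w '%{time_total}\n' \
    'http://localhost:8983/solr/products/select?q=*:*&rows=10&fl=*&wt=javabin'
  curl -s -o /dev/null -w '%{time_total}\n' \
    'http://localhost:8983/solr/products/select?q=*:*&rows=10&fl=*&wt=cbor'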

On Mon, 4 Sept, 2023, 5:44 am Ing. Andrea Vettori, <a.vett...@b2bires.com>
wrote:

> Hello,
> We’ve been using Solr for our e-commerce platform for many years, and it
> has always worked very well with over one million documents and a couple
> hundred fields per document; we also run complex faceted searches, and it
> works great.
>
> Now we’re trying to use Solr for another project that builds on the same
> data (so around one million documents) but adds many numeric fields that we
> want to retrieve and calculate stats on (sums, averages, …).
> What we found is that it is manageable with 1000 added fields per document,
> but it becomes unusable with 5000 added fields per document.
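>
> Roughly, the retrieval requests look like this (core and field names are
> illustrative; the sums and averages are then computed from the returned
> stored values):
>
>   curl 'http://localhost:8983/solr/products/select?q=*:*&rows=100&fl=id,stat_*'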
>
> Fields are a mix of tfloat and tint (20 dynamic field definitions that
> expand to 5000 concrete fields per document), stored but not indexed.
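>
> For reference, the definitions in schema.xml look roughly like this (the
> name patterns are illustrative):
>
>   <dynamicField name="stat_*_f" type="tfloat" indexed="false" stored="true"/>
>   <dynamicField name="stat_*_i" type="tint" indexed="false" stored="true"/>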
>
> Core size on disk is around 15 GB. We dedicated 6 GB of heap to Solr; the
> server is a dual-processor machine with several cores (I think 40 total)
> that are shared with another application, but CPU usage is low.
>
> I’d like to know if there’s some configuration or best practice we should
> follow to improve performance in our case.
> Maybe it’s simply not advisable to use this many fields?
>
> Note: for a test search that retrieves only 10 documents, QTime is very
> low (2 ms), but the full request time to get the javabin or JSON data is
> very slow (several seconds).
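>
> The gap is easy to see by comparing wall-clock time with and without the
> stored fields (core name is hypothetical); QTime stays low in both cases
> because it does not include loading and writing the stored fields:
>
>   time curl -s -o /dev/null 'http://localhost:8983/solr/products/select?q=*:*&rows=10&fl=id'
>   time curl -s -o /dev/null 'http://localhost:8983/solr/products/select?q=*:*&rows=10&fl=*'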
>
> Thank you
>
> —
> Ing. Andrea Vettori
> Sistemi informativi
>
