On 8/12/2021 12:02 AM, Satya Nand wrote:
Does this alternate format use a different data structure, other than the
bitmap, to store the document IDs for filters with low document counts?
That is, would the size constraint (filterCache size) apply only to the
bitmap entries, or to this alternate structure too, or their
Thanks, Shawn.
This makes sense. Filter queries with high hit counts could be the trigger
for the out-of-memory errors; that's why it is so infrequent.
We will revisit our filter queries and further try to reduce the filter
cache size.
One question, though:
> There is an alternate format for filterCache entries
On 8/11/2021 6:04 AM, Satya Nand wrote:
*Filter cache stats:*
https://drive.google.com/file/d/19MHEzi9m3KS4s-M86BKFiwmnGkMh3DGx/view?usp=sharing
This shows the current size as 3912, almost full.
There is an alternate format for filterCache entries that just lists
the IDs of the matching doc
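The tradeoff between the two entry formats can be sketched with back-of-the-envelope arithmetic. This is a rough model only, not Solr's actual encoding; the 4-bytes-per-ID cost for the list format is an assumption for illustration:

```python
# Rough memory model for the two filterCache entry formats discussed above.
# maxDoc is the document count reported in this thread; the 4-bytes-per-ID
# list format is an assumption (Lucene's actual encoding may differ).

MAX_DOC = 101_893_353  # document count from the thread

def bitset_bytes(max_doc: int) -> int:
    """Full bitmap: one bit per document in the index, regardless of hits."""
    return (max_doc + 7) // 8

def id_list_bytes(num_hits: int) -> int:
    """Sorted list of matching doc IDs, assuming 4 bytes per int ID."""
    return 4 * num_hits

print(bitset_bytes(MAX_DOC))   # 12736670 bytes (~12.1 MiB) per bitmap entry
print(id_list_bytes(1_000))    # 4000 bytes for a sparse filter with 1000 hits
# Under this model the list format is smaller whenever num_hits < maxDoc / 32:
print(id_list_bytes(MAX_DOC // 32) <= bitset_bytes(MAX_DOC))  # True
```

The point for the question above: only dense filters pay the full ~12.7 MB bitmap cost, so a cache full of sparse filters uses far less heap than entry count alone suggests.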
Hi Shawn,
Please find the images.
*Filter cache stats:*
https://drive.google.com/file/d/19MHEzi9m3KS4s-M86BKFiwmnGkMh3DGx/view?usp=sharing
*Heap stats*
https://drive.google.com/file/d/1Q62ea-nFh9UjbcVcBJ39AECWym6nk2Yg/view?usp=sharing
I'm curious whether the 101 million document count is for one
On 8/10/2021 11:17 PM, Satya Nand wrote:
Thanks for explaining it so well. We will work on reducing the filter
cache size and auto warm count.
Though I have one question.
If your configured 4000 entry filterCache were to actually fill up, it
would require nearly 51 billion bytes, and t
Hi Shawn,
Thanks for explaining it so well. We will work on reducing the filter cache
size and auto warm count.
Though I have one question.
> If your configured 4000 entry filterCache were to actually fill up, it
> would require nearly 51 billion bytes, and that's just for the one core
> with 101
Hi Dominique,
Thanks, but I still have one point of confusion; please help me with it.
Pretty sure the issue is caused by cache sizes at new searcher warmup time.
We use a leader-follower architecture with a replication interval of 3 hours.
This means every 3 hours we get a commit and the searcher warms u
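Since every replication cycle opens a new searcher and autowarming re-executes cached filter entries against it, the cache settings bound the heap spike at each warmup. A possible shape for the change being discussed, in solrconfig.xml (the numbers below are illustrative examples, not values taken from this thread):

```xml
<!-- Illustrative filterCache settings; sizes here are examples only.
     With ~12.7 MB per dense bitmap entry on this core, 256 entries cap
     the cache near 3.3 GB, and a small autowarmCount limits the work
     done at each 3-hour new-searcher warmup. -->
<filterCache class="solr.CaffeineCache"
             size="256"
             initialSize="256"
             autowarmCount="16"/>
```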
On 8/10/2021 1:06 AM, Satya Nand wrote:
Document count is 101893353.
The OOME exception confirms that we are dealing with heap memory. That
means we won't have to look into the other resource types that can cause
OOME.
With that document count, each filterCache entry is 12736670 bytes, plu
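The figures quoted here follow from simple arithmetic on the document count, shown below as a sketch (one bit per document per bitmap entry):

```python
# Arithmetic behind the "nearly 51 billion bytes" figure in this thread:
# a dense filterCache bitmap entry holds one bit per document in the core.

MAX_DOC = 101_893_353      # document count from the thread
CACHE_SIZE = 4_000         # configured filterCache max entries

bytes_per_entry = (MAX_DOC + 7) // 8      # round up to whole bytes
worst_case = bytes_per_entry * CACHE_SIZE # every slot filled with a bitmap

print(bytes_per_entry)  # 12736670 -- matches the per-entry size above
print(worst_case)       # 50946680000 -- "nearly 51 billion bytes"
```

This worst case assumes every entry is a dense bitmap; sparse entries stored as ID lists (see the alternate format discussed later in the thread) would take less.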
Pretty sure the issue is caused by cache sizes at new searcher warmup time.
Dominique
On Tue, Aug 10, 2021 at 09:07, Satya Nand
wrote:
> Hi Dominique,
>
> You don't provide information about the number of documents. Anyway, all
>> your cache sizes, and especially the initial sizes, are big. Caches are st
Hi Shawn,
>
>
> Do you have the actual OutOfMemoryError exception? Can we see that?
> There are several resources other than heap memory that will result in
> OOME if they are exhausted. It's important to be investigating the
> correct resource.
*Exception:*
Aug, 04 2021 15:38:36 org.apache.sol
Hi Dominique,
You don't provide information about the number of documents. Anyway, all
> your cache sizes, and especially the initial sizes, are big. Caches are
> stored in the JVM heap.
Document count is 101893353.
About cache size, more is not always better. Did you run some performance
> benchmarks in order
On 8/8/2021 11:43 PM, Satya Nand wrote:
We are facing a strange issue in our Solr system. Most days it keeps
running fine, but once or twice a month we face OutOfMemory errors on the
Solr servers.
We are using a Leader-Follower architecture, one Leader and 4 followers.
Strangely, we get OutOfMemory
Hi,
You don't provide information about the number of documents. Anyway, all
your cache sizes, and especially the initial sizes, are big. Caches are
stored in the JVM heap.
About cache size, more is not always better. Did you run some performance
benchmarks in order to set these values?
Try with the default va