Hello Parshant,

I can't see anything particularly wrong with your query. You could try the CollapseQParserPlugin together with the ExpandComponent instead of grouping and see whether that improves things. You should also check how much memory your caches use, and look for evictions and low hit rates; maybe your caches do not need to be that big. In particular, check the document cache. The documentation says the following: "The size for the documentCache should always be greater than max_results times the max_concurrent_queries, to ensure that Solr does not need to refetch a document during a request. The more fields you store in your documents, the higher the memory usage of this cache will be."
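For example, here is a minimal, untested sketch of a collapse/expand version of the grouped request below (keeping your group field glid and up to 5 documents per group, matching your group.limit=5; all your other parameters would stay unchanged):

q=bags&fq={!collapse field=glid}&expand=true&expand.rows=5&rows=14

With collapse, numFound of the main result set is already the number of distinct groups, so you could drop the equivalent of group.ngroups=true, which is expensive at a cardinality of around 6.2 million. As a rough check on the documentCache rule above: if max_results is on the order of your rows=14 and you assume, say, 100 concurrent queries (an assumption on my part, not a measurement), 14 * 100 = 1400 entries would satisfy the rule, well below your configured size of 25000.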
Also, could you please share the GC log from just before an OOM?
Thanks,
Florin Babeş

On Fri, Mar 26, 2021 at 08:53, Parshant Kumar <parshant.ku...@indiamart.com.invalid> wrote:

> Hi Florin,
>
> Please check the info below and let me know if some improvement can be done.
>
> Query example:
> shards=good&mc1="190850"&mc3="190850A"&mc4="190850B"&mc5="190850LS"&mc6="190850SS"&mc7="190850P"&mc12="190850CA"&mcHigh=190850&mcHighA=190850&mcHighB=190850B&mcHighAB=190850&q=bags&ps=2&rows=14&group=true&group.limit=5&group.field=glid&group.ngroups=true&lat=0&lon=0&spellcheck=true&fq=wt:[0 TO 1299]&fq=-wt:(1259 1279 1289)&fq=id:(someids) OR ( titles:("imsw bags imsw") AND titles:("bagimsw") )&boost=map(query({!dismax qf=id v=$mc3 pf=""}),0,0,map(query({!dismax qf=id v=$mc4 pf=""}),0,0,map(query({!dismax qf=id mm=0 v=$mcHighA pf=""}),0,0,map(query({!dismax qf=id mm=0 v=$mcHighB pf=""}),0,0,map(query({!dismax qf=id v=$mc12 pf=""}),0,0,map(query({!dismax qf=id v=$mc1 pf=""}),0,0,1,1.1),2.0),80.0),105.0),175.0),250.0)&some more similar boosts
>
> Cache configuration:
> <filterCache class="solr.FastLRUCache" size="4000" initialSize="2000" autowarmCount="100" />
> <queryResultCache class="solr.LRUCache" size="30000" initialSize="1000" autowarmCount="100" />
> <documentCache class="solr.LRUCache" size="25000" initialSize="512" autowarmCount="512" />
>
> JVM GC configuration:
> -XX:CICompilerCount=4 -XX:ConcGCThreads=3 -XX:G1HeapRegionSize=8388608 -XX:GCLogFileSize=20971520 -XX:+HeapDumpOnOutOfMemoryError -XX:InitialHeapSize=17179869184 -XX:MarkStackSize=4194304 -XX:MaxHeapSize=17179869184 -XX:MaxNewSize=10301210624 -XX:MinHeapDeltaBytes=8388608 -XX:NumberOfGCLogFiles=9 -XX:+PrintGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:ThreadStackSize=256 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC -XX:+UseGCLogFileRotation
>
> Heap size: 16 GB
>
> Grouping field cardinality: around 6.2 million
>
> On Thu, Mar 25, 2021 at 2:28 PM Florin Babes <babesflo...@gmail.com> wrote:
>
> > Hello,
> > Can you give a query example, the cache configuration, the JVM GC configuration and heap size, and the grouping field cardinality?
> > Thanks,
> >
> > Florin Babeş
> >
> > On Thu, Mar 25, 2021 at 10:16 AM, Parshant Kumar <parshant.ku...@indiamart.com.invalid> wrote:
> >
> > > Yes, we are using grouped queries.
> > >
> > > On Thu, Mar 25, 2021, 1:42 PM Saurabh Sharma <saurabh.infoe...@gmail.com> wrote:
> > >
> > > > Are you doing lots of grouped queries? Sometimes, due to huge data scans, you will see high GC activity, which may lead to OOM errors.
> > > >
> > > > On Thu, Mar 25, 2021, 1:24 PM Parshant Kumar <parshant.ku...@indiamart.com.invalid> wrote:
> > > >
> > > > > We have 4 Solr servers which contain the same data, 100 GB each.
> > > > > Each server has the following configuration:
> > > > >
> > > > > Solr version: 6.5
> > > > > RAM: 96 GB
> > > > > Processors: 14
> > > > > Disk space: 350 GB for the data folder
> > > > >
> > > > > The request rate on our servers is around 20/second.
> > > > >
> > > > > Our servers go OutOfMemory quite often, either when replication completes (not full replication, a partial one) or when there is a spike in request count.
> > > > >
> > > > > It's not the case that it goes OOM with every replication cycle, but sometimes.
> > > > >
> > > > > We are not able to figure out the reason for this.
> > > > > Any help would be appreciated.