We’ve run on AWS instances with 72 CPUs. They all get used. Throughput is linear with the number of CPUs. You need enough free RAM to cache all of the index files in OS file buffers.
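As a rough sanity check, you can compare the on-disk index size against available memory. Below is a minimal sketch, not from the original post; the index path is hypothetical, and note that RAM already holding page cache counts as "available" even though it is not "free".

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class IndexSizeCheck {
        public static void main(String[] args) throws IOException {
            // Hypothetical index location; point this at your core's data/index dir.
            Path indexDir = Paths.get("/var/solr/data/mycore/data/index");
            long totalBytes;
            try (Stream<Path> files = Files.walk(indexDir)) {
                totalBytes = files.filter(Files::isRegularFile)
                                  .mapToLong(p -> p.toFile().length())
                                  .sum();
            }
            // Compare this figure against "available" memory (e.g. from free -h);
            // memory already used by the page cache still counts toward caching the index.
            System.out.printf("Total index size: %.1f GB%n", totalBytes / 1e9);
        }
    }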
The entire point of avoiding locking in the Lucene index is so that multiple threads can read it without contention. We made the same decision in the Ultraseek index design 25 years ago.

We don't do any special JVM tuning. We use the config that Shawn Heisey recommended five years ago. We recently increased the heap from 8 GB to 16 GB.

GC_TUNE=" \
    -XX:+UseG1GC \
    -XX:+ParallelRefProcEnabled \
    -XX:G1HeapRegionSize=8m \
    -XX:MaxGCPauseMillis=200 \
    -XX:+UseLargePages \
    -XX:+AggressiveOpts \
"

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Nov 12, 2021, at 7:41 AM, Deepak Goel <deic...@gmail.com> wrote:
>
> My guess is (please note it is not a benchmark): you would need a lot of
> tuning to make Solr use 32 CPU cores per node. After 4 CPU cores, you
> would have to start tuning Solr, the JVM, your app (requirements), and
> IOPS.
>
> Deepak
> "The greatness of a nation can be judged by the way its animals are
> treated" - Mahatma Gandhi
>
> +91 73500 12833
> deic...@gmail.com
>
> Facebook: https://www.facebook.com/deicool
> LinkedIn: www.linkedin.com/in/deicool
>
> "Plant a Tree, Go Green"
>
> Make In India: http://www.makeinindia.com/home
>
>
> On Fri, Nov 12, 2021 at 8:33 PM Rahul Goswami <rahul196...@gmail.com> wrote:
>
>> Hi,
>> Does anyone have benchmarks on performance as the number of cores on a
>> Solr node goes up? I am trying to get an idea of how many cores per node
>> is too many. Assume a 31 GB heap, an SSD disk, and 32 CPU cores.
>> Preferably non-SolrCloud (aka standalone), but even insights from
>> SolrCloud would be a good start.
>> I am using Solr 7.7.2.
>>
>> Thanks,
>> Rahul
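To illustrate the lock-free read path mentioned above, here is a minimal sketch at the Lucene level: one SearcherManager shared by a pool of threads, with no synchronization around the searches. This is an illustration of the general technique, not anything Solr-specific, and the index path, field, and term are made up.

    import java.nio.file.Paths;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.SearcherManager;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.store.FSDirectory;

    public class ConcurrentSearchDemo {
        public static void main(String[] args) throws Exception {
            // Open the index once; each acquired searcher is a point-in-time
            // snapshot that many threads can search concurrently without locking.
            SearcherManager manager = new SearcherManager(
                    FSDirectory.open(Paths.get("/var/solr/data/mycore/data/index")),
                    null); // null = default SearcherFactory

            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            Query query = new TermQuery(new Term("title", "solr")); // hypothetical field/term

            for (int i = 0; i < 1000; i++) {
                pool.submit(() -> {
                    IndexSearcher searcher = manager.acquire(); // ref-counted, no lock contention
                    try {
                        return searcher.count(query);
                    } finally {
                        manager.release(searcher);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            manager.close();
        }
    }

Because readers are immutable snapshots, adding threads adds throughput until you run out of CPUs or the index no longer fits in the page cache.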