It is just a goal… however, I would not tune the number of regions or the region size yet. Simply specify the GC algorithm and the max heap size. Try to tune other options only if there is a need, only one at a time (otherwise it is difficult to determine cause and effect), and have a performance testing framework in place.
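For example, a minimal sketch of such a starting point on a plain JVM (the heap size and jar name are illustrative assumptions, not recommendations):

    # specify only the GC algorithm and the max heap size to begin with
    java -XX:+UseG1GC -Xmx100g -jar your-app.jar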
Thanks, I'll check them out.
Curious though, the official G1GC page
https://www.oracle.com/technical-resources/articles/java/g1gc.html says
that there must be no more than 2048 regions and that the region size is
limited to 32 MB.
That's strange, because our heaps go up to 100 GB and that would require 64 MB
regions.
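For what it's worth, the arithmetic behind that number (a rough sketch; as the reply above notes, 2048 regions is a goal rather than a hard cap):

    100 GB / 2048 regions ≈ 50 MB per region
    50 MB rounded up to the next power of two = 64 MB
    at the documented 32 MB maximum: 100 GB / 32 MB ≈ 3200 regions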
If you run tests with newer Java versions, you can also try:
- UseNUMA: -XX:+UseNUMA. See https://openjdk.org/jeps/345
You can also assess the new Java GC algorithms:
- -XX:+UseShenandoahGC: works with terabyte-scale heaps and is more memory
efficient than ZGC for heaps < 32 GB (see the sketch after this list). See also:
https://develo
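A minimal sketch of enabling Shenandoah in Spark, assuming a JDK build that includes it (flag choice is illustrative; -XX:+UseNUMA can be appended to the same extraJavaOptions strings):

    spark-submit \
      --conf "spark.driver.extraJavaOptions=-XX:+UseShenandoahGC" \
      --conf "spark.executor.extraJavaOptions=-XX:+UseShenandoahGC" \
      ...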
Hi Faiz,
We find G1GC works well for some of our workloads that are Parquet-read
intensive, and we have been using G1GC with Spark on Java 8 already
(spark.driver.extraJavaOptions and spark.executor.extraJavaOptions=
"-XX:+UseG1GC"), while currently we are mostly running Spark (3.3 and higher)