On Tue, Jan 9, 2018 at 10:53 AM, Leon Rosenberg <rosenberg.l...@gmail.com> wrote:
> On Mon, Jan 8, 2018 at 10:26 AM, Mark Thomas <ma...@apache.org> wrote:
>
>> On 08/01/18 15:16, Christopher Schultz wrote:
>>
>> <snip/>
>>
>> >> Therefore, the first time that the GC runs, the process can take
>> >> longer. Also, the heap is more likely to be fragmented and require
>> >> a heap compaction. To avoid that, till now my strategy is to:
>> >> - Start the application with the minimum heap size that the
>> >>   application requires
>> >> - When the GC starts up, it runs frequently and efficiently
>> >>   because the heap is small
>> >
>> > I think this is a reasonable expectation for someone who doesn't
>> > understand the Black Art of Garbage Collection, but I'm not sure it's
>> > actually true. I'm not claiming that I know any better than you do,
>> > but I suspect that the collector takes its parameters very seriously,
>> > and when you introduce artificial constraints (such as a smaller
>> > minimum heap size), the GC will attempt to respect those constraints.
>> > The reality is that those constraints are completely unnecessary; you
>> > have only imposed them because you think you know better than the GC
>> > algorithm.
Thank you all for your responses. Well, most of our clients are running on the IBM J9 JVM, and that is what IBM recommends :):
https://www.ibm.com/support/knowledgecenter/SSYKE2_9.0.0/com.ibm.java.multiplatform.90.doc/diag/understanding/mm_heapsizing_initial.html

We have started moving our clients from WAS to a Tomcat + HotSpot JDK 8 platform; that's why I am here, learning about it and throwing out questions :).

One thing about memory allocation by the OS: if I set up different values for initial and max, then after starting up the JVM, Windows *reserves* the max amount of memory exclusively for the JVM. I can see that in the Private Bytes counter, so I believe there is no chance of an OOM at the OS level. What I am more interested in is the cost of heap expansion in the HotSpot JVM.

>> Generally, the more memory available, the more efficient GC is. The
>> general rule is you can optimise for any two of the following at the
>> expense of the third:
>> - low pause time
>> - high throughput
>> - low memory usage
>>
>> It has been a few years since I listened to the experts talk about it,
>> but a good rule of thumb used to be that you should size your heap 3-5
>> times bigger than the minimum heap used once the application memory
>> usage reaches steady state (i.e. the minimum value of the sawtooth on
>> the heap usage graph).
>>
>
> Actually G1, which is very usable with Java 8 and the default in JDK 9,
> doesn't produce the sawtooth graph anymore.
> I also think the amount of memory has less influence on GC performance
> in G1 or Shenandoah, but instead influences whether they perform a STW
> phase (which of course is also performance related, but differently).
> But I am not an expert either, so I might be wrong here.
>
> As for the OP's original statement, "When the GC starts up, it runs
> frequently and efficiently because the heap is small", I don't think it
> is correct anymore, especially not for G1, as long as the object size
> is reasonable (not Humongous).
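For what it's worth, the usual way to sidestep the heap-expansion cost in HotSpot entirely is to set the initial and maximum heap to the same value, so the JVM never grows the heap at runtime. A sketch of what that might look like for Tomcat (the sizes and the setenv.sh approach are illustrative assumptions, not recommendations for any particular workload):

```shell
# setenv.sh (in $CATALINA_BASE/bin) -- sourced by Tomcat's startup scripts.
# Setting -Xms equal to -Xmx avoids heap expansion/shrinkage entirely; given
# that the OS already commits Private Bytes up to the max, this costs little.
CATALINA_OPTS="$CATALINA_OPTS -Xms2g -Xmx2g"

# Optional: log GC activity so any expansion cost becomes visible.
# This is the JDK 8 syntax; JDK 9+ replaces these flags with -Xlog:gc*.
CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
```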
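On the Humongous point: G1 treats any single allocation of half a region or more as humongous, so one common mitigation is to raise the region size until large objects fall back under that threshold. A hedged sketch of the relevant JDK 8 flags (the 16m figure is an illustrative assumption):

```shell
# G1's region size is normally chosen automatically (1m-32m, a power of two).
# Forcing it to 16m means only allocations of 8m or more count as humongous.
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC -XX:G1HeapRegionSize=16m"

# Since 8u60, G1 can reclaim dead humongous objects during young collections.
# This flag is on by default; it is spelled out here only for visibility.
CATALINA_OPTS="$CATALINA_OPTS -XX:+G1EagerReclaimHumongousObjects"
```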
> Leon

Yes Leon, we are seeing that G1 works best for our app. We have some large objects and we can't reduce their size immediately, so we have decided to increase the G1 region size for the time being and collect dead Humongous objects during young collections.

Thanks!
Suvendu

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org