That was the best part: after spending a lot of time tweaking ParallelGC options, I found that G1 worked as intended with the default settings, other than the heap size (1GB, if I remember correctly). I wanted to tweak a few settings to try to eliminate the slow growth of Old Gen that I described (adjusting the number of regions, loosening the pause-time target, etc.), but I rolled off of that project before I had time for that investigation. (Once G1 started showing that it could perform reasonably and greatly reduce the number of full GCs, other priorities became higher between then and when I rolled off.) In fact, the one setting I did set manually (I don't remember which one off the top of my head, but it was recommended by several online blogs) actually made performance worse, so I rolled it back off.

So your starting point should be to take all of the defaults, and only tweak from there if you're not happy with the performance you're seeing.
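For concreteness, here is a minimal sketch of what "defaults plus a heap size" could look like for an ActiveMQ broker's environment file. The use of `ACTIVEMQ_OPTS` and the `1g` value are illustrative assumptions, not settings anyone in this thread actually shared:

```shell
# Hypothetical sketch: G1 with its default tuning, only the max heap capped.
# ACTIVEMQ_OPTS is picked up by ActiveMQ's bin/env script; adjust the variable
# name and heap size for your own installation.
export ACTIVEMQ_OPTS="-Xmx1g -XX:+UseG1GC"
```

This is a configuration fragment, not a recommendation; the whole point of the advice above is to add further flags only if the defaults disappoint.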
Tim

On Wed, Oct 21, 2015 at 9:14 AM, Basmajian, Raffi <rbasmaj...@ofiglobal.com> wrote:

> Tim,
>
> Can you share the G1 settings you used?
>
> Raffi
>
> -----Original Message-----
> From: tbai...@gmail.com [mailto:tbai...@gmail.com] On Behalf Of Tim Bain
> Sent: Wednesday, October 21, 2015 10:41 AM
> To: ActiveMQ Users
> Subject: Re: GC Overhead limit exceeded? [ EXTERNAL ]
>
> Before you can tune for throughput (which is what the book Arjen quoted was almost certainly referring to), you have to have a heap large enough to hold everything you're looking to hold.
>
> Arjen's recommendation that you not set a limit may not get you to where you need to be; as of HotSpot 6u18, the JVM will use 1/4 of the physical RAM as the default max heap size if you don't specify a value (source: http://www.oracle.com/technetwork/java/javase/6u18-142093.html), so it's not unbounded just because you don't bound it explicitly. If 1/4 of your RAM is large enough for your needs, then great; otherwise, you need to set an explicit limit that is larger than the default (or you need to buy more RAM and maybe also set a larger explicit limit). Contrary to what Arjen said, get this working first so you stop getting OOMs, and then figure out how to tune the garbage collector; tuning the garbage collector while you're OOMing is a waste of your time, especially since the GC algorithm will perform differently (better) when you're not running out of heap.
>
> Be very careful using CMS (-XX:+UseConcMarkSweepGC); since CMS is a non-compacting Old Gen GC algorithm, under certain circumstances it can cause memory fragmentation that results in high GC CPU usage, long GC pause times (despite its intent to guarantee just the opposite), and premature OOMs.
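As a quick sanity check of the 1/4-of-RAM default mentioned above, you can estimate it from `/proc/meminfo` on Linux. This is a sketch and an approximation only; the JVM's actual ergonomics choice varies by version and machine class, so the authoritative check is asking the JVM itself:

```shell
# Estimate the ergonomics default max heap (~1/4 of physical RAM) on Linux.
# Approximation only -- the JVM applies version-specific caps, so prefer
# asking the JVM directly:
#   java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
default_heap_mb=$(( mem_kb / 4 / 1024 ))
echo "Approximate default max heap: ${default_heap_mb} MB"
```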
> I'd advise you not to use CMS unless you intimately understand its risks and how they apply to your application, because its worst-case scenario is so much worse than the worst-case scenario for the other strategies.
>
> G1 comes with a bit more overhead than the other two, so its expected throughput is slightly lower, but when I tuned our ActiveMQ JVMs to minimize pause times I was pretty happy with the results. We had an application that was sensitive to latency, and Parallel GC's full-GC pauses appeared to be causing problems; G1 greatly reduced the number of those full GCs by allowing incremental collection of Old Gen. However, I still observed a very slow growth of Old Gen objects that weren't collected during the incremental phases but that would be collected during a full GC (so they were clearly dead, just not collected incrementally), which I can't explain. But even with that oddity, latency for G1 was still worlds better than Parallel.
>
> Tim
>
> On Tue, Oct 20, 2015 at 2:35 PM, Arjen van der Meijden <acmmail...@tweakers.net> wrote:
>
> > Afaik, in modern JVM versions, many of the 'old' recommendations may not really apply anymore, or may even reduce performance. The book 'Java Performance: The Definitive Guide', for instance, has this statement on heap sizing:
> >
> > "1. The JVM will attempt to find a reasonable minimum and maximum heap size based on the machine it is running on.
> > 2. Unless the application needs a larger heap than the default, consider tuning the performance goals of a GC algorithm (given in the next chapter) rather than fine-tuning the heap size."
> >
> > So basically: see how it works first without specifying any limit (so not 1GB either) and/or fiddle with the goals rather than the size of the heap. If you need a certain minimum heap, for instance to offer sufficient space for the 'memory' store of ActiveMQ, you should obviously specify at least that.
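The "tune the goals, not the size" suggestion above maps to flags like the following sketch. The 200 ms target and the `ACTIVEMQ_OPTS` variable are illustrative assumptions, not values anyone in the thread actually used:

```shell
# Hypothetical example: state a pause-time goal and let the collector size
# its generations itself, instead of hand-tuning regions or heap splits.
# 200 ms is an illustrative target, not a recommendation from this thread.
export ACTIVEMQ_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=200"
```

Note that `MaxGCPauseMillis` is a goal the collector aims for, not a guarantee.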
> > -XX:+UseTLAB is enabled by default, so specifying it won't do anything :)
> >
> > -Xms2G and -Xmx2G: forcing the min and max to the same value is not a necessity for modern JVMs, and it forces the JVM into making (potentially) less-than-optimal allocations for the various areas in the heap and disables automatic resizing. On the other hand, it allows the JVM to skip the resizing process altogether, so it can gain a bit of performance because of that as well.
> >
> > -XX:+UseConcMarkSweepGC: this GC variant will reduce pause times, but at the cost of more CPU (although that may be worth it) and slightly less effective GC (compared to the parallel GC). If your application is sensitive to those long pauses (which may be noticeable at both the producer and consumer sides), selecting either CMS or G1 may indeed be a good choice. For heaps below 4GB, CMS is expected to outperform G1 (although that statement doesn't say G1 will always outperform CMS for heaps above 4GB).
> >
> > Best regards,
> >
> > Arjen
> >
> > On 20-10-2015 21:51, Howard W. Smith, Jr. wrote:
> > > My recommendation (what I'm using for my Java web app using the latest Java 8 version):
> > >
> > > -Xms2G
> > > -Xmx2G
> > > -XX:+UseTLAB
> > > -XX:+UseConcMarkSweepGC
> > > -XX:+CMSClassUnloadingEnabled
> > >
> > > I've seen it recommended to make min and max the same value for best GC, but feel free to google/confirm that. :)
> > >
> > > On Tue, Oct 20, 2015 at 3:25 PM, Kevin Burton <bur...@spinn3r.com> wrote:
> > >
> > >> It depends on your app. Keep increasing it until you don't get any more errors.
> > >>
> > >> On Tue, Oct 20, 2015 at 10:28 AM, <barry.barn...@wellsfargo.com> wrote:
> > >>
> > >>> So just the max to 2048, leave the min as 1024?
> > >>>
> > >>> Regards,
> > >>>
> > >>> Barry
> > >>>
> > >>> -----Original Message-----
> > >>> From: Basmajian, Raffi [mailto:rbasmaj...@ofiglobal.com]
> > >>> Sent: Tuesday, October 20, 2015 1:18 PM
> > >>> To: users@activemq.apache.org
> > >>> Subject: RE: GC Overhead limit exceeded?
> > >>>
> > >>> Max memory is only 1Gb? Not knowing anything about your use cases or volumes, 1Gb is low; try increasing to 2Gb and see how far that gets you.
> > >>>
> > >>> Btw, on Java 8, "PERM" settings related to the PermGen memory space are ignored; that space was removed and replaced with Metaspace. Use -XX:MaxMetaspaceSize to configure it, but it's probably best to leave it at the default, which I believe is unlimited.
> > >>>
> > >>> -----Original Message-----
> > >>> From: barry.barn...@wellsfargo.com [mailto:barry.barn...@wellsfargo.com]
> > >>> Sent: Tuesday, October 20, 2015 12:29 PM
> > >>> To: users@activemq.apache.org
> > >>> Subject: RE: GC Overhead limit exceeded? [ EXTERNAL ]
> > >>>
> > >>> We are at jdk1.8.0_45.
> > >>>
> > >>> These are our current mem settings:
> > >>>
> > >>> export JAVA_MIN_MEM=1024M # Minimum memory for the JVM
> > >>> export JAVA_MAX_MEM=1024M # Maximum memory for the JVM
> > >>> export JAVA_PERM_MEM=128M # Minimum perm memory for the JVM
> > >>> export JAVA_MAX_PERM_MEM=256M # Maximum perm memory for the JVM
> > >>>
> > >>> Any recommendations on the increase?
> > >>>
> > >>> Regards,
> > >>>
> > >>> Barry Barnett
> > >>> Enterprise Queuing Services | (QS4U) Open Queuing Services
> > >>> Wells Fargo
> > >>> Cell: 803-207-7452
> > >>>
> > >>> -----Original Message-----
> > >>> From: burtonator2...@gmail.com [mailto:burtonator2...@gmail.com] On Behalf Of Kevin Burton
> > >>> Sent: Tuesday, October 20, 2015 10:54 AM
> > >>> To: users@activemq.apache.org
> > >>> Subject: Re: GC Overhead limit exceeded?
> > >>>
> > >>> This is memory.
> > >>>
> > >>> Increase ActiveMQ memory... if you still have the problem, try upgrading to Java 8, as it's better with GC...
> > >>>
> > >>> On Tue, Oct 20, 2015 at 5:48 AM, <barry.barn...@wellsfargo.com> wrote:
> > >>>
> > >>>> We are receiving the following errors. Any idea where I might look to figure this one out? I ran GC from JConsole already. The disk space on the persistent storage is fine as well.
> > >>>>
> > >>>> 20 Oct 2015 04:07:18,201 | ERROR | dClientAuth=true | TransportConnector | 94 - org.apache.activemq.activemq-osgi - 5.11.1 | Could not accept connection : java.lang.Exception: java.lang.OutOfMemoryError: GC overhead limit exceeded
> > >>>>
> > >>>> 20 Oct 2015 04:07:15,443 | ERROR | v-5.4.1_1/deploy | fileinstall | 7 - org.apache.felix.fileinstall - 3.5.0 | In main loop, we have serious trouble: java.lang.OutOfMemoryError: GC overhead limit exceeded
> > >>>>
> > >>>> 20 Oct 2015 04:06:46,876 | WARN | roker] Scheduler | Topic | 94 - org.apache.activemq.activemq-osgi - 5.11.1 | Failed to browse Topic: TOPIC_NAME java.lang.OutOfMemoryError: GC overhead limit exceeded
> > >>>>
> > >>>> 20 Oct 2015 04:07:44,164 | ERROR | QUEUENAME] | RollbackTask | 105 - org.apache.aries.transaction.manager - 1.0.0 | Unexpected exception committing org.apache.geronimo.transaction.manager.WrapperNamedXAResource@198926bf; continuing to commit other RMs: javax.transaction.xa.XAException: The method 'xa_rollback' has failed with errorCode '-4'.
> > >>>>
> > >>>> Regards,
> > >>>>
> > >>>> Barry
> > >>>
> > >>> --
> > >>>
> > >>> We're hiring if you know of any awesome Java Devops or Linux Operations Engineers!
> > >>>
> > >>> Founder/CEO Spinn3r.com
> > >>> Location: *San Francisco, CA*
> > >>> blog: http://burtonator.wordpress.com … or check out my Google+ profile <https://plus.google.com/102718274791889610666/posts>
> > >>>
> > >>> This e-mail transmission may contain information that is proprietary, privileged and/or confidential and is intended exclusively for the person(s) to whom it is addressed. Any use, copying, retention or disclosure by any person other than the intended recipient or the intended recipient's designees is strictly prohibited. If you are not the intended recipient or their designee, please notify the sender immediately by return e-mail and delete all copies. OppenheimerFunds may, at its sole discretion, monitor, review, retain and/or disclose the content of all email communications.
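Pulling the thread's advice together for Java 8, the env settings quoted earlier might be updated along these lines. This is a sketch with illustrative values (the 2048M figure comes from the "try increasing to 2Gb" suggestion; dropping the PermGen exports follows the note that PermGen was removed in Java 8), not a settings file anyone in the thread posted:

```shell
# Sketch of Java 8-era settings based on the advice in this thread.
export JAVA_MIN_MEM=1024M   # Minimum heap for the JVM
export JAVA_MAX_MEM=2048M   # Maximum heap, raised from 1024M per the advice above
# JAVA_PERM_MEM / JAVA_MAX_PERM_MEM are ignored on Java 8 (PermGen is gone).
# Metaspace replaces PermGen and is unbounded by default; cap it only if needed:
# export JAVA_OPTS="$JAVA_OPTS -XX:MaxMetaspaceSize=256M"
```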