We've had no problems with G1 in all of our clusters with varying load levels. I think we've seen an occasional long GC here and there, but nothing recurring at this point.
What's the full command line that you're using with all the options?

-Todd

On Wed, Oct 14, 2015 at 2:18 PM, Scott Clasen <sc...@heroku.com> wrote:
> You can also use -Xmn with that gc to size the new gen such that those
> buffers don't get tenured
>
> I don't think that's an option with G1
>
> On Wednesday, October 14, 2015, Cory Kolbeck <ckolb...@gmail.com> wrote:
>> I'm not sure that will help here, you'll likely have the same
>> medium-lifetime buffers getting into the tenured generation and forcing
>> large collections.
>>
>> On Wed, Oct 14, 2015 at 10:00 AM, Gerrit Jansen van Vuuren <gerrit...@gmail.com> wrote:
>>> Hi,
>>>
>>> I've seen pauses using G1 in other applications and have found that
>>> -XX:+UseParallelGC -XX:+UseParallelOldGC works best if you're having
>>> GC issues in general on the JVM.
>>>
>>> Regards,
>>> Gerrit
>>>
>>> On Wed, Oct 14, 2015 at 4:28 PM, Cory Kolbeck <ckolb...@gmail.com> wrote:
>>>> Hi folks,
>>>>
>>>> I'm a bit new to the operational side of G1, but pretty familiar with
>>>> its basic concept. We recently set up a Kafka cluster to support a new
>>>> product, and are seeing some suboptimal GC performance. We're using the
>>>> parameters suggested in the docs, except for having switched to Java
>>>> 1.8_40 in order to get better memory debugging. Even though the cluster
>>>> is handling only 2-3k messages per second per node, we see periodic
>>>> 11-18 second stop-the-world pauses on a roughly hourly cadence. I've
>>>> turned on additional GC logging, and see no humongous allocations; it
>>>> all seems to be buffers making it into the tenured gen. They appear to
>>>> be collectable, as the collection triggered by dumping the heap
>>>> collects them all. Ideas for additional diagnosis or tuning very
>>>> welcome.
>>>>
>>>> --Cory
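Since the thread asks for the full command line and mentions "additional GC logging" without quoting it, here is a rough sketch of what a G1 setup with detailed GC diagnostics might look like on JDK 8. The specific heap size and log path are placeholders, not values from the thread; the G1 pause/occupancy settings follow the commonly published Kafka broker suggestions, and the logging flags are the standard JDK 8 options for seeing tenuring and adaptive-sizing decisions like the ones discussed above:

```shell
# Hypothetical example only -- the actual options used by the poster are
# not shown in the thread. Heap size and log path are placeholders.
export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
# JDK 8 GC logging flags useful for diagnosing promotion into the old gen:
export KAFKA_GC_LOG_OPTS="-Xloggc:/var/log/kafka/gc.log -verbose:gc \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy"
```

`-XX:+PrintTenuringDistribution` in particular shows object ages at each young collection, which helps confirm whether medium-lifetime buffers are surviving long enough to be promoted, as suspected in this thread.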