The GC logs on the other nodes look like this:

[GC 99623.647: [ParNew: 284027K->6782K(314560K), 0.0136660 secs]
818985K->541887K(1013632K), 0.0137850 secs] [Times: user=0.10 sys=0.00,
real=0.02 secs]
[GC 99636.522: [ParNew: 286398K->9306K(314560K), 0.0149200 secs]
821503K->544533K(1013632K), 0.0150510 secs] [Times: user=0.11 sys=0.00,
real=0.01 secs]

I think these are minor GCs; the CMS mark-sweep cycles are not minor GC.
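
One way to double-check (just a sketch; <broker-pid> is a placeholder) is to watch
the collector counters with jstat:

jstat -gcutil <broker-pid> 1000

YGC/YGCT count the young (minor) collections and FGC/FGCT count the full/CMS
collections, so a node that is really doing full GC every 1-3 seconds should show
FGC climbing continuously.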

On Sun, Mar 13, 2016 at 7:57 PM, Manikumar Reddy <manikumar.re...@gmail.com>
wrote:

> Hi,
>
> These are minor GC logs and they look normal. Look for the word 'Full'
> for full GC log details.
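>
> For example (assuming the default GC log file location under the Kafka logs
> directory), something like
>
> grep -c 'Full GC' logs/kafkaServer-gc.log
>
> gives a quick count of full collections.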
>
> On Sun, Mar 13, 2016 at 3:06 PM, li jinyu <disorder...@gmail.com> wrote:
>
> > I'm using Kafka 0.8.1.1 with 10 nodes in a cluster, all started with the
> > default command:
> > ./bin/kafka-server-start.sh conf/server.properties
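> >
> > (The heap for kafka-server-start.sh comes from KAFKA_HEAP_OPTS, which I
> > believe defaults to -Xmx1G -Xms1G when unset; as a sketch, a bigger heap
> > could be tried with something like
> >
> > KAFKA_HEAP_OPTS="-Xmx4G -Xms4G" ./bin/kafka-server-start.sh conf/server.properties
> >
> > but I'd like to understand the cause first.)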
> >
> > But yesterday I found that two nodes kept running full GC (every 1-3
> > seconds), even though there was still enough memory:
> >
> > 99546.435: [GC [1 CMS-initial-mark: 532714K(699072K)] 538916K(1013632K),
> > 0.0099270 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> > 99546.446: [CMS-concurrent-mark-start]
> > 99546.520: [CMS-concurrent-mark: 0.074/0.074 secs] [Times: user=0.21
> > sys=0.03, real=0.08 secs]
> > 99546.520: [CMS-concurrent-preclean-start]
> > 99546.525: [CMS-concurrent-preclean: 0.005/0.005 secs] [Times: user=0.01
> > sys=0.00, real=0.00 secs]
> > 99546.525: [CMS-concurrent-abortable-preclean-start]
> > 99547.348: [CMS-concurrent-abortable-preclean: 0.822/0.823 secs] [Times:
> > user=1.53 sys=0.31, real=0.83 secs]
> > 99547.350: [GC[YG occupancy: 158101 K (314560
> > K)]2016-03-12T20:19:58.597+0800: 99547.350: [GC 99547.350: [ParNew:
> > 158101K->5170K(314560K), 0.0189700 secs] 690816K->538498K(1013632K),
> > 0.0190720 secs] [Times: user=0.11 sys=0.00, real=0.02 secs]
> > 99547.369: [Rescan (parallel) , 0.0099240 secs]99547.379: [weak refs
> > processing, 0.0000090 secs]99547.379: [class unloading, 0.0028860
> > secs]99547.382: [scrub symbol table, 0.0015400 secs]99547.383: [scrub
> > string table, 0.0001640 secs] [1 CMS-remark: 533327K(699072K)]
> > 538498K(1013632K), 0.0353890 secs] [Times: user=0.18 sys=0.00, real=0.03
> > secs]
> > 99547.386: [CMS-concurrent-sweep-start]
> > 99547.421: [CMS-concurrent-sweep: 0.035/0.035 secs] [Times: user=0.08
> > sys=0.01, real=0.04 secs]
> > 99547.421: [CMS-concurrent-reset-start]
> > 99547.426: [CMS-concurrent-reset: 0.005/0.005 secs] [Times: user=0.01
> > sys=0.01, real=0.00 secs]
> > 99549.132: [GC 99549.132: [ParNew: 284786K->4184K(314560K), 0.0158450
> secs]
> > 816880K->536891K(1013632K), 0.0159570 secs] [Times: user=0.08 sys=0.00,
> > real=0.01 secs]
> >
> >
> > Is this caused by the small heap (1G by default), or by consumer lag?
> >
> > I tried to use jmap to check the heap, but it failed to attach to the Java
> > process.
> > How can I find the root cause?
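> >
> > (If the attach keeps failing, I may retry jmap as the user that owns the
> > broker process, or with the force flag; <broker-user> and <broker-pid> are
> > placeholders here:
> >
> > sudo -u <broker-user> jmap -heap <broker-pid>
> > jmap -F -histo <broker-pid> | head -30
> >
> > The class histogram would at least show what is filling the old gen.)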
> >
> > --
> > Don't schedule every day, make them disorder.
> >
>



-- 
Don't schedule every day, make them disorder.
