> During the long roll times, can you look at the dtrace log to see if it
> was that call?
>
> -Dave
>
> -----Original Message-----
> From: Stephen Powis [mailto:spo...@salesforce.com]
> Sent: Friday, January 13, 2017 9:25 AM
> To: users@kafka.apache.org
> Subject: Re: Taking a long time to roll a new log segment (~1 min)
>
> So the underlying system call is ftruncate64, logged using dtrace.
>
> # BEGIN stack trace for ftruncate, call took 34160541200ns:
> args==
> 0x7f5f9a1134d7 : ftruncate
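
For reference, the usual way a JVM ends up in ftruncate on Linux is a
file-length change through RandomAccessFile.setLength() (or
FileChannel.truncate()). The hypothetical probe below (FtruncateTimer is
not Kafka code, just an illustration) times a bare setLength() so the
~34-second call above can be compared against a plain file resize on the
same volume; the temp-file name and the 10 MB size are assumptions.

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical probe: times RandomAccessFile.setLength(), which the JVM
// implements with ftruncate64 on Linux. Run it on the same volume as the
// Kafka log.dirs to compare against the slow call in the dtrace output.
public class FtruncateTimer {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("ftruncate-probe", ".idx");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            long start = System.nanoTime();
            raf.setLength(10L * 1024 * 1024);  // resize, as an index resize would
            long elapsedNs = System.nanoTime() - start;
            System.out.printf("setLength (ftruncate) took %d ns%n", elapsedNs);
        }
    }
}
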
> Here is a link to a GCEasy report:
>
> http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDEvMTIvLS10b3RhbEdDLWthZmthMS00LmxvZy5nei0tMTUtMzQtNTk=
>
> Currently using G1 gc with the following settings:
>
> -Xmx12G -Xms12G -server -XX:MaxPermSize=48M -verbose:gc
> -Xloggc:/var/log/kafka/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
> -XX:+PrintTLAB -XX:+DisableExplicitGC -XX:+UseGCLogFileRotation
> -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -XX:+UseCompressedOops
> -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/var/lo
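
Since -XX:+PrintGCApplicationStoppedTime is enabled above, one direct way
to test the long-GC theory is to scan gc.log for stop-the-world pauses and
line their timestamps up with the slow rolls. A minimal sketch
(LongStopScanner is a hypothetical name), assuming the classic HotSpot
line format "Total time for which application threads were stopped: N
seconds", the /var/log/kafka/gc.log path from the flags above, and an
arbitrary one-second threshold:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Prints every GC-log line reporting application threads stopped for more
// than one second, so the timestamps can be compared with the slow rolls.
public class LongStopScanner {
    private static final Pattern STOPPED =
            Pattern.compile("application threads were stopped: ([0-9.]+) seconds");

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "/var/log/kafka/gc.log";
        try (Stream<String> lines = Files.lines(Paths.get(path))) {
            lines.filter(line -> {
                Matcher m = STOPPED.matcher(line);
                return m.find() && Double.parseDouble(m.group(1)) > 1.0;
            }).forEach(System.out::println);
        }
    }
}
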
You have a local filesystem? Linux?

-Dave

-----Original Message-----
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: Thursday, January 12, 2017 1:22 PM
To: users@kafka.apache.org
Subject: Re: Taking a long time to roll a new log segment (~1 min)

I've further narrowed it down to this particular line:
https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/log/O
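
On the local-filesystem question: the filesystem type under the Kafka data
directory can be checked straight from the JVM, which helps rule storage
(NFS, a saturated device) in or out before digging further into the code
path linked above. A small sketch with a hypothetical class name
(LogDirFsType); the /var/lib/kafka default is an assumption, so pass the
broker's real log.dirs path instead.

import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

// Prints the filesystem type (e.g. "ext4", "xfs", "nfs") and usable space
// of the directory holding the Kafka log segments.
public class LogDirFsType {
    public static void main(String[] args) throws IOException {
        String dir = args.length > 0 ? args[0] : "/var/lib/kafka";
        FileStore store = Files.getFileStore(Paths.get(dir));
        System.out.printf("%s -> type=%s, usable=%d MB%n",
                dir, store.type(), store.getUsableSpace() / (1024 * 1024));
    }
}
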
Can you collect garbage collection stats and verify there isn't a long GC
happening at the same time?
-Dave
-----Original Message-----
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: Thursday, January 12, 2017 8:34 AM
To: users@kafka.apache.org
Subject: Re: Taking a long time to roll a new log segment (~1 min)

So per the kafka docs I up'd our FD limit to 100k, and we are no longer
seeing the process die, which is good.
Unfortunately we're still seeing very high log segment roll times, and I'm
unsure if this is considered 'normal', as it tends to block producers
during this period.
We are running kafka

I can't speak to the exact details of why fds would be kept open longer in
that specific case, but are you aware that the recommendation for
production clusters for open fd limits is much higher? It's been suggested
to be 100,000 as a starting point for quite a while:
http://kafka.apache.org/documen
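
To confirm a raised fd limit is actually in effect for the broker process
(limits set in a login shell don't always reach a daemon), the JVM can
report its own counts. A minimal sketch, assuming a HotSpot-style JDK that
exposes com.sun.management.UnixOperatingSystemMXBean; FdCheck is just an
illustrative name, not part of Kafka.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

// Logs the current and maximum open file descriptor counts as seen by this
// JVM, to verify that the raised limit applies to the broker process.
public class FdCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            System.out.printf("open fds: %d / max fds: %d%n",
                    unixOs.getOpenFileDescriptorCount(),
                    unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counts unavailable on this platform/JVM");
        }
    }
}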