> > > continued to be active in the community and made significant
> > > contributions to the project.
> > >
> > > Congratulations to Matthias!
> > >
> > > -- Guozhang
> > >
--
Thanks,
Ankur Rana
Software Developer
FarEye
>> maxBytes=1048576, currentLeaderEpoch=Optional[813])},
>> isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=519957053,
>> epoch=INITIAL)) (kafka.server.ReplicaFetcherThread)
>> java.net.SocketTimeoutException: Failed to connect within 3 ms
>>     at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
>>     at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190)
>>     at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241)
>>     at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130)
>>     at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129)
>>     at scala.Option.foreach(Option.scala:257)
>>     at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129)
>>     at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
>>     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
I’ve done it recently and it worked fine.
>
> Thanks,
>
> On Fri, 22 Feb 2019 at 07:47, Ankur Rana wrote:
>
> > Hi,
> > I'll be upgrading Kafka version from 2.1.0 to 2.1.1. Are there any
> > special steps to take? I'll be doing any Kafka upgrade for th…
…th the new version.
6. Once the server is up and running, I will follow the same steps with
another broker. We have 5 such brokers.
Just wanted to check if this is an okay way to upgrade Kafka version from
2.1.0 to 2.1.1?
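The one-broker-at-a-time procedure described above can be sketched as a loop with a reachability probe. The TCP check, port 9092, and the timeouts below are illustrative assumptions, not part of the original steps:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.function.Predicate;

public class RollingUpgradeCheck {
    // TCP-level "is the broker serving?" probe. Port 9092 and the 2s
    // connect timeout are assumptions, not from the original message.
    public static boolean brokerUp(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 2000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Process brokers strictly one at a time, moving on only once the
    // probe confirms the restarted broker is reachable again (step 6).
    public static int upgradeOneByOne(List<String> brokers, Predicate<String> probe)
            throws InterruptedException {
        int upgraded = 0;
        for (String broker : brokers) {
            // (stop broker, replace binaries with the new version, start broker)
            while (!probe.test(broker)) {
                Thread.sleep(1000); // retry until it is up and running
            }
            upgraded++;
        }
        return upgraded;
    }
}
```

In practice the probe would be a stronger health signal than a plain TCP connect, e.g. waiting for the broker to rejoin the ISR of its partitions before moving to the next one.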
…, 2019 at 3:30 AM Ankur Rana wrote:
> Hi Ismael,
>
> Thank you for replying.
>
> We are using kafka version 2.1.0
> and Kafka streams version 2.0.0
>
> Just to let you know, I was able to fix the problem by changing the
> processing guarantee config from exactly once to…
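The change described above can be sketched as a plain Properties fragment. The message is cut off after "from exactly once to", so the at_least_once target here is an assumption (in these Kafka Streams versions it is the only processing guarantee other than exactly_once):

```java
import java.util.Properties;

public class StreamsGuaranteeConfig {
    public static Properties build() {
        Properties props = new Properties();
        // "processing.guarantee" is the key behind
        // StreamsConfig.PROCESSING_GUARANTEE_CONFIG. The original message is
        // truncated after "from exactly once to"; at_least_once is assumed,
        // being the only alternative to exactly_once in Streams 2.x.
        props.setProperty("processing.guarantee", "at_least_once");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("processing.guarantee"));
    }
}
```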
On Sat, Feb 16, 2019 at 10:32 PM Ismael Juma wrote:
> Hi,
>
> What version of Kafka are you using?
>
> Ismael
>
> On Fri, Feb 15, 2019 at 8:32 PM Ankur Rana wrote:
>
> > Any comments anyone?
> >
> > On Fri, Feb 15, 2019 at 6:08 PM Ankur Rana wrote:
Any comments anyone?
On Fri, Feb 15, 2019 at 6:08 PM Ankur Rana wrote:
> Hi everyone,
>
> We have a Kafka cluster with 5 brokers, with all topics having a
> replication factor of at least 2. We have multiple Kafka consumer
> applications running on this cluster. Most of these cons…
…ny more details.
Stream config: [image: image.png]
Stream application code: https://codeshare.io/Gq6pLB
…negative by observing the results of the
> count().toStream() before the mapValues call?
>
>
> Thanks!
> Bill
>
> On Fri, Feb 8, 2019 at 1:31 PM Ankur Rana wrote:
>
> > Hi Bill,
> >
> > I will try to make that change, but since the negative values a…
> >
> > …)
> > .mapValues((k, v) -> new JobSummary(k, v))
> > .peek((k, v) -> {
> >     log.info(k.toString());
> >     log.info(v.toString());
> > })
> > // selectKey so that the count is consumed in order for each company
> > .selectKey((k, v) -> v.getCompany_id())
> > .to(JOB_SUMMARY, Produced.with(Serdes.Long(), jobSummarySerde));
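The intent of the selectKey step in the topology above, routing each count update under its company id so a company's updates are consumed in order, can be sketched in plain Java. JobSummary and its fields here are stand-ins for the poster's own classes, not the actual types:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the poster's JobSummary value type (fields are hypothetical).
record JobSummary(long companyId, long count) {}

public class RekeyByCompany {
    // Models the selectKey((k, v) -> v.getCompany_id()) step: records sharing
    // a company id are grouped under one key while preserving arrival order,
    // which is what keeps per-company counts in order downstream.
    public static Map<Long, List<JobSummary>> rekey(List<JobSummary> updates) {
        Map<Long, List<JobSummary>> byCompany = new LinkedHashMap<>();
        for (JobSummary js : updates) {
            byCompany.computeIfAbsent(js.companyId(), id -> new ArrayList<>()).add(js);
        }
        return byCompany;
    }
}
```

In Kafka Streams the same grouping happens through a repartition topic keyed by company id, so all updates for one company land in one partition and are read in order by a single consumer.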
> >> # If you would like to enable core dumping, try "ulimit -c unlimited"
> >> # before starting Java again
> >> #
> >> # If you would like to submit a bug report, please visit:
> >> #   http://bugreport.java.com/bugreport/crash.jsp
> >> #
> >> --- T H R E A D ---
> >> Current thread (0x7f547a29e800): JavaThread "kafka-request-handler-5"
> >> daemon [_thread_in_Java, id=13722, stack(0x7f53700f9000,0x7f53701fa000)]
> >> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0xdd310c13
> >> Registers:
> >> RAX=0x0001, RBX=0x0006e9072fc8, RCX=0x0688, RDX=0x00075e026fc0
> >> RSP=0x7f53701f7f00, RBP=0x0006e98861f8, RSI=0x7f53771a4238, RDI=0x0006e9886098
> >> R8 =0x132d, R9 =0xdd310c13, R10=0x0007c010bbb0, R11=0xdd310c13
> >> R12=0x, R13=0xdd310b3d, R14=0xdd310c0c, R15=0x7f547a29e800
> >> RIP=0x7f546a857d0d, EFLAGS=0x00010202, CSGSFS=0x002b0033, ERR=0x0004
> >> TRAPNO=0x000e
> >>
> > Thanks,
Hello guys,
Can you please share what average GC usage looks like on your Kafka
brokers?
I am seeing really high GC usage on some of our brokers; sometimes it gets
as high as 30%, and our producers start lagging.
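For comparing numbers across brokers, GC time as a fraction of JVM uptime can be read from inside the broker JVM with the standard management beans. This is a minimal stdlib sketch, not Kafka-specific tooling; the 30% figure above would correspond to a ratio around 0.30:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcUsage {
    // Returns cumulative GC time as a fraction of JVM uptime (0.0 .. 1.0).
    // A sustained value near 0.30 would match the "as high as 30%" observation.
    public static double gcFractionOfUptime() {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) gcMillis += t;
        }
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        return uptime > 0 ? (double) gcMillis / uptime : 0.0;
    }

    public static void main(String[] args) {
        System.out.printf("GC fraction of uptime: %.4f%n", gcFractionOfUptime());
    }
}
```

The same numbers are available externally via jstat or the broker's GC logs, which is usually more practical for an already-running cluster.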
https://stackoverflow.com/questions/54039216/how-come-kafka-fails-to-commit-offset-for-a-particular-partition
https://stackoverflow.com/questions/54020753/why-is-kafka-producer-perf-test-sh-throwing-error