> ... broker manually.
>
> We learned that replication can be auto-resolved by Kafka if we can
> manage to get the same broker.id on the new AWS instance spun up in
> place of the failed broker/instance.
>
> I have read that we can set broker.id.generation.enable=false, but what
> is the best way to identify and retain the broker.id? Any links/help is
> appreciated.
>
> Thanks and Regards,
> Cnu
--
Thanks and Regards,
Amit Pal
Hi Shantanu,
If you are using Kafka Streams, upgrade to the latest jar. It includes a
number of fixes in the way Kafka Streams uses Kafka consumers.
Apart from this, try these settings (a minimal config sketch follows):
1. Set session.timeout.ms to a higher value, something like 30000 (30 seconds).
2. Set heartbeat.interval.ms to a lower value, at most one third of
session.timeout.ms (e.g. 3000).
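
For concreteness, here is a minimal sketch of those two settings in a
Streams app; the application id and broker address are placeholders, and
Kafka Streams passes consumer-level configs through to its internal
consumers:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class ConsumerTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder application id and broker address.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trade-risk-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // 1. Higher session timeout, so a member that is slow to poll is
        //    not kicked out of the consumer group during rebalances.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // 2. Lower heartbeat interval; keep it well under the session
        //    timeout (at most one third of it).
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);
        System.out.println(props);
    }
}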
I had a similar use case of joining two streams with windows spanning days.
That didn't work out well.
For you, this approach might work better:
1. Stream Trades and put them in a key/value store (like Aerospike).
2. Stream Risks, and in the map function join each Risk with the Trade
saved in Aerospike under the same key (sketch below).
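
A rough sketch of that approach, assuming String-keyed topics named
"trades" and "risks" (both keyed by trade id) and an Aerospike namespace
"test"; all of those names are made up for illustration:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class TradeRiskJoin {
    public static void main(String[] args) {
        AerospikeClient aero = new AerospikeClient("localhost", 3000);
        StreamsBuilder builder = new StreamsBuilder();

        // 1. Stream Trades and persist each one in Aerospike, keyed by trade id.
        builder.<String, String>stream("trades")
               .foreach((tradeId, trade) ->
                   aero.put(null, new Key("test", "trades", tradeId),
                            new Bin("payload", trade)));

        // 2. Stream Risks and, in the mapper, look up the matching trade.
        builder.<String, String>stream("risks")
               .mapValues((tradeId, risk) -> {
                   Record trade = aero.get(null, new Key("test", "trades", tradeId));
                   return trade == null ? risk
                                        : risk + "|" + trade.getString("payload");
               })
               .to("risks-enriched");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trade-risk-join");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}

Note the trade-off: the external lookup is not covered by Kafka's
processing guarantees, so a Risk can arrive before its Trade is written;
that is the price of avoiding windowed stream-stream joins spanning days.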