Please suggest how to avoid restarting the Kafka consumer client when one of
the brokers in the cluster goes down.
Or do we need to update the offsets manually when we put the failed broker
back into the cluster?
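For what it's worth: whether a single broker failure forces a consumer restart depends mostly on replication. If the topics (and the internal __consumer_offsets topic) have a replication factor of at least 2 and the client is given every broker in bootstrap.servers, the Java consumer reconnects to a surviving node on its own and committed offsets survive, so no manual offset update should be needed. A minimal sketch of the relevant settings (plain string keys so it compiles without the Kafka client jar; the broker addresses and group id are placeholders):

```java
import java.util.Properties;

// Sketch of consumer settings that let the client ride out a single-broker
// failure. "broker1:9092,..." and "my-group" are placeholder values.
public class ResilientConsumerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // List every broker, not just one, so the client can fetch cluster
        // metadata from a surviving node while one broker is down.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("group.id", "my-group");
        // Committed offsets live in the replicated __consumer_offsets topic,
        // so they survive a broker failure as long as that topic is replicated.
        props.put("enable.auto.commit", "false");
        // Back off between reconnect attempts instead of hammering a dead broker.
        props.put("reconnect.backoff.ms", "1000");
        props.put("reconnect.backoff.max.ms", "10000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        System.out.println("bootstrap.servers=" + props.getProperty("bootstrap.servers"));
        System.out.println("reconnect.backoff.ms=" + props.getProperty("reconnect.backoff.ms"));
    }
}
```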
On 5 January 2018 at 19:18, rAhul wrote:
> Hi,
>
> I have an Apache Kafka cluster with 3 node
I don't think you'll know for sure until you try. You will have to properly
protect your consumer, as it's not thread safe. (I asked recently in a
situation where all I wanted to do from a separate thread was commit().)
Seems like an easy experiment to run. You have a producer which posts to
two t
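On the thread-safety point: the KafkaConsumer JavaDoc says the consumer is not thread safe, and wakeup() is the only method other threads may call. A common pattern is to never touch the consumer from other threads at all; they enqueue commit requests on a concurrent queue, and the single polling thread drains the queue between poll() calls. A sketch of that pattern, with the consumer stubbed out (StubConsumer is a stand-in for org.apache.kafka.clients.consumer.KafkaConsumer, so this runs without a broker):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;

// Other threads enqueue commit requests instead of calling commitSync()
// themselves; only the polling thread ever touches the consumer.
public class SingleThreadedCommits {

    // Stand-in for KafkaConsumer: records what was committed.
    static class StubConsumer {
        final Map<String, Long> committed = new HashMap<>();
        void commitSync(Map<String, Long> offsets) { committed.putAll(offsets); }
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Map<String, Long>> commitRequests = new ConcurrentLinkedQueue<>();
        StubConsumer consumer = new StubConsumer();

        // "Other" thread: never touches the consumer, only the queue.
        Thread worker = new Thread(() -> {
            Map<String, Long> req = new HashMap<>();
            req.put("topicA-0", 42L);
            commitRequests.add(req);
        });
        worker.start();
        worker.join();

        // Body of the polling thread's loop: drain pending commit requests
        // between poll() calls, keeping all consumer access on one thread.
        Map<String, Long> pending;
        while ((pending = commitRequests.poll()) != null) {
            consumer.commitSync(pending);
        }
        System.out.println("committed topicA-0 at " + consumer.committed.get("topicA-0"));
    }
}
```

The same queue can carry subscription-change requests, which also answers the "commit from a separate thread" case mentioned above.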
Exactly. This is what I want to achieve.
Is it good practice to keep updating the subscription list of the same Kafka
consumer?
Any concerns in a multi-threaded environment?
Regards,
Chintan
On 07-Jan-2018 6:32 PM, "Skip Montanaro" wrote:
Got it. Does it work to maintain your currently subscribed topic list, then
append to it and subscribe to the extended list when you receive an update?
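That should work: subscribe() on the Java consumer replaces the previous subscription rather than adding to it, so the usual approach is to keep the full topic set yourself and re-subscribe with the extended set when a new topic is announced. A sketch of that bookkeeping, again with the consumer stubbed out (StubConsumer mimics the replace-not-append behavior; the topic names are made up):

```java
import java.util.Set;
import java.util.TreeSet;

// Keep the full topic set ourselves and re-subscribe with the whole set,
// because subscribe() replaces the previous subscription.
public class GrowingSubscription {

    // Stand-in for KafkaConsumer: subscribe() replaces, like the real client.
    static class StubConsumer {
        final Set<String> subscription = new TreeSet<>();
        void subscribe(Set<String> topics) {
            subscription.clear();
            subscription.addAll(topics);
        }
    }

    public static void main(String[] args) {
        StubConsumer consumer = new StubConsumer();
        Set<String> topics = new TreeSet<>();  // the list we maintain ourselves

        topics.add("orders");
        consumer.subscribe(topics);

        // A new-topic notification arrives: extend our list, subscribe again.
        topics.add("payments");
        consumer.subscribe(topics);

        System.out.println("subscribed to " + consumer.subscription);
    }
}
```

Two caveats worth noting: each subscription change triggers a consumer group rebalance, so batching updates is kinder to the group, and (per the thread-safety discussion above) the re-subscribe should happen on the polling thread.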
On Jan 6, 2018 9:37 PM, "chintan mavawala" wrote:
> Discovery of new topics is not a concern. Clients who create a new topic
> send my process t