"From what I understand, there's currently no way to prevent this type of
shuffling of partitions from worker to worker while the consumers are under
maintenance. I'm also not sure if this is an issue I don't need to worry
about."
If you don't want or need automated rebalancing or partition reassignment
amongst clients, then you could always just have each worker/client subscribe
directly to individual partitions using consumer.assign() rather than
consumer.subscribe(). That way, when client 1 is restarted, the data in its
partitions simply waits for it to come back instead of being shuffled to
another worker.
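For concreteness, here's a minimal sketch of that pattern with the Java
client. The topic name ("events"), the partition numbers owned by this
worker, the group id, and the broker address are all placeholders, and it
assumes a client version where poll() takes a Duration:

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignmentWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // group.id is still useful for committing offsets to Kafka, but no
        // group coordination or rebalancing happens when using assign().
        props.put("group.id", "maintenance-friendly-workers");
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Pin this worker to specific partitions instead of subscribing;
            // restarting this process never shuffles them to other workers.
            consumer.assign(Arrays.asList(
                    new TopicPartition("events", 0),
                    new TopicPartition("events", 1)));

            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // commit only after processing
            }
        }
    }
}

The trade-off, of course, is that you now own the partition-to-worker
mapping yourself: if a worker dies for good, nothing reassigns its
partitions for you.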
What I mean by "flapping" in this context is unnecessary rebalancing. The
example I would give is how a Hadoop Datanode behaves on shutdown: by
default, it will wait 10 minutes before replicating the blocks owned by the
Datanode, so routine maintenance wouldn't cause unnecessary re-replication.
Not sure I understand your question about flapping. The LeaveGroupRequest
is only sent on a graceful shutdown. If a consumer knows it is going to
shut down, it is good to proactively make sure the group knows it needs to
rebalance work, because some of the partitions that were handled by that
consumer will need to be picked up by the remaining members.
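As a sketch of what that graceful shutdown can look like with the Java
client: closing the consumer is what gives it the chance to send the
LeaveGroupRequest before the process exits. The shutdown-hook/wakeup()
pattern, topic name, and group id below are illustrative, not the only way
to do it:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulShutdownConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final Thread mainThread = Thread.currentThread();

        // On SIGTERM (e.g. during a rolling bounce), interrupt poll() so the
        // main loop can commit and close cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));

        try {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));
            }
        } catch (WakeupException e) {
            // Expected on shutdown; fall through to commit and close.
        } finally {
            consumer.commitSync(); // flush offsets for the partitions we own
            consumer.close();      // leaves the group so the rebalance starts immediately
        }
    }
}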
I see... doesn't that cause flapping though?
On Wed, Jan 4, 2017 at 8:22 PM, Ewen Cheslack-Postava wrote:
> The coordinator will immediately move the group into a rebalance if it
> needs it. The reason LeaveGroupRequest was added was to avoid having to
> wait for the session timeout before completing a rebalance.
The coordinator will immediately move the group into a rebalance if it
needs it. The reason LeaveGroupRequest was added was to avoid having to
wait for the session timeout before completing a rebalance. So aside from
the latency of cleanup/committing offsets/rejoining after a heartbeat,
rolling bounces of consumers should complete fairly quickly.
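For reference, a sketch of the consumer settings behind that trade-off; the
values are illustrative, not recommendations. Without a LeaveGroupRequest,
the coordinator only notices a dead member after session.timeout.ms, whereas
a clean close() skips that wait:

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class RebalanceTimeouts {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        // Worst-case detection time for an ungraceful exit: the coordinator
        // waits this long without heartbeats before declaring a member dead
        // and triggering a rebalance.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
        // Heartbeat frequency; it should be well below the session timeout.
        // Members also learn that a rebalance is in progress via the
        // heartbeat response.
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
        return props;
    }
}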