Not sure I understand your question about flapping. The LeaveGroupRequest is only sent on a graceful shutdown. If a consumer knows it is going to shut down, it is good to proactively let the group know it needs to rebalance, because the partitions that were handled by that consumer need to be handled by other group members.
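As a rough sketch of that graceful-shutdown path (not from the original thread): a poll loop that uses a shutdown hook and consumer.wakeup() so that close() runs, commits offsets (if auto-commit is enabled), and sends the LeaveGroupRequest. The broker address, group id, and topic name are placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group id
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic

        // On SIGTERM, wake the consumer so the poll loop exits and close() runs.
        final Thread mainThread = Thread.currentThread();
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));

        try {
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));
            }
        } catch (WakeupException e) {
            // Expected on shutdown; fall through to close().
        } finally {
            // close() commits offsets (if auto-commit is enabled) and sends the
            // LeaveGroupRequest, so the coordinator can rebalance right away
            // instead of waiting for the session timeout.
            consumer.close();
        }
    }
}
```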
There's no "flapping" in the sense that the leave group requests should just inform the other members that they need to take over some of the work. I would normally think of "flapping" as meaning that things start/stop unnecessarily. In this case, *someone* needs to deal with the rebalance and pick up the work being dropped by the worker. There's no flapping because it's a one-time event -- one worker is shutting down, decides to drop the work, and a rebalance sorts it out and reassigns it to another member of the group. This happens once and then the "issue" is resolved without any additional interruptions. -Ewen On Thu, Jan 5, 2017 at 3:01 PM, Pradeep Gollakota <pradeep...@gmail.com> wrote: > I see... doesn't that cause flapping though? > > On Wed, Jan 4, 2017 at 8:22 PM, Ewen Cheslack-Postava <e...@confluent.io> > wrote: > > > The coordinator will immediately move the group into a rebalance if it > > needs it. The reason LeaveGroupRequest was added was to avoid having to > > wait for the session timeout before completing a rebalance. So aside from > > the latency of cleanup/committing offests/rejoining after a heartbeat, > > rolling bounces should be fast for consumer groups. > > > > -Ewen > > > > On Wed, Jan 4, 2017 at 5:19 PM, Pradeep Gollakota <pradeep...@gmail.com> > > wrote: > > > > > Hi Kafka folks! > > > > > > When a consumer is closed, it will issue a LeaveGroupRequest. Does > anyone > > > know how long the coordinator waits before reassigning the partitions > > that > > > were assigned to the leaving consumer to a new consumer? I ask because > > I'm > > > trying to understand the behavior of consumers if you're doing a > rolling > > > restart. > > > > > > Thanks! > > > Pradeep > > > > > >