Hey there everybody,

Thanks for the introduction Boyang. I appreciate the effort you are putting
into improving consumer behavior in Kafka.

@Matt
I also believe the default value is high. In my opinion, we should aim for a
default cap of around 250. This is because, in the current model, any consumer
rebalance is disruptive to every consumer in the group. The bigger the group,
the longer this period of disruption lasts.

If you have such a large consumer group, chances are that your client-side
logic could be structured better and that the high number of consumers is not
actually buying you higher throughput.
250 can still be considered a high upper bound; I believe in practice users
should aim to stay under 100 consumers per consumer group.

In regards to the cap being global/per-broker, I think we should consider
whether we want it to be global or *per-topic*. For the time being, I believe
that having it per-topic with a global default might be the best approach.
Having it global only seems a bit restrictive to me, and it never hurts to
support more fine-grained configurability (given it's the same config, not a
new one being introduced).
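To make the per-topic-with-global-default idea concrete, here is a rough
sketch of what it could look like. Note that the per-topic override below is
purely hypothetical: the "orders" topic name is invented, and topic-level
support for this config does not exist today; this only illustrates the shape
of the idea, reusing the existing dynamic-config tooling.

```shell
# Broker-wide default, e.g. in server.properties (value from the discussion above):
# group.max.size=250

# Hypothetical per-topic override via the existing kafka-configs.sh tooling.
# "orders" is an example topic; group.max.size is NOT an actual topic-level
# config today -- this is only a sketch of the proposed fine-grained knob.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name orders \
  --add-config group.max.size=100
```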

On Tue, Nov 20, 2018 at 11:32 AM Boyang Chen <bche...@outlook.com> wrote:

> Thanks Matt for the suggestion! I'm still open to any suggestion to change
> the default value. Meanwhile, I just want to point out that this value is
> just a last line of defense, not a real scenario we would expect.
>
>
> In the meanwhile, I discussed with Stanislav and he will be driving the
> KIP-389 effort from now on. Stanislav proposed the idea in the first place
> and has already come up with a draft design, while I will keep focusing on
> the KIP-345 effort to ensure we solve the edge case described in the JIRA<
> https://issues.apache.org/jira/browse/KAFKA-7610>.
>
>
> Thank you Stanislav for making this happen!
>
>
> Boyang
>
> ________________________________
> From: Matt Farmer <m...@frmr.me>
> Sent: Tuesday, November 20, 2018 10:24 AM
> To: dev@kafka.apache.org
> Subject: Re: [Discuss] KIP-389: Enforce group.max.size to cap member
> metadata growth
>
> Thanks for the KIP.
>
> Will this cap be a global cap across the entire cluster or per broker?
>
> Either way the default value seems a bit high to me, but that could just be
> from my own usage patterns. I’d have probably started with 500 or 1k but
> could be easily convinced that’s wrong.
>
> Thanks,
> Matt
>
> On Mon, Nov 19, 2018 at 8:51 PM Boyang Chen <bche...@outlook.com> wrote:
>
> > Hey folks,
> >
> >
> > I would like to start a discussion on KIP-389:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-389%3A+Enforce+group.max.size+to+cap+member+metadata+growth
> >
> >
> > This is a pretty simple change to cap the consumer group size for broker
> > stability. Please give me your valuable feedback when you get time.
> >
> >
> > Thank you!
> >
>


-- 
Best,
Stanislav
