Hi Jimmy,

Thanks for your comments.

JW01: Prior to renew acknowledgements, the user could be sure that a share group
would always make progress in a timely fashion. Every record delivered had a
limited number of delivery attempts, and its processing time was bounded by the
lock timeouts.
With renew acknowledgements, that’s no longer true. I think renew acknowledgements
are very useful, but there may well be users who want to enforce that renew
acknowledgements are not used, so they can be certain that progress will be made.

JW02: While I like the idea of larger numbers of consumers in share groups, I think
doing a good job of it involves much more than simply allowing the configuration to
be changed dynamically. As such, I don’t think it’s a good idea to add this to this
KIP. I’d like to see a much higher maximum number of in-flight records as well as
larger groups.

Thanks,
Andrew

> On 3 Jan 2026, at 18:27, Wang Jimmy <[email protected]> wrote:
>
> Hi Andrew,
>
> Thanks for this KIP! I think these additional group-level configurations will
> be very useful for managing share groups.
>
> I have a couple of questions:
>
> JW01: I'm curious about the share.renew.acknowledge.enable configuration.
> Could you help me understand in what scenarios users might want to disable
> this feature?
>
> JW02: I noticed that group.share.max.size currently has a default value of
> 200. I'm wondering if we should also consider making this configuration
> dynamically adjustable per group, similar to the other configurations
> proposed in this KIP. Some share groups might need more than 200 consumers to
> handle higher throughput, and allowing dynamic adjustment could provide
> additional flexibility.
>
> Best,
>
> Jimmy Wang
>
> On 2025/11/24 21:15:48 Andrew Schofield wrote:
>> Hi,
>> I’d like to start the discussion on a small KIP which adds some
>> configurations for share groups that were previously only available as
>> broker configurations.
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1240%3A+Additional+group+configurations+for+share+groups
>>
>> Thanks,
>> Andrew
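
As background for the discussion above: a group-level configuration of this kind is
applied per group through the Admin API. A minimal sketch, assuming the broker and
client in use support ConfigResource.Type.GROUP, and using share.renew.acknowledge.enable
purely as an illustrative name since the KIP may still change:

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    public class AlterShareGroupConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // A GROUP config resource targets the dynamic, group-level configurations
                // of the named group ("my-share-group" is just a placeholder here).
                ConfigResource group =
                    new ConfigResource(ConfigResource.Type.GROUP, "my-share-group");

                // "share.renew.acknowledge.enable" is the name discussed in this thread;
                // treat it as illustrative until the KIP is finalized.
                AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("share.renew.acknowledge.enable", "false"),
                    AlterConfigOp.OpType.SET);

                admin.incrementalAlterConfigs(Map.of(group, List.of(op))).all().get();
            }
        }
    }

Where supported, the equivalent change can also be made from the command line with
kafka-configs.sh using --entity-type groups.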
