[ https://issues.apache.org/jira/browse/KAFKA-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17939153#comment-17939153 ]

Lan Ding commented on KAFKA-19024:
----------------------------------

Thanks for your reply.
Whether reusing the GROUP_MAX_SIZE_REACHED error code is appropriate depends on
whether clients need to distinguish between the two error scenarios.
However, given that clients currently only need to log the error and retry (no
special handling is required), reusing the error code seems acceptable.
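To illustrate the "log and retry" behaviour discussed above, here is a minimal sketch in Java. The exception class, heartbeat method, and retry loop are simplified local stand-ins for illustration only; they are not Kafka's actual consumer internals or its `GroupMaxSizeReachedException`:

```java
// Simplified stand-in for the scenario in KAFKA-19024: the broker rejects a
// share-group heartbeat because the group.share.max.groups limit is reached,
// and the client logs the specific error and retries rather than failing
// silently. All names below are hypothetical illustrations.
public class ShareGroupHeartbeatSketch {

    // Stand-in for the retriable error the broker would return.
    static class GroupMaxSizeReachedException extends RuntimeException {
        GroupMaxSizeReachedException(String msg) { super(msg); }
    }

    // Simulated heartbeat: fails until capacity frees up at attempt `freeAfter`.
    static void sendHeartbeat(int attempt, int freeAfter) {
        if (attempt < freeAfter) {
            throw new GroupMaxSizeReachedException(
                "group.share.max.groups limit reached; cannot create share group");
        }
    }

    // Retry loop: surface the error in the client logs (the improvement the
    // issue asks for), then keep retrying until the group can be formed.
    static int joinWithRetry(int maxAttempts, int freeAfter) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                sendHeartbeat(attempt, freeAfter);
                return attempt; // group formed on this attempt
            } catch (GroupMaxSizeReachedException e) {
                System.err.println(
                    "Heartbeat attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        return -1; // gave up after maxAttempts
    }

    public static void main(String[] args) {
        System.out.println("joined on attempt " + joinWithRetry(5, 3));
    }
}
```

The key point is that the caller sees an explicit, loggable error code rather than silent non-consumption, regardless of whether a new code or the existing GROUP_MAX_SIZE_REACHED is used.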

> Enhance the client behaviour when it tries to exceed the 
> `group.share.max.groups`
> ---------------------------------------------------------------------------------
>
>                 Key: KAFKA-19024
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19024
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Sanskar Jhajharia
>            Assignee: Lan Ding
>            Priority: Minor
>
> For share groups, the `group.share.max.groups` config defines the maximum 
> number of share groups allowed. However, when this limit is exceeded, the 
> client logs do not report any error and the client simply does not consume. 
> The group is not created, but the client keeps sending heartbeats in the hope 
> that one of the existing groups shuts down, freeing capacity for it to form a 
> group. Surfacing a log message or an exception in the client logs would help 
> users debug such situations accurately.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
