[ 
https://issues.apache.org/jira/browse/KAFKA-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17939117#comment-17939117
 ] 

Lan Ding commented on KAFKA-19024:
----------------------------------

In the current implementation, the `group.share.max.groups` config is only used 
to set the maxEntries for the ShareSessionCache.
Perhaps we could introduce a new error type, `MAX_SHARE_GROUP_SIZE_REACHED`. 
When processing a Heartbeat request, if the group ID does not exist and the 
`group.share.max.groups` limit has already been reached, this error would be 
returned. Clients could then catch the corresponding exception and handle it 
accordingly (e.g., by logging an error message). Do you think this approach is 
feasible?
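The proposed check could look roughly like the sketch below. Note that the class and exception names (`MaxShareGroupsReachedException`, `handleHeartbeat`) are illustrative assumptions for this sketch, not existing Kafka APIs; only the config name `group.share.max.groups` comes from the issue itself.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the proposed heartbeat-time check. All names here are
// hypothetical stand-ins, not real Kafka coordinator classes.
public class ShareGroupLimitSketch {

    // Hypothetical exception mirroring the proposed MAX_SHARE_GROUP_SIZE_REACHED error.
    static class MaxShareGroupsReachedException extends RuntimeException {
        MaxShareGroupsReachedException(String message) { super(message); }
    }

    private final Set<String> groups = new HashSet<>();
    private final int maxGroups; // stands in for the group.share.max.groups config

    ShareGroupLimitSketch(int maxGroups) { this.maxGroups = maxGroups; }

    // On a Heartbeat: if the group ID is unknown and the limit is reached,
    // fail fast instead of silently never forming the group.
    void handleHeartbeat(String groupId) {
        if (!groups.contains(groupId)) {
            if (groups.size() >= maxGroups) {
                throw new MaxShareGroupsReachedException(
                    "Cannot create share group " + groupId
                        + ": group.share.max.groups limit of " + maxGroups + " reached");
            }
            groups.add(groupId);
        }
    }

    public static void main(String[] args) {
        ShareGroupLimitSketch coordinator = new ShareGroupLimitSketch(1);
        coordinator.handleHeartbeat("g1"); // first group is created
        try {
            coordinator.handleHeartbeat("g2"); // second group exceeds the limit
        } catch (MaxShareGroupsReachedException e) {
            // A client could log this instead of heartbeating indefinitely.
            System.out.println("REJECTED: " + e.getMessage());
        }
    }
}
```

With this shape, the client sees an explicit, actionable error on the first rejected Heartbeat rather than silently retrying until some existing group shuts down.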

> Enhance the client behaviour when it tries to exceed the 
> `group.share.max.groups`
> ---------------------------------------------------------------------------------
>
>                 Key: KAFKA-19024
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19024
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Sanskar Jhajharia
>            Assignee: Lan Ding
>            Priority: Minor
>
> For share groups, we use the `group.share.max.groups` config to define the 
> maximum number of share groups we allow. However, when a client exceeds this 
> limit, its logs do not report any error and it simply does not consume. The 
> group doesn't get created, but the client continues to send Heartbeats, 
> hoping for one of the existing groups to shut down so that it can form a 
> group. Having a log message or an exception in the client logs would help 
> users debug such situations accurately.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)