[ https://issues.apache.org/jira/browse/KAFKA-9965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17260987#comment-17260987 ]
Sandeep S edited comment on KAFKA-9965 at 1/8/21, 5:06 AM:
-----------------------------------------------------------

Hi, I am facing a similar issue.

||GROUP||TOPIC||PARTITION||CURRENT-OFFSET||LOG-END-OFFSET||LAG||CONSUMER-ID||HOST||CLIENT-ID||
|nadc_status_consumers|cms-nadc-status-topic|5|-|0|-|Consumer::Mule-cm-chg-mgmt-68c6884d8b-q6wzm-9-a6f6c35a-4d71-461a-be7c-637afc775196|/192.168.71.165|Consumer::Mule-cm-chg-mgmt-68c6884d8b-q6wzm-9|
|nadc_status_consumers|cms-nadc-status-topic|0|1|3|2|Consumer::Mule-cm-chg-mgmt-68c6884d8b-4788p-10-9f88cee5-d21c-4966-bec7-b45ca41c9c2d|/192.168.28.218|Consumer::Mule-cm-chg-mgmt-68c6884d8b-4788p-10|
|nadc_status_consumers|cms-nadc-status-topic|4|-|0|-|Consumer::Mule-cm-chg-mgmt-68c6884d8b-q6wzm-11-c5d02f8b-5a62-414c-ab23-db8ca9b3f74a|/192.168.71.165|Consumer::Mule-cm-chg-mgmt-68c6884d8b-q6wzm-11|
|nadc_status_consumers|cms-nadc-status-topic|2|1|2|1|Consumer::Mule-cm-chg-mgmt-68c6884d8b-4788p-9-b52c9d8d-8739-42ae-9867-895ad6088dab|/192.168.28.218|Consumer::Mule-cm-chg-mgmt-68c6884d8b-4788p-9|
|nadc_status_consumers|cms-nadc-status-topic|1|1|1|0|Consumer::Mule-cm-chg-mgmt-68c6884d8b-4788p-11-44b424f7-831d-4071-8b81-b3f653ac1c81|/192.168.28.218|Consumer::Mule-cm-chg-mgmt-68c6884d8b-4788p-11|
|nadc_status_consumers|cms-nadc-status-topic|3|1|2|1|Consumer::Mule-cm-chg-mgmt-68c6884d8b-q6wzm-10-ccaf194d-3b23-4eb8-8342-3a74736ad8a5|/192.168.71.165|Consumer::Mule-cm-chg-mgmt-68c6884d8b-q6wzm-10|

I am posting 4 messages simultaneously to the topic "cms-nadc-status-topic". There are 6 partitions and 6 consumers. I assumed the 4 messages would go to 4 different partitions, but that is not happening: in the instance above, partition 0 received 2 messages, and 3 partitions (consumers) are idle.

Consumer configuration: {'enable.auto.commit': 'false', 'auto.offset.reset': 'earliest', 'partition.assignment.strategy': 'roundrobin', 'max.poll.interval.ms': 3600000}. Tried with the default partition assignment as well, but it doesn't seem to help.

Version: Confluent 5.0.0
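For reference, the distribution being observed here is decided by the producer-side partitioner rather than by the consumer's {{partition.assignment.strategy}}. Below is a minimal sketch (assuming an AK 2.4+ Java producer; the broker address, class name, and record values are illustrative and not from the report above) that prints which partition each of the 4 unkeyed records lands on when {{RoundRobinPartitioner}} is configured:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RoundRobinPartitioner;
import org.apache.kafka.common.serialization.StringSerializer;

public class RoundRobinDistributionCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Producer-side round-robin partitioning (the component this ticket is about, AK 2.4+).
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, RoundRobinPartitioner.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 4; i++) {
                // Unkeyed records sent back-to-back, as in the scenario described above.
                producer.send(new ProducerRecord<>("cms-nadc-status-topic", "status-" + i),
                        (metadata, exception) -> {
                            if (exception == null)
                                System.out.println("record landed on partition " + metadata.partition());
                        });
            }
            producer.flush();
        }
    }
}
{code}

With small batches, the double increment described in this ticket can make consecutive unkeyed records skip partitions instead of spreading evenly across them.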
> Uneven distribution with RoundRobinPartitioner in AK 2.4+
> ----------------------------------------------------------
>
>                 Key: KAFKA-9965
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9965
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer
>    Affects Versions: 2.4.0, 2.5.0, 2.4.1
>            Reporter: Michael Bingham
>            Priority: Major
>
> {{RoundRobinPartitioner}} states that it will provide equal distribution of records across partitions. However, with the enhancements made in KIP-480, it may not. In some cases, when a new batch is started, the partitioner may be called a second time for the same record:
> [https://github.com/apache/kafka/blob/2.4/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L909]
> [https://github.com/apache/kafka/blob/2.4/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L934]
> Each time the partitioner is called, it increments a counter in {{RoundRobinPartitioner}}, so this can result in unequal distribution. The easiest fix might be to decrement the counter in {{RoundRobinPartitioner#onNewBatch}}.
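Until this is addressed upstream, one possible user-side workaround is a custom partitioner modeled on {{RoundRobinPartitioner}} that gives the consumed count back in {{onNewBatch}}, along the lines suggested above. This is a sketch only, not the project's fix; the class name {{CompensatingRoundRobinPartitioner}} is hypothetical.

{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

/**
 * Hypothetical user-side workaround, modeled on RoundRobinPartitioner.
 * Unlike the stock class, it decrements its per-topic counter in onNewBatch(),
 * compensating for the extra partition() call the producer makes when a record
 * is re-partitioned after aborting a batch (the KIP-480 code path).
 */
public class CompensatingRoundRobinPartitioner implements Partitioner {

    private final ConcurrentMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int next = counters.computeIfAbsent(topic, t -> new AtomicInteger(0)).getAndIncrement();
        List<PartitionInfo> available = cluster.availablePartitionsForTopic(topic);
        if (!available.isEmpty())
            return available.get(Utils.toPositive(next) % available.size()).partition();
        // No partition is currently available: fall back to a choice over all partitions.
        return Utils.toPositive(next) % partitions.size();
    }

    @Override
    public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
        // Give back the count consumed by the partition() call whose result was
        // discarded when the producer aborted the record to start a new batch.
        AtomicInteger counter = counters.get(topic);
        if (counter != null)
            counter.decrementAndGet();
    }

    @Override
    public void close() { }
}
{code}

If used, it would be configured on the producer via {{partitioner.class}}, in the same way as the stock {{RoundRobinPartitioner}}.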