Incorrect declared batch size, premature EOF reached
Hello,

I receive the exception below while polling data with a Kafka consumer. I can't find a way to recover from it, and it prevents the consumer from consuming new data. Do you know what can be done to fix it? I pasted the stack trace below, removing non-significant data or replacing it with generic patterns (like TOPIC-PARTITION). Any help would be appreciated.

Sébastien Rebecchi

org.apache.kafka.common.KafkaException: Received exception when fetching the next record from TOPIC-PARTITION. If needed, please seek past the record to continue consumption.
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1473)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1600(Fetcher.java:1332)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:645)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:606)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1263)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1225)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1201)
    . . .
Caused by: org.apache.kafka.common.record.InvalidRecordException: Incorrect declared batch size, premature EOF reached
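For what it's worth, the exception message itself suggests the usual recovery: seek one position past the record that failed, accepting that the corrupt record is skipped. In the Java client that would be roughly a catch of KafkaException around poll() followed by consumer.seek(tp, consumer.position(tp) + 1), assuming position() still points at the bad record after the failed fetch (the fetch did not advance it). Below is a minimal self-contained sketch of that pattern in Python; FakeConsumer and CorruptRecordError are hypothetical single-partition stand-ins for the real client, not Kafka API:

```python
class CorruptRecordError(Exception):
    """Stand-in for org.apache.kafka.common.KafkaException on a corrupt record."""

class FakeConsumer:
    """Hypothetical single-partition consumer mirroring poll/seek/position semantics."""
    def __init__(self, records, corrupt_offsets):
        self._records = records              # offset -> value
        self._corrupt = set(corrupt_offsets)
        self._position = 0

    def position(self):
        return self._position

    def seek(self, offset):
        self._position = offset

    def poll(self):
        if self._position >= len(self._records):
            return None                      # nothing left to fetch
        if self._position in self._corrupt:
            # A failed fetch does not advance the position.
            raise CorruptRecordError(f"corrupt record at offset {self._position}")
        record = self._records[self._position]
        self._position += 1
        return record

def consume_skipping_corrupt(consumer):
    """Poll until exhausted; on a corrupt record, seek one past the failed position."""
    out = []
    while True:
        try:
            record = consumer.poll()
        except CorruptRecordError:
            # The position still points at the bad record, so skip over it.
            consumer.seek(consumer.position() + 1)
            continue
        if record is None:
            return out
        out.append(record)

consumer = FakeConsumer(["a", "b", "c", "d"], corrupt_offsets=[2])
print(consume_skipping_corrupt(consumer))  # -> ['a', 'b', 'd'] (offset 2 skipped)
```

Note that this only restores consumption; the InvalidRecordException root cause points at a corrupt batch on the broker side, so the skipped data is lost to this consumer.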
Re: Regarding Kafka Connect task to partition relationship for both source and sink connectors
Hello,

Confirmed. The partition is the minimal level of granularity, so having more consumers than partitions of a topic in the same consumer group is useless: with P partitions, maximum parallelism is reached with P consumers.

Regards,

Sébastien.

On Thu, May 30, 2024 at 14:43, Yeikel Santana wrote:
> Hi everyone,
>
> From my understanding, if a topic has n partitions, we can create up to n
> tasks for both the source and sink connectors to achieve the maximum
> parallelism. Adding more tasks would not be beneficial, as they would
> remain idle and be limited to the number of partitions of the topic.
>
> Could you please confirm if this understanding is correct?
>
> If this understanding is incorrect, could you please explain the
> relationship, if any?
>
> Thank you!
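On the Connect side, the knob involved is the connector's tasks.max setting, which caps how many tasks the connector may spawn; for a sink connector reading a topic with P partitions, tasks beyond P would have no partition to consume. A hedged sketch of a sink connector config (the name, topic, and file path are made up for illustration; the connector class is the FileStreamSink example that ships with Kafka), assuming the topic has 6 partitions:

```json
{
  "name": "example-sink",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "topics": "my-topic",
    "tasks.max": "6",
    "file": "/tmp/my-topic.out"
  }
}
```

Setting tasks.max higher than 6 here would not add throughput; it is an upper bound, not a guarantee, and the effective parallelism stays capped by the partition count.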
Kafka rebalance
Hello,

If I have a consumer group with more members than the number of partitions of a topic, will adding a consumer to the group still trigger a rebalancing of partitions within the group? Imagine the partitions are already perfectly balanced, i.e. each consumer has 1 partition. Then rebalancing won't be of any use in theory. So does Kafka still trigger a rebalance?

Thanks,

Sébastien
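As far as I know, any group membership change makes the coordinator run the assignment protocol again, even when the resulting assignment leaves the new member idle. The scenario can be sketched with a toy version of the range assignor; the function and its simplified logic are mine for illustration, not Kafka's actual code:

```python
def range_assign(members, partitions):
    """Toy range assignor: sort members, hand out contiguous chunks of
    partitions; members beyond the partition count get nothing."""
    members = sorted(members)
    base, extra = divmod(len(partitions), len(members))
    assignment, idx = {}, 0
    for i, m in enumerate(members):
        count = base + (1 if i < extra else 0)
        assignment[m] = partitions[idx:idx + count]
        idx += count
    return assignment

# 3 partitions, 3 members: perfectly balanced, one partition each.
print(range_assign(["c1", "c2", "c3"], [0, 1, 2]))
# A 4th member joining still forces the coordinator to recompute the
# assignment (a rebalance), even though c4 ends up with no partition.
print(range_assign(["c1", "c2", "c3", "c4"], [0, 1, 2]))
```

So the rebalance itself still happens on every join; whether the existing members actually have partitions moved depends on the assignor (the cooperative/incremental protocol in newer clients avoids needless revocations, but the membership change is still processed).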