Hi team, I am facing this exception:

org.apache.kafka.common.KafkaException: Received exception when fetching the next record from topic_log-3. If needed, please seek past the record to continue consumption.
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1076)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:944)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:567)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:528)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:257)
Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14)

Also, when I consume messages with the console consumer from my Ubuntu terminal, I get the same error. How can I skip this corrupt record?
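Since the exception message says to seek past the record, here is a rough sketch of what I think that would look like with a plain Java KafkaConsumer (bootstrap servers, group id, and deserializers are placeholders for my setup; the partition is hard-coded from the error message). Is this the right approach, and is there an equivalent for the Flink connector, which manages its offsets internally?

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptRecordConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "skip-corrupt-demo");       // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Partition reported in the exception message ("topic_log-3").
        TopicPartition badPartition = new TopicPartition("topic_log", 3);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic_log"));

            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                } catch (KafkaException e) {
                    // Assumption: when poll() fails on the corrupt record, the consumer's
                    // position on that partition (which must be assigned to this consumer)
                    // still points at the bad offset, so seeking one past it skips it.
                    long badOffset = consumer.position(badPartition);
                    System.err.println("Skipping corrupt record at offset " + badOffset
                            + " on " + badPartition + ": " + e.getMessage());
                    consumer.seek(badPartition, badOffset + 1);
                }
            }
        }
    }
}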