[ https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026977#comment-16026977 ]
Guozhang Wang commented on KAFKA-5211:
--------------------------------------

For this specific issue, Streams should be independent of whichever behavior the Consumer ends up with, since Streams always uses {{Consumer<byte[], byte[]>}} and performs deserialization after polling the records. This is because the embedded consumer within Streams may fetch from topics with different serde mechanisms, so it needs to deserialize based on the topic. A minimal sketch of this pattern follows the quoted issue below.

> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-5211
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5211
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Jiangjie Qin
>            Assignee: Jiangjie Qin
>              Labels: clients, consumer
>             Fix For: 0.11.0.0
>
>
> In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw
> an exception and block on that corrupted record. In the latest trunk this
> behavior has changed to skip the corrupted record (which is the old consumer
> behavior). With KIP-98, skipping corrupted messages would be a little
> dangerous, as the message could be a control message for a transaction. We
> should fix the issue to let the KafkaConsumer block on the corrupted messages.
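A minimal sketch of the per-topic deserialization pattern described in the comment above (this is not Streams' actual internals): the consumer is configured with {{ByteArrayDeserializer}} for keys and values, and values are deserialized only after {{poll()}} returns, using a deserializer looked up by topic. The topic names and the serde mapping here are hypothetical.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PerTopicDeserialization {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "per-topic-demo");
        // The consumer itself only ever sees raw bytes.
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        // Hypothetical per-topic value formats: one topic carries strings,
        // the other longs. Streams resolves serdes per topic in a similar spirit.
        Map<String, Deserializer<?>> valueDeserializers = new HashMap<>();
        valueDeserializers.put("text-topic", new StringDeserializer());
        valueDeserializers.put("count-topic", new LongDeserializer());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("text-topic", "count-topic"));
            // Records come back as raw bytes regardless of topic...
            ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<byte[], byte[]> record : records) {
                // ...and are deserialized only now, with the topic's own deserializer.
                Deserializer<?> valueDeser = valueDeserializers.get(record.topic());
                Object value = valueDeser.deserialize(record.topic(), record.value());
                System.out.printf("%s[%d] -> %s%n",
                                  record.topic(), record.offset(), value);
            }
        }
    }
}
{code}

Because deserialization happens outside {{poll()}}, a deserialization failure in this pattern surfaces in application code rather than inside the consumer, which is why Streams is unaffected by whichever behavior the Consumer adopts for this issue.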
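And a caller-side sketch of the blocking behavior the issue description asks for, assuming the 0.10.2-style behavior where {{poll()}} raises a {{KafkaException}} and leaves the consumer positioned on the corrupted record; the topic name and the handling shown are illustrative, not prescribed by the issue.

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;

public class BlockOnCorruptedRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "corrupt-record-demo");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            while (true) {
                try {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                    records.forEach(r -> System.out.println(r.offset()));
                } catch (KafkaException e) {
                    // With the blocking behavior, the consumer stays positioned on
                    // the corrupted record: the next poll() would hit it again
                    // rather than silently skipping it. The application must decide
                    // what to do (alert, seek past it, or stop, as here).
                    System.err.println("Corrupted record encountered: " + e.getMessage());
                    break;
                }
            }
        }
    }
}
{code}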