[ https://issues.apache.org/jira/browse/KAFKA-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167357#comment-17167357 ]
Tommy Becker commented on KAFKA-10324:
--------------------------------------

We have some legacy applications, whose consumer versions are not easily upgraded, that are hitting this issue. It is hard to diagnose because the consumers do not produce a proper error message (or any message at all, in the case of the 0.10.1.0 consumer), and because it depends on how messages are batched, which is opaque to clients.

> Pre-0.11 consumers can get stuck when messages are downconverted from V2 format
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-10324
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10324
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Tommy Becker
>            Priority: Major
>
> As noted in KAFKA-5443, the V2 message format preserves a batch's lastOffset even if the record at that offset is removed by log compaction. If a pre-0.11 consumer seeks to such an offset and issues a fetch, it receives an empty batch, because offsets prior to the requested one are filtered out during down-conversion. KAFKA-5443 added consumer-side logic to advance the fetch offset in this case, but that still leaves old consumers unable to consume these topics.
> The exact behavior varies by consumer version. The 0.10.0.0 consumer throws RecordTooLargeException and dies, believing the record was not returned because it was too large. The 0.10.1.0 consumer simply spins, fetching the same empty batch over and over.
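
For readers unfamiliar with the consumer-side fix referenced above, the sketch below is a simplified, hypothetical model of the offset-advance step that KAFKA-5443 introduced; it is not the actual Fetcher code, and the FetchedBatch type and nextFetchOffset helper are illustrative stand-ins. It only shows why a consumer without this logic keeps re-fetching the same offset when every record in a down-converted batch has been filtered out.

{code:java}
import java.util.List;

public class EmptyBatchAdvance {
    // Minimal stand-in for one fetched batch: the record offsets that survived
    // down-conversion filtering, plus the batch-level lastOffset carried by the
    // V2 batch header (preserved even after compaction removes records).
    record FetchedBatch(List<Long> recordOffsets, long lastOffset) {}

    // Returns the next fetch offset after processing one batch. When the batch
    // comes back empty, we still advance past its lastOffset, which is roughly
    // the consumer-side behavior added in KAFKA-5443.
    static long nextFetchOffset(long currentFetchOffset, FetchedBatch batch) {
        if (!batch.recordOffsets().isEmpty()) {
            // Normal case: continue after the last record actually received.
            long last = batch.recordOffsets().get(batch.recordOffsets().size() - 1);
            return last + 1;
        }
        // Empty batch: without this step, a pre-0.11 consumer re-fetches the
        // same offset forever (0.10.1.0) or concludes the record was too large
        // to return (0.10.0.0). Skipping to lastOffset + 1 breaks the loop.
        return Math.max(currentFetchOffset, batch.lastOffset() + 1);
    }

    public static void main(String[] args) {
        // A compacted batch whose surviving records all fall below the
        // requested fetch offset, so nothing is returned after filtering.
        FetchedBatch empty = new FetchedBatch(List.of(), 41L);
        System.out.println(nextFetchOffset(40L, empty)); // prints 42
    }
}
{code}

Pre-0.11 consumers have no equivalent of this step, which is why they either die or spin as described in the issue.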