[ https://issues.apache.org/jira/browse/KAFKA-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978384#comment-15978384 ]
Ismael Juma commented on KAFKA-4935:
------------------------------------

While doing this, we should also consider whether to do the CRC checks at the record batch level for older message formats too. For the uncompressed path, this is equivalent to doing it at the record level; for the compressed path, it would be more efficient, since there would be fewer CRCs and they would be computed on the compressed payload.

> Consider disabling record level CRC checks for message format V2
> ----------------------------------------------------------------
>
>                 Key: KAFKA-4935
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4935
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Apurva Mehta
>             Fix For: 0.11.0.0
>
>
> With the new message format proposed in KIP-98, the record-level CRC has been
> moved to the batch header.
> Because we expose the record-level CRC in `RecordMetadata` and
> `ConsumerRecord`, we currently compute it eagerly based on the key, value and
> timestamp even though these methods are rarely used. Ideally, we'd deprecate
> the relevant methods in `RecordMetadata` and `ConsumerRecord` while making
> the CRC computation lazy. This seems pretty hard to achieve in the Producer
> without increasing memory retention, but it may be possible to do in the
> Consumer.
> An alternative option is to return the batch CRC from the relevant methods.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
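The lazy-CRC idea discussed above can be sketched roughly as follows. This is a minimal illustration, not Kafka's actual implementation: the `LazyRecordCrc` class, its fields, and the byte layout fed to the checksum are all hypothetical; the real record CRC covers a specific serialized wire format. The point is only that the checksum is deferred until first requested rather than computed eagerly on construction.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Hypothetical sketch of deferring the record-level CRC: the checksum is
// computed on first call to checksum() and cached, so records whose CRC
// accessor is never used pay no CRC cost.
public class LazyRecordCrc {
    private final ByteBuffer key;
    private final ByteBuffer value;
    private final long timestamp;
    private Long crc; // null until first requested

    public LazyRecordCrc(ByteBuffer key, ByteBuffer value, long timestamp) {
        this.key = key;
        this.value = value;
        this.timestamp = timestamp;
    }

    public long checksum() {
        if (crc == null) {
            CRC32 c = new CRC32();
            // Illustrative input only; the real CRC is over the serialized record.
            c.update(ByteBuffer.allocate(8).putLong(timestamp).array());
            if (key != null)
                c.update(key.duplicate());   // duplicate() leaves the caller's position untouched
            if (value != null)
                c.update(value.duplicate());
            crc = c.getValue();
        }
        return crc;
    }
}
```

As the description notes, the catch on the Producer side is that making this lazy forces the key and value buffers to stay reachable until the CRC is (possibly never) read, which increases memory retention.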