[
https://issues.apache.org/jira/browse/KAFKA-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020751#comment-16020751
]
ASF GitHub Bot commented on KAFKA-4935:
---------------------------------------
GitHub user hachikuji opened a pull request:
https://github.com/apache/kafka/pull/3123
KAFKA-4935 [WIP]: Deprecate client checksum API and compute lazy partial
checksum for magic v2
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/hachikuji/kafka KAFKA-4935
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/kafka/pull/3123.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #3123
----
----
> Consider disabling record level CRC checks for message format V2
> ----------------------------------------------------------------
>
> Key: KAFKA-4935
> URL: https://issues.apache.org/jira/browse/KAFKA-4935
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Apurva Mehta
> Assignee: Jason Gustafson
> Priority: Blocker
> Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> With the new message format proposed in KIP-98, the record-level CRC has been
> moved to the batch header.
> Because we expose the record-level CRC through `RecordMetadata` and
> `ConsumerRecord`, we currently compute it eagerly from the key, value and
> timestamp, even though those checksum accessors are rarely used. Ideally, we'd deprecate
> the relevant methods in `RecordMetadata` and `ConsumerRecord` while making
> the CRC computation lazy. This seems pretty hard to achieve in the Producer
> without increasing memory retention, but it may be possible to do in the
> Consumer.
> An alternative option is to return the batch CRC from the relevant methods.
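For readers following the thread, a rough sketch of what a lazily computed, deprecated
record-level checksum could look like on the consumer side is below. This is only an
illustration of the idea described in the issue, not the code in the pull request; the
class and helper names are hypothetical. It caches a CRC32 over the timestamp, key and
value the first time the deprecated accessor is called, so the cost is paid only if a
caller actually asks for the record-level checksum.

    // Illustrative sketch only; names are hypothetical and this is not
    // Kafka's actual implementation of KAFKA-4935.
    import java.util.zip.CRC32;

    public class LazyChecksumRecord {
        private final byte[] key;
        private final byte[] value;
        private final long timestamp;
        private Long checksum;  // null until first access, then cached

        public LazyChecksumRecord(byte[] key, byte[] value, long timestamp) {
            this.key = key;
            this.value = value;
            this.timestamp = timestamp;
        }

        /**
         * Record-level CRC, computed lazily from the timestamp, key and value.
         * Deprecated because message format v2 only carries a batch-level CRC.
         */
        @Deprecated
        public synchronized long checksum() {
            if (checksum == null) {
                CRC32 crc = new CRC32();
                crc.update(longToBytes(timestamp));
                if (key != null)
                    crc.update(key);
                if (value != null)
                    crc.update(value);
                checksum = crc.getValue();
            }
            return checksum;
        }

        // Big-endian encoding of the timestamp so it can feed the CRC.
        private static byte[] longToBytes(long v) {
            byte[] out = new byte[8];
            for (int i = 7; i >= 0; i--) {
                out[i] = (byte) (v & 0xffL);
                v >>>= 8;
            }
            return out;
        }
    }

As the issue notes, this style of laziness is easier on the consumer side, where the
deserialized key and value are retained anyway; in the producer, holding onto them just
to serve a rarely used checksum accessor would increase memory retention.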
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)