[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409181#comment-15409181 ]
ASF GitHub Bot commented on FLINK-4035:
---------------------------------------

Github user tzulitai commented on the issue:

    https://github.com/apache/flink/pull/2231

    Finished first review of the code. Let me summarize the parts of the Kafka 0.10 API that require us to have a separate module:

    1. `ConsumerRecord` in 0.10 has a new `ConsumerRecord#timestamp()` method to retrieve Kafka server-side timestamps. If we want to attach this timestamp to records as the default event time in the future, we will definitely need a separate module (the 0.10 timestamp feature is not included in this PR). A minimal sketch of the new accessor is included after the quoted issue description below.

    2. `PartitionMetaData` (used in the `KafkaTestEnvironmentImpl`s) has a breaking change in the APIs for retrieving partition info, so we cannot simply bump the version either.

    Other than the above, the rest of the code is the same as the 0.9 connector, or the changes are unrelated to Kafka API changes.

> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. Published messages now include timestamps, and compressed messages now include relative offsets. As it stands, brokers must decompress publisher-compressed messages, assign offsets to them, and recompress them, which is wasteful and makes it less likely that compression will be used at all.
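As context for point 1 above, here is a minimal sketch of reading the server-side timestamp through the 0.10 consumer API. It is not part of this PR; the broker address, group id, and topic name are placeholders, and the class is purely illustrative:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TimestampProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "timestamp-probe");          // placeholder group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                // timestamp() and timestampType() are new in the 0.10 client;
                // the 0.9 ConsumerRecord has no equivalent, which is one reason
                // a separate 0.10 connector module is needed.
                System.out.printf("offset=%d timestamp=%d type=%s%n",
                        record.offset(), record.timestamp(), record.timestampType());
            }
        }
    }
}
```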