[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376410#comment-15376410 ]
ASF GitHub Bot commented on FLINK-4035:
---------------------------------------

Github user tzulitai commented on the issue:

    https://github.com/apache/flink/pull/2231

    Hi @radekg, thank you for opening a PR for this!

    From a first look, there don't seem to be many differences between the code of `flink-connector-kafka-0.9` and this PR. Also, from the original discussion / comments in the JIRA, the Kafka API doesn't seem to have changed between 0.9 and 0.10, so it might be possible to let the Kafka 0.9 connector use the 0.10 client simply by putting the Kafka 0.10 dependency into the user pom (a sketch of such an override appears below). May I ask whether you have already tried this approach?

    Also:

    > At The Weather Company we bumped into a problem while trying to use Flink with Kafka 0.10.x.

    What was the problem? A description would be helpful for deciding how to proceed with this :)

    There's another contributor who was trying this out; I'll also ask for his feedback on this in the JIRA.

> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer. Published messages now include timestamps, and compressed messages now include relative offsets. As it stands, brokers must decompress producer-compressed messages, assign offsets to them, and recompress them, which is wasteful and makes it less likely that compression will be used at all.
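For reference, the pom override suggested in the comment above could look roughly like the following. This is a minimal sketch, not a tested configuration: the connector artifactId, Scala suffix, and version numbers are assumptions chosen to match the affected Flink version in this issue.

```xml
<!-- Sketch: let the Kafka 0.9 connector run against the 0.10 client.
     The artifactId and version values here are illustrative assumptions. -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.9_2.11</artifactId>
    <version>1.0.3</version>
    <exclusions>
      <!-- drop the 0.9 kafka-clients pulled in transitively -->
      <exclusion>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <!-- pin the 0.10 client explicitly in the user pom -->
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
  </dependency>
</dependencies>
```

Since a version declared directly in the user pom takes precedence over a transitive one under Maven's nearest-wins resolution, the exclusion is not strictly required; it just makes the override explicit.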
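To illustrate the producer-side timestamp change the issue description refers to: since 0.10.0.0 a record can carry an explicit create-time timestamp. A small sketch against the 0.10 `kafka-clients` producer API follows; the broker address and topic name are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimestampedProduceExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Since Kafka 0.10.0.0, ProducerRecord accepts an explicit timestamp:
            // (topic, partition, timestamp, key, value). Passing null for the
            // partition leaves partition assignment to the default partitioner.
            long timestamp = System.currentTimeMillis();
            producer.send(new ProducerRecord<>(
                    "example-topic", null, timestamp, "key", "value"));
        }
    }
}
```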