[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433195#comment-15433195
 ] 

ASF GitHub Bot commented on FLINK-4035:
---------------------------------------

Github user eliaslevy commented on the issue:

    https://github.com/apache/flink/pull/2369
  
    This may be the wrong place to bring this up, but since you are discussing 
changes to the Kafka connector API, I think it is worth raising here.
    
    As I've pointed out elsewhere, the current connector API makes it difficult 
to use Kafka's native serializers and deserializers 
(`org.apache.kafka.common.serialization.[Serializer, Deserializer]`), which can 
be configured via the Kafka consumer and producer configs.
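    Just to illustrate what I mean (the topic name and broker address below are 
placeholders), this is roughly how the plain Kafka producer client picks up a 
native serializer purely from its configs, something the Flink connector 
currently gives you no way to exploit:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NativeSerializerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The client instantiates and configures the serializers itself; any
        // Serializer implementation (e.g. Confluent's KafkaAvroSerializer)
        // can be dropped in via these two settings.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
```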
    
    The connector code assumes that `ConsumerRecord`s and `ProducerRecord`s 
are both parameterized as `<byte[], byte[]>`, with the Flink serdes performing 
the conversion to/from `byte[]`.  This makes it difficult to use Confluent's 
`KafkaAvroSerializer` and `KafkaAvroDecoder`, which rely on the Confluent 
[schema 
registry](http://docs.confluent.io/3.0.0/schema-registry/docs/serializer-formatter.html#serializer).
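    For context, here is a rough sketch (the class name and details are mine, 
not anything that exists in Flink) of the kind of adapter one has to write 
today to reuse a Kafka-native `Deserializer` behind Flink's 
`DeserializationSchema`.  Note that the topic name the Kafka API expects is 
simply not available to the Flink schema:

```java
import java.util.HashMap;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.kafka.common.serialization.Deserializer;

/**
 * Hypothetical adapter: delegates the byte[] -> T conversion that Flink's
 * DeserializationSchema performs to a Kafka-native Deserializer.
 */
public class KafkaDeserializerWrappingSchema<T> implements DeserializationSchema<T> {

    private final Class<? extends Deserializer<T>> deserializerClass;
    private final HashMap<String, String> config;
    private final Class<T> typeClass;

    private transient Deserializer<T> deserializer;

    public KafkaDeserializerWrappingSchema(Class<? extends Deserializer<T>> deserializerClass,
                                           HashMap<String, String> config,
                                           Class<T> typeClass) {
        this.deserializerClass = deserializerClass;
        this.config = config;
        this.typeClass = typeClass;
    }

    @Override
    public T deserialize(byte[] message) {
        if (deserializer == null) {
            // Lazily create and configure the Kafka deserializer on the task side.
            try {
                deserializer = deserializerClass.newInstance();
            } catch (InstantiationException | IllegalAccessException e) {
                throw new RuntimeException("Could not create Kafka deserializer", e);
            }
            deserializer.configure(config, false);
        }
        // The Kafka Deserializer API expects the topic name, but the Flink
        // schema never sees it, so a placeholder has to be passed.
        return deserializer.deserialize("unknown-topic", message);
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return false;
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return TypeExtractor.getForClass(typeClass);
    }
}
```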
    
    If you are going to change the connector API, it would be good to tackle 
this issue at the same time and avoid another round of API changes later.  The 
connector should allow the Kafka consumer and producer to be type-parameterized, 
and should use a pass-through Flink serde by default, as sketched below.
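    As a sketch of what I mean by a pass-through serde (again, nothing here is 
existing Flink code, just an illustration), the default schema would simply 
hand the bytes through unchanged and let the Kafka-native serializers 
configured on the client do the real work:

```java
import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.flink.streaming.util.serialization.SerializationSchema;

/**
 * Hypothetical pass-through serde: hands byte[] records through untouched on
 * both the consumer and the producer side.
 */
public class PassThroughSchema implements DeserializationSchema<byte[]>, SerializationSchema<byte[]> {

    @Override
    public byte[] deserialize(byte[] message) {
        return message;
    }

    @Override
    public boolean isEndOfStream(byte[] nextElement) {
        return false;
    }

    @Override
    public byte[] serialize(byte[] element) {
        return element;
    }

    @Override
    public TypeInformation<byte[]> getProducedType() {
        return PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO;
    }
}
```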


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Assignee: Robert Metzger
>            Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.  
> Published messages now include timestamps, and compressed messages now include 
> relative offsets.  As it stands, brokers must decompress producer-compressed 
> messages, assign offsets to them, and recompress them, which is wasteful and 
> makes it less likely that compression will be used at all.



