[ https://issues.apache.org/jira/browse/KAFKA-1797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319273#comment-14319273 ]
Jiangjie Qin commented on KAFKA-1797:
-------------------------------------

I just realized that we removed the default value of key.serializer and value.serializer in ProducerConfig. This now requires users to spell out the full class name "org.apache.kafka.common.serialization.ByteArraySerializer" in the property file. I'm wondering why we removed the default value, since users who are just getting started with Kafka are unlikely to know the class name.

> add the serializer/deserializer api to the new java client
> ----------------------------------------------------------
>
>                 Key: KAFKA-1797
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1797
>             Project: Kafka
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 0.8.2.0
>            Reporter: Jun Rao
>            Assignee: Jun Rao
>             Fix For: 0.8.2.0
>
>         Attachments: kafka-1797.patch, kafka-1797.patch, kafka-1797.patch,
> kafka-1797.patch, kafka-1797_2014-12-09_18:48:33.patch,
> kafka-1797_2014-12-15_15:36:24.patch, kafka-1797_2014-12-17_09:47:45.patch
>
>
> Currently, the new java clients take a byte array for both the key and the
> value. While this api is simple, it pushes the serialization/deserialization
> logic into the application. This makes it hard to reason about what type of
> data flows through Kafka and also makes it hard to share an implementation of
> the serializer/deserializer. For example, to support Avro, the serialization
> logic could be quite involved, since it might need to register the Avro schema
> in some remote registry and maintain a local schema cache. Without a
> serialization api, it's impossible to share such an implementation so that
> people can easily reuse it.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
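For readers following the discussion: the API this issue adds lets applications plug in a typed serializer instead of hand-converting keys and values to byte arrays. A minimal sketch of what implementing such a serializer looks like is below. To keep the example self-contained, it declares a local `Serializer` interface mirroring the shape of the real `org.apache.kafka.common.serialization.Serializer` (configure/serialize/close); the local interface and the `SimpleStringSerializer` class are illustrative assumptions, not code from the patch.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Local stand-in for org.apache.kafka.common.serialization.Serializer,
// declared here only so this sketch compiles without the Kafka jar.
interface Serializer<T> {
    void configure(Map<String, ?> configs, boolean isKey);
    byte[] serialize(String topic, T data);
    void close();
}

// A minimal serializer sketch: encodes a String value as UTF-8 bytes.
class SimpleStringSerializer implements Serializer<String> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // No configuration needed for this simple sketch.
    }

    @Override
    public byte[] serialize(String topic, String data) {
        // Null passes through as null so tombstone-style records still work.
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}

public class SerializerDemo {
    public static void main(String[] args) {
        Serializer<String> ser = new SimpleStringSerializer();
        byte[] bytes = ser.serialize("my-topic", "hello");
        System.out.println(bytes.length); // prints 5
        ser.close();
    }
}
```

As the comment above notes, without a default the producer must be told which serializer to use, e.g. `key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer` in the producer properties.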