[ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=196122&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-196122
 ]

ASF GitHub Bot logged work on HIVE-21218:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Feb/19 08:48
            Start Date: 08/Feb/19 08:48
    Worklog Time Spent: 10m 
      Work Description: cricket007 commented on pull request #526: HIVE-21218: 
KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r254991986
 
 

 ##########
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##########
 @@ -369,6 +379,20 @@ private SubStructObjectInspector(StructObjectInspector baseOI, int toIndex) {
     }
   }
 
 +  static class ConfluentAvroBytesConverter extends AvroBytesConverter {
 +    ConfluentAvroBytesConverter(Schema schema) {
 +      super(schema);
 +    }
 +
 +    @Override
 +    Decoder getDecoder(byte[] value) {
 +      /**
 +       * Confluent prepends a magic byte (0x00) and a 4-byte schema ID (an int)
 +       * to the Avro-encoded value, so decoding starts 5 bytes in.
 +       */
 +      return DecoderFactory.get().binaryDecoder(value, 5, value.length - 5, null);
 
 Review comment:
   > About the compatibility, there are several compatibility modes in schema
   registry. It can be set for each topic and you can set BACKWARD compat
   
   Right, but Hive will likely only work properly if the compatibility mode really
   is set to BACKWARD, or, at the very least, if the provided schema can read all
   the data for the offsets that are provided.
   
   > with literally different schemas
   
   Maybe if more schema data were exposed, this would be easier to handle? For
   example, the Avro namespace + top-level record name?
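   
   For illustration, a minimal sketch (not part of the patch; the helper class
   name is hypothetical) of reading the schema ID out of that 5-byte prefix,
   which is what any lookup of extra schema data such as the namespace and
   top-level record name would need as a starting point:
   
       import java.nio.ByteBuffer;
   
       final class ConfluentPrefix {
         static final byte MAGIC_BYTE = 0x0;
   
         /** Returns the 4-byte schema ID that follows the magic byte in the Confluent wire format. */
         static int schemaId(byte[] value) {
           ByteBuffer buffer = ByteBuffer.wrap(value);
           if (buffer.remaining() < 5 || buffer.get() != MAGIC_BYTE) {
             throw new IllegalArgumentException("value is not in Confluent wire format");
           }
           return buffer.getInt();
         }
       }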
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 196122)
    Time Spent: 1h 10m  (was: 1h)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> -----------------------------------------------------------------------
>
>                 Key: HIVE-21218
>                 URL: https://issues.apache.org/jira/browse/HIVE-21218
>             Project: Hive
>          Issue Type: Bug
>          Components: kafka integration, Serializers/Deserializers
>    Affects Versions: 3.1.1
>            Reporter: Milan Baran
>            Assignee: Milan Baran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> According to [Google groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A],
> the Confluent Avro serializer uses a proprietary format for the Kafka value:
> <magic_byte 0x00><4 bytes of schema ID><regular avro bytes for object that 
> conforms to schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, for the Hive Kafka handler it is a bit of 
> a problem to correctly deserialize the Kafka value, because Hive uses a custom 
> deserializer from bytes to objects and ignores the Kafka consumer ser/de 
> classes provided via table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.
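
For illustration, a minimal sketch of decoding such a value with plain Avro,
assuming the 5-byte prefix described above and a reader schema that can read
the writer's data (the class and method names are hypothetical, not Hive or
Confluent APIs):

    import java.io.IOException;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.DecoderFactory;

    public final class ConfluentAvroDecodeSketch {
      // Layout: <magic byte 0x00><4-byte schema ID><Avro-encoded payload>
      private static final int PREFIX_LENGTH = 5;

      /** Decodes a Confluent-framed Kafka value using the given reader schema. */
      public static GenericRecord decode(byte[] value, Schema readerSchema) throws IOException {
        // Skip the magic byte and schema ID, then decode the remaining Avro bytes.
        BinaryDecoder decoder = DecoderFactory.get()
            .binaryDecoder(value, PREFIX_LENGTH, value.length - PREFIX_LENGTH, null);
        return new GenericDatumReader<GenericRecord>(readerSchema).read(null, decoder);
      }
    }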



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
