[ https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=398062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398062 ]
ASF GitHub Bot logged work on HIVE-21218:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Mar/20 01:52
            Start Date: 05/Mar/20 01:52
    Worklog Time Spent: 10m
      Work Description: davidov541 commented on pull request #933: HIVE-21218: Adding support for Confluent Kafka Avro message format
URL: https://github.com/apache/hive/pull/933#discussion_r388037768

 ##########
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##########
 @@ -369,6 +402,26 @@ private SubStructObjectInspector(StructObjectInspector baseOI, int toIndex) {
     }
   }
 
+  /**
+   * The converter reads bytes from the Kafka message and skips the first {@code skipBytes} bytes.
+   *
+   * For example, the Confluent Avro serializer prepends 5 bytes to the message:
+   * a magic byte (0x00) followed by the schema ID as a 4-byte integer.
+   */
+  static class AvroSkipBytesConverter extends AvroBytesConverter {
+    private final int skipBytes;
+
+    AvroSkipBytesConverter(Schema schema, int skipBytes) {
+      super(schema);
+      this.skipBytes = skipBytes;
+    }
+
+    @Override
+    Decoder getDecoder(byte[] value) {
+      return DecoderFactory.get().binaryDecoder(value, this.skipBytes, value.length - this.skipBytes, null);

Review comment:
   BinaryDecoder already throws a nice ArrayIndexOutOfBoundsException in this case, so I'm going to update the code to catch that, wrap it in a SerDeException, and keep going.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 398062)
            Time Spent: 11h 20m  (was: 11h 10m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> -----------------------------------------------------------------------
>
>                 Key: HIVE-21218
>                 URL: https://issues.apache.org/jira/browse/HIVE-21218
>             Project: Hive
>          Issue Type: Bug
>          Components: kafka integration, Serializers/Deserializers
>    Affects Versions: 3.1.1
>            Reporter: Milan Baran
>            Assignee: David McGinnis
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, HIVE-21218.4.patch, HIVE-21218.5.patch, HIVE-21218.patch
>
>          Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> According to [Google groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A], the Confluent Avro serializer uses a proprietary format for the Kafka value:
> <magic_byte 0x00><4 bytes of schema ID><regular Avro bytes for an object that conforms to the schema>.
> This format causes no problem for the Confluent Kafka deserializer, which respects the format. For the Hive Kafka handler, however, it is difficult to deserialize the Kafka value correctly, because Hive uses its own deserializer from bytes to objects and ignores the Kafka consumer ser/deser classes provided via table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support the Schema Registry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
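
Illustrative sketch (not part of the patch)
-------------------------------------------

To show the Confluent wire format from the issue description and the error handling discussed in the review comment in one place, here is a minimal, self-contained Java sketch. It is not Hive's KafkaSerDe code: the class name ConfluentAvroValueDecoder is hypothetical, it assumes the reader already knows the writer schema rather than resolving the 4-byte schema ID against a Schema Registry, and it wraps decode failures in an IOException instead of the SerDeException the review comment refers to.

import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

public class ConfluentAvroValueDecoder {

  private static final int HEADER_BYTES = 5;        // 1 magic byte + 4-byte schema ID
  private static final byte CONFLUENT_MAGIC = 0x00;

  private final GenericDatumReader<GenericRecord> datumReader;

  public ConfluentAvroValueDecoder(Schema readerSchema) {
    this.datumReader = new GenericDatumReader<>(readerSchema);
  }

  public GenericRecord decode(byte[] value) throws IOException {
    if (value == null || value.length < HEADER_BYTES) {
      throw new IOException("Kafka value is shorter than the 5-byte Confluent header");
    }
    if (value[0] != CONFLUENT_MAGIC) {
      throw new IOException("Unexpected magic byte: " + value[0]);
    }
    // The schema ID is only reported here; a full implementation would resolve it
    // against the Schema Registry to obtain the writer schema.
    int schemaId = ByteBuffer.wrap(value, 1, 4).getInt();

    try {
      // Same call shape as the patch: start decoding after the skipped header bytes.
      BinaryDecoder decoder =
          DecoderFactory.get().binaryDecoder(value, HEADER_BYTES, value.length - HEADER_BYTES, null);
      return datumReader.read(null, decoder);
    } catch (ArrayIndexOutOfBoundsException e) {
      // Mirrors the review comment: surface truncated or corrupt payloads as a
      // checked exception instead of letting the runtime exception escape.
      throw new IOException("Corrupt Confluent Avro payload for schema id " + schemaId, e);
    }
  }
}

In Hive itself the equivalent logic lives in KafkaSerDe's AvroSkipBytesConverter (shown in the diff above), where the reviewer plans to catch the ArrayIndexOutOfBoundsException thrown by BinaryDecoder and rethrow it as a SerDeException.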