[ https://issues.apache.org/jira/browse/HIVE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220485#comment-14220485 ]
Brock Noland commented on HIVE-8909:
------------------------------------

parquet_array_null_element
{noformat}
Caused by: parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file pfile:/Users/noland/workspaces/hive-apache/hive/itests/qtest/target/warehouse/parquet_array_null_element/000000_0
	at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:213)
	at parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:204)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:102)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:71)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:71)
	at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:504)
	at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:594)
	... 29 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveStructConverter.set(HiveStructConverter.java:96)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveCollectionConverter.end(HiveCollectionConverter.java:65)
	at parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:410)
	at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:194)
	... 35 more
{noformat}

> Hive doesn't correctly read Parquet nested types
> ------------------------------------------------
>
>              Key: HIVE-8909
>              URL: https://issues.apache.org/jira/browse/HIVE-8909
>          Project: Hive
>       Issue Type: Bug
> Affects Versions: 0.13.1
>         Reporter: Ryan Blue
>         Assignee: Ryan Blue
>      Attachments: HIVE-8909-1.patch, HIVE-8909-2.patch, HIVE-8909.2.patch, HIVE-8909.3.patch, parquet-test-data.tar.gz
>
> Parquet's Avro and Thrift object models don't produce the same Parquet type representation for lists and maps that Hive does. In the Parquet community, we've defined what should be written, along with backward-compatibility rules for existing data written by parquet-avro and parquet-thrift, in PARQUET-113. We need to implement those rules in the Hive Converter classes.
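The read-side rule from PARQUET-113 comes down to deciding whether the repeated field inside a LIST-annotated group is the element itself (the legacy parquet-avro and parquet-thrift layouts) or a synthetic wrapper whose single child is the element (the standard three-level layout). Below is a minimal sketch of that check against the old {{parquet.schema}} API; the class name {{ListCompatSketch}}, the helper name {{isElementType}}, and the {{parentName}} parameter are illustrative only and are not taken from the committed patch.

{code:java}
import parquet.schema.GroupType;
import parquet.schema.Type;

/**
 * Sketch of the PARQUET-113 backward-compatibility rule for lists:
 * given the repeated field nested inside a LIST-annotated group, decide
 * whether that repeated field is the element itself (legacy layouts) or
 * a wrapper group whose single child is the element (3-level layout).
 */
public class ListCompatSketch {

  // Returns true when the repeated type should be treated as the element
  // type itself rather than as a synthetic list wrapper.
  static boolean isElementType(Type repeatedType, String parentName) {
    if (repeatedType.isPrimitive()) {
      // e.g. "repeated int32 num;" -- the repeated field is the element
      return true;
    }
    GroupType group = repeatedType.asGroupType();
    if (group.getFieldCount() > 1) {
      // a repeated group with several fields is the element struct itself
      return true;
    }
    if (group.getName().equals("array")) {
      // legacy parquet-avro layout: "repeated group array { ... }"
      return true;
    }
    if (group.getName().equals(parentName + "_tuple")) {
      // legacy parquet-thrift layout: "repeated group <name>_tuple { ... }"
      return true;
    }
    // otherwise assume the standard 3-level layout:
    //   optional group my_list (LIST) {
    //     repeated group list { optional binary element (UTF8); }
    //   }
    return false;
  }
}
{code}

A converter built on this rule would read the element from the repeated field directly in the legacy cases, and from the wrapper group's single child in the three-level case.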