Gabriel C Balan created HIVE-13114:
--------------------------------------

             Summary: parquet filter fails for type float when hive.optimize.index.filter=true
                 Key: HIVE-13114
                 URL: https://issues.apache.org/jira/browse/HIVE-13114
             Project: Hive
          Issue Type: Bug
    Affects Versions: 1.2.1
            Reporter: Gabriel C Balan
            Priority: Trivial
Hive fails when selecting from a table 'stored as parquet' with a row filter based on a float column and hive.optimize.index.filter=true. The following example fails in Hive 1.2.1 (HDP 2.3.2), but works fine in Hive 1.1.0 (CDH 5.5.0):

{code}
create table p(f float) stored as parquet;
insert into table p values (1), (2), (3);
select * from p where f >= 2;
set hive.optimize.index.filter=true;
select * from p where f >= 2;
{code}

The first select query works fine; the second fails with:

{code}
Failed with exception java.io.IOException:java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in file metadata.
Column f is of type: FullTypeDescriptor(PrimitiveType: FLOAT, OriginalType: null)
Valid types for this column are: [class java.lang.Float]
{code}
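For context, the error suggests that the predicate pushed down for "f >= 2" is typed as java.lang.Double, while the Parquet file schema declares column f as FLOAT. The snippet below is only a minimal, hypothetical sketch of that mismatch expressed against Parquet's FilterApi (the parquet.filter2.predicate classes visible in the stack trace); it is not the actual Hive code path, and the class and variable names are made up for illustration.

{code}
// Hypothetical sketch only: shows why a Double-typed leaf predicate is rejected
// for a FLOAT column, using the pre-org.apache parquet.* package names bundled
// with Hive 1.2.1. Names (FloatPredicateSketch, rejected, accepted) are invented.
import parquet.filter2.predicate.FilterApi;
import parquet.filter2.predicate.FilterPredicate;
import parquet.filter2.predicate.Operators.DoubleColumn;
import parquet.filter2.predicate.Operators.FloatColumn;

public class FloatPredicateSketch {
    public static void main(String[] args) {
        // A leaf typed as Double, which is what the error message says was built
        // for column f; SchemaCompatibilityValidator rejects it against a FLOAT column.
        DoubleColumn asDouble = FilterApi.doubleColumn("f");
        FilterPredicate rejected = FilterApi.gtEq(asDouble, 2.0d);

        // A leaf typed as Float, matching the file schema, which the validator accepts.
        FloatColumn asFloat = FilterApi.floatColumn("f");
        FilterPredicate accepted = FilterApi.gtEq(asFloat, 2.0f);

        System.out.println(rejected);
        System.out.println(accepted);
    }
}
{code}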
Here's the stack trace from log4j:

{code}
2016-02-22 12:18:30,691 ERROR [main]: CliDriver (SessionState.java:printError(960)) - Failed with exception java.io.IOException:java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in file metadata.
Column f is of type: FullTypeDescriptor(PrimitiveType: FLOAT, OriginalType: null)
Valid types for this column are: [class java.lang.Float]
java.io.IOException: java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in file metadata.
Column f is of type: FullTypeDescriptor(PrimitiveType: FLOAT, OriginalType: null)
Valid types for this column are: [class java.lang.Float]
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
    at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in file metadata.
Column f is of type: FullTypeDescriptor(PrimitiveType: FLOAT, OriginalType: null)
Valid types for this column are: [class java.lang.Float]
    at parquet.filter2.predicate.ValidTypeMap.assertTypeValid(ValidTypeMap.java:132)
    at parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumn(SchemaCompatibilityValidator.java:185)
    at parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumnFilterPredicate(SchemaCompatibilityValidator.java:160)
    at parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:124)
    at parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:59)
    at parquet.filter2.predicate.Operators$GtEq.accept(Operators.java:248)
    at parquet.filter2.predicate.SchemaCompatibilityValidator.validate(SchemaCompatibilityValidator.java:64)
    at parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:59)
    at parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:40)
    at parquet.filter2.compat.FilterCompat$FilterPredicateCompat.accept(FilterCompat.java:126)
    at parquet.filter2.compat.RowGroupFilter.filterRowGroups(RowGroupFilter.java:46)
    at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.getSplit(ParquetRecordReaderWrapper.java:275)
    at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:99)
    at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:85)
    at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72)
    at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
    ... 15 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)