[ https://issues.apache.org/jira/browse/HIVE-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346772#comment-15346772 ]

Sahil Takiar edited comment on HIVE-14082 at 6/23/16 5:09 PM:
--------------------------------------------------------------

Furthermore, if {{hive.exec.post.hooks}} is set to {{org.apache.hadoop.hive.ql.hooks.LineageLogger}}, a different exception is thrown, this time during the query planning phase. The full stack trace is below:

*Exception 2:*

{code}
Error: Error while compiling statement: FAILED: IndexOutOfBoundsException Index: 3, Size: 3 (state=42000,code=40000)
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: IndexOutOfBoundsException Index: 3, Size: 3
        at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:239)
        at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:225)
        at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:244)
        at org.apache.hive.beeline.Commands.executeInternal(Commands.java:934)
        at org.apache.hive.beeline.Commands.execute(Commands.java:1120)
        at org.apache.hive.beeline.Commands.sql(Commands.java:1017)
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1095)
        at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:927)
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:855)
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:488)
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:471)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: IndexOutOfBoundsException Index: 3, Size: 3
        at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:388)
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:145)
        at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:215)
        at org.apache.hive.service.cli.operation.Operation.run(Operation.java:326)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:424)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:401)
        at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:258)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:503)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:718)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
        at java.util.ArrayList.rangeCheck(ArrayList.java:635)
        at java.util.ArrayList.get(ArrayList.java:411)
        at org.apache.hadoop.hive.ql.optimizer.lineage.OpProcFactory$ReduceSinkLineage.process(OpProcFactory.java:607)
        at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
        at org.apache.hadoop.hive.ql.lib.LevelOrderWalker.walk(LevelOrderWalker.java:143)
        at org.apache.hadoop.hive.ql.lib.LevelOrderWalker.walk(LevelOrderWalker.java:149)
        at org.apache.hadoop.hive.ql.lib.LevelOrderWalker.walk(LevelOrderWalker.java:149)
        at org.apache.hadoop.hive.ql.lib.LevelOrderWalker.startWalking(LevelOrderWalker.java:122)
        at org.apache.hadoop.hive.ql.optimizer.lineage.Generator.transform(Generator.java:102)
        at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:198)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10054)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9868)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:223)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1201)
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1188)
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:143)
        ... 15 more
{code}
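For reference, the hook setting named above can be enabled per session from Beeline (the same property can also be set in {{hive-site.xml}}); this is a sketch of the reproduction setup, not part of the failing query itself:

{code}
-- Enable the lineage post-execution hook, then run Query 3 from the
-- description; compilation fails before any job is launched.
SET hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.LineageLogger;
{code}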



> Multi-Insert Query Fails with GROUP BY, DISTINCT, and WHERE clauses
> -------------------------------------------------------------------
>
>                 Key: HIVE-14082
>                 URL: https://issues.apache.org/jira/browse/HIVE-14082
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.0, 2.1.0
>            Reporter: Sahil Takiar
>
> The following multi-insert query fails in Hive. I've listed the query required to reproduce this failure, as well as a few similar queries that work properly.
> Setup Queries:
> {code}
> DROP SCHEMA IF EXISTS multi_table_insert_bug CASCADE;
> CREATE SCHEMA multi_table_insert_bug;
> USE multi_table_insert_bug;
> DROP TABLE IF EXISTS multi_table_insert_source;
> DROP TABLE IF EXISTS multi_table_insert_test;
> CREATE TABLE multi_table_insert_source (
>   date_column DATE,
>   column_1 STRING,
>   column_2 STRING,
>   column_3 STRING,
>   column_4 STRING
> );
> CREATE TABLE multi_table_insert_test (
>   column_1 STRING,
>   column_2 STRING,
>   line_count INT,
>   distinct_count_by_1_column INT,
>   distinct_count_by_2_columns INT
> )
> PARTITIONED BY (partition_column INT);
> INSERT OVERWRITE TABLE multi_table_insert_source VALUES
>   ('2016-01-22', 'value_1_1', 'value_1_2', 'value_1_3', 'value_1_4'),
>   ('2016-01-22', 'value_2_1', 'value_2_2', 'value_2_3', 'value_2_4'),
>   ('2016-01-22', 'value_3_1', 'value_3_2', 'value_3_3', 'value_3_4'),
>   ('2016-01-22', 'value_4_1', 'value_4_2', 'value_4_3', 'value_4_4'),
>   ('2016-01-22', 'value_5_1', 'value_5_2', 'value_5_3', 'value_5_4');
> {code}
> The following queries run successfully:
> *Query 1:*
> {code}
> FROM multi_table_insert_source
>   INSERT OVERWRITE TABLE multi_table_insert_test PARTITION (partition_column = 365)
>   SELECT
>     column_1,
>     column_2,
>     COUNT(*) AS line_count,
>     COUNT(DISTINCT column_3) AS distinct_count_by_1_column,
>     COUNT(DISTINCT date_column, column_3) AS distinct_count_by_2_columns
>   WHERE date_column >= DATE_SUB(FROM_UNIXTIME(UNIX_TIMESTAMP()), 365)
>   GROUP BY
>     column_1,
>     column_2;
> {code}
> *Query 2:*
> {code}
> FROM multi_table_insert_source
>   INSERT OVERWRITE TABLE multi_table_insert_test PARTITION (partition_column = 365)
>   SELECT
>     column_1,
>     column_2,
>     COUNT(*) AS line_count,
>     COUNT(DISTINCT column_3) AS distinct_count_by_1_column,
>     COUNT(DISTINCT date_column, column_3) AS distinct_count_by_2_columns
> --  WHERE date_column >= DATE_SUB(FROM_UNIXTIME(UNIX_TIMESTAMP()), 365)
>   GROUP BY
>     column_1,
>     column_2
>   INSERT OVERWRITE TABLE multi_table_insert_test PARTITION (partition_column = 1096)
>   SELECT
>     column_1,
>     column_2,
>     COUNT(*) AS line_count,
>     COUNT(DISTINCT column_3) AS distinct_count_by_1_column,
>     COUNT(DISTINCT date_column, column_3) AS distinct_count_by_2_columns
> --  WHERE date_column >= DATE_SUB(FROM_UNIXTIME(UNIX_TIMESTAMP()), 1096)
>   GROUP BY
>     column_1,
>     column_2;
> {code}
> The following query fails with a {{ClassCastException}}:
> *Query 3:*
> {code}
> FROM multi_table_insert_source
>   INSERT OVERWRITE TABLE multi_table_insert_test PARTITION (partition_column = 365)
>   SELECT
>     column_1,
>     column_2,
>     COUNT(*) AS line_count,
>     COUNT(DISTINCT column_3) AS distinct_count_by_1_column,
>     COUNT(DISTINCT date_column, column_3) AS distinct_count_by_2_columns
>   WHERE date_column >= DATE_SUB(FROM_UNIXTIME(UNIX_TIMESTAMP()), 365)
>   GROUP BY
>     column_1,
>     column_2
>   INSERT OVERWRITE TABLE multi_table_insert_test PARTITION (partition_column = 1096)
>   SELECT
>     column_1,
>     column_2,
>     COUNT(*) AS line_count,
>     COUNT(DISTINCT column_3) AS distinct_count_by_1_column,
>     COUNT(DISTINCT date_column, column_3) AS distinct_count_by_2_columns
>   WHERE date_column >= DATE_SUB(FROM_UNIXTIME(UNIX_TIMESTAMP()), 1096)
>   GROUP BY
>     column_1,
>     column_2;
> {code}
> Here is the full stack trace of the exception:
> *Exception 1:*
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"_col0":"value_1_1","_col1":"value_1_2","_col2":{0:{"_col0":"value_1_3"}}},"value":null}
>       at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:257)
>       at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
>       at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
>       at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:449)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"_col0":"value_1_1","_col1":"value_1_2","_col2":{0:{"_col0":"value_1_3"}}},"value":null}
>       at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245)
>       ... 3 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.DateWritable
>       at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableDateObjectInspector.getPrimitiveWritableObject(WritableDateObjectInspector.java:38)
>       at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:938)
>       at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:818)
>       at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:809)
>       at org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDFOPEqualOrGreaterThan.java:141)
>       at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:186)
>       at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
>       at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:65)
>       at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:112)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:878)
>       at org.apache.hadoop.hive.ql.exec.ForwardOperator.process(ForwardOperator.java:38)
>       at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
>       ... 3 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
