[ https://issues.apache.org/jira/browse/HIVE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14250834#comment-14250834 ]

Xuefu Zhang commented on HIVE-8920:
-----------------------------------

Even with HIVE-9041, a query like this:
{code}
from (select * from dec union all select * from dec2) s
insert overwrite table dec3 select s.name, sum(s.value) group by s.name
insert overwrite table dec4 select s.name, s.value order by s.value;
{code}
apparently finishes successfully even though it actually fails. (That is a 
separate issue, though.) The exception appears in hive.log at WARN level 
(again, a separate problem):
{code}
2014-12-17 15:35:53,741 WARN  [task-result-getter-2]: scheduler.TaskSetManager (Logging.scala:logWarning(71)) - Lost task 0.0 in stage 1.0 (TID 2, localhost): java.lang.RuntimeException: java.lang.IllegalStateException: Invalid input path hdfs://localhost:8020/user/hive/warehouse/dec2/dec.txt
        at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:164)
        at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
        at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:28)
        at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:96)
        at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:210)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalStateException: Invalid input path hdfs://localhost:8020/user/hive/warehouse/dec2/dec.txt
        at org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406)
        at org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442)
        at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
        at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:153)
        ... 13 more
{code}
The exception is suspected to be caused by an IOContext initialization 
problem.
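
For reference, here is a minimal sketch of the check that fails, assuming a 
simplified prefix match; this illustrates the suspected failure mode and is 
not the actual Hive source:
{code}
// Rough sketch (not the actual Hive source) of the lookup that throws the
// IllegalStateException above. The IOContext supplies the file currently
// being read; MapOperator.getNominalPath then matches it against the input
// paths registered for this MapWork. If the IOContext was initialized for a
// different clone of the split work, no registered path matches.
import org.apache.hadoop.fs.Path;
import java.util.Set;

public class NominalPathSketch {
  static Path getNominalPath(Path currentInputFile, Set<Path> registeredPaths) {
    for (Path registered : registeredPaths) {
      // simplified prefix match: the current file lives under a registered
      // input directory
      if (currentInputFile.toString().startsWith(registered.toString())) {
        return registered;
      }
    }
    // a stale or mis-initialized IOContext reports a file this work was
    // never given, so control falls through to here
    throw new IllegalStateException("Invalid input path " + currentInputFile);
  }
}
{code}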

> SplitSparkWorkResolver doesn't work with UnionWork [Spark Branch]
> -----------------------------------------------------------------
>
>                 Key: HIVE-8920
>                 URL: https://issues.apache.org/jira/browse/HIVE-8920
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Chao
>            Assignee: Xuefu Zhang
>
> The following query will not work:
> {code}
> from (select * from table0 union all select * from table1) s
> insert overwrite table table3 select s.x, count(1) group by s.x
> insert overwrite table table4 select s.y, count(1) group by s.y;
> {code}
> Currently, the plan for this query, before SplitSparkWorkResolver, looks 
> like this:
> {noformat}
>    M1    M2
>      \  / \
>       U3   R5
>       |
>       R4
> {noformat}
> {{SplitSparkWorkResolver#splitBaseWork}} assumes that the {{childWork}} is a 
> ReduceWork, but in this case the childWork of M2 is the UnionWork U3. Thus, 
> the code fails.
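
To make the quoted failure mode concrete, here is a rough sketch of the 
assumption that breaks; the method shape is assumed for illustration and is 
not the actual SplitSparkWorkResolver code:
{code}
// Rough sketch of the assumption described in the quoted report (not the
// actual SplitSparkWorkResolver code). Treating every child as a ReduceWork
// breaks when the child is a UnionWork, as with M2 -> U3 in the plan above.
import org.apache.hadoop.hive.ql.plan.BaseWork;
import org.apache.hadoop.hive.ql.plan.ReduceWork;

public class SplitAssumptionSketch {
  static void splitBaseWork(BaseWork parentWork, BaseWork childWork) {
    // Fails with a ClassCastException when childWork is a UnionWork:
    ReduceWork reduceChild = (ReduceWork) childWork;
    // ... split parentWork and rewire it to reduceChild ...
  }
}
{code}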


