[ https://issues.apache.org/jira/browse/HIVE-8956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260006#comment-14260006 ]

chirag aggarwal commented on HIVE-8956:
---------------------------------------

Does this take care of instances like:
ERROR util.Utils: Uncaught exception in thread Result resolver thread-1
java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(Unknown Source)
        at java.nio.ByteBuffer.allocate(Unknown Source)
        at org.apache.spark.storage.BlockMessage.set(BlockMessage.scala:94)
        at org.apache.spark.storage.BlockMessage$.fromByteBuffer(BlockMessage.scala:176)
        at org.apache.spark.storage.BlockMessageArray.set(BlockMessageArray.scala:63)
        at org.apache.spark.storage.BlockMessageArray$.fromBufferMessage(BlockMessageArray.scala:109)
        at org.apache.spark.storage.BlockManagerWorker$.syncGetBlock(BlockManagerWorker.scala:138)
        at org.apache.spark.storage.BlockManager$$anonfun$doGetRemote$2.apply(BlockManager.scala:530)
        at org.apache.spark.storage.BlockManager$$anonfun$doGetRemote$2.apply(BlockManager.scala:528)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.storage.BlockManager.doGetRemote(BlockManager.scala:528)
        at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:522)
        at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:53)
        at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:47)
        at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:47)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
        at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:46)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
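
The trace above shows the driver-side result resolver thread running out of heap
while pulling a task result back through the block manager, so the failure
happens on a background Spark thread rather than in the Hive caller's own
thread. As a rough illustration of the concern (a purely hypothetical Java
sketch, not Hive's or Spark's actual API), a background-thread Throwable such
as this OutOfMemoryError has to be explicitly captured and handed to the
waiting caller, otherwise the caller just keeps waiting:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Hypothetical sketch: an error raised on a background thread (standing in
// for Spark's result resolver thread) is captured into a CompletableFuture
// so the waiting caller fails fast instead of blocking forever.
public class ResultFetchErrorPropagation {

    public static void main(String[] args) throws InterruptedException {
        CompletableFuture<byte[]> result = new CompletableFuture<>();

        Thread fetcher = new Thread(() -> {
            try {
                // Stand-in for fetching/deserializing a large task result,
                // which in the trace above ends in OutOfMemoryError.
                throw new OutOfMemoryError("Java heap space");
            } catch (Throwable t) {        // catch Errors too, not just Exceptions
                result.completeExceptionally(t);
            }
        }, "Result resolver thread-1");

        fetcher.start();
        fetcher.join();

        try {
            result.get();                  // the caller is unblocked with the failure
        } catch (ExecutionException e) {
            System.err.println("Job failed: " + e.getCause());
        }
    }
}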

> Hive hangs while some error/exception happens beyond job execution [Spark 
> Branch]
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-8956
>                 URL: https://issues.apache.org/jira/browse/HIVE-8956
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Rui Li
>              Labels: Spark-M3
>             Fix For: spark-branch
>
>         Attachments: HIVE-8956.1-spark.patch
>
>
> Remote spark client communicates with the remote spark context asynchronously.
> If an error/exception is thrown during job execution in the remote spark
> context, it is wrapped and sent back to the remote spark client. But if an
> error/exception is thrown outside job execution, for example when job
> serialization fails, the remote spark client never learns what happened in the
> remote spark context and hangs.
> Setting a timeout on the remote spark client side may not be a great idea,
> since we cannot know how long a query will run in the spark cluster. We need a
> way to check whether the job has failed (over its whole life cycle) in the
> remote spark context.
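
A minimal sketch of the life-cycle check proposed above, assuming a
hypothetical JobHandle interface (the names below are illustrative, not Hive's
remote client API): the client watches the state the remote spark context
reports over the job's whole life cycle and fails as soon as a failure is
reported, so no overall query timeout is needed.

import java.util.concurrent.TimeUnit;

// Hypothetical sketch: poll the remote job's reported state over its whole
// life cycle instead of applying a fixed client-side timeout. A failure
// reported outside job execution (e.g. job serialization failed) surfaces
// immediately instead of leaving the client hanging.
public class JobLifecycleMonitor {

    enum State { QUEUED, SUBMITTED, RUNNING, SUCCEEDED, FAILED }

    // Illustrative stand-in for whatever handle the remote spark context exposes.
    interface JobHandle {
        State currentState();   // state reported back by the remote spark context
        Throwable error();      // non-null once the remote side has failed
    }

    // Blocks until the job finishes; throws as soon as a failure is reported.
    static void awaitCompletion(JobHandle handle) throws Exception {
        while (true) {
            State state = handle.currentState();
            if (state == State.SUCCEEDED) {
                return;
            }
            if (state == State.FAILED) {
                // Covers failures outside job execution too, so the client
                // does not wait forever for a result message that never comes.
                throw new Exception("Remote Spark job failed", handle.error());
            }
            TimeUnit.SECONDS.sleep(1);  // no overall timeout: long queries can still finish
        }
    }
}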



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
