[ https://issues.apache.org/jira/browse/HIVE-15212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924879#comment-15924879 ]

Sergey Shelukhin commented on HIVE-15212:
-----------------------------------------

The Spark test run failed due to the following exception: {noformat}
2017-03-13T22:47:10,439 ERROR [c8c60e54-0f5e-4d72-8358-6c8bdd10ed96 main] SessionState: Job failed with java.io.IOException: Failed to create local dir in /tmp/blockmgr-33541d36-5096-47ce-8791-dff902c09eac/01.
        at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
        at org.apache.spark.storage.DiskStore.contains(DiskStore.scala:124)
        at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$getCurrentBlockStatus(BlockManager.scala:379)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:959)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:910)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:910)
        at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:700)
        at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1213)
        at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:103)
        at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:86)
        at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
        at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
        at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1370)
        at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:125)
        at org.apache.spark.SparkContext$$anonfun$hadoopRDD$1.apply(SparkContext.scala:965)
        at org.apache.spark.SparkContext$$anonfun$hadoopRDD$1.apply(SparkContext.scala:961)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.SparkContext.withScope(SparkContext.scala:682)
        at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:961)
        at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:412)
        at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateMapInput(SparkPlanGenerator.java:198)
        at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateParentTran(SparkPlanGenerator.java:138)
        at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:110)
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient$JobStatusJob.call(RemoteHiveSparkClient.java:346)
        at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:358)
        at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:323)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
{noformat}
so this is probably just bad luck on the test machine; will see whether it recurs on future runs.
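If it keeps showing up, one possible mitigation (just a sketch, assuming the root cause is that /tmp on the test host is full or otherwise unwritable, which is not confirmed here) would be to point Spark's block-manager scratch space at a location with guaranteed space via spark.local.dir; Hive on Spark forwards spark.* properties to the Spark session, so it can be set from the Hive side:
{noformat}
-- hypothetical mitigation, not part of this patch: move Spark's local scratch
-- dirs (blockmgr-*, shuffle data) off /tmp; the path below is a placeholder
-- and would need to exist and be writable on every test host
set spark.local.dir=/data/tmp/spark-local;
{noformat}
The path above is purely illustrative; whether this helps depends on whether the failures actually correlate with /tmp filling up on the test machines.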

> merge branch into master
> ------------------------
>
>                 Key: HIVE-15212
>                 URL: https://issues.apache.org/jira/browse/HIVE-15212
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>         Attachments: HIVE-15212.01.patch, HIVE-15212.02.patch
>
>
> Filing the JIRA now; accidentally attached the merge patch somewhere, so I 
> will post the test results analysis here. We will re-run the tests here later.
> Relevant q file failures:
> load_dyn_part1, autoColumnStats_2 and _1, escape2, load_dyn_part2, 
> dynpart_sort_opt_vectorization, orc_createas1, combine3, update_tmp_table, 
> delete_where_non_partitioned, delete_where_no_match, update_where_no_match, 
> update_where_non_partitioned, update_all_types
> I suspect many of the ACID failures are due to the incomplete ACID type patch.
> Also need to revert the pom change from the Spark test pom, which seems to break 
> Spark tests. I had it in temporarily to get rid of the long non-maven download 
> in all cases (there's a separate JIRA for that).


