[ 
https://issues.apache.org/jira/browse/HIVE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152843#comment-14152843
 ] 

Hive QA commented on HIVE-8300:
-------------------------------



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671962/HIVE-8300.1-spark.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6508 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/181/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/181/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-181/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671962

> Missing guava lib causes IllegalStateException when deserializing a task [Spark Branch]
> ---------------------------------------------------------------------------------------
>
>                 Key: HIVE-8300
>                 URL: https://issues.apache.org/jira/browse/HIVE-8300
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>         Environment: Spark-1.2.0-SNAPSHOT
>            Reporter: Rui Li
>         Attachments: HIVE-8300.1-spark.patch
>
>
> In Spark 1.2, guava is shaded into spark-assembly, and we only ship hive-exec to the 
> Spark cluster, so the Spark executors won't have the (original) guava on their 
> classpath.
> This can cause problems when TaskRunner deserializes a task, which then throws 
> something like this:
> {code}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, node13-1): java.lang.IllegalStateException: unread block data
>         java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1382)
>         java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
>         java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>         java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>         java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>         org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
>         org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:164)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:744)
> {code}
> We may have to verify this issue and ship guava to the Spark cluster.
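
One possible way to ship guava, sketched below only as an illustration and not as the attached HIVE-8300.1-spark.patch, is to distribute the jar with the job and put it on the executor classpath through standard Spark 1.x configuration. The jar path and class name are hypothetical placeholders; in Hive the jar location would be resolved from Hive's own classpath.

{code}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ShipGuavaSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("hive-on-spark-guava-sketch");

    // Distribute the jar with the application so Spark copies it to the worker nodes.
    // "/path/to/guava.jar" is a placeholder for the guava jar found on Hive's classpath.
    conf.set("spark.jars", "/path/to/guava.jar");

    // Put the unshaded guava on the executor JVM classpath so TaskRunner can resolve
    // com.google.common.* classes while deserializing the Hive task. The relative name
    // assumes the distributed jar lands in the executor's working directory (as on YARN).
    conf.set("spark.executor.extraClassPath", "guava.jar");

    JavaSparkContext sc = new JavaSparkContext(conf);
    // ... submit Hive work through this context ...
    sc.stop();
  }
}
{code}

In the actual fix this would presumably be wired through Hive's Spark client configuration rather than hard-coded, but the underlying requirement is the same: the executor needs a usable guava on its classpath before it deserializes the task.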



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
