[ https://issues.apache.org/jira/browse/HIVE-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509930#comment-14509930 ]
Hive QA commented on HIVE-10434:
--------------------------------

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12727709/HIVE-10434.4-spark.patch

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/835/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/835/console
Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-835/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: InterruptedException: null
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12727709 - PreCommit-HIVE-SPARK-Build

> Cancel connection when remote Spark driver process has failed [Spark Branch]
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-10434
>                 URL: https://issues.apache.org/jira/browse/HIVE-10434
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: 1.2.0
>            Reporter: Chao Sun
>            Assignee: Chao Sun
>         Attachments: HIVE-10434.1-spark.patch, HIVE-10434.3-spark.patch, HIVE-10434.4-spark.patch, HIVE-10434.4-spark.patch
>
>
> Currently in HoS, SparkClientImpl first launches a remote driver process and then waits for it to connect back to HS2. However, in certain situations (for instance, a permission issue), the remote process may fail and exit with an error code. In that case, HS2 will still wait for the process to connect, and only throws an exception after the full timeout period has elapsed.
> What makes it worse, the user may need to wait for two timeout periods: one for SparkSetReducerParallelism and another for the actual Spark job. This could be very annoying.
> We should cancel the timeout task as soon as we find out that the process has failed, and mark the promise as failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
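
A minimal sketch of the fail-fast behavior the description asks for, using only JDK classes: CompletableFuture stands in for the RPC promise and ScheduledExecutorService for the timeout task. The class, method names, and the spark-submit invocation are illustrative assumptions, not the actual SparkClientImpl code:

{code:java}
import java.util.concurrent.*;

// Illustrative sketch (JDK-only): launch a child driver process, arm a
// connection timeout, and fail the promise immediately if the process
// dies before connecting back, instead of waiting out the full timeout.
public class DriverLauncherSketch {
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    static CompletableFuture<Void> launchDriver(long timeoutMs) throws Exception {
        CompletableFuture<Void> promise = new CompletableFuture<>();
        // Hypothetical invocation; the real launcher carries driver args.
        Process driver = new ProcessBuilder("spark-submit").start();

        // Timeout task: fires only if neither a connection nor a
        // process failure has completed the promise first.
        ScheduledFuture<?> timeout = SCHEDULER.schedule(
                () -> promise.completeExceptionally(
                        new TimeoutException("Driver did not connect back in time")),
                timeoutMs, TimeUnit.MILLISECONDS);

        // Monitor thread: on a non-zero early exit, cancel the timeout
        // task and fail the promise right away.
        new Thread(() -> {
            try {
                int code = driver.waitFor();
                if (code != 0 && !promise.isDone()) {
                    timeout.cancel(false);
                    promise.completeExceptionally(new RuntimeException(
                            "Driver process exited with code " + code));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "driver-monitor").start();

        return promise;
    }
}
{code}

On a successful connection the caller would complete the promise and cancel the timeout task; the monitor thread's isDone() check then keeps a later normal exit from overwriting that result.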