[ https://issues.apache.org/jira/browse/HIVE-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199753#comment-14199753 ]

Hive QA commented on HIVE-8649:
-------------------------------



{color:red}Overall{color}: -1, at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12679762/HIVE-8649.1-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored tests, 7098 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.ql.io.parquet.serde.TestParquetTimestampUtils.testTimezone
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testNegativeTokenAuth
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/315/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/315/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-315/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12679762 - PreCommit-HIVE-SPARK-Build

> Increase level of parallelism in reduce phase [Spark Branch]
> ------------------------------------------------------------
>
>                 Key: HIVE-8649
>                 URL: https://issues.apache.org/jira/browse/HIVE-8649
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Brock Noland
>            Assignee: Jimmy Xiang
>             Fix For: spark-branch
>
>         Attachments: HIVE-8649.1-spark.patch
>
>
> We calculate the number of reducers using the same code as for MapReduce.
> However, reducers are vastly cheaper in Spark, and it's generally recommended
> to use many more reducers than in MR.
> Sandy Ryza, who works on Spark, has some ideas for a heuristic.
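
For illustration, here is a minimal Java sketch of the kind of reducer-count
calculation being discussed. The MR-style estimate divides total input size by
a per-reducer byte target (in Hive this is governed by
hive.exec.reducers.bytes.per.reducer and capped by hive.exec.reducers.max);
the Spark multiplier and all constants below are assumptions made for the
sketch, not Sandy Ryza's heuristic or the change in the attached patch.

{noformat}
// Hypothetical sketch only: the constants and the SPARK_FACTOR multiplier
// are illustrative assumptions, not Hive's actual implementation.
public final class ReducerHeuristic {

  // Illustrative defaults; Hive reads the real values from configuration
  // (hive.exec.reducers.bytes.per.reducer, hive.exec.reducers.max).
  private static final long BYTES_PER_REDUCER = 256L * 1024 * 1024; // 256 MB
  private static final int MAX_REDUCERS = 1009;

  // Spark reducers are cheap, so scale the MR estimate up; the factor
  // of 2 is an arbitrary placeholder for whatever heuristic is chosen.
  private static final int SPARK_FACTOR = 2;

  /** MR-style estimate: one reducer per BYTES_PER_REDUCER of input. */
  static int estimateReducers(long totalInputBytes) {
    long reducers = (totalInputBytes + BYTES_PER_REDUCER - 1) / BYTES_PER_REDUCER;
    return (int) Math.min(Math.max(1L, reducers), MAX_REDUCERS);
  }

  /** Spark-side estimate: scale the MR number up, still capped. */
  static int estimateSparkReducers(long totalInputBytes) {
    return Math.min(estimateReducers(totalInputBytes) * SPARK_FACTOR, MAX_REDUCERS);
  }

  public static void main(String[] args) {
    long input = 10L * 1024 * 1024 * 1024; // 10 GB of input
    System.out.println("MR-style reducers:    " + estimateReducers(input));
    System.out.println("Spark-style reducers: " + estimateSparkReducers(input));
  }
}
{noformat}

With these assumed constants, a 10 GB input yields 40 MR-style reducers versus
80 Spark-style reducers, both well under the cap.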



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
