[ https://issues.apache.org/jira/browse/HIVE-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HIVE-9251:
-------------------------
    Attachment: HIVE-9251.2-spark.patch

Thanks [~jxiang] and [~xuefuz]. I've uploaded another patch. I didn't remove the 
memory-per-task data, in case we need it in the future. For now, it's only used 
to print a warning when bytes per reducer is much larger than memory per reducer. 
I'd like to know your opinions.
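
To illustrate the idea, here is a minimal sketch of such a check (hedged: this is 
not the code in the attached patch; the class, method, and threshold names below 
are made up for illustration):

    // Hedged sketch, not the HIVE-9251 patch: illustrates warning when the
    // estimated bytes per reducer far exceeds the memory per reducer.
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ReducerParallelismCheck {
      private static final Logger LOG =
          LoggerFactory.getLogger(ReducerParallelismCheck.class);
      // Hypothetical threshold: warn when the ratio exceeds this factor.
      private static final double WARN_RATIO = 2.0;

      static void maybeWarn(long bytesPerReducer, long memoryPerReducer) {
        if (memoryPerReducer > 0
            && bytesPerReducer > WARN_RATIO * memoryPerReducer) {
          LOG.warn("Bytes per reducer (" + bytesPerReducer
              + ") is much larger than memory per reducer ("
              + memoryPerReducer + "); tasks may spill heavily or fail.");
        }
      }
    }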

> SetSparkReducerParallelism is likely to set too small a number of reducers 
> [Spark Branch]
> ---------------------------------------------------------------------------------------
>
>                 Key: HIVE-9251
>                 URL: https://issues.apache.org/jira/browse/HIVE-9251
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-9251.1-spark.patch, HIVE-9251.2-spark.patch
>
>
> This may hurt performance or even lead to task failures. For example, Spark's 
> Netty-based shuffle limits the maximum frame size to 2 GB.
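
For context, a minimal sketch (assumed names and constants, not Hive's actual 
logic) of why too few reducers is risky: with a fixed amount of shuffle data, 
fewer reducers means more bytes per reducer, and a single shuffle block larger 
than the ~2 GB Netty frame limit fails outright. Raising the parallelism caps 
the per-reducer size:

    // Hedged sketch: estimate a reducer count so that per-reducer shuffle
    // bytes stay safely below Spark's ~2 GB Netty frame limit. All names
    // here are assumptions for illustration.
    public class EstimateReducers {
      private static final long MAX_FRAME = 2L * 1024 * 1024 * 1024; // ~2 GB

      static int estimate(long totalShuffleBytes, long bytesPerReducer) {
        // Keep the per-reducer target safely below the frame limit.
        long cap = Math.min(bytesPerReducer, MAX_FRAME / 2);
        long n = (totalShuffleBytes + cap - 1) / cap; // ceiling division
        return (int) Math.max(1L, Math.min(n, Integer.MAX_VALUE));
      }

      public static void main(String[] args) {
        // e.g. 100 GB of shuffle data with a 1 GB per-reducer target -> 100
        System.out.println(estimate(100L << 30, 1L << 30));
      }
    }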



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
