[ https://issues.apache.org/jira/browse/HIVE-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14263896#comment-14263896 ]
Xuefu Zhang commented on HIVE-9251:
-----------------------------------

Could you elaborate on why you think it's unnecessary to get the actual executor memory, and why we should instead use the Hive-configured value for reducer memory? The memory idea came from [~sandyr]. I'd say that everything is still experimental, and we are open to new ideas or theories.

> SetSparkReducerParallelism is likely to set too small a number of reducers [Spark Branch]
> -----------------------------------------------------------------------------------------
>
> Key: HIVE-9251
> URL: https://issues.apache.org/jira/browse/HIVE-9251
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Rui Li
> Assignee: Rui Li
> Attachments: HIVE-9251.1-spark.patch
>
> This may hurt performance or even lead to task failures. For example, Spark's netty-based shuffle limits the max frame size to 2G.
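
To make the trade-off concrete, here is a minimal sketch of the kind of estimation under discussion (hypothetical names and example values, not the actual HIVE-9251 patch): reducer parallelism is derived by dividing the estimated shuffle input size by a per-reducer data target, and the question is whether that target should come from a Hive setting such as hive.exec.reducers.bytes.per.reducer or be sized from the actual Spark executor memory.

{code:java}
// Minimal sketch (assumed names and values, not the actual
// SetSparkReducerParallelism code): estimate reducer parallelism from the
// shuffle input size and a per-reducer data target.
public class ReducerParallelismSketch {

    /**
     * @param totalInputBytes estimated size of the data to be shuffled
     * @param bytesPerReducer per-reducer target, e.g. the value of
     *                        hive.exec.reducers.bytes.per.reducer, or a
     *                        value derived from the actual executor memory
     * @param maxReducers     upper bound, e.g. hive.exec.reducers.max
     */
    static int estimateReducers(long totalInputBytes, long bytesPerReducer, int maxReducers) {
        // Ceiling division: a bigger per-reducer target means fewer reducers.
        long reducers = (totalInputBytes + bytesPerReducer - 1) / bytesPerReducer;
        return (int) Math.max(1, Math.min(reducers, maxReducers));
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        // A 100GB shuffle with a 256MB-per-reducer target gives 400 reducers.
        System.out.println(estimateReducers(100 * gb, 256 * 1024 * 1024, 1009)); // 400
        // Sizing the target from a 4GB executor instead gives only 25 reducers,
        // each handling ~4GB, above the 2G frame limit of Spark's netty-based
        // shuffle mentioned in the issue description.
        System.out.println(estimateReducers(100 * gb, 4 * gb, 1009)); // 25
    }
}
{code}

Under this assumed model, overestimating per-reducer capacity translates directly into under-parallelism, which is why the source of the memory figure matters.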