[ https://issues.apache.org/jira/browse/HIVE-15543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800594#comment-15800594 ]

Rui Li commented on HIVE-15543:
-------------------------------

+1

> Don't try to get memory/cores to decide parallelism when Spark dynamic 
> allocation is enabled
> --------------------------------------------------------------------------------------------
>
>                 Key: HIVE-15543
>                 URL: https://issues.apache.org/jira/browse/HIVE-15543
>             Project: Hive
>          Issue Type: Improvement
>          Components: Spark
>    Affects Versions: 2.2.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>         Attachments: HIVE-15543.patch
>
>
> Presently, Hive tries to get the numbers of memory and cores from the Spark 
> application and uses them to determine ReduceSink (RS) parallelism. However, 
> this doesn't make sense when Spark dynamic allocation is enabled, because the 
> current numbers don't represent the available computing resources, especially 
> when the SparkContext is initially launched.
> Thus, it makes sense not to do that when dynamic allocation is enabled.
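For context, the change amounts to a guard like the following. This is a minimal sketch, not the actual HIVE-15543 patch; the class, method, and fallback value are hypothetical, and only the Spark property name is real.

    import java.util.Properties;

    public class ReducerParallelismEstimator {
        // Hypothetical fallback; Hive's real heuristics would take over here.
        private static final int DEFAULT_PARALLELISM = 1;

        public int estimate(Properties sparkConf, long totalMemory, int totalCores) {
            // Under dynamic allocation, executors are added and removed at
            // runtime, so the memory/cores observed when the SparkContext is
            // launched do not reflect the resources the job will actually get.
            boolean dynamicAllocation = Boolean.parseBoolean(
                sparkConf.getProperty("spark.dynamicAllocation.enabled", "false"));
            if (dynamicAllocation) {
                return DEFAULT_PARALLELISM; // skip resource-based estimation
            }
            // Static allocation: the observed resources are stable, so it is
            // safe to derive parallelism from them.
            return Math.max(1, totalCores);
        }
    }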



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
