[ https://issues.apache.org/jira/browse/HIVE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14650177#comment-14650177 ]

Lefty Leverenz commented on HIVE-11363:
---------------------------------------

I see that the commit to the Spark branch also created the new parameters 
instead of reusing the old ones, which explains why the merge to master did 
the same.  See commit 537114b964c71b7a5cd00c9938eadc6d0cf76536.

Was there a decision not to reuse the old parameters?

> Prewarm Hive on Spark containers [Spark Branch]
> -----------------------------------------------
>
>                 Key: HIVE-11363
>                 URL: https://issues.apache.org/jira/browse/HIVE-11363
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: 1.1.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>              Labels: TODOC-SPARK
>             Fix For: spark-branch
>
>         Attachments: HIVE-11363.1-spark.patch, HIVE-11363.2-spark.patch, 
> HIVE-11363.3-spark.patch, HIVE-11363.4-spark.patch, HIVE-11363.5-spark.patch
>
>
> When a Hive job is launched by Oozie, a Hive session is created and the job 
> script is executed. The session is closed when the Hive job completes. Thus, 
> a Hive session is not shared among Hive jobs, either within an Oozie 
> workflow or across workflows. Since the parallelism of a Hive job executed 
> on Spark is limited by the available executors, such Hive jobs suffer the 
> executor ramp-up overhead. The idea here is to wait a bit so that enough 
> executors can come up before a job is executed.
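The prewarm idea described above amounts to a poll-with-timeout loop: block
until a minimum number of executors has registered, or give up once a deadline
passes. Below is a minimal illustrative sketch, not the actual Hive-on-Spark
implementation; `get_executor_count`, `min_executors`, and `timeout_s` are
hypothetical names standing in for whatever the real code queries and
configures:

```python
import time

def wait_for_executors(get_executor_count, min_executors, timeout_s,
                       poll_interval_s=0.01):
    """Wait until at least `min_executors` are up or `timeout_s` elapses.

    `get_executor_count` is a hypothetical callback standing in for a
    query against the Spark cluster. Returns the executor count observed
    when the wait ended, whether or not the target was reached.
    """
    deadline = time.monotonic() + timeout_s
    count = get_executor_count()
    while count < min_executors and time.monotonic() < deadline:
        time.sleep(poll_interval_s)
        count = get_executor_count()
    return count
```

Note that the loop returns on timeout even if too few executors came up, so a
slow cluster only delays the job rather than blocking it indefinitely.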



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)