[ https://issues.apache.org/jira/browse/HIVE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081924#comment-14081924 ]

Chinna Rao Lalam commented on HIVE-7330:
----------------------------------------

Hi Chengxiang Li, my thoughts are:

The Task.execute() method is overridden by the various task classes, and each
subclass is responsible for executing its own work.
In the SparkTask case, execute() needs to load the resources required for the
Spark job and submit the job through the Spark client.
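
For illustration, a minimal sketch of how that override could look (the
SparkClient method names, signatures and fields used here are my assumptions
for illustration, not the final patch; imports for the new Hive-on-Spark
classes are omitted):

{code:java}
import org.apache.hadoop.hive.ql.DriverContext;
import org.apache.hadoop.hive.ql.exec.Task;
import org.apache.hadoop.hive.ql.plan.api.StageType;

public class SparkTask extends Task<SparkWork> {

  @Override
  protected int execute(DriverContext driverContext) {
    int rc = 1;
    try {
      // The Spark client localizes the required resources and submits the
      // SparkWork graph; a zero return code means the job succeeded.
      SparkClient client = SparkClient.getInstance(conf);
      rc = client.execute(driverContext, getWork());
    } catch (Exception e) {
      console.printError("Failed to execute spark task: " + e.getMessage());
    }
    return rc;
  }

  @Override
  public StageType getType() {
    return StageType.MAPRED;
  }

  @Override
  public String getName() {
    return "SPARK";
  }
}
{code}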

SparkClient will be a wrapper class that provides the Spark functionality.
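
To sketch the wrapper idea (the package layout, singleton handling and the
execute signature are assumptions on my side; SparkConf/JavaSparkContext are
the standard Spark Java API, and the actual plan translation is omitted):

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.DriverContext;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkClient {

  private static SparkClient client;
  private final JavaSparkContext sc;

  public static synchronized SparkClient getInstance(HiveConf hiveConf) {
    if (client == null) {
      client = new SparkClient(hiveConf);
    }
    return client;
  }

  private SparkClient(HiveConf hiveConf) {
    // Build the SparkConf from the Hive configuration and create the context
    // shared by all Spark jobs submitted from this Hive session.
    SparkConf sparkConf = new SparkConf()
        .setAppName("Hive on Spark")
        .setMaster(hiveConf.get("spark.master", "local"));
    sc = new JavaSparkContext(sparkConf);
  }

  public int execute(DriverContext driverContext, SparkWork sparkWork) {
    try {
      // Add the jars/files the job needs (sc.addJar/sc.addFile), translate the
      // SparkWork graph into RDD transformations and run them. The translation
      // itself is omitted in this sketch.
      return 0;
    } catch (Exception e) {
      return 1;
    }
  }
}
{code}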

I have added a TODO for job monitoring in SparkTask; here as well we can use a
class that provides the Spark job monitoring functionality and get the job
status within the SparkTask class itself.
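
One possible shape for that monitor, in the spirit of TezJobMonitor (both
class names and all methods below are hypothetical, just to show the intent):

{code:java}
// Hypothetical handle exposed by the Spark client for a submitted job.
interface SparkJobStatus {
  boolean isCompleted();
  boolean isSuccessful();
  double getProgress();   // 0.0 to 1.0
}

public class SparkJobMonitor {

  // Poll the submitted job until it finishes, printing progress along the way,
  // and return 0/1 so SparkTask.execute() can use it as its return code.
  public int startMonitor(SparkJobStatus status) throws InterruptedException {
    while (!status.isCompleted()) {
      System.out.println(String.format("Spark job progress: %.0f%%",
          status.getProgress() * 100));
      Thread.sleep(1000);
    }
    return status.isSuccessful() ? 0 : 1;
  }
}
{code}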

I have seen a similar approach in ExecDriver.java and TezTask.java.

> Create SparkTask
> ----------------
>
>                 Key: HIVE-7330
>                 URL: https://issues.apache.org/jira/browse/HIVE-7330
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chinna Rao Lalam
>         Attachments: HIVE-7330-spark.patch, HIVE-7330.1-spark.patch
>
>
> SparkTask handles the execution of SparkWork. It will execute a graph of map 
> and reduce work using a SparkClient instance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
