[ 
https://issues.apache.org/jira/browse/PIG-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15636461#comment-15636461
 ] 

Adam Szita commented on PIG-5052:
---------------------------------

[~kellyzly] I found that it is indeed problematic to have the same ID set as 
jobGroupId across multiple Pig on Spark jobs; this commit has actually 
introduced a bug because of it.
This can be reproduced by simply repeating the same Pig query (e.g. load, 
foreach, dump, dump): the second job will hang in 
SparkStatsUtil#waitForJobAddStats.
The reason is that JobGraphBuilder#getJobIDs returns all jobs associated with 
the same groupID - in the case above, 0 and 1. It then waits for job 0 to 
finish, but that job is no longer in the sparkContext; it belonged to the 
previous query.

So I think we should do something like [^PIG-5052.2.patch]: combine the 
appId provided by sparkContext with a random UUID.
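A minimal sketch of the idea (the helper name and appId value are hypothetical, not from the patch): appending a fresh UUID to the Spark application ID yields a group ID that is unique per submitted Pig job, so JobGraphBuilder#getJobIDs can no longer pick up jobs from an earlier query in the same SparkContext.

```java
import java.util.UUID;

public class JobGroupIdUtil {

    // Hypothetical helper: build a per-job group ID by combining the
    // Spark application ID with a random UUID, so that repeated queries
    // in the same SparkContext never share a jobGroupId.
    static String newJobGroupId(String appId) {
        return appId + "-" + UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        // Two submissions in the same application get distinct group IDs.
        String first = newJobGroupId("application_1478000000000_0001");
        String second = newJobGroupId("application_1478000000000_0001");
        System.out.println(first.equals(second)); // prints "false"
    }
}
```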

> Initialize MRConfiguration.JOB_ID in spark mode correctly
> ---------------------------------------------------------
>
>                 Key: PIG-5052
>                 URL: https://issues.apache.org/jira/browse/PIG-5052
>             Project: Pig
>          Issue Type: Sub-task
>          Components: spark
>            Reporter: liyunzhang_intel
>            Assignee: Adam Szita
>             Fix For: spark-branch
>
>         Attachments: PIG-5052.2.patch, PIG-5052.patch
>
>
> Currently we initialize MRConfiguration.JOB_ID in SparkUtil#newJobConf by 
> simply setting the value to a random string:
> {code}
>         jobConf.set(MRConfiguration.JOB_ID, UUID.randomUUID().toString());
> {code}
> We need to find a Spark API to initialize it correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
