[ https://issues.apache.org/jira/browse/FLINK-17469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107411#comment-17107411 ]

Roey Shem Tov edited comment on FLINK-17469 at 5/15/20, 7:59 AM:
-----------------------------------------------------------------

[~aljoscha] You're right, that is a problem, because a Flink application can 
have multiple jobs.
 I guess we either provide this feature only for single-job applications (I 
think that covers most Flink apps, at least in my usage, but we would need to 
check), or we provide some unique id (similar to uid) to inject into the job 
name (and maybe other configuration).

For example:

 
{code:java}
streamA.execute("unique-id1");
streamB.execute("unique-id2");
{code}
So, when each job has a unique id, we could do something like this:

 
{code:java}
--conf flink.job.name.unique-id1="JobName1"
--conf flink.job.name.unique-id2="JobName2"
{code}
 

And maybe (thinking toward the future) this is a good way to separate 
configuration for separate jobs (e.g. injecting a different parallelism for 
each job, or injecting your own custom parameters for each job).
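To illustrate the idea, here is a minimal sketch (not Flink code; the 
`flink.job.name.<unique-id>` property convention and the `JobNameResolver` 
class are hypothetical) of how a per-job name could be resolved from system 
properties, falling back to a global default:

```java
// Hypothetical sketch: resolve a per-job name from system properties,
// assuming the "flink.job.name.<unique-id>" convention proposed above.
public class JobNameResolver {

    static final String DEFAULT_JOB_NAME = "Flink Streaming Job";

    static String resolveJobName(String uniqueId) {
        // Use the per-job override if set, otherwise the global default.
        return System.getProperty("flink.job.name." + uniqueId, DEFAULT_JOB_NAME);
    }

    public static void main(String[] args) {
        // Simulates passing -Dflink.job.name.unique-id1="JobName1" on the command line.
        System.setProperty("flink.job.name.unique-id1", "JobName1");

        System.out.println(resolveJobName("unique-id1")); // JobName1
        System.out.println(resolveJobName("unique-id2")); // Flink Streaming Job
    }
}
```

The same lookup scheme would extend naturally to other per-job settings, e.g. 
`flink.job.parallelism.<unique-id>`.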

What do you think?

 

 


> Support override of DEFAULT_JOB_NAME with system property for 
> StreamExecutionEnvironment
> ----------------------------------------------------------------------------------------
>
>                 Key: FLINK-17469
>                 URL: https://issues.apache.org/jira/browse/FLINK-17469
>             Project: Flink
>          Issue Type: New Feature
>          Components: API / DataSet, API / DataStream
>    Affects Versions: 1.10.0
>            Reporter: John Lonergan
>            Priority: Trivial
>
> We are running multiple jobs on a shared standalone HA Cluster.
> We want to be able to provide the job name via the submitting shell script 
> using a system property; for example "job.name".
> We could of course write Java application code in each job to achieve this by 
> passing the system property value ourselves to the execute(name) method; 
> however, we want to do this from the env.
> ---
> However, there exists already default job name in 
> StreamExecutionEnvironment.DEFAULT_JOB_NAME.
> Our proposed change is to add a method to StreamExecutionEnvironment...
> {code:java}
> String getDefaultJobName() {
>     return System.getProperty("default.job.name",
>         StreamExecutionEnvironment.DEFAULT_JOB_NAME);
> }
> {code}
> ... and call that method rather than directly accessing 
> StreamExecutionEnvironment.DEFAULT_JOB_NAME.
> This change is backwards compatible.
> We need this method to evaluate on a job-by-job basis; for example, the 
> following small amendment to the existing DEFAULT_JOB_NAME value will NOT 
> work, because it would not allow us to vary the value job by job.
> {code:java}
> class StreamExecutionEnvironment {
>     static final String DEFAULT_JOB_NAME =
>         System.getProperty("default.job.name", "Flink Streaming Job");
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)