I am really interested in making `ClusterClient` usable as multiple instances
in one JVM, because we need to submit jobs from a long-running process.
I created a JIRA issue for this problem:
https://issues.apache.org/jira/browse/FLINK-9710
On Tue, Jul 3, 2018 at 4:20 PM, eSKa wrote:
Yes - it seems that the main method returns successfully, but for some reason
we have that exception thrown.
For now we applied a workaround: catch the exception and just skip it (later
on our statusUpdater reads statuses from the Flink dashboard).
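Roughly, that catch-and-skip pattern can be sketched as follows. This is a simplified, self-contained model, not real Flink code: `submitJob`, `fetchStatusFromDashboard`, and the nested exception class are all hypothetical stand-ins for the real submission call, the statusUpdater's dashboard query, and `org.apache.flink.client.program.ProgramMissingJobException`.

```java
// Simplified sketch of the workaround described above. Every name here is a
// hypothetical stand-in, NOT real Flink API: submitJob() models a submission
// that runs fine yet still throws, and fetchStatusFromDashboard() models the
// statusUpdater reading the final status from the Flink dashboard.
public class SkipMissingJobWorkaround {

    // Stand-in for org.apache.flink.client.program.ProgramMissingJobException.
    static class ProgramMissingJobException extends Exception {}

    static String lastStatus = "UNKNOWN";

    // The job actually finishes, but the client still throws afterwards.
    static void submitJob() throws ProgramMissingJobException {
        lastStatus = "FINISHED";
        throw new ProgramMissingJobException();
    }

    // In reality this would query the dashboard, not a local field.
    static String fetchStatusFromDashboard() {
        return lastStatus;
    }

    public static void main(String[] args) {
        try {
            submitJob();
        } catch (ProgramMissingJobException e) {
            // Workaround: swallow the exception and let the later
            // status check decide whether the job really succeeded.
        }
        System.out.println("dashboard status: " + fetchStatusFromDashboard());
    }
}
```

The point of the pattern is that the exception alone is not treated as authoritative; the externally observed job status is.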
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Dive into this call and you will see that it mutates static fields in
the ExecutionEnvironment.
https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java#L422
On 03.07.2018 10:07, Chuanlei Ni wrote:
Hi, @chesnay
I read the code of `ClusterClient` and have not found the `static` field.
So why can it not be used in the same JVM? (We also use `ClusterClient` this
way, so we really care about this feature.)
On Tue, Jul 3, 2018 at 4:00 PM, eSKa wrote:
Hi,
Let me summarize:
1) Sometimes you get the error message
"org.apache.flink.client.program.ProgramMissingJobException: The program
didn't contain a Flink job." when submitting a program through the
YarnClusterClient.
2) The logs and the dashboard state that the job ran successfully.
Yes - we are submitting jobs one by one.
How can we change that to work for our needs?
Are you executing these jobs concurrently?
The ClusterClient was not written to be used concurrently in the same
JVM, as it partially relies on and mutates static fields.
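To illustrate the hazard with a deliberately simplified model (this is NOT actual Flink code; the class and names below are made up), two clients that install their submission context through a shared static field overwrite each other:

```java
// Simplified, hypothetical model of shared static state, not Flink code.
// ClusterClient installs the submission context via static state, so two
// clients in one JVM can clobber each other's context: last writer wins.
public class StaticContextHazard {

    // Shared static slot, analogous to a context-environment factory.
    static String currentContext = null;

    static void setAsContext(String clientName) {
        currentContext = clientName; // globally visible, last writer wins
    }

    static String runWithContext() {
        return currentContext; // whichever client wrote last
    }

    public static void main(String[] args) {
        setAsContext("clientA");
        setAsContext("clientB"); // second client overwrites the first
        // clientA's submission now runs against clientB's context:
        System.out.println("clientA sees context: " + runWithContext());
    }
}
```

Per-client state held in instance fields would not have this problem; it is the static slot that couples otherwise independent clients.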
On 03.07.2018 09:50, eSKa wrote:
We are running the same job all the time, and that error happens from time
to time.
Here is the job submission code:
private JobSubmissionResult submitProgramToCluster(PackagedProgram
packagedProgram) throws JobSubmitterException,
ProgramMissingJobException, ProgramInvocationException
Hmm. That's strange.
Can you explain a little more about how your YARN cluster is set up and how
you configure the submission context?
Also, did you try submitting the jobs in detached mode?
Is this happening from time to time for one specific job graph? Or is it
consistently throwing the exception?
No.
execute was called, and all calculations succeeded - there was a job on the
dashboard with status FINISHED.
After execute, we had our logs claiming that everything succeeded.
Hello,
We are currently running jobs on Flink 1.4.2. Our use case is as follows:
- the service gets a request from a customer
- we submit a job to Flink using YarnClusterClient
Sometimes we have up to 6 jobs running at the same time.
From time to time we get the error below:
org.apache.flink.client.program.ProgramMissingJobException: The program
didn't contain a Flink job.