Hi Marke,
My advice is not to keep your client connected to the JobManager (JM).
If you expect continuous output, write it to a sink instead.
In addition, it is possible that your JM's load is too high, for example
due to frequent full GCs.
So make sure your JM has enough resources, and monitor it.
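As an illustration (not from this thread), one way to surface full-GC
activity on the JobManager is to enable GC logging via the
`env.java.opts.jobmanager` option in `flink-conf.yaml`; the flags shown
are the Java 8 style ones and the log path is a placeholder:

```yaml
# flink-conf.yaml -- hypothetical GC-logging setup (Java 8 flags,
# placeholder log path)
env.java.opts.jobmanager: "-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/jm-gc.log"
```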
Thanks.
Hi Marke,
Are you expecting your job to return the results of the stream
computation quickly?
If it runs for a long time, you can submit it in detached mode [1].
That way your client will not block and stay connected to the Flink
JobManager.
Thanks, vino.
[1]:
h
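As a sketch, submitting in detached mode with the Flink CLI just adds
the `-d` flag; the entry class and jar path below are placeholders, not
from the thread:

```shell
# -d / --detached: the client submits the job and returns immediately,
# instead of blocking until the job finishes.
flink run -d -c com.example.MyStreamingJob /path/to/my-streaming-job.jar
```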
Hi,
my Flink job fails continuously (sometimes after minutes, sometimes
after hours) with the
following exception.
Flink run configuration:
run on YARN: -yn 2 -ys 5 -yjm 8192 -ytm 12288
streaming-job: kafka source and redis sink
The program finished with the following exception:
org.apache.fl