Hi Vasily,
Unfortunately no, I don't think there is such an option in your case. In
per-job mode you could try the Distributed Cache, which should work in
streaming as well [1], but that doesn't work in application mode, since in
that case no code is executed on the JobMaster [2]. A minimal sketch of the
Distributed Cache approach is below.
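For reference, here is a rough sketch of how the Distributed Cache is typically used in per-job mode; the file path and cache name are just placeholders for illustration:

    import java.io.File;
    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class DistributedCacheExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Register a local (or DFS) file under a logical name; Flink ships
            // it to every worker before the tasks start.
            env.registerCachedFile("/path/to/app.properties", "app-config");

            env.fromElements("a", "b", "c")
               .map(new RichMapFunction<String, String>() {
                   private transient String configContents;

                   @Override
                   public void open(Configuration parameters) throws Exception {
                       // Retrieve the cached copy on the task manager.
                       File config = getRuntimeContext()
                               .getDistributedCache()
                               .getFile("app-config");
                       configContents =
                               java.nio.file.Files.readString(config.toPath());
                   }

                   @Override
                   public String map(String value) {
                       return value + " / config size: " + configContents.length();
                   }
               })
               .print();

            env.execute("Distributed Cache example");
        }
    }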
Tw
Hi all.
While running Flink jobs in application mode on YARN and Kubernetes, we need
to provide some configuration files to the main class. Is there any option in
the Flink CLI to copy local files to the cluster without manually copying them
to DFS or into the Docker image, something like the *--files* option in
spark-submit?