Hi Aljoscha, Kostas,

> I would be in favour of something like "bin/flink run-application",
> maybe we should even have "run-job" in the future to differentiate.


I have no preference between the "-R/--remote-deploy" option for "flink run"
and a newly introduced "flink run-application" command. If we always bind the
"application mode" to "run-main-on-cluster", I think both of them make sense
to me.

For the "run-job", do you mean to submit a Flink job to an existing session
or
just like the current per-job to start a dedicated Flink cluster? Then will
"flink run" be deprecated?

How to fetch the jars and dependencies?


For a Yarn deployment, we could register the local or HDFS jars/files as
LocalResources and let Yarn localize them into the working directory. By the
time the entrypoint is launched, all the jars and dependencies already exist
locally, so the entrypoint will *NOT* do the real fetching. Do I understand
correctly?
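A minimal sketch of what I mean, using the plain Yarn client API (the paths,
the resource key and the surrounding objects are made up for illustration,
this is not the actual Flink deployment code; on older Hadoop versions one
would use ConverterUtils.getYarnUrlFromPath instead of URL.fromPath):

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;
import org.apache.hadoop.yarn.util.Records;

// The user jar already lives on HDFS (uploaded beforehand or referenced directly).
Path userJar = new Path("hdfs:///user/flink/usrlib/my-job.jar");
FileStatus status = FileSystem.get(new Configuration()).getFileStatus(userJar);

// Describe the jar as a LocalResource; Yarn's NodeManager downloads it into
// the container working directory before the process starts.
LocalResource jarResource = Records.newRecord(LocalResource.class);
jarResource.setResource(URL.fromPath(userJar));
jarResource.setSize(status.getLen());
jarResource.setTimestamp(status.getModificationTime());
jarResource.setType(LocalResourceType.FILE);
jarResource.setVisibility(LocalResourceVisibility.APPLICATION);

Map<String, LocalResource> localResources = new HashMap<>();
localResources.put("my-job.jar", jarResource);

// Attach the resources to the container launch context; the entrypoint then
// simply finds "my-job.jar" in its working directory and does no fetching.
ContainerLaunchContext launchContext = Records.newRecord(ContainerLaunchContext.class);
launchContext.setLocalResources(localResources);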

If this is the case, then for a K8s deployment the jars would need to be built
into the image or fetched by an init-container. After that, the code path
would be exactly the same as on Yarn.
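For the init-container variant, I am thinking of something roughly like the
following pod snippet (the image, artifact URL and paths are all made up,
only meant to illustrate the idea):

apiVersion: v1
kind: Pod
metadata:
  name: flink-application-master
spec:
  volumes:
    - name: usrlib
      emptyDir: {}
  # The init-container downloads the user jar into a shared volume before
  # the Flink container starts, so the entrypoint again does no fetching.
  initContainers:
    - name: fetch-user-jar
      image: busybox
      command: ["wget", "-O", "/opt/flink/usrlib/my-job.jar", "http://artifact-server/my-job.jar"]
      volumeMounts:
        - name: usrlib
          mountPath: /opt/flink/usrlib
  containers:
    - name: flink-main
      image: flink:1.10
      # The actual command would be whatever the application-mode entrypoint
      # ends up being; omitted here on purpose.
      volumeMounts:
        - name: usrlib
          mountPath: /opt/flink/usrlib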



Best,
Yang


Aljoscha Krettek <aljos...@apache.org> wrote on Mon, Mar 9, 2020 at 9:55 PM:

>  > For the -R flag, this was in the PoC that I published just as a quick
>  > implementation, so that I can move fast to the entrypoint part.
>  > Personally, I would not even be against having a separate command in
>  > the CLI for this, sth like run-on-cluster or something along those
>  > lines.
>  > What do you think?
>
> I would be in favour of something like "bin/flink run-application",
> maybe we should even have "run-job" in the future to differentiate.
>
>  > For fetching jars, in the FLIP we say that as a first implementation
>  > we can have Local and DFS. I was wondering if in the case of YARN,
>  > both could be somehow implemented
>  > using LocalResources, and let Yarn do the actual fetch. But I have not
>  > investigated it further. Do you have any opinion on this?
>
> By now I'm 99 % sure that we should use YARN for that, i.e. use
> LocalResource. Then YARN does the fetching. This is also how the current
> per-job cluster deployment does it, the Flink code uploads local files
> to (H)DFS and then sets the remote paths as a local resource that the
> entrypoint then uses.
>
> Best,
> Aljoscha
>
