I am not sure if this suits your use case, but the Flink YARN CLI does support
transferring local resources to all YARN nodes.
Simply using [1]:
bin/flink run -m yarn-cluster -yt <local_resource>
or
bin/flink run -m yarn-cluster --yarnship <local_resource>
should do the trick.
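
If I understand YARN local resources correctly, the shipped file should then
show up in each container's working directory under its original name, so the
job can read it as a plain local file. A rough, unverified sketch (the file
name is just the one from your Spark example):

    import java.io.FileInputStream;
    import java.util.Properties;

    public class ReadShippedConf {
        public static void main(String[] args) throws Exception {
            // Assumption: -yt placed the shipped file in the container's
            // working directory under its original name.
            Properties props = new Properties();
            try (FileInputStream in =
                     new FileInputStream("its007-datacollection-conf.properties")) {
                props.load(in);
            }
            System.out.println(props);
        }
    }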

It might not be using the HDFS DistributedCache API under the hood, though.
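
That said, Flink has its own distributed cache API (env.registerCachedFile),
which is probably the closest analog to Spark's --files: you register a file
under a logical name, and each worker can retrieve a local copy inside a rich
function. A minimal sketch with placeholder paths and names, using the
DataSet API:

    import java.io.File;

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;

    public class CachedConfExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Register the file (local or HDFS) under a logical name; Flink
            // copies it to every worker before the tasks start.
            env.registerCachedFile("hdfs:///path/to/conf.properties", "my-conf");

            env.fromElements("a", "b")
                .map(new RichMapFunction<String, String>() {
                    @Override
                    public void open(Configuration parameters) throws Exception {
                        // Local copy of the cached file on this worker.
                        File conf = getRuntimeContext()
                            .getDistributedCache()
                            .getFile("my-conf");
                        // ... load your properties from 'conf' here ...
                    }

                    @Override
                    public String map(String value) {
                        return value;
                    }
                })
                .print();
        }
    }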

Thanks,
Rong

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/cli.html#usage

On Sun, Sep 2, 2018 at 2:07 AM 何春平 <244272...@qq.com> wrote:

> hi everyone!
>  can Flink submit a job that reads a custom file distributed via the HDFS
> DistributedCache,
>  like Spark can with the following command:
>     bin/spark-submit  --master yarn  --deploy-mode cluster  --files
> /opt/its007-datacollection-conf.properties#its007-datacollection-conf.properties
>  ...
>  Then the Spark driver can read the `its007-datacollection-conf.properties`
> file in its work directory.
>
> thanks!
>
