Hey, I had a similar problem when I tried to list the jobs and kill one by name on a YARN cluster. Initially I also tried setting YARN_CONF_DIR, but it didn't work. What helped, though, was passing the Hadoop conf dir to my application on the classpath when starting it, like this: java -cp application.jar:/etc/hadoop/conf

The reason was that my application was picking up the default configuration that ships with the Hadoop dependency in the fat jar and never even looked at the environment variable. Once I put the Hadoop conf dir on the classpath, it started working properly.
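In case it's useful, here is roughly the check I used to see which configuration actually gets loaded. It is only a minimal sketch using the standard Hadoop YarnConfiguration API; the ConfCheck class name is made up and nothing here is Flink-specific:

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ConfCheck {
    public static void main(String[] args) {
        // new YarnConfiguration() reads core-site.xml / yarn-site.xml from the classpath;
        // without them it falls back to the defaults bundled in the Hadoop jars.
        YarnConfiguration conf = new YarnConfiguration();
        System.out.println("yarn.resourcemanager.address = "
                + conf.get(YarnConfiguration.RM_ADDRESS));
    }
}

If you run it without /etc/hadoop/conf on the -cp it prints the default 0.0.0.0:8032; with the conf dir on the classpath it prints the real ResourceManager address.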
Hope it helps,
Cheers,
Kamil

On Fri, Apr 7, 2017 at 8:04 AM, Jins George <jins.geo...@aeris.net> wrote:
> Hello Community,
>
> I have a need to submit a Flink job to a remote YARN cluster
> programmatically. I tried to use YarnClusterDescriptor.deploy(), but I get
> the message
> RMProxy.java:92:main] - Connecting to ResourceManager at /0.0.0.0:8032.
> It is trying to connect to the resource manager on the client machine. I
> have set YARN_CONF_DIR on the client machine and placed yarn-site.xml,
> core-site.xml etc. there. However, it does not seem to be picking up these
> files.
>
> Is this the right way to submit to a remote YARN cluster?
>
> Thanks,
> Jins George
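P.S. One more thought: if changing the startup classpath is not an option, you can also point a Hadoop Configuration at the site files explicitly instead of relying on YARN_CONF_DIR. Again just a sketch with plain Hadoop APIs; the /etc/hadoop/conf paths are only examples for a local copy of the remote cluster's config, and how you hand the resulting Configuration on to Flink's YarnClusterDescriptor depends on your Flink version, so I'm leaving that part out:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class RemoteYarnConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Load the remote cluster's config files explicitly instead of via the classpath.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
        // Should now print the remote ResourceManager address, not 0.0.0.0:8032.
        System.out.println(conf.get("yarn.resourcemanager.address"));
    }
}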