Hi Theofilos,

I'm not sure I correctly understand what you are trying to do.
I'm assuming you don't want to use the command-line client.

You can set up the Yarn cluster manually in your code using the
FlinkYarnClient class. Its deploy() method gives you a
FlinkYarnCluster, which you can use to connect to the deployed cluster.
From there, get the JobManager address and use the Client class to
submit Flink jobs to the cluster. I have to warn you that these
classes are subject to change in Flink 1.1.0 and above.
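
Roughly, the code would look like the sketch below. I'm writing this from
memory against the 0.10.x classes (FlinkYarnClient, AbstractFlinkYarnCluster,
Client, PackagedProgram), so the exact constructor and method signatures may
differ slightly in 0.10.1, and all paths, memory sizes and counts are just
placeholders.

import java.io.File;
import java.net.InetSocketAddress;

import org.apache.flink.client.program.Client;
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.yarn.AbstractFlinkYarnCluster;
import org.apache.flink.yarn.FlinkYarnClient;

public class YarnSubmitSketch {

    public static void main(String[] args) throws Exception {
        Configuration flinkConfig = new Configuration();

        // 1) Describe and deploy the Yarn session.
        FlinkYarnClient yarnClient = new FlinkYarnClient();
        yarnClient.setFlinkConfiguration(flinkConfig);
        yarnClient.setConfigurationDirectory("/path/to/flink/conf");        // placeholder
        yarnClient.setLocalJarPath(
                new org.apache.hadoop.fs.Path("/path/to/flink-dist.jar"));  // placeholder
        yarnClient.setTaskManagerCount(2);
        yarnClient.setJobManagerMemory(1024);
        yarnClient.setTaskManagerMemory(1024);

        AbstractFlinkYarnCluster cluster = yarnClient.deploy();

        // 2) Point the client configuration at the deployed JobManager.
        InetSocketAddress jobManager = cluster.getJobManagerAddress();
        flinkConfig.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY,
                jobManager.getHostName());
        flinkConfig.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY,
                jobManager.getPort());

        // 3) Package the user program (jar + entry class + program arguments)
        //    and submit it. This is where the job's parameters go, instead of
        //    passing them to the ApplicationMaster.
        PackagedProgram program = new PackagedProgram(
                new File("/path/to/flink-examples.jar"),                    // placeholder
                "org.apache.flink.examples.java.graph.ConnectedComponents");

        // NOTE: the Client constructor and run() signatures changed between
        // 0.9, 0.10 and 1.0 -- adjust to whatever your Flink version offers.
        Client client = new Client(flinkConfig, program.getUserCodeClassLoader());
        client.run(program, 2, true); // parallelism 2, wait for the job to finish

        cluster.shutdown(false); // tear down the Yarn session when done
    }
}

The relevant part for your problem is probably step 3: the entry class and
program arguments go to the PackagedProgram/Client on the submitting side
rather than to the ApplicationMaster.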

Let me know if the procedure works for you.

Cheers,
Max

On Tue, Apr 19, 2016 at 2:37 PM, Theofilos Kakantousis <t...@kth.se> wrote:
> Hi everyone,
>
> I'm using Flink 0.10.1 and Hadoop 2.4.0 to implement a client that submits a
> Flink application to Yarn. To keep it simple, I use the ConnectedComponents
> app from the Flink examples.
>
> I set the required properties (resources, the AM ContainerLaunchContext, etc.) on
> the YARN client interface. What happens is that the JobManager and TaskManager
> processes start and, based on the logs, the containers are running, but the
> actual application does not start. I'm probably missing the proper way to pass
> parameters to the ApplicationMaster, so it cannot pick up the application it
> needs to run. Does anyone know where I could get some info on how to pass
> runtime params to the AppMaster?
>
> The ApplicationMaster's launch_container.sh script includes the following:
> exec /bin/bash -c "$JAVA_HOME/bin/java -Xmx1024M
> org.apache.flink.yarn.ApplicationMaster  -c
> org.apache.flink.examples.java.graph.ConnectedComponents 1>
> /tmp/stdOut5237161854714899800 2>  /tmp/stdErr606502839107545371 "
>
> Thank you,
> Theofilos
>
