Hi Niels,

If you're using 1.1.1, then you can instantiate the
YarnClusterDescriptor, supply it with the Flink jar and configuration,
and subsequently call `deploy()` on it to receive a ClusterClient for
YARN to which you can submit programs using the
`run(PackagedProgram program, String args)` method. You can also
cancel jobs or shut down the cluster from the ClusterClient.
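
Roughly, the flow could look like the sketch below (written against the
1.1.x APIs as I remember them; the jar paths, the TaskManager count and
some of the setter names are placeholders/assumptions you would have to
adapt to your setup):

import java.io.File;

import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.hadoop.fs.Path;

public class OnDemandYarnBatch {

    public static void main(String[] args) throws Exception {
        // Describe the YARN session: Flink dist jar, configuration, cluster size.
        YarnClusterDescriptor descriptor = new YarnClusterDescriptor();
        descriptor.setLocalJarPath(new Path("/path/to/flink-dist.jar"));     // placeholder path
        descriptor.setConfigurationDirectory("/path/to/flink/conf");         // placeholder path
        descriptor.setFlinkConfiguration(new Configuration());               // or your loaded flink-conf.yaml
        descriptor.setTaskManagerCount(2);

        // deploy() starts the JobManager on YARN only now, i.e. once your
        // queue listener has actually received a message, and hands back a
        // client for that cluster.
        ClusterClient client = descriptor.deploy();
        try {
            // Submit the packaged batch job to the freshly started cluster.
            PackagedProgram program =
                    new PackagedProgram(new File("/path/to/batch-job.jar"));  // placeholder path
            client.run(program, 1);
        } finally {
            // Give the YARN resources back once the batch has finished.
            client.shutdown();
        }
    }
}

Your queue listener would simply run this deploy/run/shutdown sequence
whenever a message arrives, so the cluster only exists for the duration
of the batch.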

Cheers,
Max

On Thu, Aug 25, 2016 at 10:24 AM, Niels Basjes <ni...@basjes.nl> wrote:
> Hi,
>
> We have a situation where we need to start a Flink batch job on a YARN
> cluster the moment an event arrives over a queue.
> These events occur at a very low rate (like once or twice a week).
>
> The idea we have is to run an application that listens to the queue and
> executes the batch when it receives a message.
>
> We found that if we start this using 'flink run -m yarn-cluster ...', the
> JobManager in YARN is started and the resources for these batches are
> claimed immediately.
>
> What is the recommended way to claim these resources only when we actually
> have a job to run?
> Can we 'manually' start and stop the JobManager in YARN in some way from
> our Java code?
>
> --
> Best regards / Met vriendelijke groeten,
>
> Niels Basjes