Hi Xinyu,

at the moment there is no such functionality in Flink. Whenever you submit
a job, Flink will try to execute it right away. If the job cannot acquire
enough slots, it will wait until the slot.request.timeout is reached and
then either fail or retry, if you have a RestartStrategy configured.

If you want to wait until enough slots are free before submitting a job, I
would suggest writing a small service that uses Flink's REST API [1] to
query the cluster status and submits the job only once there are enough
free slots.

[1]
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/rest_api.html#overview-1

Cheers,
Till

On Wed, Jan 2, 2019 at 2:09 PM 张馨予 <wsz...@gmail.com> wrote:

> Hi all
>
> We submit some batch jobs to a Flink cluster with, for example, 500
> slots. The parallelism of these jobs may differ, ranging from 1 to 500.
>
> Is there any configuration that makes jobs run in submission order once
> the cluster has enough slots? If not, how could we meet this requirement?
>
> Thanks.
>
> Xinyu Zhang
>
