I don't know of any out-of-the-box solution for the use case you mentioned. You
could add an operator to orchestrate your Flink clusters and, when certain
conditions are met, trigger a stop with savepoint to achieve something like what
you described. Maybe Arvid can share more information.
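For reference, a stop with savepoint can be triggered from the Flink CLI; a minimal sketch, where the job ID and the savepoint directory are placeholders you would substitute for your own:

```shell
# Gracefully stop the job, taking a savepoint first.
# <jobId> and the target directory are placeholders.
flink stop --savepointPath s3://my-bucket/savepoints <jobId>

# If needed, resume later from the savepoint the stop command printed:
flink run -s s3://my-bucket/savepoints/<savepointDir> <jarFile>
```

An external operator would watch for the termination condition and issue the equivalent call against the JobManager's REST API.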

From: Sharon Xie <sharon.xie...@gmail.com>
Date: Monday, October 18, 2021 at 13:34
To: Arvid Heise <ar...@apache.org>
Cc: Fuyao Li <fuyao...@oracle.com>, user@flink.apache.org 
<user@flink.apache.org>
Subject: Re: [External] : Timeout settings for Flink jobs?
It's promising that I can use #isEndOfStream at the source. Is there a way I can
terminate a job from the sink side instead? We want to terminate a job based on
a few conditions (either hitting the timeout limit or the output count limit).

On Mon, Oct 18, 2021 at 2:22 AM Arvid Heise <ar...@apache.org> wrote:
Unfortunately, DeserializationSchema#isEndOfStream is only supported by the
KafkaConsumer. It's going to be removed entirely once we drop the
KafkaConsumer.

For newer applications, you can use KafkaSource, which allows you to specify an 
end offset explicitly.
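A minimal sketch of a bounded KafkaSource, assuming Flink 1.14+ with the flink-connector-kafka dependency; the bootstrap servers, topic, and group ID are placeholders:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BoundedKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // setBounded() gives the source an end offset; once every partition
        // reaches it, the source finishes and the whole job terminates.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // placeholder
                .setTopics("input-topic")                // placeholder
                .setGroupId("bounded-job")               // placeholder
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setBounded(OffsetsInitializer.latest()) // stop at offsets seen at startup
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .print();
        env.execute("bounded-kafka-job");
    }
}
```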

On Fri, Oct 15, 2021 at 7:05 PM Fuyao Li <fuyao...@oracle.com> wrote:
Hi Sharon,

I think for the DataStream API, you can override the isEndOfStream() method in
the DeserializationSchema to signal the input source to end, and thus end the
workflow.
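A rough sketch of the idea, assuming the legacy FlinkKafkaConsumer (which is the only source that honors isEndOfStream); the class name, record limit, and timeout are hypothetical choices for illustration:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;

// Hypothetical wrapper schema: signals end-of-stream after a record
// count or a wall-clock deadline is reached.
public class BoundedStringSchema extends SimpleStringSchema {
    private final long maxRecords;     // caller-chosen output limit
    private final long deadlineMillis; // caller-chosen timeout, absolute
    private transient long seen;

    public BoundedStringSchema(long maxRecords, long timeoutMillis) {
        this.maxRecords = maxRecords;
        this.deadlineMillis = System.currentTimeMillis() + timeoutMillis;
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        // Returning true tells the legacy FlinkKafkaConsumer to stop
        // reading, which lets the job drain and finish on its own.
        seen++;
        return seen >= maxRecords
                || System.currentTimeMillis() >= deadlineMillis;
    }
}
```

Note that the deadline is computed on the client when the job is built; each source subtask counts records independently, so the limit is per subtask rather than global.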

Thanks,
Fuyao

From: Sharon Xie <sharon.xie...@gmail.com>
Date: Monday, October 11, 2021 at 12:43
To: user@flink.apache.org <user@flink.apache.org>
Subject: [External] : Timeout settings for Flink jobs?
Hi there,

We have a use case where we want to terminate a job when a time limit is 
reached. Is there a Flink setting that we can use for this use case?


Thanks,
Sharon
