It's promising that I can use #isEndOfStream at the source. Is there a way
to terminate a job from the sink side instead? We want to terminate a job
when either of two conditions is met: a timeout limit is reached or an
output count limit is hit.
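For reference, the explicit end offset that Arvid mentions below is set with
KafkaSource's builder. A minimal sketch, assuming a hypothetical broker
address, topic name, and group id for illustration (requires the
flink-connector-kafka dependency):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class BoundedKafkaSourceSketch {

    // Builds a bounded KafkaSource: the job finishes once every partition
    // reaches the offsets that were "latest" at the time the job started.
    public static KafkaSource<String> buildBoundedSource() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // hypothetical broker
                .setTopics("events")                     // hypothetical topic
                .setGroupId("bounded-demo")              // hypothetical group id
                .setStartingOffsets(OffsetsInitializer.earliest())
                // The end condition: stop at the current latest offsets.
                .setBounded(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```

With setBounded(), the source runs in batch-style bounded mode and the job
terminates on its own once the end offsets are consumed; no sink-side
shutdown logic is needed for the offset-based case.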

On Mon, Oct 18, 2021 at 2:22 AM Arvid Heise <ar...@apache.org> wrote:

> Unfortunately, DeserializationSchema#isEndOfStream is only supported by
> the KafkaConsumer. It's going to be removed entirely once we drop the
> KafkaConsumer.
>
> For newer applications, you can use KafkaSource, which allows you to
> specify an end offset explicitly.
>
> On Fri, Oct 15, 2021 at 7:05 PM Fuyao Li <fuyao...@oracle.com> wrote:
>
>> Hi Sharon,
>>
>>
>>
>> I think for the DataStream API, you can override the isEndOfStream()
>> method in the DeserializationSchema to signal the input source to stop,
>> which in turn ends the job.
>>
>>
>>
>> Thanks,
>>
>> Fuyao
>>
>>
>>
>> *From: *Sharon Xie <sharon.xie...@gmail.com>
>> *Date: *Monday, October 11, 2021 at 12:43
>> *To: *user@flink.apache.org <user@flink.apache.org>
>> *Subject: *[External] : Timeout settings for Flink jobs?
>>
>> Hi there,
>>
>>
>>
>> We have a use case where we want to terminate a job when a time limit
>> is reached. Is there a Flink setting that we can use for this use case?
>>
>>
>>
>>
>>
>> Thanks,
>>
>> Sharon
>>
>
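The isEndOfStream() override Fuyao describes above can be sketched as
follows. Note that, as Arvid points out, only the legacy KafkaConsumer
honors this method, and the sentinel value here is purely a hypothetical
illustration:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// A DeserializationSchema whose isEndOfStream() ends the source when a
// sentinel record arrives, so the whole job can finish naturally.
public class SentinelStringSchema implements DeserializationSchema<String> {

    private static final String SENTINEL = "__END__"; // hypothetical marker

    @Override
    public String deserialize(byte[] message) {
        return new String(message, StandardCharsets.UTF_8);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        // Returning true tells the (legacy) consumer to stop reading.
        return SENTINEL.equals(nextElement);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}
```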
