Hi Georg,

Flink batch applications run until all their input is processed; once it is,
the application finishes on its own. You can read more about this in the
documentation for the DataStream [1] or Table API [2]. I believe this matches
the behaviour Spark describes in its documentation.
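For reference, this behaviour can be selected at submission time by setting the execution mode to BATCH. A minimal sketch (the jar name is a placeholder for your own application):

```shell
# Submit a job in BATCH execution mode: Flink processes the bounded
# input once and the job finishes by itself, with no trigger needed.
# Equivalently, set it in code via
# env.setRuntimeExecutionMode(RuntimeExecutionMode.BATCH);
./bin/flink run -Dexecution.runtime-mode=BATCH my-job.jar
```

Note that BATCH mode requires all sources to be bounded; with an unbounded source the job keeps running as a streaming job.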

Best regards,

Martijn

[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/execution_mode/
[2]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/common/

On Mon, 2 May 2022 at 16:46, Georg Heiler <georg.kf.hei...@gmail.com> wrote:

> Hi,
>
> spark
> https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers
> offers a variety of triggers.
>
> In particular, it also has the "once" mode:
>
> *One-time micro-batch* The query will execute *only one* micro-batch to
> process all the available data and then stop on its own. This is useful in
> scenarios where you want to periodically spin up a cluster, process
> everything that is available since the last period, and then shut down the
> cluster. In some cases, this may lead to significant cost savings.
>
> Does flink have a similar possibility?
>
> Best,
> Georg
>
