Hi,

I am currently deploying a Flink pipeline using the Flink Kubernetes Operator
v1.9 with Flink 1.19.1, as part of a larger use case. When the use case is
undeployed, I need to ensure that the Flink job is properly canceled and the
Flink cluster is fully taken down.

My approach is to first cancel the job via the cluster's REST API and then
delete the FlinkDeployment resource. This successfully tears down the Flink
cluster. However, after some time, the Flink Operator runs a reconciliation
pass and the cluster is restarted.
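
For concreteness, here is a minimal sketch of the teardown sequence I am
using. The REST endpoint address and job ID below are placeholders, and the
custom resource deletion is done separately (e.g. via kubectl):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CancelFlinkJob {
    public static void main(String[] args) throws Exception {
        // Placeholders: JobManager REST endpoint and the ID of the running job.
        String restBase = "http://flink-jobmanager:8081";
        String jobId = args[0];

        // Step 1: cancel the job through the Flink REST API
        // (PATCH /jobs/:jobid?mode=cancel).
        HttpRequest cancel = HttpRequest.newBuilder()
                .uri(URI.create(restBase + "/jobs/" + jobId + "?mode=cancel"))
                .method("PATCH", HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(cancel, HttpResponse.BodyHandlers.ofString());
        System.out.println("cancel returned HTTP " + resp.statusCode());

        // Step 2 (done separately): delete the custom resource, e.g.
        //   kubectl delete flinkdeployment <deployment-name>
    }
}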

In the FlinkDeployment controller implementation, I observed that the
reconciler compares against the previous status, and if the old status was
"deployed," the cluster is recreated.
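
To illustrate, this is a toy sketch of the behavior as I understand it; the
names below are mine for illustration only, not the operator's actual API:

// All names here are hypothetical, for illustration only.
enum LastStatus { DEPLOYED, MISSING }

public class ReconcileSketch {
    // Simplified version of the check I believe I am seeing: if the last
    // recorded status says the cluster was deployed but it no longer
    // exists, the reconciler brings it back up.
    static void reconcile(LastStatus lastStatus, boolean clusterExists) {
        if (lastStatus == LastStatus.DEPLOYED && !clusterExists) {
            System.out.println("recreating Flink cluster");
        }
    }

    public static void main(String[] args) {
        reconcile(LastStatus.DEPLOYED, false); // this is the restart I want to avoid
    }
}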

I would like to know the recommended approach for achieving this flow:
terminating the Flink job and fully bringing down the Flink cluster via
external calls, without the Operator triggering a cluster restart.

Thanks

Sigalit
