I've read the upgrade application page at
<https://ci.apache.org/projects/flink/flink-docs-stable/ops/upgrading.html>.
That page seems to focus on doing this from a wrapper layer (e.g. Kubernetes).
I'm just checking whether that is the common practice, or whether people do
this from their client jars.



On Sun, Sep 20, 2020 at 5:13 PM Dan Hill <quietgol...@gmail.com> wrote:

> I'm prototyping with Flink SQL.  I'm iterating on a client job with
> multiple INSERT INTOs.  Whenever I have an error, my Kubernetes job
> retries.  This creates multiple stream jobs with the same names.
>
> Is it up to clients to delete the existing jobs?  I see Flink CLI
> functions for this.  Do people usually do this from inside their client
> jar or from their wrapper code (e.g. the Kubernetes job)?
>
> - Dan
>
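
For reference, a minimal sketch of the kind of client job described above,
with the multiple INSERT INTOs grouped into one StatementSet so that they are
submitted together as a single job rather than one job per statement. The
table names, schemas, connectors, and queries are all placeholders.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class MultiInsertJob {

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Placeholder source and sinks; a real job would use Kafka, JDBC, etc.
        tEnv.executeSql(
                "CREATE TABLE events (id BIGINT, name STRING) WITH ('connector' = 'datagen')");
        tEnv.executeSql(
                "CREATE TABLE sink_a (id BIGINT) WITH ('connector' = 'blackhole')");
        tEnv.executeSql(
                "CREATE TABLE sink_b (name STRING) WITH ('connector' = 'blackhole')");

        // Adding both INSERT INTO statements to one StatementSet keeps them in
        // a single job, instead of every statement starting its own job.
        StatementSet statements = tEnv.createStatementSet();
        statements.addInsertSql("INSERT INTO sink_a SELECT id FROM events");
        statements.addInsertSql("INSERT INTO sink_b SELECT name FROM events");
        statements.execute();
    }
}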
