Hi everyone!
I have encountered similar behavior in the case of native k8s HA with
multiple JobManagers.
Therefore, I have a question: were there any plans to add the ability
to avoid restarting the job when the JobManager leader changes? Or are
there certain insurmountable obstacles preventing this?
+1 to what Konstantin said. There is no real benefit to running
multiple JMs on k8s unless you need to optimize the JM startup time.
Often the time to get a replacement pod is negligible compared to the
job restart time.
Thomas
On Tue, May 10, 2022 at 2:27 AM Őrhidi Mátyás wrote:
Ah, ok. Thanks, Konstantin for the clarification, I appreciate the quick
response.
Best,
Matyas
On Tue, May 10, 2022 at 10:59 AM Konstantin Knauf wrote:
Hi Matyas,
yes, that's expected. The feature should have never been called "high
availability", but something like "Flink Jobmanager failover", because
that's what it is.
With standby Jobmanagers, what you gain is a faster failover, because a
new Jobmanager does not need to be started before restarting the job.
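For anyone following along, a minimal sketch of what such a setup might look like in flink-conf.yaml. The option names below are taken from the Flink Kubernetes HA documentation as I recall them; please verify them against your Flink version, and note that the storage path is a placeholder:

```yaml
# Enable Kubernetes-based HA services (leader election via ConfigMaps)
high-availability: kubernetes

# Durable storage for JobManager metadata (placeholder bucket/path)
high-availability.storageDir: s3://my-bucket/flink/ha

# One active plus one standby JobManager pod (native Kubernetes mode)
kubernetes.jobmanager.replicas: 2
```

As discussed above, even with replicas set, a leader change still triggers a job restart from the latest checkpoint; the standby only removes the pod startup time from the failover path.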