Hi,
In this case, you could cancel the job with the flink stop command, which
cleans up Flink's HA metadata, and then resubmit the job.
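As a sketch of that sequence (the job id, savepoint paths, and jar name below are placeholders, not values from this thread):

```shell
# Stop the job with a final savepoint; on success this also cleans up
# the HA metadata for the job. <job-id> is a placeholder.
flink stop --savepointPath s3://my-bucket/savepoints <job-id>

# Resubmit, restoring from the savepoint just taken (path is a placeholder).
flink run --fromSavepoint s3://my-bucket/savepoints/savepoint-xxxxxx myJob.jar
```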
Best,
Zhanghao Chen
From: Jean-Marc Paulin
Sent: Monday, June 10, 2024 18:53
To: user@flink.apache.org
Subject: Failed to resu
Hi,
We have a 1.19 Flink streaming job, with HA enabled (ZooKeeper), and
checkpoints/savepoints in S3. We had an outage and now the JobManager keeps
restarting. We think it is because it reads the job id to be restarted from
ZooKeeper, but because we lost our S3 storage as part of the outage it cannot
f
YW, ping me back on whether it works; it's a nifty feature.
G
On Mon, Jun 10, 2024 at 9:26 AM Salva Alcántara wrote:
> Thanks Gabor, I will give it a try!
>
> On Mon, Jun 10, 2024 at 12:01 AM Gabor Somogyi wrote:
>
>> Now I see the intention and then you must have a V2 sink, right? Maybe
Thanks Gabor, I will give it a try!
On Mon, Jun 10, 2024 at 12:01 AM Gabor Somogyi wrote:
> Now I see the intention, and then you must have a V2 sink, right? Maybe you
> are looking for the following:
>
> final String writerHash = "f6b178ce445dc3ffaa06bad27a51fead";
> final String committerHash = "68ac8
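[Editor's note: the quoted snippet appears to map onto Flink's CustomSinkOperatorUidHashes API (available since Flink 1.15), which lets you pin the uid hashes of the writer and committer operators that a V2 sink expands into, so state from an existing savepoint can be matched to them. A hedged sketch, assuming that API; the committer hash below is a placeholder since it is truncated in the thread, and stream/mySinkV2 stand in for your own pipeline:]

```java
import org.apache.flink.streaming.api.datastream.CustomSinkOperatorUidHashes;

// Pin the operator uid hashes for a V2 sink so that restored state maps
// onto the expanded writer/committer operators. Writer hash is the one
// quoted in the thread; the committer hash here is a placeholder.
CustomSinkOperatorUidHashes hashes =
    CustomSinkOperatorUidHashes.builder()
        .setWriterUidHash("f6b178ce445dc3ffaa06bad27a51fead")
        .setCommitterUidHash("<committer-uid-hash>") // placeholder
        .build();

// Attach the sink with the pinned hashes instead of plain sinkTo(sink).
stream.sinkTo(mySinkV2, hashes);
```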