Hi Salva,
It seems a similar issue has been reported against the Apache RAT project [1] for a long
time, but there is no solution yet.
According to the Flink contribution guide, it is suggested to run `mvn clean
verify` and the end-to-end tests before submitting a Pull Request [2].
Best Regards,
Xiqian
[1] https://
Mystery solved.
I am using a special pod for job deployments. It runs the same image version as
the cluster; we just use it for job deployments.
Well, that pod didn't have a proper Flink configuration, so the result was the
minimal config on the deployer plus whatever the actual job had set for checkpointing.
Simply put, HA metadata will only be deleted when the job reaches a terminal
state (either failed or cancelled). The reference doc is
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/task_failure_recovery/#restart-strategies
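For illustration, a restart strategy can be configured so that task failures trigger restarts instead of driving the job straight to a terminal FAILED state (a sketch using the fixed-delay keys from the linked page; the attempt count and delay are illustrative values, not recommendations):

```yaml
# Illustrative flink-conf.yaml fragment. While restarts are still being
# attempted the job has not reached a terminal state, so HA metadata is kept.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3   # job goes terminal (FAILED) only after 3 failed attempts
restart-strategy.fixed-delay.delay: 10 s   # wait 10 s between restart attempts
```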
Best,
Zhanghao Chen
In short, if you don't care about
multiple KeyedStateReaderFunction.readKey calls for the same key, then you're on
the safe side.
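The caveat can be illustrated with a plain-Python stand-in (not the actual Flink State Processor API; `read_key` and its arguments here are hypothetical): if `readKey` may be invoked more than once for the same key, any side effect outside the collector is duplicated, while merely emitting rows is fine when duplicates are tolerable.

```python
# Hypothetical sketch, NOT Flink code: models a KeyedStateReaderFunction.readKey
# that the framework may call twice for the same key (e.g. on retry).

external_counter = 0  # side effect outside the collector: unsafe under re-reads

def read_key(key, out):
    """Stand-in for readKey(key, ctx, out): reads state for one key."""
    global external_counter
    external_counter += 1       # duplicated if readKey runs a second time
    out.append(f"{key}:state")  # emitting is fine if downstream tolerates duplicates

out = []
read_key("user-1", out)
read_key("user-1", out)  # simulated second invocation for the same key
print(external_counter, len(out))  # both are 2: the side effect was double-counted
```

If the read is purely a function of the key's state and you deduplicate (or don't care) downstream, repeated invocations are harmless; any external mutation makes them unsafe.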
G
On Wed, Feb 5, 2025 at 6:27 PM Jean-Marc Paulin wrote:
> I am still hoping that I am still good. I just read the savepoint to
> extract information (parallelism 1, and only 1 task