Hi Team,
I am currently running batch jobs on Flink 1.19.2, and the expected data volume for a certain step is above 1 TB.
What I have observed is that every time my job gets close to 30%, it fails with the below error after a few retries. I am currently running this on Kubernetes and using 6 CPU an
Hi Lu Niu,
Your scenario sounds the same as mine, so I'm glad to share my solution:
1. Use operator state to store the custom statistics data.
2. Save that data to external storage when the operator is closing.
3. Query the external storage and apply a reduce operation over the data.
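To make step 3 concrete, here is a minimal, self-contained sketch of the final merge outside Flink. It assumes (hypothetically) that each subtask wrote its partial statistics as a `stats-subtask-<id>.json` file on close; the file naming, the statistics keys, and the helper names are all illustrative, not part of any Flink API.

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def write_subtask_stats(directory: Path, subtask_id: int, stats: dict) -> None:
    """Stand-in for step 2: persist one subtask's statistics on close()."""
    path = directory / f"stats-subtask-{subtask_id}.json"
    path.write_text(json.dumps(stats))

def merge_stats(directory: Path) -> dict:
    """Step 3: query the external storage and reduce the partial statistics
    by summing counts across all subtask files."""
    total = Counter()
    for path in sorted(directory.glob("stats-subtask-*.json")):
        total.update(json.loads(path.read_text()))
    return dict(total)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        d = Path(tmp)
        # Simulate two subtasks flushing their stats as they close.
        write_subtask_stats(d, 0, {"records": 100, "errors": 2})
        write_subtask_stats(d, 1, {"records": 150, "errors": 1})
        print(merge_stats(d))  # {'records': 250, 'errors': 3}
```

In a real job the directory would be a shared store (e.g. an object-store prefix) rather than a local temp dir, and the reduce could be any associative merge, not just a sum.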