I gave a talk about that setup:
https://www.youtube.com/watch?v=tiGxEGPyqCg&ab_channel=FlinkForward
The documentation suggests using unaligned checkpoints in case of
backpressure:
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/ops/state/checkpointing_under_backpressure/#unaligned
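For anyone following along, turning them on in `flink-conf.yaml` looks roughly like the sketch below; the keys are the ones documented for Flink 1.15, and the timeout value is just an illustrative choice:

```yaml
# Enable unaligned checkpoints (Flink 1.15 key names)
execution.checkpointing.unaligned: true
# Optionally start aligned and only switch to unaligned once barrier
# alignment has taken this long (value here is an arbitrary example)
execution.checkpointing.aligned-checkpoint-timeout: 30 s
```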
Yes. I do use RocksDB for (incremental) checkpointing. During each checkpoint,
15-20 GB of state is created (new state added, some expired). I make use of
FIFO compaction.
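For reference, that incremental-RocksDB-plus-FIFO setup can be expressed in `flink-conf.yaml`; this is a sketch using the option keys documented for Flink 1.15:

```yaml
state.backend: rocksdb
# Incremental checkpoints: upload only new/changed SST files
state.backend.incremental: true
# FIFO compaction drops the oldest SST files first, which suits
# state with TTL-style expiry; use with care, since expired-by-drop
# data disappears wholesale rather than per key
state.backend.rocksdb.compaction.style: FIFO
```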
I’m a bit surprised you were able to run with 10+TB state without unaligned
checkpoints, because the performance in my application
Hi Vishal,
Just wanted to comment on this bit:
> My job has a very large amount of state (>100GB) and I have no option but
to use unaligned checkpoints.
I successfully ran Flink jobs with 10+ TB of state and no unaligned
checkpoints enabled. Usually, you consider enabling them when there is some
k
I wanted to achieve exactly-once semantics in my job and wanted to make sure I
understood the current behaviour correctly:
1. Only one Kafka transaction at a time (no concurrent checkpoints)
2. Only one transaction per checkpoint
My job has a very large amount of state (>100GB) and I have no option but to
use unaligned checkpoints.
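For context, behaviours (1) and (2) follow from checkpointing configuration rather than anything Kafka-specific; a minimal sketch of the relevant `flink-conf.yaml` keys (Flink 1.15 names), assuming the Kafka sink is built with `DeliveryGuarantee.EXACTLY_ONCE`:

```yaml
# Exactly-once checkpointing mode (this is also the default)
execution.checkpointing.mode: EXACTLY_ONCE
# One checkpoint in flight at a time, hence at most one open
# Kafka transaction per sink subtask
execution.checkpointing.max-concurrent-checkpoints: 1
```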