Hi,

In my project, we are trying to configure incremental checkpointing
with the RocksDB state backend.

We are using Flink 1.11 and RocksDB, with AWS S3 as the checkpoint storage.

Issue:
------
In my pipeline, the window size is 5 minutes and an incremental checkpoint
is taken every 2 minutes.
I am pumping data in such a way that no two records share the same key.
That means the incremental checkpoint size should keep growing with each
checkpoint.

So the expectation is that the checkpoint size should reach at least
3-5 GB, given the amount of data pumped in.

However, the checkpoint size never goes beyond 300 MB, and even that
300 MB checkpoint takes around 2 minutes to complete.

My setup is as follows:

Cluster: cloud cluster with instance storage
Memory: 20 GB
Heap: 10 GB
Flink memory: 4.5 GB
Flink version: 1.11
State backend: RocksDB, with AWS S3 for checkpoints
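
For reference, the relevant part of our flink-conf.yaml looks roughly
like the sketch below (the bucket path is a placeholder, not our real
bucket):

```yaml
# State backend and incremental checkpointing (Flink 1.11)
state.backend: rocksdb
state.backend.incremental: true

# Checkpoint storage on S3 (placeholder bucket)
state.checkpoints.dir: s3://<our-bucket>/checkpoints

# Checkpoint every 2 minutes
execution.checkpointing.interval: 2min
```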


I suspect there must be a bottleneck somewhere in the Flink/RocksDB
configuration.
Can you please advise?

Thanks,
