I think what happens is the following:
- For full checkpoints, Flink iterates asynchronously over the data. That
means the whole checkpoint is a compact asynchronous operation.
- For incremental checkpoints, RocksDB has to flush the write buffer and
create a new SSTable. That flush is synchronous.
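For reference, a minimal sketch of how the two modes are selected in code
(assuming Flink 1.4+; the checkpoint URI and class name are placeholders,
and the second constructor argument of RocksDBStateBackend toggles
incremental checkpointing):

// requires the flink-statebackend-rocksdb dependency
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDBCheckpointModes {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // placeholder checkpoint directory; adjust to your setup
        String checkpointUri = "file:///tmp/flink-checkpoints";

        // true  -> incremental snapshots (the write buffer is flushed into a
        //          new SSTable as part of the synchronous phase)
        // false -> full snapshots (state is iterated asynchronously)
        RocksDBStateBackend backend = new RocksDBStateBackend(checkpointUri, true);

        env.setStateBackend(backend);
        env.enableCheckpointing(10_000); // trigger a checkpoint every 10 s
    }
}
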
Hi Nico,
Thanks for the detailed explanation. I corrected the two issues you
mentioned in my application and was able to observe the behavior you
described with Flink 1.4.1. As you said, the "Asynchronous RocksDB
snapshot ..." message appears only for full snapshots. The incremental snapshot

Hi Miyuru,
Regarding "state.backend": I was looking at the version 1.5 docs, and some
things have changed compared to 1.3. The "Asynchronous RocksDB snapshot ..."
messages only occur with full snapshots, i.e. non-incremental ones, and I
verified this for your program as well.
There are some issues with your project
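For reference, with 1.5 the backend and the incremental flag can also be set
entirely in flink-conf.yaml. A sketch, assuming the 1.5 configuration keys
(worth double-checking against the docs); the directory is a placeholder:

state.backend: rocksdb
state.checkpoints.dir: file:///tmp/flink-checkpoints
state.backend.incremental: true
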
Hi Nico,
Thanks for the detailed explanation. The only change I have made in my
flink-conf.yaml file is the following:
state.backend.fs.checkpointdir: file:///home/ubuntu/tmp-flink-rocksdb
The default "state.backend" value is set to filesystem. Removing the
env.setStateBackend() method code or ch
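As an aside (a sketch of my understanding, not verified against this setup):
a backend set programmatically via env.setStateBackend() takes precedence
over the "state.backend" entry in flink-conf.yaml, so to pick up RocksDB
from the configuration alone it would have to be selected there explicitly,
e.g.:

state.backend: rocksdb
state.backend.fs.checkpointdir: file:///home/ubuntu/tmp-flink-rocksdb
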
Hi Miyuru,
Indeed, the behaviour you observed sounds strange and kind of goes against
the results Stefan presented in [1]. To see what is going on, can you
also share your changes to Flink's configuration, i.e. flink-conf.yaml?
Let's first make sure you're really comparing RocksDBStateBackend with
v