Hi All,


I have a Flink job with a RabbitMQ source, keyBy, tumbling window with an aggregate function, and a sink.

I’m using the RocksDB state backend with incremental checkpoints. Checkpointing runs every 10 s, with a 5 s minimum pause between checkpoints.
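
For context, this is roughly how the job sets up the state backend and checkpointing (the class name and checkpoint path below are just placeholders, not the real job code):

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB state backend with incremental checkpoints enabled
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

        // checkpoint every 10 s, with at least 5 s pause between checkpoints
        env.enableCheckpointing(10_000);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5_000);

        // checkpoint directory (example path only)
        env.getCheckpointConfig().setCheckpointStorage("file:///tmp/flink-checkpoints");

        // ... RabbitMQ source -> keyBy -> tumbling window -> aggregate -> sink ...
    }
}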



I used a constant load with the same keys for my test, and I ran the same test 5 times. The results are below.



Checkpoint size   Full checkpoint size   Iteration
---------------   --------------------   -----------
303 KB            303 KB                 initial
321 KB            612 KB                 iteration 1
339 KB            339 KB                 iteration 2
356 KB            585 KB                 iteration 3
375 KB            375 KB                 iteration 4
392 KB            609 KB                 iteration 5



1. Why is the checkpoint size increasing over time for the same keys and the same load? Does it mean a lot of managed memory or disk space is still available, so compaction hasn't triggered yet?

2. The full checkpoint size drops back down in between. Is that because of compaction?

3. I have also enabled the RocksDB native metrics, but they only show pending compactions. Is there any property that shows completed compactions? (How I enable the metrics is sketched after these questions.)

4. I am using the default RocksDB configuration (Flink version 1.18), so the write buffer size is 64 MB and an L0 compaction will create an L1 file of 64 MB.

What is the default level size multiplier, and how many levels are there in total? (See the options sketch after these questions for the settings I mean.)

5. Let's say the memtable has not yet filled up with my state (i.e. it is below 64 MB) when the operator needs to take a snapshot. How is that handled? Will the in-flight data be copied as-is, before the memtable contents are flushed to .sst files? I ask because I only sometimes see .sst files; other times a file without an extension is present in the checkpoint directory.

6. If I reduce my disk space, will compaction run more often? If so, how do I decide how much space is needed for state storage?

7. What is state.backend.rocksdb.compaction.level.max-size-level-base? The documentation says "The upper-bound of the total size of level base files in bytes. The default value is '256MB'.", but that is not clear to me. Is it about lmax file sizes? The size of a single lmax file, or the total size of lmax?
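
For question 3, this is roughly how I enable the native metrics (only the compaction-related keys are shown; the key names are as I read them from the RocksDB native metric options, so please correct me if any are wrong):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NativeMetricsSetup {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // compaction-related native metrics
        conf.setString("state.backend.rocksdb.metrics.compaction-pending", "true");
        conf.setString("state.backend.rocksdb.metrics.num-running-compactions", "true");
        conf.setString("state.backend.rocksdb.metrics.estimate-pending-compaction-bytes", "true");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
    }
}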
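
And for questions 4 and 7, this is the kind of options factory I have in mind when I talk about those settings. The values in the comments (multiplier 10, 7 levels, 256 MB base level size) are only the defaults as I understand them, not something I have verified:

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

import java.util.Collection;

public class LevelOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions,
                                     Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions,
                                                   Collection<AutoCloseable> handlesToClose) {
        return currentOptions
                // what I understand max-size-level-base to control (default 256 MB)
                .setMaxBytesForLevelBase(256 * 1024 * 1024L)
                // level size multiplier (RocksDB default is 10, as far as I know)
                .setMaxBytesForLevelMultiplier(10)
                // total number of levels (RocksDB default is 7, as far as I know)
                .setNumLevels(7);
    }
}

I would plug this in via EmbeddedRocksDBStateBackend#setRocksDBOptions(...) or the state.backend.rocksdb.options-factory option, so please tell me if my reading of these settings is off.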



Thanks

Banu
