Hi,

We are using ValueState to maintain state. It is a fairly simple job: a
keyBy on the stream, followed by a map operator that keeps its per-key
state in a ValueState instance. The transaction load is on the order of a
billion transactions per day. The state per key, however, is just a list of
18x6 long values, which are constantly updated. We have about 20 million
keys, and transactions are uniformly distributed across those keys.
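
For reference, the state access in the map operator looks roughly like the
sketch below; Transaction, its key field, and the update logic are
simplified placeholders rather than our actual code:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;

// Simplified stand-in for our actual transaction class
class Transaction {
    public String key;
}

public class StatsMapper extends RichMapFunction<Transaction, Transaction> {

    // 18x6 long values kept per key
    private transient ValueState<long[][]> stats;

    @Override
    public void open(Configuration parameters) {
        stats = getRuntimeContext().getState(
                new ValueStateDescriptor<>("stats", long[][].class));
    }

    @Override
    public Transaction map(Transaction txn) throws Exception {
        long[][] current = stats.value();
        if (current == null) {
            current = new long[18][6];   // first transaction seen for this key
        }
        // ... update the 18x6 matrix from the transaction ...
        stats.update(current);           // write the updated value back
        return txn;
    }
}

The operator is wired up as stream.keyBy(txn -> txn.key).map(new StatsMapper()).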

When the job starts, the checkpoints (RocksDB state backend with
checkpoints on S3) are small, on the order of 500 MB. After 12 hours of
operation, however, the checkpoint size has grown to about 4-5 GB. The time
to complete a checkpoint starts at around 15-20 seconds and after 12 hours
reaches about a minute.
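
For completeness, the state backend and checkpointing are configured
roughly as in the sketch below; the bucket path and the checkpoint interval
shown are placeholders, not our exact configuration:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class JobSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB state backend with checkpoint data written to S3
        env.setStateBackend(new RocksDBStateBackend("s3://<bucket>/checkpoints"));

        // checkpoint interval (placeholder value)
        env.enableCheckpointing(60_000);

        // ... source, keyBy and the stateful map described above ...

        env.execute("transaction-stats");
    }
}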

Since the state per key is fixed in size and the number of keys is bounded
at about 20 million, we expected the checkpoint size to plateau. What could
be the reason behind the steadily increasing checkpoint size?

Thanks,
Sameer
