Hi Andrey,
Thank you for the reply.
We are using incremental checkpointing.
Good to know that the incremental cleanup only applies to the heap state
backend. It looks like taking some downtime to create a full savepoint and
restore everything is inevitable.
Thanks,
--
Ning
On Wed, 15 May 2019 10:5
Hi Ning,
If you have not activated non-incremental checkpointing (i.e. your checkpoints
are incremental), then taking a savepoint is the only way to trigger a full
snapshot. In any case, it will take time.
The incremental cleanup strategy is applicable only to the heap state backend
and does nothing for the RocksDB backend. At the moment, yo
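For illustration, enabling the incremental cleanup strategy looks roughly like
the sketch below. The TTL duration, state name, and class name are placeholders
I made up, not anything from this thread:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class IncrementalTtlExample {
    public static ValueStateDescriptor<String> descriptorWithIncrementalTtl() {
        StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.days(7))        // placeholder TTL duration
            // Check up to 10 entries per state access; 'false' means cleanup is
            // not additionally triggered for every processed record. This only
            // takes effect on the heap backends and is a no-op with RocksDB.
            .cleanupIncrementally(10, false)
            .build();
        ValueStateDescriptor<String> descriptor =
            new ValueStateDescriptor<>("my-state", String.class); // placeholder name
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}
```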
Hi Stefan,
Thank you for the confirmation.
Doing a one-time cleanup with a full snapshot and then upgrading to Flink 1.8
could work. However, in our case, the state is quite large (TBs).
Taking a savepoint takes over an hour, during which we have to pause
the job; otherwise it will process more events.
The Java
Hi,
If you are worried about old state, you can combine the compaction-filter-based
TTL with other cleanup strategies (see the docs). For example, if you set
`cleanupFullSnapshot`, the savepoint you take will be cleared of any
expired state, and you can then use it to bring the job into Flink 1.8.
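Concretely, a minimal sketch of that configuration might look like the
following; the TTL duration, state name, and class name are placeholders I
made up:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class FullSnapshotTtlExample {
    public static ValueStateDescriptor<String> descriptorWithSnapshotCleanup() {
        StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.days(7)) // placeholder TTL duration
            // Expired entries are filtered out whenever a full snapshot
            // (e.g. a savepoint) is taken.
            .cleanupFullSnapshot()
            .build();
        ValueStateDescriptor<String> descriptor =
            new ValueStateDescriptor<>("my-state", String.class); // placeholder name
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}
```

The savepoint taken with this configuration should already be free of expired
entries, so restoring it into 1.8 starts from the cleaned-up state.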
Best,
Just wondering if anyone has any insights into the new TTL state cleanup
feature mentioned below.
Thanks,
—
Ning
> On Mar 11, 2019, at 1:15 PM, Ning Shi wrote:
>
> It's exciting to see the TTL state cleanup feature in 1.8. I have a question
> regarding the migration of existing TTL state to the newer version.
It's exciting to see the TTL state cleanup feature in 1.8. I have a question
regarding the migration of existing TTL state to the newer version.
Looking at the Pull Request [1] that introduced this feature, it seems
that Flink is leveraging RocksDB's compaction filter to remove stale state.
I ass
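Based on my reading of the docs, opting a state into the compaction-filter
cleanup would look roughly like the sketch below; the TTL duration, state
name, and class name are placeholders, and in 1.8 the filter apparently also
has to be switched on globally in flink-conf.yaml:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class CompactionFilterTtlExample {
    public static ValueStateDescriptor<String> descriptorWithCompactionFilter() {
        // Per the 1.8 docs, this also needs the global switch in flink-conf.yaml:
        //   state.backend.rocksdb.ttl.compaction.filter.enabled: true
        StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.days(7)) // placeholder TTL duration
            // Expired entries are dropped as RocksDB compacts SST files,
            // so cleanup happens in the background without a full snapshot.
            .cleanupInRocksdbCompactFilter()
            .build();
        ValueStateDescriptor<String> descriptor =
            new ValueStateDescriptor<>("my-state", String.class); // placeholder name
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}
```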