Flink MapState manages the TTL of each map entry independently. If you
add/update a “new hash”, the “old hash” that was written one hour ago stays
unaffected and still expires on schedule. So this should meet your
requirements for deduplication and size control.
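As a sketch of that setup (illustrative only — the descriptor name and value type are assumptions, and this reuses the TTL config quoted later in this thread), per-entry TTL on a MapState would look like:

```java
// Sketch: per-entry TTL on a MapState used as a "seen hashes" set.
// Descriptor name and Boolean value type are illustrative choices.
StateTtlConfig ttlConfig = StateTtlConfig.newBuilder(Duration.ofHours(1))
        .setUpdateType(StateTtlConfig.UpdateType.OnReadAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .build();

MapStateDescriptor<String, Boolean> seenHashes =
        new MapStateDescriptor<>("seen-hashes", String.class, Boolean.class);
seenHashes.enableTimeToLive(ttlConfig); // TTL is tracked per map entry

// In processElement you would do roughly:
//   if (!state.contains(hash)) { state.put(hash, true); out.collect(element); }
// Putting a new hash does not refresh the timestamps of older entries.
```

Since each entry carries its own timestamp, old hashes age out after an hour regardless of how many new hashes are written.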
> On 4 June 2025, at 10:54, Sachin Mi
Another option is to key the stream and use a simple function like this to
de-dupe each stream element:
public class DeduplicateElement extends KeyedProcessFunction<String, Event, Event> {

    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen", Boolean.class));
    }

    @Override
    public void processElement(Event element, Context ctx, Collector<Event> out)
            throws Exception {
        if (seen.value() == null) { // first element for this key: emit and remember
            seen.update(true);
            out.collect(element);
        }
    }
}
+user@flink.apache.org
On Wed, 4 Jun, 2025, 9:41 am Owais Ansari, wrote:
> It expires the individual key and not the entire state. For your use case
> Map state is a good option.
>
> On Wed, 4 Jun, 2025, 8:26 am Sachin Mittal, wrote:
>
>> So my TTL config is like:
>>
>> StateTtlConfig.newBuilder(Duration.ofHours(1)) …
So my TTL config is like:
StateTtlConfig.newBuilder(Duration.ofHours(1))
.setUpdateType(StateTtlConfig.UpdateType.OnReadAndWrite)
.setStateVisibility(StateTtlConfig.StateVisibility.ReturnExpiredIfNotCleanedUp)
.build();
Issue is that every time I use ListState.update, it would refresh the TTL
for every entry in the list.
Hi Sachin,
I assume you are using the RocksDB state backend. The TTL for ListState is
applied to each list entry if you are using `ListState.add`. However, if you
do `ListState.update`, the entire list is rewritten, so the TTL is refreshed
for all entries.
Could you share your use case and the TTL config?
Another sugge
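To illustrate that difference, here is a toy model in plain Java (not Flink internals — just a sketch of the per-entry-timestamp semantics): `add` stamps only the new entry, while `update` rewrites every entry and therefore re-stamps all of them:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of per-entry TTL on a list state (illustrative, not Flink code).
class TtlListModel {
    static final class Entry {
        final String value;
        final long writeTime;
        Entry(String value, long writeTime) { this.value = value; this.writeTime = writeTime; }
    }

    final List<Entry> entries = new ArrayList<>();

    // Like ListState.add: only the new entry gets the current timestamp.
    void add(String v, long now) {
        entries.add(new Entry(v, now));
    }

    // Like ListState.update: the whole list is rewritten, so every entry is re-stamped.
    void update(List<String> vs, long now) {
        entries.clear();
        for (String v : vs) entries.add(new Entry(v, now));
    }

    // Expiration drops entries whose timestamp is at least ttl old.
    void expire(long now, long ttl) {
        entries.removeIf(e -> now - e.writeTime >= ttl);
    }
}
```

With `add`, an entry written at t=0 still expires at t=100 even if another entry is added at t=50; with `update` at t=50, both entries survive until t=150.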
Hi,
I think TTL would be applied for the entire list.
I would like the ListState to restrict the entries by size and
automatically purge the oldest entries as new ones are added,
something similar to a bounded list.
Thanks
Sachin
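Flink has no built-in bounded ListState, but the size cap can be enforced by hand: keep the newest N entries and evict the oldest on overflow. A minimal sketch of that eviction logic in plain Java (inside a Flink function you would read the list from state, trim it like this, and write it back with `update` — the class and method names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a bounded list: adding beyond capacity evicts the oldest entry.
class BoundedList<T> {
    private final int capacity;
    private final Deque<T> entries = new ArrayDeque<>();

    BoundedList(int capacity) {
        this.capacity = capacity;
    }

    void add(T value) {
        if (entries.size() == capacity) {
            entries.removeFirst(); // purge the oldest entry to make room
        }
        entries.addLast(value);
    }

    Deque<T> snapshot() {
        return entries;
    }
}
```

Note that writing the trimmed list back via `ListState.update` would refresh the TTL of all remaining entries, which is exactly the interaction discussed earlier in this thread.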
On Thu, May 29, 2025 at 6:51 PM Sigalit Eliazov wrote:
> hi,
>