Re: Re: Checkpoints and windows size

2024-06-19 Thread Feifan Wang
Hi banu:

> Not all old sst files are present. Few are removed (I think it is because of compaction).

You are right. RocksDB implements deleting a key by inserting an entry with a null value (a tombstone); the space is released after compaction.

> Now how can I maintain the checkpoint size under control?

S…
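A minimal sketch of one knob that helps here, assuming Flink 1.13+ with the RocksDB state backend dependency on the classpath (the checkpoint path is hypothetical): with incremental checkpoints, each checkpoint uploads only new SST files, and files made obsolete by compaction stop being referenced by later checkpoints.

    import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IncrementalCheckpointSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Incremental mode: each checkpoint uploads only SST files created
            // since the previous one; files whose entries were tombstoned stop
            // being referenced once compaction has rewritten them.
            env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

            // Hypothetical durable location; the "shared" subdirectory is where
            // SST files reused across incremental checkpoints accumulate.
            env.getCheckpointConfig().setCheckpointStorage("file:///tmp/checkpoints");

            // ... build and execute the job ...
        }
    }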

Re: Checkpoints and windows size

2024-06-19 Thread Feifan Wang
Hi banu, First of all, it should be noted that the checkpoint interval does not affect the lifetime of the window operator's state data. The life cycle of the state data is the same as the life cycle of the tumbling window itself. A checkpoint is a consistent snapshot of the job (including state d…
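To make this concrete, a minimal sketch (the source elements and names are hypothetical, assuming a Flink 1.x API): the state backing each 2s tumbling window is created when its first element arrives and purged when the window fires; the checkpoint interval only decides how often that state is snapshotted.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WindowStateLifetime {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(10_000); // snapshots state, does not extend its life

            env.fromElements(Tuple2.of("key", 1), Tuple2.of("key", 2))
               .keyBy(t -> t.f0)
               // Window state lives for the 2s window and is purged on firing;
               // the 10s checkpoint interval only controls snapshot frequency.
               .window(TumblingProcessingTimeWindows.of(Time.seconds(2)))
               .sum(1)
               .print();

            env.execute("window-state-lifetime");
        }
    }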

Re: Checkpoints and windows size

2024-06-19 Thread banu priya
Hi Wang, Thanks a lot for your reply. Currently I have a 2s window and a checkpoint interval of 10s. The minimum pause between checkpoints is 5s. What happens is that my checkpoint size is growing gradually. I checked the contents of my RocksDB local dir and also the shared checkpoints directory. Inside…
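For reference, the setup described above corresponds roughly to this sketch (only the values mentioned in this mail; the job itself is omitted):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointTuning {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            env.enableCheckpointing(10_000); // checkpoint every 10s
            env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5_000); // >= 5s apart

            // At most one checkpoint in flight at a time.
            env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
        }
    }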

Re: A way to meter number of deserialization errors

2024-06-19 Thread David Radley
Hi Ilya, I have not got any experience of doing this, but I wonder if we could use Flink Metrics. I wonder: there could be a hook point at that part of the code to call some custom code that implements the metrics.
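One candidate hook point, sketched under the assumption that records enter through a DeserializationSchema: its open() exposes the source's metric group, where a counter can be registered and incremented on every failed parse (the wrapper class and metric name below are made up).

    import java.io.IOException;
    import org.apache.flink.api.common.serialization.DeserializationSchema;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.metrics.Counter;

    // Hypothetical wrapper: delegates to an inner schema and meters failures.
    public class MeteredDeserializationSchema<T> implements DeserializationSchema<T> {
        private final DeserializationSchema<T> inner;
        private transient Counter deserializeErrors;

        public MeteredDeserializationSchema(DeserializationSchema<T> inner) {
            this.inner = inner;
        }

        @Override
        public void open(InitializationContext context) throws Exception {
            inner.open(context);
            // Registered under the source operator's metric group.
            deserializeErrors = context.getMetricGroup().counter("deserializeErrors");
        }

        @Override
        public T deserialize(byte[] message) throws IOException {
            try {
                return inner.deserialize(message);
            } catch (IOException e) {
                deserializeErrors.inc();
                return null; // many sources skip null records instead of failing
            }
        }

        @Override
        public boolean isEndOfStream(T nextElement) {
            return inner.isEndOfStream(nextElement);
        }

        @Override
        public TypeInformation<T> getProducedType() {
            return inner.getProducedType();
        }
    }

The counter would then show up under the source operator's metrics like any other Flink metric.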

Runtime issue while using statefun-datastream v3.3.0

2024-06-19 Thread RAN JIANG
Hi all, We are trying to leverage the Statefun DataStream features. After adding *org.apache.flink:statefun-flink-datastream:3.3.0* to our Gradle file, we are experiencing a runtime error like this: *Caused by: java.lang.NoSuchMethodError: ‘com.google.protobuf.Descriptors$FileDescriptor com.google.pro…
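A NoSuchMethodError on com.google.protobuf.Descriptors usually means two versions of protobuf-java are colliding on the classpath. One quick, non-authoritative way to see which jar wins at runtime is a plain-Java check like the sketch below (the class name is made up); Gradle's dependencyInsight task (./gradlew dependencyInsight --dependency protobuf-java) can then show where the conflicting version comes from.

    // Prints the jar that Descriptors was loaded from; if it is an older
    // protobuf-java than the one statefun-flink-datastream expects, the
    // versions need to be aligned in the Gradle dependency graph.
    public class ProtobufOriginCheck {
        public static void main(String[] args) {
            System.out.println(
                    com.google.protobuf.Descriptors.class
                            .getProtectionDomain()
                            .getCodeSource()
                            .getLocation());
        }
    }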

Re: A way to meter number of deserialization errors

2024-06-19 Thread Ilya Karpov
Does anybody have experience with the problem of metering deserialization errors? Mon, 17 Jun 2024 at 14:39, Ilya Karpov :

> Hi all, we are planning to use Flink as a connector between Kafka and external systems. We use Protobuf as the message format in Kafka. If non-backward-compatible changes occur…
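Complementary to the metrics hook suggested above, a sketch of one pattern for this scenario (MyEvent stands in for a hypothetical generated Protobuf class): consume raw bytes from Kafka, parse the Protobuf in a ProcessFunction, and route records that fail to parse to a side output that can be counted or archived.

    import com.google.protobuf.InvalidProtocolBufferException;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;
    import org.apache.flink.util.OutputTag;

    public class ProtobufParser extends ProcessFunction<byte[], MyEvent> {
        // Side output for payloads that no longer match the expected schema.
        public static final OutputTag<byte[]> PARSE_FAILURES =
                new OutputTag<byte[]>("protobuf-parse-failures") {};

        @Override
        public void processElement(byte[] raw, Context ctx, Collector<MyEvent> out) {
            try {
                out.collect(MyEvent.parseFrom(raw)); // MyEvent: hypothetical generated class
            } catch (InvalidProtocolBufferException e) {
                ctx.output(PARSE_FAILURES, raw); // count or archive downstream
            }
        }
    }

The failure stream is then available via getSideOutput(ProtobufParser.PARSE_FAILURES) on the operator's output.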