I have some jobs where I can configure the TTL duration for certain
operator state. The problem I'm noticing is that when I change the TTL
configuration, the new state descriptor becomes incompatible and I
cannot restart my jobs from existing savepoints. Is that expected?
More precisely, I'
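(For context, a minimal sketch of the kind of TTL configuration being described, assuming the state is declared through a ValueStateDescriptor; the state name, value type and duration are illustrative, not taken from the job above.)

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// Illustrative state descriptor; name and value type are placeholders.
ValueStateDescriptor<Long> descriptor =
        new ValueStateDescriptor<>("last-seen", Long.class);

// TTL settings attached to the descriptor; changing the duration here
// between job versions is the kind of change described above.
StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.hours(24))
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .build();

descriptor.enableTimeToLive(ttlConfig);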
I tried it; it seems the offsets are not committed when taking a savepoint.
After submitting the Flink job, I used the following command to check the
committed offsets:
bin/kafka-consumer-groups.sh --bootstrap-server xxx:9092 --describe --group
groupName
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG
dwd_au
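(For context, a rough sketch of how the consuming side is typically configured, assuming the job uses the KafkaSource API; the topic name, deserializer and bootstrap servers are placeholders. With a group id set, the commit.offsets.on.checkpoint property controls whether offsets are committed back to Kafka when a checkpoint completes; it defaults to true.)

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

// Placeholders: bootstrap servers, topic and deserializer are illustrative.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("xxx:9092")
        .setTopics("my-topic")
        .setGroupId("groupName")
        .setStartingOffsets(
                OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
        .setValueOnlyDeserializer(new SimpleStringSchema())
        // Commit offsets back to Kafka on completed checkpoints
        // (default true; only takes effect when a group id is set).
        .setProperty("commit.offsets.on.checkpoint", "true")
        .build();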
The Apache Flink community is very happy to announce the release of
Apache flink-connector-opensearch 2.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The release is available for download.
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch 1.2.0 for Flink 1.18 and Flink 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download.
There's no such option yet. However, it might not be a good idea to silently
ignore the exception and restart from fresh state, which would violate data
integrity. Instead, the job should be marked as terminally failed in this case
(maybe after a few retries), leaving it to users or an external job
orchestrator to decide how to proceed.
Thanks for your reply.
Yes, this is indeed an option. But I was more after a config option to handle
that scenario. If the HA metadata points to a checkpoint that is obviously not
present (error 404 in the S3 case), there is little value in retrying. The HA
data are obviously worthless in that scenario.