Hi Maxim
You could write a script yourself that triggers a cancel with savepoint and then
starts the new version of the job from the savepoint created during the cancel
(see the sketch below).
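For reference, with the Flink CLI such a script boils down to two commands; the
savepoint directory, job ID and jar name below are placeholders:

# cancel the running job and write a savepoint; the CLI prints the savepoint path
flink cancel -s hdfs:///flink/savepoints <jobId>

# start the new version of the job from that savepoint
flink run -d -s hdfs:///flink/savepoints/savepoint-<id> new-version.jar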
However, I’ve built a tool that allows you to perform these steps more easily:
https://github.com/ing-bank/flink-deployer.
Hi Steven, Oytun
You may find the tool we open-sourced last year useful. It supports deploying and
updating jobs with savepoints.
You can find it on Github: https://github.com/ing-bank/flink-deployer
There’s also a Docker image available on Docker Hub.
Marc
On 24 Apr 2019, 17:29 +0200, Oytun T
Hi
We’ve got a job producing to a Kafka sink. The Kafka topics have a retention of
2 weeks. When doing a complete replay, it seems like Flink isn’t able to
back-pressure or throttle the number of messages going to Kafka, causing the
following error:
org.apache.flink.streaming.connectors.kafka.
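The exception text is cut off above; for context, the job is set up roughly as
below. This is a minimal sketch with placeholder broker and topic names, using
the universal Kafka connector, not our exact code:

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer, FlinkKafkaProducer}

val env = StreamExecutionEnvironment.getExecutionEnvironment

val props = new Properties()
props.setProperty("bootstrap.servers", "kafka:9092") // placeholder broker
props.setProperty("group.id", "replay-test")         // placeholder group id

// complete replay: start from the earliest offsets still within the 2-week retention
val source = new FlinkKafkaConsumer[String]("input-topic", new SimpleStringSchema(), props)
source.setStartFromEarliest()

env.addSource(source)
  // ... transformations elided ...
  .addSink(new FlinkKafkaProducer[String]("output-topic", new SimpleStringSchema(), props))

env.execute("replay-job")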
Hi
I’ve been trying to get state migration with Avro working on Flink 1.7.2 using
Scala case classes, but I’m not getting any closer to solving it.
We’re using the most basic streaming WordCount example as a reference to test
the schema evolution:
val wordCountStream: DataStream[WordWithCo
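The snippet got cut off; the test is essentially the standard streaming WordCount,
roughly as below. A minimal sketch: the source is a placeholder, and WordWithCount
stands in for the (truncated) case class whose schema we want to evolve.

import org.apache.flink.streaming.api.scala._

// case class whose schema we want to evolve between savepoint and restore
case class WordWithCount(word: String, count: Long)

object WordCount {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val wordCountStream: DataStream[WordWithCount] = env
      .socketTextStream("localhost", 9000) // placeholder source
      .flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .map(WordWithCount(_, 1L))
      .keyBy(_.word)
      .reduce((a, b) => WordWithCount(a.word, a.count + b.count))

    wordCountStream.print()
    env.execute("WordCount")
  }
}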