Hi,

I am working on a project that has to be resilient and highly available,
and would like to be able to deploy a Flink/Kafka setup as active/active
across two data centres (Live/Hot backup in a sense).
I came across an old forum post asking how failover could be performed if
one data centre goes down, and the answer was to try to automate a solution
involving savepoints.

In my case, there would be independent Kafka clusters in each data centre,
but input data would be mirrored to both.

The documentation mentions that Flink saves the Kafka consumer offsets as
part of the checkpoint/savepoint, and resumes from those offsets on a
restore.
If I had a solution involving syncing checkpoints between DCs, what
would the behaviour be, given that the Kafka offsets Flink checkpointed in
DC1 will not marry up exactly with those in DC2?
e.g. would it just pick up from the latest offsets?
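For reference, here is a minimal sketch of the kind of source setup I have in
mind on each side, assuming the newer KafkaSource API (the broker address,
topic, and group id below are placeholders). As I understand it, the
starting-offsets initializer is only consulted when the restored state has no
offset for a partition, which is exactly the case I am unsure about after a
cross-DC restore:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class MirroredSourceSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Each DC points this at its own local Kafka cluster; the topic is
        // mirrored into both, so the data matches but the offsets do not.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka.dc-local:9092")   // placeholder broker address
                .setTopics("input-topic")                     // placeholder topic name
                .setGroupId("my-flink-job")                   // placeholder group id
                // Start from committed group offsets, falling back to LATEST
                // when no committed offset exists for a partition. My
                // understanding is that offsets restored from checkpoint/savepoint
                // state take precedence over this initializer.
                .setStartingOffsets(
                        OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> stream =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "mirrored-input");

        stream.print();
        env.execute("dc-failover-sketch");
    }
}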

Many thanks in advance,

Marcus
