Hi Nicu,
What you described sounds reasonable to me.
In fact, solution 1 would not work perfectly if your DB fails right after step 5
but before step 6. To make the txn commit in Kafka and the txn commit in your
sink DB "an atomic operation" together, you need to encode the committed offsets
into the sink DB within the same transaction (which is essentially your solution 2).
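To make that concrete, here is a minimal sketch of the pattern using plain kafka-clients from Scala. The topic name and the loadStoredOffset / storeRecordAndOffsetAtomically helpers are made up for illustration and would be backed by your sink DB; the point is that the consumer keeps its position in the sink DB rather than in Kafka, so the data write and the offset update always commit together.

import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

object DbOffsetConsumer {
  // Hypothetical helpers backed by the sink DB:
  // loadStoredOffset reads the next offset to consume for a partition,
  // storeRecordAndOffsetAtomically writes the record and that offset in one DB transaction.
  def loadStoredOffset(tp: TopicPartition): Long = ???
  def storeRecordAndOffsetAtomically(tp: TopicPartition, value: String, nextOffset: Long): Unit = ???

  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("group.id", "sink-writer")
    props.put("enable.auto.commit", "false") // offsets live in the sink DB, not in Kafka
    props.put("key.deserializer", classOf[StringDeserializer].getName)
    props.put("value.deserializer", classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    val tp = new TopicPartition("events", 0)
    consumer.assign(java.util.List.of(tp))
    consumer.seek(tp, loadStoredOffset(tp)) // resume from the offset stored in the DB

    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      for (record <- records.records(tp).asScala) {
        // The data write and the offset update succeed or fail together,
        // so a crash between "write data" and "commit offset" cannot leave them out of sync.
        storeRecordAndOffsetAtomically(tp, record.value(), record.offset() + 1)
      }
    }
  }
}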
Hi,
Indeed, solution 2 seems feasible: use a DB transaction (e.g. a Cassandra batch)
that includes the offset update alongside the data write.
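As a rough sketch of what such a batch could look like with the DataStax Java driver (the keyspace/table names sink.events and sink.kafka_offsets are invented for the example): a LOGGED batch gives Cassandra's all-or-nothing guarantee, i.e. if any statement in the batch is applied, all of them eventually are, which is the property the offset bookkeeping needs.

import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.cql.{BatchStatement, DefaultBatchType, SimpleStatement}

class OffsetAwareWriter(session: CqlSession) {
  // Writes the payload and the consumer's next offset in a single logged batch,
  // so the sink row and the offset row cannot diverge after a failure.
  // (Cassandra INSERT is an upsert, so the offset row is simply overwritten each time.)
  def storeRecordAndOffsetAtomically(topic: String,
                                     partition: Int,
                                     payload: String,
                                     nextOffset: Long): Unit = {
    val batch = BatchStatement.builder(DefaultBatchType.LOGGED)
      .addStatement(SimpleStatement.newInstance(
        "INSERT INTO sink.events (id, payload) VALUES (uuid(), ?)", payload))
      .addStatement(SimpleStatement.newInstance(
        "INSERT INTO sink.kafka_offsets (topic, part, next_offset) VALUES (?, ?, ?)",
        topic, Int.box(partition), Long.box(nextOffset)))
      .build()
    session.execute(batch)
  }
}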
A sophisticated implementation of this approach can be found, for instance, under
the hood of akka-stream-kafka (Alpakka Kafka):
https://doc.akka.io/docs/akka-stream-kafka/current/consumer.html
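For completeness, a rough sketch of the same idea on top of Alpakka Kafka's plain source with externally stored offsets; it assumes Akka 2.6 (where the ActorSystem provides the stream materializer) and async variants of the hypothetical DB helpers from the sketches above.

import scala.concurrent.Future
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

object AlpakkaDbOffsetConsumer {
  // Hypothetical async DB helpers, e.g. backed by the Cassandra batch shown above.
  def loadStoredOffset(tp: TopicPartition): Future[Long] = ???
  def storeRecordAndOffsetAtomically(r: ConsumerRecord[String, String]): Future[Unit] = ???

  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem = ActorSystem("sink-writer")
    import system.dispatcher

    val settings = ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("sink-writer")

    val tp = new TopicPartition("events", 0)

    loadStoredOffset(tp).foreach { fromOffset =>
      Consumer
        // start from the offset stored in the DB rather than the one committed to Kafka
        .plainSource(settings, Subscriptions.assignmentWithOffset(tp -> fromOffset))
        .mapAsync(1)(storeRecordAndOffsetAtomically) // data + offset written in one DB transaction
        .runWith(Sink.ignore)
    }
  }
}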