Hi,

We are looking into a production use case for Flink: processing
multiple streams of data from Kafka topics.

We plan to join these streams and then produce aggregations over the joined
data, using the Table API and SQL capabilities for this.
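For reference, here is a rough sketch of the kind of job we have in mind
(the topic names, fields, and connector options are only placeholders, not
our actual setup):

// Sketch of a Table API / SQL job joining two Kafka topics and aggregating.
// All names (topics, fields, broker address) are placeholders.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class JoinAndAggregateJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Kafka-backed source table for one stream.
        tEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id STRING, customer_id STRING, amount DOUBLE" +
            ") WITH (" +
            "  'connector' = 'kafka', 'topic' = 'orders'," +
            "  'properties.bootstrap.servers' = 'broker:9092'," +
            "  'scan.startup.mode' = 'earliest-offset', 'format' = 'json')");

        // Kafka-backed source table for the other stream.
        tEnv.executeSql(
            "CREATE TABLE customers (" +
            "  customer_id STRING, region STRING" +
            ") WITH (" +
            "  'connector' = 'kafka', 'topic' = 'customers'," +
            "  'properties.bootstrap.servers' = 'broker:9092'," +
            "  'scan.startup.mode' = 'earliest-offset', 'format' = 'json')");

        // Regular join + group aggregation; Flink keeps both join sides
        // and the aggregate accumulators in managed state.
        tEnv.executeSql(
            "SELECT c.region, SUM(o.amount) AS total_amount, COUNT(*) AS order_count " +
            "FROM orders o JOIN customers c ON o.customer_id = c.customer_id " +
            "GROUP BY c.region")
            .print();
    }
}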
We need to prepare a plan to productionize this flow, and we are looking into
how Flink features such as checkpoints, savepoints, and state management come
into play here (in the case of the Table API).
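Our current assumption (please correct us if this is off) is that
checkpointing and the state backend for a Table API job are configured on the
underlying StreamExecutionEnvironment, plus a state TTL for the SQL operators;
the interval, checkpoint path, and TTL below are placeholder values:

// Sketch of checkpoint / state configuration for a Table API job;
// the interval, checkpoint path, and TTL are placeholder values.
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Periodic exactly-once checkpoints every 60s to durable storage.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointStorage("s3://my-bucket/flink/checkpoints");

        // RocksDB state backend with incremental checkpoints for large join state.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Bound the state kept for regular joins / group aggregations.
        tEnv.getConfig().getConfiguration().setString("table.exec.state.ttl", "1 d");

        // ... define the Kafka tables and run the join/aggregation query here ...
    }
}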

Can you point me towards any documentation, articles, or tutorials on how
Flink handles these with the Table API and SQL?


Thanks and regards,
Vaibhav
