Hi everyone! I'm using Flink 1.12.0 with the SQL API. I'm developing a streaming job that joins two tables and inserts the result into PostgreSQL. The two tables in the join are: 1. A dynamic table based on a Kafka topic 2. A small lookup JDBC table
From what I can see, the Flink job reads data from the JDBC table only on startup and then marks the task as FINISHED. Does that mean Flink misses all subsequent updates to this table, and the join only reflects the table's state at startup? My other question is: how can I enable checkpointing for this job? I know that checkpointing for jobs with finished tasks is not supported yet, but maybe I can keep such tasks in the RUNNING state? Thank you!
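(For context, here is roughly what I'd expect a lookup-join variant to look like, in case that's the recommended direction. This is just a sketch with hypothetical table/column names; the JDBC connector's lookup join with `FOR SYSTEM_TIME AS OF` a processing-time attribute queries the database per record, with staleness bounded by the lookup cache options, instead of doing a one-shot bounded scan:)

```sql
-- Kafka-backed dynamic table (hypothetical names and fields)
CREATE TABLE orders (
  order_id BIGINT,
  currency STRING,
  amount DECIMAL(10, 2),
  proc_time AS PROCTIME()  -- processing-time attribute required for lookup joins
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- Small PostgreSQL lookup table; cache options bound how stale a cached row can be
CREATE TABLE currency_rates (
  currency STRING,
  rate DECIMAL(10, 4)
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',
  'table-name' = 'currency_rates',
  'lookup.cache.max-rows' = '1000',
  'lookup.cache.ttl' = '1min'
);

-- Lookup (temporal) join: each Kafka record triggers a point lookup
-- against PostgreSQL rather than joining a bounded snapshot source
INSERT INTO sink_table
SELECT o.order_id, o.amount * r.rate
FROM orders AS o
JOIN currency_rates FOR SYSTEM_TIME AS OF o.proc_time AS r
  ON o.currency = r.currency;
```

If that's the right approach, would it also avoid the FINISHED-task problem for checkpointing, since there is no separate bounded JDBC source task anymore?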