Hi,

I have answered your questions inline:

> It seems to me that checkpoints can be treated as Flink's internal recovery
> mechanism, and savepoints act more as user-defined recovery points. Would
> that be a correct assumption?

You could see it that way, but I would describe savepoints more as user-defined *restart* points than *recovery* points. Please take a look at my answers in this thread, because they cover most of your questions:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/difference-between-checkpoints-amp-savepoints-td14787.html

> While cancelling an application with the -s option, it specifies the savepoint
> location. Is there a way during application startup to identify the last known
> savepoint from a folder by itself, and restart from there? Since I am saving
> my savepoints on s3, I want to avoid issues arising from the ls command on s3
> due to the read-after-write consistency of s3.

I don't think that this feature exists; you have to specify the savepoint yourself. If you want to automate it, you would have to script it around job submission (see the second sketch at the end of this mail).

> Suppose my application has a checkpoint at point t1, and say I cancel this
> application sometime in the future before the next available checkpoint (say
> t1+x). If I start the application without specifying the savepoint, it will
> start from the last known checkpoint (at t1), which won't have the application
> state saved, since I had cancelled the application. Would this be a correct
> assumption?

No. If you restart a cancelled application, Flink will not consider checkpoints; they are only considered for recovery on failure. For a restart you need to explicitly specify a savepoint or an externalized checkpoint, e.g. cancel with "flink cancel -s <targetDirectory> <jobId>" and resume with "flink run -s <savepointPath> ...", to make explicit that you intend to restart the job and not to run a new instance of it.

> Would using ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION be the same
> as manually saving regular savepoints?

Not the same, because checkpoints and savepoints differ in certain aspects (for example, savepoints are always triggered explicitly by the user, while retained checkpoints are simply the periodic checkpoints that are kept on cancellation), but both methods leave you with something that survives job cancellation and can be used to restart from a certain state. The first sketch below shows how to enable this.
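To make the last point concrete, here is a minimal sketch of enabling retained (externalized) checkpoints; the checkpoint interval, class name, and job body are placeholders, not prescriptions:

import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpointsSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds (placeholder interval).
        env.enableCheckpointing(60_000L);

        // Keep the latest checkpoint when the job is cancelled, so that it
        // can be used like a savepoint for a restart:
        //   flink run -s <path-to-retained-checkpoint> ...
        // The target directory is taken from "state.checkpoints.dir" in
        // flink-conf.yaml.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // ... build your pipeline on env and call env.execute() ...
    }
}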
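And for the question about picking up the latest savepoint automatically: since Flink does not do this for you, you could script it around job submission. A minimal sketch, assuming savepoints end up as "savepoint-*" entries under one base directory on a plain file system; the class name, path, and helper are made up, and for S3 you would list objects through the AWS SDK instead, which brings back the listing-consistency concern you mentioned:

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Optional;

public class LatestSavepoint {

    // Hypothetical helper: returns the most recently modified
    // "savepoint-*" entry under the given base directory.
    static Optional<File> findLatest(File baseDir) {
        File[] candidates =
                baseDir.listFiles(f -> f.getName().startsWith("savepoint-"));
        if (candidates == null) {
            return Optional.empty();
        }
        return Arrays.stream(candidates)
                .max(Comparator.comparingLong(File::lastModified));
    }

    public static void main(String[] args) {
        // Placeholder path; prints the restore command for the newest savepoint.
        findLatest(new File("/data/flink/savepoints"))
                .ifPresent(sp -> System.out.println(
                        "flink run -s " + sp.getAbsolutePath() + " <your-job-jar>"));
    }
}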
Best,
Stefan