On Fri, Sep 30, 2016 at 7:02 PM, dan bress <danbr...@gmail.com> wrote:
> Thanks for the answer.  In 1.1.X, if I deploy job "Job-V1", run it for a
> while, trigger a savepoint, cancel the job, submit job "Job-V2", and resume
> from the savepoint, will Job-V2 understand the savepoint from Job-V1?

Currently it depends on the type of changes between V1 and V2. If you
don't change the topology, but only the user function code, the
savepoint should be picked up as-is. If you plan to change the
topology, you should assign an ID to each operator so that Flink can
map the old state to the operators of the new job.
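
For illustration, assigning operator IDs could look roughly like the
sketch below. The class, operator names, and IDs are made up for this
example; the uid(...) calls on the operators are the relevant part (if
I remember correctly, uid(String) is available on DataStream operators
as of 1.1):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class JobV2 {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Give each (stateful) operator a stable ID so that state from a
            // savepoint taken against Job-V1 can be matched to the
            // corresponding operators in Job-V2, even after topology changes.
            env.fromElements("a", "b", "c")
               .uid("source")
               .map(new MapFunction<String, String>() {
                   @Override
                   public String map(String value) {
                       return value.toUpperCase();
                   }
               })
               .uid("uppercase-map")
               .print();

            env.execute("Job-V2");
        }
    }

When you then resume Job-V2 from the savepoint, operators whose IDs
match entries in the savepoint get their old state back.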

>    What is the scope of a savepoint?  Who does it belong to?  A Job?  A
> TaskManager?  A Job Manager?  Flink as a whole?

Savepoints essentially externalize Flink's internal state and are
owned by the user. Currently, they contain snapshot metadata (like the
checkpoint ID) and pointers to the actual snapshot state, both Flink's
internal state and your user state. For regular checkpoints this state
is garbage collected automatically, whereas savepoints have to be
cleaned up manually.

There is a blog post about the general idea of versioning state here:
http://data-artisans.com/how-apache-flink-enables-new-streaming-applications/

If you have further questions, please feel free to ask.
