> […] states with a save point directory? e.g.
>
> ./bin/flink run myJob.jar -s savepointDirectory
>
> Regards,
>
> Min
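(For context, a hedged sketch of the full savepoint cycle that the `-s` flag above assumes — the job id and HDFS paths are placeholders, not values from this thread:)

```shell
# Trigger a savepoint for a running job (job id and target dir are placeholders)
./bin/flink savepoint <jobId> hdfs:///flink/savepoints

# Later, resubmit the job, restoring state from the reported savepoint path
./bin/flink run -s hdfs:///flink/savepoints/savepoint-<jobId>-xxxx myJob.jar
```

Note that with `flink run`, the `-s` option must precede the jar argument.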
From: Zili Chen [mailto:wander4...@gmail.com]
Sent: Dienstag, 20. August 2019 04:16
To: Biao Liu
Cc: Tan, Min; user
Subject: [External] Re: Recovery from job manager crash using check points
Hi Min,
I guess you use standalone high-availability: when a TM fails,
the JM can recover the job from an in-memory checkpoint store.
However, when the JM fails, since you don't persist state in an HA backend
such as ZooKeeper, even if the JM is relaunched by the YARN RM or superseded
by a standby, the new one knows nothing about the previous job.
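For reference, a minimal flink-conf.yaml sketch for ZooKeeper-based HA might look like the following; the quorum addresses and storage paths are placeholders, not values from this thread:

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host1:2181,zk-host2:2181,zk-host3:2181
high-availability.storageDir: hdfs:///flink/ha/
high-availability.cluster-id: /my-flink-cluster
```

With this in place, JM metadata (including checkpoint pointers) survives a JM crash, so a relaunched or standby JM can recover running jobs.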
Hi Min,
> Do I need to set up zookeepers to keep the states when a job manager
crashes?
I guess you need to set up HA [1] properly. Besides that, I would also
suggest checking the state backend configuration.
1.
https://ci.apache.org/projects/flink/flink-docs-master/ops/jobmanager_high_availabilit
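As an illustration of the state backend point (an assumption on my part, not from the thread), a durable backend can be configured like this; the HDFS paths are placeholders:

```yaml
state.backend: rocksdb
state.checkpoints.dir: hdfs:///flink/checkpoints
state.savepoints.dir: hdfs:///flink/savepoints
```

Checkpoints written to a shared filesystem like HDFS remain readable after a JM crash, unlike state kept only in JM memory.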
Which kind of deployment system are you using:
standalone, YARN... other?
On Mon, Aug 19, 2019, 18:28 wrote:
> Hi,
>
> I can use checkpoints to recover Flink states when a task manager crashes.
>
> I can not use checkpoints to recover Flink states when a job manager
> crashes.
>
> Do I need to set up zookeepers to keep the states when a job manager
> crashes?
>
> Regards
> Min