Hi Dian,

I'm using versions 1.15.0 and 1.15.1 of PyFlink. I believe the issue is argument
ordering: when I run the container with:

      "standalone-job", "-s", "s3://<path to savepoint>", "-n", "-pym", "foo.main"

then the job starts successfully and loads from the savepoint.
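For reference, the working ordering above can be sketched as an equivalent command line. This is a hedged illustration only: the `docker run` wrapper and image name `my-pyflink-image` are placeholders, and the explanation in the comments is an inference from the observed behaviour (cluster options such as `-s` appear to need to precede `-pym`, since arguments after the Python module specification are treated as program arguments).

```shell
# Illustrative only; "my-pyflink-image" is a placeholder image name.
docker run my-pyflink-image \
  standalone-job \
  -s "s3://<path to savepoint>" \   # -s / --fromSavepoint: cluster option, placed before -pym
  -n \                              # -n / --allowNonRestoredState
  -pym foo.main                     # -pym / --pyModule: kept last, so nothing after it
                                    # is swallowed as a program argument
```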

Many thanks,

John


________________________________
From: Dian Fu <dian0511...@gmail.com>
Sent: 08 July 2022 02:27
To: John Tipper <john_tip...@hotmail.com>
Cc: user@flink.apache.org <user@flink.apache.org>
Subject: Re: PyFlink: restoring from savepoint

Hi John,

Could you provide more information, e.g. the exact command submitting the job, 
the log files, the PyFlink version, etc.?

Regards,
Dian


On Thu, Jul 7, 2022 at 7:53 PM John Tipper 
<john_tip...@hotmail.com<mailto:john_tip...@hotmail.com>> wrote:
Hi all,

I have a PyFlink job running in Kubernetes. Savepoints and checkpoints are 
being successfully saved to S3. However, I am unable to get the job to start 
from a savepoint.

The container is started with these args:

"standalone-job", "-pym", "foo.main", "-s", "s3://<path to savepoint>", "-n"

In the JM logs I can see “Starting StandaloneApplicationClusterEntrypoint…” 
where my arguments are listed.

However, I don’t see any restore occurring in the logs and my application 
restarts with no state. How do I start a PyFlink job like this from a given 
savepoint?

Many thanks,

John

Sent from my iPhone
