Hi Phil,
correction: "But the error you have is a familiar error if you have written some code to handle directory paths." --> "But the error you have is a familiar error if you have written some code to handle directory paths with Java."
No offence.
Best regards,
Jiadong Lu
On Mon, May 20, 2024, Jiadong Lu wrote:
Hi Phil,
I don't have much expertise with the flink-python module. But the error
you have is a familiar one if you have written some code to handle
directory paths.
The correct forms of a Path/URI would be:
1. "/home/foo"
2. "file:///home/foo/boo"
3. "hdfs:///home/foo/boo"
4. or a Win32 directory path
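As a small illustration of why the forms above matter, here is a sketch of how such strings parse with `java.net.URI` (the example paths are hypothetical, not from Phil's job):

```java
import java.net.URI;

public class PathForms {
    public static void main(String[] args) {
        // Hypothetical example paths, matching the forms listed above.
        String[] valid = {
            "/home/foo",            // 1. plain absolute path (no scheme)
            "file:///home/foo/boo", // 2. local filesystem URI
            "hdfs:///home/foo/boo"  // 3. HDFS URI
        };
        for (String p : valid) {
            URI uri = URI.create(p);
            System.out.println(uri.getScheme() + " -> " + uri.getPath());
        }
        // A common mistake: "file://home/foo" (only two slashes) parses
        // "home" as the authority/host, not as part of the path.
        URI wrong = URI.create("file://home/foo");
        System.out.println("authority=" + wrong.getAuthority()
                + " path=" + wrong.getPath());
    }
}
```

Note how the two-slash form silently turns the first path segment into a host name, which is the kind of mismatch that typically produces the error you described.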
Hi Phil,
I think you can use the "-s :checkpointMetaDataPath" argument to resume the job
from a retained checkpoint [1].
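As a sketch, the invocation would look roughly like the following (the checkpoint path and job file below are placeholders, not values from your setup):

```shell
# Resume a job from a retained checkpoint (placeholder paths).
# Point -s at the checkpoint directory that contains the _metadata file.
./bin/flink run \
    -s hdfs:///flink-checkpoints/<job-id>/chk-42 \
    -py my_job.py
```

Here -py is used since this is a PyFlink job; for a JAR job you would pass the jar path instead.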
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/checkpoints/#resuming-from-a-retained-checkpoint
Best,
Jinzhong Li
On Mon, May 20, 2024 at 2:29 AM Phil St
CC to the Paimon community.
Best,
Jingsong
On Mon, May 20, 2024 at 9:55 AM Jingsong Li wrote:
>
> Amazing, congrats!
>
> Best,
> Jingsong
>
> On Sat, May 18, 2024 at 3:10 PM 大卫415 <2446566...@qq.com.invalid> wrote:
> >
> > Unsubscribe
> >
> > Original Email
> >
> > Sender:
Dear Biao Geng,
thank you very much. With the help of your demo and the YAML configuration, I was able to successfully set up monitoring for my Apache Flink jobs.
Thanks again for your time and help.
Best regards,
Oliver
Sent: Sunday, May 19, 2024 at 17:42
From: "Biao Gen
Hi Lu,
Thanks for your reply. How should the path be passed to the job that
needs to use the checkpoint? Is the standard way to use -s :/, or to
pass the path into the module as a Python argument?
Kind regards
Phil
> On 18 May 2024, at 03:19, jiadong.lu wrote:
>
> Hi Phil,
>
> AFAIK,
Hi Oliver,
I believe you are almost there. One thing that could be improved: in
your job YAML, instead of using:
kubernetes.operator.metrics.reporter.prommetrics.reporters: prom
kubernetes.operator.metrics.reporter.prommetrics.reporter.prom.factory.class:
org.apache.flink.metrics.promet