Thanks Guowei for the comments and Lukáš Drbal for sharing the feedback.
I think this is not limited to Kubernetes application mode; for Yarn
application and standalone application as well, the job id will be set to
ZERO if it is not configured explicitly in HA mode.
For standalone application, we could use "
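As an illustration only, assuming the standalone application cluster
entrypoint accepts a --job-id argument (an assumption, not quoted from this
thread), a fixed, non-ZERO job id might be supplied like this:

    # Sketch under the assumption that standalone-job.sh forwards --job-id
    # to the StandaloneApplicationClusterEntryPoint; a Flink job id must be
    # 32 hex characters.
    ./bin/standalone-job.sh start \
        --job-classname org.apache.flink.examples.java.wordcount.WordCount \
        --job-id 00000000000000000000000000000001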
Hello Guowei,
I just checked it and it works!
Thanks a lot!
Here is a workaround which uses a UUID as the jobId:
-D\$internal.pipeline.job-id=$(cat /proc/sys/kernel/random/uuid | tr -d "-")
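For context, a sketch of how that property might be passed on a native
Kubernetes application-mode submission; the cluster-id, image name, and jar
path below are placeholders, not values from this thread:

    # The \$ escape keeps the shell from expanding $internal itself.
    ./bin/flink run-application \
        --target kubernetes-application \
        -Dkubernetes.cluster-id=my-batch-job \
        -Dkubernetes.container.image=my-registry/my-flink-job:1.12.2 \
        -D\$internal.pipeline.job-id=$(cat /proc/sys/kernel/random/uuid | tr -d "-") \
        local:///opt/flink/usrlib/my-job.jar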
L.
On Thu, Mar 25, 2021 at 11:01 AM Guowei Ma wrote:
Hi,
Thanks for providing the logs. From the logs this is a known bug. [1]
Maybe you could use `$internal.pipeline.job-id` to set your own
job-id (thanks to Wang Yang).
But keep in mind this is only for internal use and may change in some
release. So you should keep an eye on [1] for the correct solution.
Hello,
Sure. Here is the log from the first run, which succeeded:
https://pastebin.com/tV75ZS5S
and here is the log from the second run (it is the same for all subsequent runs):
https://pastebin.com/pwTFyGvE
My Dockerfile is pretty simple, it just takes the wordcount example + S3:
FROM flink:1.12.2
RUN mkdir -p $FLINK_HOME/usrlib
COPY flink-examp
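A minimal sketch of how such a Dockerfile might read in full; the jar name
and the S3 plugin step are assumptions, not taken from this thread:

    FROM flink:1.12.2
    RUN mkdir -p $FLINK_HOME/usrlib
    # Assumed jar name; the original message is truncated here.
    COPY flink-examples-batch-wordcount.jar $FLINK_HOME/usrlib/
    # Enable the bundled S3 filesystem plugin (assumption based on "+ S3").
    RUN mkdir -p $FLINK_HOME/plugins/s3-fs-presto && \
        cp $FLINK_HOME/opt/flink-s3-fs-presto-1.12.2.jar $FLINK_HOME/plugins/s3-fs-presto/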
Hi,
After some discussion with Wang Yang offline, it seems that there might be
a jobmanager failover. So would you like to share the full jobmanager log?
Best,
Guowei
On Wed, Mar 24, 2021 at 10:04 PM Lukáš Drbal wrote:
Hi,
I would like to use native kubernetes execution [1] for one batch job and
let kubernetes handle the scheduling. Flink version: 1.12.2.
Kubernetes job:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-job
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
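The manifest is truncated at the pod template; a minimal sketch of how it
might continue, assuming a submitter image that bundles the Flink 1.12.2 CLI
and using placeholder names throughout:

        spec:
          containers:
            - name: flink-submit
              # Placeholder image; assumed to contain the flink CLI.
              image: my-registry/flink-submitter:1.12.2
              command: ["/bin/sh", "-c"]
              args:
                - >
                  /opt/flink/bin/flink run-application
                  --target kubernetes-application
                  -Dkubernetes.cluster-id=scheduled-job
                  -Dkubernetes.container.image=my-registry/my-flink-job:1.12.2
                  local:///opt/flink/usrlib/my-job.jar
          restartPolicy: OnFailure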