Hi, David
The jarURI is required[1]; otherwise Flink doesn't know which jar should be
used.
If you are using application mode, you can set jarURI to
"local:///opt/flink/usrlib/your-job.jar", and the jar will not be uploaded to
HA storage.
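For reference, a minimal FlinkDeployment sketch along those lines might look
like the following (the image name, resources and parallelism are placeholders,
not taken from your setup):

  apiVersion: flink.apache.org/v1beta1
  kind: FlinkDeployment
  metadata:
    name: my-job
  spec:
    # the job jar is baked into this image under /opt/flink/usrlib
    # (placeholder image name)
    image: my-registry/my-job-image:latest
    flinkVersion: v1_16
    serviceAccount: flink
    jobManager:
      resource:
        memory: "2048m"
        cpu: 1
    taskManager:
      resource:
        memory: "2048m"
        cpu: 1
    job:
      # jarURI is still required; the local:// scheme tells Flink to read the
      # jar from the image, so nothing is uploaded to HA storage
      jarURI: local:///opt/flink/usrlib/your-job.jar
      parallelism: 1
      upgradeMode: stateless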
Best,
Weihua
On Wed, Apr 5, 2023 at 5:51 PM David Causse wrote:
Hi, Le
It looks like a DNS issue. Maybe you can try to ping or nslookup
'my-first-flink-cluster-rest.default'
from the Flink operator pod to check whether the DNS service is working, for
example as sketched below.
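A rough sketch of what I mean (the operator namespace and pod name are
placeholders, and nslookup may not be available in the operator image, in
which case a separate debug pod works too):

  # find the operator pod
  kubectl get pods -n <operator-namespace> | grep flink-kubernetes-operator
  # try to resolve the REST service name from inside that pod
  kubectl exec -n <operator-namespace> <operator-pod> -- \
    nslookup my-first-flink-cluster-rest.default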
Best,
Weihua
On Wed, Apr 5, 2023 at 12:43 PM Le Xu wrote:
> Hello!
>
> I'm trying out the Kubernetes sample
>
Hi,
That's because the ConfigMap volume is always mounted read-only.
Currently /docker-entrypoint.sh tries to update some configs in the Docker
environment, but these updates are not needed on Kubernetes.
So I think we can safely ignore those errors when using the operator/native
Kubernetes integration.
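If you want to confirm that the config directory really is read-only, a quick
check could look like this (the pod name is a placeholder; /opt/flink/conf is
where the config ConfigMap is usually mounted):

  # writes to the ConfigMap-backed config dir fail with "Read-only file system"
  kubectl exec <jobmanager-pod> -- sh -c 'touch /opt/flink/conf/test-write'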
There is
Hi,
I'm trying to deploy a job (Flink 1.16) with the flink-operator; the job
jar is part of the image and placed under /opt/flink/usrlib.
I thought that by placing the job jar there I could avoid setting the
jarURI in the JobSpec, but I'm getting an NPE (pasted at the end of this
email) suggesting t