The script looks good to me; did you run the SDK harness? The EXTERNAL
environment needs the SDK harness to be run externally, see [1].
Generally, the best option is DOCKER, but that usually does not work in
k8s. In that case, you might try the PROCESS environment and build your
own Docker image for Flink that contains the Beam harness, e.g. [2].
You will need to pass the environment config using
--environment_config={"command": "/opt/apache/beam/boot"}.
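For illustration only, a rough sketch of how the pipeline options could
look (the flink_master address and the trivial pipeline are placeholders,
adjust them to your setup):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Sketch only: the flink_master address is a placeholder; use whatever
# address the jobmanager is reachable at from where the script runs.
options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_master=flink-jobmanager:8081",
    "--environment_type=PROCESS",
    # The boot binary must be present at this path inside your Flink image.
    '--environment_config={"command": "/opt/apache/beam/boot"}',
])

with beam.Pipeline(options=options) as p:
    (p
     | beam.Create(["hello", "beam"])
     | beam.Map(print))

Which flink_master value is right depends on where the script runs, see
the discussion further down in the thread.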
From the screenshot it seems that the Flink UI is accessible, so this is
the only option that comes to my mind. Did you check the logs of the
Flink jobmanager pod?
Jan
[1] https://beam.apache.org/documentation/runtime/sdk-harness-config/
[2] https://github.com/PacktPublishing/Building-Big-Data-Pipelines-with-Apache-Beam/blob/main/env/docker/flink/Dockerfile
On 1/31/23 13:33, P Singh wrote:
Hi Jan,
Thanks for your reply; please find the attached script. I am a newbie
with Flink and minikube, but I am trying to connect them with a script
from my local machine, as suggested by the Flink Kubernetes documentation
<https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/>.
I have changed the log level to ERROR but didn't find much... Can you
please help me with how to run the script from inside the pod?
On Tue, 31 Jan 2023 at 15:40, Jan Lukavský <je...@seznam.cz> wrote:
Hi,
can you please also share the script itself? I'd say that the
problem is that the Flink jobmanager is not accessible through
localhost:8081, because it runs inside the minikube. You need to
expose it outside of the minikube via [1], or run the script from a
pod inside the minikube and access the jobmanager via
flink-jobmanager:8081. I'm surprised that the log didn't make this
more obvious, though. Is it possible that you changed the default
log level to ERROR? Can you try DEBUG or similar?
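As a rough sketch (the flink-jobmanager service name and port come from
the standard Flink Kubernetes manifests and may differ in your setup),
the relevant option is flink_master:

from apache_beam.options.pipeline_options import PipelineOptions

# From a pod inside the minikube: use the jobmanager service name.
# From outside the minikube: expose the service first (see [1]) and point
# flink_master at the exposed address instead of localhost:8081.
options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_master=flink-jobmanager:8081",
])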
Jan
[1] https://minikube.sigs.k8s.io/docs/handbook/accessing/
On 1/30/23 18:36, P Singh wrote:
Hi Jan,
Yeah, I am using minikube and the Beam image with Python 3.10.
Please find the attached screenshots.
On Mon, 30 Jan 2023 at 21:22, Jan Lukavský <je...@seznam.cz> wrote:
Hi,
can you please share the command line and the complete output of the
script?
Are you using minikube? Can you share a list of your running pods?
Jan
On 1/30/23 14:25, P Singh wrote:
> Hi Team,
>
> I am trying to run a Beam job on top of Flink on my local machine
> (Kubernetes).
>
> I have Flink 1.14 and Beam 2.43 images both running, but when I submit
> the job it does not reach the Flink cluster and fails with the error
> below.
>
> ERROR:apache_beam.utils.subprocess_server:Starting job service with
> ['java', '-jar',
> '/Users/spsingh/.apache_beam/cache/jars/beam-runners-flink-1.14-job-server-2.43.0.jar',
> '--flink-master', 'http://localhost:8081', '--artifacts-dir',
> '/var/folders/n3/dqblsr792yj4kfs7xlfmdj540000gr/T/beam-tempvphhje07/artifacts6kjt60ch',
> '--job-port', '57882', '--artifact-port', '0', '--expansion-port', '0']
> ERROR:apache_beam.utils.subprocess_server:Error bringing up service
> Traceback (most recent call last):
>   File "/Users/flink_deploy/flink_env/lib/python3.10/site-packages/apache_beam/utils/subprocess_server.py", line 88, in start
>     raise RuntimeError(
> RuntimeError: Service failed to start up with error 1
>
> Any help would be appreciated.