Hi Jan,

localhost:50000 is not open; I got the error quoted below. I have a question: I have deployed with the same configuration on GKE and port-forwarded the job manager to localhost. Can't I also port-forward the task manager to localhost:50000? I don't know the IP of the running pod on GKE, which is why I'm getting this issue. Please suggest.
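For reference, a pod can be targeted by label rather than by IP, so the pod IP is not needed. A minimal sketch, assuming the taskmanager pods carry the labels `app=flink,component=taskmanager` (as in the standalone Flink-on-Kubernetes examples) and that 50000 is the taskmanager port configured in `flink-conf.yaml` — adjust both to your deployment:

```shell
# List taskmanager pods by label (check your labels with
# `kubectl get pods --show-labels` first):
kubectl get pods -l app=flink,component=taskmanager

# Grab the name of the first matching pod and port-forward it,
# so the taskmanager becomes reachable on localhost:50000:
POD=$(kubectl get pods -l app=flink,component=taskmanager \
      -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "pod/${POD}" 50000:50000
```

Note that this forwards to a single pod; if several taskmanagers are running, each would need its own forward.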
nc: connectx to localhost port 50000 (tcp) failed: Connection refused
nc: connectx to localhost port 50000 (tcp) failed: Connection refused

On Thu, 2 Feb 2023 at 17:50, Jan Lukavský <je...@seznam.cz> wrote:

> That would suggest it is uploading (you can verify that using jnettop).
> But I'd leave this open for others to answer, because now it is purely a
> Flink (not Beam) question.
>
> Best,
>
> Jan
>
> On 2/2/23 10:46, bigdatadeveloper8 wrote:
>
> Hi Jan,
>
> The job manager is configured and working. When I submit a Python job to
> Flink, it is not showing on the Flink UI, or it simply hangs without any
> error.
>
> Sent from my Galaxy
>
> -------- Original message --------
> From: Jan Lukavský <je...@seznam.cz>
> Date: 02/02/2023 15:07 (GMT+05:30)
> To: user@flink.apache.org
> Subject: Re: beam + flink + k8
>
> I'm not sure how exactly minikube exposes the jobmanager, but in GKE you
> likely need to port-forward it, e.g.
>
> $ kubectl port-forward svc/flink-jobmanager 8081:8081
>
> This should make the jobmanager accessible via localhost:8081. For
> production cases you might want to use a different approach, like the
> Flink operator, etc.
>
> Best,
>
> Jan
>
> On 2/1/23 17:08, P Singh wrote:
>
> Hi Jan,
>
> Thanks for the reply. I was able to submit the job to Flink, but it's
> failing due to an OOM issue, so I am moving to GKE. I got the Flink UI
> there, but the submitted job is not appearing on the Flink UI. I am using
> the same script which I shared with you. Do I need to make some changes
> for Google Kubernetes Engine?
>
> On Tue, 31 Jan 2023 at 20:20, Jan Lukavský <je...@seznam.cz> wrote:
>
>> The script looks good to me, but did you run the SDK harness? The
>> EXTERNAL environment needs the SDK harness to be run externally, see [1].
>> Generally, the best option is DOCKER, but that usually does not work in
>> k8s. For this, you might try the PROCESS environment and build your own
>> Docker image for Flink which contains the Beam harness, e.g. [2].
>> You will need to pass the environment config using
>> --environment_config={"command": "/opt/apache/beam/boot"}.
>>
>> From the screenshot it seems that the Flink UI is accessible, so this is
>> the only option that comes to my mind. Did you check the logs of the
>> Flink jobmanager pod?
>>
>> Jan
>>
>> [1] https://beam.apache.org/documentation/runtime/sdk-harness-config/
>>
>> [2]
>> https://github.com/PacktPublishing/Building-Big-Data-Pipelines-with-Apache-Beam/blob/main/env/docker/flink/Dockerfile
>>
>> On 1/31/23 13:33, P Singh wrote:
>>
>> Hi Jan,
>>
>> Thanks for your reply; please find the attached script. I am a newbie
>> with Flink and minikube, though I am trying to connect to them with a
>> script from my local machine, as suggested by the Flink Kubernetes
>> documentation
>> <https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/>.
>>
>> I have changed the log level to ERROR but didn't find much. Can you
>> please help me out with how to run the script from inside the pod?
>>
>> On Tue, 31 Jan 2023 at 15:40, Jan Lukavský <je...@seznam.cz> wrote:
>>
>>> Hi,
>>>
>>> can you please also share the script itself? I'd say that the problem
>>> is that the Flink jobmanager is not accessible through localhost:8081,
>>> because it runs inside the minikube. You need to expose it outside of
>>> the minikube via [1], or run the script from a pod inside the minikube
>>> and access the job manager via flink-jobmanager:8081. I'm surprised
>>> that the log didn't make this more obvious, though. Is it possible
>>> that you changed the default log level to ERROR? Can you try DEBUG or
>>> similar?
>>>
>>> Jan
>>>
>>> [1] https://minikube.sigs.k8s.io/docs/handbook/accessing/
>>>
>>> On 1/30/23 18:36, P Singh wrote:
>>>
>>> Hi Jan,
>>>
>>> Yeah, I am using minikube and the Beam image with Python 3.10.
>>>
>>> Please find the attached screenshots.
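Concretely, the flags described above could be passed on the pipeline's command line like this. A sketch only: the script name `pipeline.py` is a placeholder, `localhost:8081` assumes the jobmanager port-forward from earlier in the thread, and the boot path matches the Dockerfile in [2]:

```shell
# Submit a Beam Python pipeline to Flink using the PROCESS environment,
# where the taskmanager image already contains the Beam SDK harness:
python pipeline.py \
  --runner=FlinkRunner \
  --flink_master=localhost:8081 \
  --environment_type=PROCESS \
  --environment_config='{"command": "/opt/apache/beam/boot"}'
```

With `--environment_type=PROCESS`, each Flink taskmanager launches the harness binary itself, which is why the image must be built to include it.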
>>> On Mon, 30 Jan 2023 at 21:22, Jan Lukavský <je...@seznam.cz> wrote:
>>>
>>>> Hi,
>>>>
>>>> can you please share the command line and the complete output of the
>>>> script? Are you using minikube? Can you share the list of your
>>>> running pods?
>>>>
>>>> Jan
>>>>
>>>> On 1/30/23 14:25, P Singh wrote:
>>>> > Hi Team,
>>>> >
>>>> > I am trying to run a Beam job on top of Flink on my local machine
>>>> > (Kubernetes).
>>>> >
>>>> > I have Flink 1.14 and Beam 2.43 images both running, but when I
>>>> > submit the job it's not reaching the Flink cluster and fails with
>>>> > the error below.
>>>> >
>>>> > ERROR:apache_beam.utils.subprocess_server:Starting job service with
>>>> > ['java', '-jar',
>>>> > '/Users/spsingh/.apache_beam/cache/jars/beam-runners-flink-1.14-job-server-2.43.0.jar',
>>>> > '--flink-master', 'http://localhost:8081', '--artifacts-dir',
>>>> > '/var/folders/n3/dqblsr792yj4kfs7xlfmdj540000gr/T/beam-tempvphhje07/artifacts6kjt60ch',
>>>> > '--job-port', '57882', '--artifact-port', '0', '--expansion-port', '0']
>>>> > ERROR:apache_beam.utils.subprocess_server:Error bringing up service
>>>> > Traceback (most recent call last):
>>>> >   File "/Users/flink_deploy/flink_env/lib/python3.10/site-packages/apache_beam/utils/subprocess_server.py",
>>>> > line 88, in start
>>>> >     raise RuntimeError(
>>>> > RuntimeError: Service failed to start up with error 1
>>>> >
>>>> > Any help would be appreciated.
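Since the job server in the traceback above is started with `--flink-master http://localhost:8081`, one quick sanity check before submitting (a sketch, assuming a jobmanager port-forward to localhost:8081 is running) is to hit Flink's REST API directly:

```shell
# GET /overview returns a small JSON document with taskmanager and slot
# counts when the jobmanager is reachable; "connection refused" here
# means the Beam job server will fail in the same way.
curl http://localhost:8081/overview
```

If this call fails, the problem is the network path to the jobmanager (port-forward, service exposure), not the Beam pipeline itself.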