Does the job run in detached mode or attached mode? Could you share some code
snippets and the job submission command if possible?
Regards,
Dian
> On Mar 18, 2021, at 8:17 PM, Robert Cullen wrote:
>
> Dian,
>
> Thanks for your reply. Yes, I would submit the same job in kubernetes
> session mode. Sometimes the job would succeed but successive tries would fail.
Dian,
Thanks for your reply. Yes, I would submit the same job in Kubernetes
session mode. Sometimes the job would succeed but successive tries would
fail: no stack trace, and the job would never return a job ID.
In this case I redeployed the cluster and the job completed ... and
multiple tries were successful.
Hi Robert,
1) Do you mean that when you submit the same job multiple times it sometimes
succeeds and sometimes hangs, or that it only hangs for one specific job?
2) Which deployment mode do you use?
3) Is it possible to dump the stack trace? It would help us understand
what's happening.
Thanks,
Dian
Thanks All,
I've added Python and PyFlink to the TM image, which fixed the problem. Now,
however, successfully submitting a Python script to the cluster is sporadic;
sometimes it completes, but most of the time it just hangs. Not sure what
is causing this.
On Mon, Mar 15, 2021 at 9:47 PM Xingbo Huang wrote:
Hi,
From the error message, I think the problem is that there is no Python
interpreter on your TaskManager machine. You need to install a Python 3.5+
interpreter on the TM machine, and that Python environment needs to have
PyFlink installed (pip install apache-flink). For details, you can refer to
the documentation [1].
[1]
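If it helps to narrow this down, here is a minimal sketch (not from the original mail; the job name is arbitrary) that prints which interpreter the TaskManager-side Python workers actually use:
```
import sys

from pyflink.datastream import StreamExecutionEnvironment

# The lambda below runs inside the Python worker process launched on the
# TaskManager, so it reports the TM-side interpreter, not the client's.
env = StreamExecutionEnvironment.get_execution_environment()
env.from_collection([1]) \
    .map(lambda _: "worker python: %s %s" % (sys.executable, sys.version.split()[0])) \
    .print()  # output lands in the TaskManager's *.out log
env.execute("python-env-check")
```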
Okay, I added the jars and fixed that exception. However, I have a new
exception that is harder to decipher:
2021-03-15 14:46:20
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFail
Hey,
are you sure the class is in the lib/ folder of all machines / instances,
and that you've restarted Flink after adding the files to lib/?
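For what it's worth, instead of copying the jar into lib/ on every node you can also attach it to the job itself via StreamExecutionEnvironment.add_jars (available in recent PyFlink releases); a rough sketch, with a placeholder path:
```
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# Ship the Kafka connector with the job instead of relying on lib/ on each node.
# The path is illustrative; point it at the connector jar matching your Flink
# and Scala versions (the fat flink-sql-connector-kafka jar avoids having to
# add kafka-clients separately).
env.add_jars("file:///opt/flink/opt/flink-sql-connector-kafka_2.12-1.12.2.jar")
```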
On Mon, Mar 15, 2021 at 3:42 PM Robert Cullen wrote:
> Shuiqiang,
>
> I added the flink-connector-kafka_2.12 jar to the /opt/flink/lib directory.
>
> When submitting this job to my Flink cluster I'm getting this stack trace
> at runtime:
Shuiqiang,
I added the flink-connector-kafka_2.12 jar to the /opt/flink/lib directory.
When submitting this job to my Flink cluster I'm getting this stack trace
at runtime:
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runti
Hi Robert,
You can refer to
https://github.com/apache/flink/blob/master/flink-end-to-end-tests/flink-python-test/python/datastream/data_stream_job.py
for the whole example.
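In case it saves a lookup, the imports used by that example are roughly the following (1.12-era module layout; paths may differ in later releases):
```
from pyflink.common.serialization import JsonRowDeserializationSchema, JsonRowSerializationSchema
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import FlinkKafkaConsumer, FlinkKafkaProducer
```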
Best,
Shuiqiang
On Sat, Mar 13, 2021 at 4:01 AM, Robert Cullen wrote:
> Shuiqiang, can you include the import statements? Thanks.
>
> On
Shuiqiang, can you include the import statements? Thanks.
On Fri, Mar 12, 2021 at 1:48 PM Shuiqiang Chen wrote:
> Hi Robert,
>
> The Kafka connector has been provided in the Python DataStream API since
> release 1.12.0. The documentation for it is still lacking; we will fill it
> in soon.
>
> The following code shows how to use the Kafka consumer and producer:
Hi Robert,
The Kafka connector has been provided in the Python DataStream API since
release 1.12.0. The documentation for it is still lacking; we will fill it in
soon.
The following code shows how to use the Kafka consumer and producer:
```
env = StreamExecutionEnvironment.get_execution_environment()
env.set_
```
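For a complete, runnable version of that pattern, here is a sketch modeled on the data_stream_job.py example linked earlier in the thread; the topic names, bootstrap servers, and row schema are illustrative placeholders, not values from the original mail:
```
from pyflink.common.serialization import (JsonRowDeserializationSchema,
                                          JsonRowSerializationSchema)
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import FlinkKafkaConsumer, FlinkKafkaProducer

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

# Row type shared by source and sink; adjust the fields to your topic's schema.
row_type_info = Types.ROW_NAMED(['id', 'payload'], [Types.INT(), Types.STRING()])

# Source: consume JSON rows from Kafka.
deserialization_schema = JsonRowDeserializationSchema.builder() \
    .type_info(type_info=row_type_info) \
    .build()
kafka_consumer = FlinkKafkaConsumer(
    topics='source_topic',
    deserialization_schema=deserialization_schema,
    properties={'bootstrap.servers': 'kafka:9092', 'group.id': 'test_group'})
ds = env.add_source(kafka_consumer)

# Sink: write the rows back out as JSON to another Kafka topic.
serialization_schema = JsonRowSerializationSchema.builder() \
    .with_type_info(type_info=row_type_info) \
    .build()
kafka_producer = FlinkKafkaProducer(
    topic='sink_topic',
    serialization_schema=serialization_schema,
    producer_config={'bootstrap.servers': 'kafka:9092'})
ds.add_sink(kafka_producer)

env.execute('kafka_roundtrip')
```
As discussed above, the Kafka connector jar still has to be on the cluster's classpath (lib/) or shipped with the job for this to run.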