Hello,
I would like to run PyFlink jobs on Kubernetes as a Job Cluster. I managed to
do this in Session Cluster mode, but deploying each job as an independent Job
Cluster seems a better option to me.
If I understand the documentation [1], [2] correctly, I should create a custom
Docker image which contains my job and its dependencies.
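For illustration, a minimal Dockerfile along these lines is what I have in mind (the Python/PyFlink install steps loosely follow the 1.11 Docker notes; the job file name and target paths are placeholders, not taken from any doc):

FROM flink:1.11.0
# the official Flink image ships without Python, so install Python 3 and PyFlink
RUN apt-get update -y && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/* && \
    ln -s /usr/bin/python3 /usr/bin/python
RUN pip3 install apache-flink==1.11.0
# placeholder names: the job script and the Kafka SQL connector fat jar
COPY my_job.py /opt/flink/usrlib/my_job.py
COPY flink-sql-connector-kafka_2.11-1.11.0.jar /opt/flink/lib/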
> ... you can refer to the doc[2]
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/python/common_questions.html#adding-jar-files
> [2]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/common.html#create-a-tableenvironment
>
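> For reference, creating a TableEnvironment as described in [2] can look like this (a minimal sketch for 1.11, blink planner in streaming mode):
>
> from pyflink.table import EnvironmentSettings, StreamTableEnvironment
>
> env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
> t_env = StreamTableEnvironment.create(environment_settings=env_settings)
>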
> Best,
>
> ... the doc[3]
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/
> [2]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/common.html#translate-and-execute-a-query
>
> [3]
> https://ci.apache.org/projects/flink/flink-docs-r
> register_table_source and register_table_sink of ConnectTableDescriptor have been removed. You need to use
> createTemporaryTable instead of these two methods. Besides, it seems that
> the version of your pyflink is 1.10, but the corresponding flink is 1.11.
>
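> As a rough sketch of the 1.11 descriptor style (topic, field, server address and table name below are placeholders, not taken from your job; Kafka, Json and Schema come from pyflink.table.descriptors):
>
> t_env \
>     .connect(Kafka()
>              .version("universal")
>              .topic("my_topic")
>              .property("bootstrap.servers", "localhost:9092")) \
>     .with_format(Json()) \
>     .with_schema(Schema().field("value", DataTypes.STRING())) \
>     .in_append_mode() \
>     .create_temporary_table("my_sink")
>
> The registered table can then be used as a sink, e.g. via execute_insert("my_sink") on a Table or with an INSERT INTO statement.
>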
> Best,
> Xingbo
>
> Wojciech Korczyński wrote on 2020-07-23:
> Hi Wojtek,
> you need to use the fat jar 'flink-sql-connector-kafka_2.11-1.11.0.jar'
> which you can download from the doc[1]
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/kafka.html
>
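> To make the Python job pick up that jar, the "adding jar files" guidance can be followed, e.g. (the local path below is just a placeholder):
>
> t_env.get_config().get_configuration().set_string(
>     "pipeline.jars",
>     "file:///path/to/flink-sql-connector-kafka_2.11-1.11.0.jar")
>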
> Best,
> Xingbo
>
> Wojciech Korczyński wrote:
Hello,
I am trying to deploy a Python job with a Kafka connector:
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.dataset import ExecutionEnvironment
from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment, StreamTableEnvironment
from pyflink.table.descriptors import Schema, Kafka, Json