Hi Dian,
we had this discussion in the past. Yes, it might help in certain cases.
But on the other hand, it also helps in finding version mismatches when
people have misconfigured their dependencies. Different JVM versions should
not result in incompatible classes, as the default serialVersionUID is
standardized.
Hi,
the InvalidClassException indicates that you are using different
versions of the same class. Are you sure you are using the same Flink
minor version (including the Scala suffix) for all dependencies and in
your Kubernetes images?
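A quick way to check the client side of that comparison (my own sketch, not from the original mail; it assumes PyFlink was installed from the apache-flink package on PyPI):

# print the Flink version bundled with the local PyFlink client, to
# compare against the version running in the Kubernetes images
import pkg_resources  # ships with setuptools

print(pkg_resources.get_distribution("apache-flink").version)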
Regards,
Timo
On 27.07.20 09:51, Wojciech Korczyński wrote:
Hi,
when I try it locally, it runs well. The problem appears when I run it
on Kubernetes. I don't know how to make Flink and Kubernetes work well
together in that case.
Best, Wojtek
On Fri, 24 Jul 2020 at 17:51, Xingbo Huang wrote:
Hi Wojciech,
In many cases you can make sure that your code runs correctly in local
mode first, and then submit the job to the cluster for testing. For how to
add jar packages in local mode, you can refer to the doc[1].
Besides, you'd better use the blink planner, which is the default planner.
For how to
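Putting those two hints together, a minimal local-mode setup might look like this (a sketch of my own, not from the original mail; the jar path is a placeholder for wherever the connector jar was downloaded):

from pyflink.table import EnvironmentSettings, StreamTableEnvironment

# a streaming TableEnvironment backed by the blink planner
t_env = StreamTableEnvironment.create(
    environment_settings=EnvironmentSettings.new_instance()
        .in_streaming_mode().use_blink_planner().build())

# make connector jars visible in local mode via the pipeline.jars option
t_env.get_config().get_configuration().set_string(
    "pipeline.jars",
    "file:///path/to/flink-sql-connector-kafka_2.11-1.11.0.jar")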
Hi,
I've done it as you recommended:

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.dataset import ExecutionEnvironment
from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment, StreamTableEnvironment, ScalarFunction
from pyflink.table.descriptors import Schema
Hi Wojtek,
The following ways of using PyFlink are my personal recommendations:
1. Use DDL[1] to create your source and sink instead of the descriptor way,
because as of Flink 1.11 there are some bugs in the descriptor way.
2. Use `execute_sql` for a single statement and `create_statement_set` for
multiple statements (see the sketch below).
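To illustrate recommendation 1, a minimal DDL-based source definition might look like this (a sketch of my own; the topic name, broker address, and single-column schema are assumptions, not details from the thread):

from pyflink.table import EnvironmentSettings, StreamTableEnvironment

# blink-planner streaming environment, as recommended in the thread
t_env = StreamTableEnvironment.create(
    environment_settings=EnvironmentSettings.new_instance()
        .in_streaming_mode().use_blink_planner().build())

# DDL replaces the descriptor API; options follow the Flink 1.11 Kafka connector
t_env.execute_sql("""
    CREATE TABLE kafka_source (
        message STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'demo',
        'properties.bootstrap.servers' = 'localhost:9092',
        'properties.group.id' = 'test-group',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")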
Hi,
thank you for your answer, it is very helpful.
Currently my Python program looks like this:

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.dataset import ExecutionEnvironment
from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment, StreamTableEnvironment
fro
Hi Wojtek,
In Flink 1.11, the methods register_table_source and register_table_sink of
ConnectTableDescriptor have been removed. You need to use
create_temporary_table instead of these two methods. Besides, it seems that
the version of your PyFlink is 1.10, but the corresponding Flink is 1.11.
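For reference, in PyFlink 1.11 the descriptor chain now ends in create_temporary_table, roughly like this (a sketch under the assumption of a Kafka/JSON source; the topic, broker, and schema are illustrative):

from pyflink.table import DataTypes, EnvironmentSettings, StreamTableEnvironment
from pyflink.table.descriptors import Json, Kafka, Schema

t_env = StreamTableEnvironment.create(
    environment_settings=EnvironmentSettings.new_instance()
        .in_streaming_mode().use_blink_planner().build())

# create_temporary_table replaces the removed register_table_source/sink
t_env.connect(
        Kafka()
        .version("universal")
        .topic("demo")
        .start_from_earliest()
        .property("bootstrap.servers", "localhost:9092")) \
    .with_format(Json()) \
    .with_schema(Schema().field("message", DataTypes.STRING())) \
    .create_temporary_table("kafka_source")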
Best,
Thank you for your answer.
I have replaced that .jar with the Kafka version "universal" - the links to
the other versions are dead.
After attempting to deploy:

bin/flink run -py /home/wojtek/PycharmProjects/k8s_kafka_demo/kafka2flink.py --jarfile /home/wojtek/Downloads/flink-sql-connector-kafka_2.11
Hi Wojtek,
you need to use the fat jar 'flink-sql-connector-kafka_2.11-1.11.0.jar',
which you can download from the doc[1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/kafka.html
Best,
Xingbo
On Thu, 23 Jul 2020 at 16:57, Wojciech Korczyński wrote:
Hello,
I am trying to deploy a Python job with the Kafka connector:

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.dataset import ExecutionEnvironment
from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment, StreamTableEnvironment
from pyflink.table.descriptors