Hi,
I'm trying to understand how to use Python classes in Flink DataStream
pipelines. I'm using Python 3.11 and Flink 1.19.
I've tried running a few simple programs and require some guidance.
Here's the first example:
from dataclasses import dataclass
from pyflink.common import Configuration
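As a general pattern, PyFlink's DataStream API can pass ordinary Python objects, including dataclass instances, between operators, since it pickles them by default. A minimal sketch of the idea (`SensorReading` and `to_fahrenheit` are hypothetical names; in a real job the function would be handed to `ds.map(...)`):

```python
from dataclasses import dataclass


# Hypothetical record type; PyFlink can pickle dataclass instances
# between DataStream operators by default.
@dataclass
class SensorReading:
    sensor_id: str
    value: float  # degrees Celsius


def to_fahrenheit(reading: SensorReading) -> SensorReading:
    """Map function you would pass to ds.map(to_fahrenheit) in a PyFlink job."""
    return SensorReading(reading.sensor_id, reading.value * 9 / 5 + 32)


# Called directly here just to show the transformation:
print(to_fahrenheit(SensorReading("s1", 100.0)))
# SensorReading(sensor_id='s1', value=212.0)
```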
Thanks Xuyang, that shows the table in the SQL client correctly! Two
follow-up questions.
1. When I query that table from the SQL client, it seems to just sit and
wait for something. How is that query supposed to work?
```
Flink SQL> show tables;
+------------+
| table name |
+------------+
| RandomNumbe
```
Hi Nic.
I do not have a solution (sorry), but I have seen something similar, and
have already complained about it. Look up my mail in the archives.
I am also deploying our session cluster using the Flink Kubernetes operator. I was
setting `execution.checkpointing.interval: 30`. This should set
Hi, I’m deploying Flink 1.19 via the k8s operator.
I’m setting `table.exec.source.idle-timeout: 30 s` in the
`spec.flinkConfiguration` section of the FlinkDeployment CR.
The Flink UI shows that the JobManager has been configured with the value, and
the JM and TM logs both show `INFO [] - Loading
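For context, settings like these go under `spec.flinkConfiguration` in the FlinkDeployment CR as plain string values. A minimal sketch (the deployment name is a placeholder):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-session-cluster   # placeholder
spec:
  flinkVersion: v1_19
  flinkConfiguration:
    # values are strings; spell out the unit to avoid ambiguity
    table.exec.source.idle-timeout: "30 s"
    execution.checkpointing.interval: "30 s"
```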
Looks like your Oracle DB is not configured for CDC. You should follow the
instructions on the Debezium site:
https://debezium.io/blog/2022/09/30/debezium-oracle-series-part-1/
Nix.
From: 秋天的叹息 <1341317...@qq.com>
Date: Friday, January 10, 2025 at 9:06 AM
To: user@flink.apache.org
Subject: Flink CDC r
Flink CDC synchronizes Oracle data to Kafka in real time. The Oracle database
has sharded storage set up, and when a Flink job is started, an error occurs
immediately. With the same configuration, tables without sharded storage
synchronize normally. Please help with the configuration.

String ddl =
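The truncated `String ddl =` presumably held the source table DDL. For reference, an Oracle CDC source table in Flink SQL typically looks like the following sketch (all connection values, schema, and column names are placeholders, and it assumes the `flink-connector-oracle-cdc` dependency is on the classpath):

```sql
-- Hypothetical Oracle CDC source; all option values are placeholders.
CREATE TABLE oracle_source (
    ID   INT,
    NAME STRING,
    PRIMARY KEY (ID) NOT ENFORCED
) WITH (
    'connector'     = 'oracle-cdc',
    'hostname'      = 'oracle-host',
    'port'          = '1521',
    'username'      = 'flinkuser',
    'password'      = 'flinkpw',
    'database-name' = 'ORCLCDB',
    'schema-name'   = 'INVENTORY',
    'table-name'    = 'PRODUCTS'
);
```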