Hi,
I need to correct my previous comment in the last reply - it should actually
be totally fine that the POM files for flink-connector-kafka 3.0.1-1.18 point
to an older version.
For example, in the ongoing flink-connector-opensearch release 1.1.0-1.18,
the POM files also still point to Flink 1.17.1 [1].
If th
Hi all,
There seems to be an issue with the connector release scripts used in the
release process: they don't correctly overwrite the flink.version property
in the POMs.
I'll kick off a new release for 3.0.2 shortly to address this. Sorry for
overlooking this during the previous release.
Best,
Gord
Thanks Feng,
I think my challenge (and why I expected I’d need to use Java) is that
Parquet files with different schemas will be landing in the S3 bucket - so I
don’t want to hard-code the schema in a SQL table definition.
I’m not sure if this is even possible? Maybe I would have to write
Hi Oxlade
I think using Flink SQL can conveniently fulfill your requirements.
For S3 Parquet files, you can create a temporary table using the filesystem
connector [1].
For Iceberg tables, Flink SQL can easily integrate with the Iceberg
catalog [2].
Therefore, you can use Flink SQL to export S3 file
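As a rough sketch of the approach described above (bucket names, paths, the
schema, and the catalog settings are all hypothetical placeholders, not taken
from this thread, and the Iceberg catalog assumes the iceberg-flink runtime
jar is on the classpath):

```sql
-- Hypothetical temporary table over Parquet files in S3,
-- using the filesystem connector.
CREATE TEMPORARY TABLE s3_source (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 's3://my-bucket/landing/',
  'format' = 'parquet'
);

-- Hypothetical Iceberg catalog (Hadoop-type warehouse as an example).
CREATE CATALOG iceberg_cat WITH (
  'type' = 'iceberg',
  'catalog-type' = 'hadoop',
  'warehouse' = 's3://my-bucket/warehouse'
);

-- Export from the S3 source into an Iceberg table.
INSERT INTO iceberg_cat.db.target_table
SELECT id, name FROM s3_source;
```

Note that this still hard-codes the schema in the source table definition, so
it addresses the static-schema case rather than schema inference.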
Hi all,
I'm attempting to create a POC in Flink for a pipeline that streams Parquet
files to a data warehouse in Iceberg format.
Ideally, I'd like to watch a directory in S3 (MinIO locally) and stream those
files to Iceberg, doing the appropriate schema mapping/translation.
I guess first; does this so
Hi Danny
thanks for taking a look into it and for the hint.
Your assumption is correct - It compiles when the base connector is
excluded.
In sbt:
"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"
exclude("org.apache.flink", "flink-connector-base"),
Günter
On 23.11.23 14:24, Dan
Hey all,
I believe this is because of FLINK-30400. Looking at the POM, I cannot see
any other dependencies that would cause a problem. To work around this, can
you try to remove that dependency from your build?
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
</dependency>
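For Maven builds, a sketch of the suggested workaround might look like the
following (this exact exclusion block is my illustration, not quoted from the
thread; it mirrors the sbt exclude used elsewhere in this discussion):

```xml
<!-- Workaround sketch: exclude the transitive flink-connector-base
     dependency from the Kafka connector (cf. FLINK-30400). -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-base</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```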
Hi, Günter
It seems a bug to me that the 3.0.1-1.18 Flink Kafka connector uses the
Flink 1.17 dependency, which leads to your issue.
I guess we need to propose a new release of the Kafka connector to fix this
issue.
CC: Gordan, Danny, Martijn
Best,
Leonard
> On Nov 14, 2023, at 6:53 PM, Alexey Novakov via user
Thanks Hang.
I got it now. I will check on this and get back to you.
Thanks,
Tauseef.
On Thu, 23 Nov 2023 at 17:29, Hang Ruan wrote:
> Hi, Tauseef.
>
> This error does not mean that you cannot access the Kafka cluster.
> Actually, this error means that the JM cannot access its TM.
> Have you ever ch
Hi, Tauseef.
This error does not mean that you cannot access the Kafka cluster. Actually,
this error means that the JM cannot access its TM.
Have you ever checked whether the JM is able to access the TM?
Best,
Hang
Tauseef Janvekar wrote on Thu, Nov 23, 2023 at 16:04:
> Dear Team,
>
> We are facing the below iss
Hi, Patricia.
Can you attach the full stack trace of the exception? It seems the thread
reading the source is stuck.
--
Best!
Xuyang
At 2023-11-23 16:18:21, "patricia lee" wrote:
Hi,
Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17
I get error on class org.apache.flink.s
Hi,
Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17
I get an error in class
org.apache.flink.streaming.runtime.tasks.SourceStreamTask in
LegacySourceFunctionThread.run():
"java.util.concurrent.CompletableFuture@12d0b74 [Not completed, 1
dependents]"
I am using the FlinkKafkaConsumer.