Hello Team,
Could you please share an update on the Flink OpenSearch connector? Is there a
timeline for when it will be available?
Thanks!
From: Praveen Chandna via user
Sent: 10 April 2025 12:19
To: Gunnar Morling
Cc: user ; Anuj Kumar Jain A
Subject: RE: Kafka Connector Support for Flink 2.0 – Relea
Forgot to mention, I am using Flink version 1.18.1.
On Wed, Apr 23, 2025 at 1:00 AM Yashoda Krishna T <
yashoda.kris...@netcoreunbxd.com> wrote:
Hi,
I have tried using a lateral join on two tables in Flink SQL, similar to this
example:
https://github.com/apache/flink/blob/c724168fad4215626b5596dd63cb66e477948aa0/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/basics/UpdatingTopCityExample.java#L130
But I
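For readers unfamiliar with the construct the linked example uses: a lateral join evaluates its right-hand side once per left row, so the right side can reference the left row's columns. The batch-style semantics can be pinned down in plain Java, independent of Flink (the data and names here are illustrative, not from the thread):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LateralJoinSketch {
    // Left-side rows: (country, city, population)
    record City(String country, String city, int population) {}

    // For each country (the "left row"), evaluate a per-row "sub-query":
    // pick that country's most populous city. This mirrors what a LATERAL
    // join does in SQL: the right side sees the current left row.
    static Map<String, String> topCityPerCountry(List<City> cities) {
        return cities.stream()
                .collect(Collectors.groupingBy(
                        City::country,
                        Collectors.collectingAndThen(
                                Collectors.maxBy(
                                        Comparator.comparingInt(City::population)),
                                opt -> opt.map(City::city).orElse(""))));
    }

    public static void main(String[] args) {
        List<City> cities = List.of(
                new City("DE", "Berlin", 3_600_000),
                new City("DE", "Munich", 1_500_000),
                new City("FR", "Paris", 2_100_000));
        topCityPerCountry(cities).forEach((country, top) ->
                System.out.println(country + "," + top));
    }
}
```

This is only a semantic sketch; in a streaming Flink job the result is continuously updated as new rows arrive, which is what the UpdatingTopCityExample linked above demonstrates.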
Hi,
I'm working with Flink 1.20 (DataStream API, Java 11) and I have a question
regarding the current capabilities for implementing a custom restart strategy.
I've gone through the documentation, but it's not entirely clear whether it's
possible in this version to define a fully custom s
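For context: to my knowledge, Flink 1.20 documents only the built-in restart strategies (fixed-delay, exponential-delay, failure-rate), selected via configuration, and does not expose a stable public extension point for plugging in a fully custom strategy. A typical configuration fragment, with illustrative values:

```yaml
# Built-in restart strategy selection (flink-conf.yaml), Flink 1.20.
# Values below are examples; tune attempts/delay for your job.
restart-strategy.type: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```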
Dear Flink Community,
I'm currently working on a simple Apache Flink project inside Docker using
version 1.15.2. The goal of the project is to read user transactions from a
text file (`input.txt`), calculate the total transaction amount per user, and
write the result as a CSV.
My code uses this
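Independent of Flink, the computation described above (total transaction amount per user, emitted as CSV rows) can be pinned down in plain Java. The input line format `user,amount` is an assumption, since the actual `input.txt` layout is not shown in the thread:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class UserTotalsSketch {
    // Assumes each input line looks like "user,amount", e.g. "alice,12.50".
    static Map<String, Double> totalsPerUser(List<String> lines) {
        return lines.stream()
                .map(line -> line.split(","))
                .collect(Collectors.groupingBy(
                        parts -> parts[0],
                        Collectors.summingDouble(
                                parts -> Double.parseDouble(parts[1]))));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("alice,10.0", "bob,5.5", "alice,2.5");
        // Emit one CSV row per user: user,total
        totalsPerUser(lines).forEach((user, total) ->
                System.out.println(user + "," + total));
    }
}
```

In the Flink version of this job, the same grouping-and-summing logic is expressed with `keyBy` on the user field followed by an aggregation, with the CSV writing handled by a sink.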
Hi Flink Community,
I hope you're doing well.
I'm currently working on a simple Flink project where I read a file using
`env.readTextFile("/opt/input.txt")`, process transactions per user, and write
the results to an output CSV file. However, I keep encountering this error
during execution:
De
Nice work, Hang!
Best,
Leonard
> On Apr 22, 2025, at 17:54, Hang Ruan wrote:
The Apache Flink community is very happy to announce the release of
Apache flink-connector-jdbc 3.3.0 & 4.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The release is available for dow
Or will Flink send a delete record to Kafka once the sub-query is done?
Hi team, I'm running a Flink SQL job via the Flink SQL Gateway on version
1.20.
The SQL reads from Hive and writes into Kafka, but it needs to join with a
sub-query that finds a problematic uuid and filters it out. It looks
like this:
INSERT INTO kafka_sink
SELECT /*+ BROADCAST(t1) */
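Setting the streaming semantics aside for a moment, the filtering that the sub-query expresses is an anti-join on the uuid: keep only rows whose uuid does not appear in the problematic set. In plain batch-style Java terms (names illustrative):

```java
import java.util.List;
import java.util.Set;

public class AntiJoinSketch {
    record Row(String uuid, String payload) {}

    // Keep only rows whose uuid is NOT in the problematic set --
    // the batch equivalent of the SQL anti-join / filter that the
    // sub-query in the job above expresses.
    static List<Row> filterOut(List<Row> rows, Set<String> problematic) {
        return rows.stream()
                .filter(r -> !problematic.contains(r.uuid()))
                .toList();
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(new Row("a", "x"), new Row("b", "y"));
        System.out.println(filterOut(rows, Set.of("a")));
    }
}
```

The open question in the thread, whether a streaming Flink job retracts (deletes) previously emitted rows when the sub-query's result later changes, depends on the changelog mode of the join and the Kafka sink, and is not settled by this batch sketch.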