On Sat, Dec 2, 2023 at 1:57 AM Tzu-Li (Gordon) Tai wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink Kafka Connectors 3.0.2. This release is compatible with the Apache
> Flink 1.17.x and 1.18.x release series.
>
The Apache Flink community is very happy to announce the release of Apache
Flink Kafka Connectors 3.0.2. This release is compatible with the Apache
Flink 1.17.x and 1.18.x release series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The Apache Flink community is very happy to announce the release of Apache
Flink Kafka Connectors 3.0.1. This release is compatible with the Apache
Flink 1.17.x and 1.18.x release series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
Hi Prateek,
You will need to stop and restart your jobs with the new connector
configuration.
Best regards,
Martijn
On Thu, Apr 13, 2023 at 10:10 AM Prateek Kohli wrote:
> Hi,
>
> I am using Flink Kafka connectors to communicate with Kafka broker over
> mutual TLS.
> Is there any way or recommendation to handle certificate renewal for
> these Kafka clients?
Hi,
I am using Flink Kafka connectors to communicate with Kafka broker over
mutual TLS.
Is there any way or recommendation to handle certificate renewal for
these Kafka clients?
I am monitoring the PEM files and recreating the keystore/truststore
(JKS) on renewal, but how can I reload these into the running Kafka
clients?
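For reference, the SSL settings in question are ordinary Kafka client
properties handed to the connector. A minimal sketch (broker address,
paths, and passwords are placeholders; the connector wiring is shown
only as a comment):

```java
import java.util.Properties;

public class KafkaSslConfig {
    // Build the SSL settings the Kafka client reads from the
    // keystore/truststore. Paths and passwords are placeholders.
    static Properties sslProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9093");
        props.setProperty("security.protocol", "SSL");
        props.setProperty("ssl.keystore.location", "/etc/certs/keystore.jks");
        props.setProperty("ssl.keystore.password", "changeit");
        props.setProperty("ssl.truststore.location", "/etc/certs/truststore.jks");
        props.setProperty("ssl.truststore.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        // The Kafka client only reads the JKS files when it is created,
        // which is why a job restart is needed after they are recreated.
        Properties props = sslProperties();
        System.out.println(props.getProperty("security.protocol"));
        // These properties would then be passed to the Flink Kafka
        // source/sink, e.g.:
        // new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), props);
    }
}
```

Because the client caches the keystore/truststore contents at startup,
rebuilding the JKS files on disk is not enough on its own; the job has
to be restarted so the client re-reads them, as Martijn notes above.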
Hi,
I’m also in favour of at least dropping support for Kafka 0.8.
More generally, I think we have to be careful when recommending the use
of Flink connectors from older versions with newer Flink versions. The
interfaces might be stable, but they might use internal code that is
not stable, as experience has shown.
I second Stephan here. Moreover, given that the source and sink interfaces
are public API, it should still be possible to take the Kafka 0.8/0.9
connector from a previous Flink version and run it with a newer version.
Cheers,
Till
On Mon, Sep 16, 2019 at 10:06 AM Stephan Ewen wrote:
> I think this is very hypothetical, requiring a major version bump to
> drop an old connector version.
I think this is very hypothetical, requiring a major version bump to drop
an old connector version. What is the actual problem that would arise for
users?
We have not required a major version bump to drop old connectors in the
past, and if we want to continue the "mono repo" style in Flink, I
don't
Thanks for bringing this up, Stephan.
I am +1 on dropping support for Kafka 0.8. It is a pretty old version and I
don't think there are many users on that version now.
However, for Kafka 0.9, I think there are still quite some users on that
version. It might be better to keep it a little longer.
On 2019/9/11 16:17, Stephan Ewen wrote:
We still maintain connectors for Kafka 0.8 and 0.9 in Flink.
I would suggest dropping those with Flink 1.10 and supporting only
Kafka 0.10 onwards.
Are there any concerns about this, or still a significant number of
users of these versions?
Hi all!
We still maintain connectors for Kafka 0.8 and 0.9 in Flink.
I would suggest dropping those with Flink 1.10 and supporting only
Kafka 0.10 onwards.
Are there any concerns about this, or still a significant number of users
of these versions?
Best,
Stephan
Since you’re placing jars in the lib/ folder yourself instead of
packaging an uber jar, you also need to provide the Kafka dependency
jars.
Placing dependencies in the lib/ folder usually isn’t recommended;
packaging an uber jar is the recommended approach.
Using the maven-shade-plugin, you can bundle the connector and its
dependencies into your job jar.
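A minimal shade-plugin configuration along those lines (the plugin
version and layout here are illustrative, not taken from the thread)
could look like:

```xml
<!-- Sketch: build an uber jar that bundles the Kafka connector
     dependencies into the job jar at package time. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in the build section of the pom, `mvn package` produces a
single jar that can be submitted to the cluster without touching lib/.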
Hi Paolo,
Have you followed the instructions in this documentation [1]?
The connectors are not part of the binary distributions, so you would need to
bundle the dependencies with your code by building an uber jar.
Cheers,
Gordon
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/d
Hi,
I am following the basic steps to implement a consumer and a producer
with Kafka for Flink. My Flink version is 1.2.0 and my Kafka version is
0.10.2.0, so in my pom.xml I add the dependency:

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.10_2.10</artifactId>
    <version>1.2.0</version>
  </dependency>

The problem is that if I run the program with Maven
Hi,
There currently isn’t any shaded Kafka 0.8 connector version available,
so yes, you would need to build that yourself.
I’m not completely sure if there will be any class name clashes,
because the Kafka 0.8 API is typically packaged under
`kafka.javaapi.*`, while the 0.9 / 0.10 clients are packaged under
`org.apache.kafka.*`.
Hi Gwenhael,
I will loop in Gordon because he is more familiar with the Kafka
connectors. Do you have experience with two versions in the same
project?
On 20/03/17 at 15:57, Gwenhael Pasquiers wrote:
Hi,
Before doing it myself I thought it would be better to ask.
We need to consume from
Hi,
Before doing it myself I thought it would be better to ask.
We need to consume from Kafka 0.8 and produce to Kafka 0.10 in a Flink
app.
I guess there will be class and package name conflicts for a lot of
dependencies of both connectors.
The obvious solution is to make a "shaded" version of one of the
connectors.
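One way to sketch that shading (the relocation prefix and the exact
patterns here are illustrative assumptions, not something confirmed in
the thread) is a maven-shade-plugin relocation that moves the 0.8
connector's Kafka classes into a private package so they cannot clash
with the 0.10 client:

```xml
<!-- Sketch: relocate the Kafka 0.8 packages inside the shaded jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>kafka</pattern>
        <shadedPattern>shaded.kafka08.kafka</shadedPattern>
      </relocation>
      <relocation>
        <pattern>org.apache.kafka</pattern>
        <shadedPattern>shaded.kafka08.org.apache.kafka</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

The shaded 0.8 connector jar would then be used alongside the normal
0.10 connector dependency in the same project.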