Hey
We have a Flink app which subscribes to multiple topics. We recently
upgraded our application from 1.13 to 1.15, at which point we started using
KafkaSource instead of the deprecated FlinkKafkaConsumer.
But we noticed a weird issue with KafkaSource after the upgrade; we are
setting the topics with
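For reference, a minimal sketch of setting multiple topics on a KafkaSource
in 1.15 (the broker address, group id, and topic names here are placeholders):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MultiTopicKafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker1:9092")           // placeholder address
                .setTopics("topic-a", "topic-b", "topic-c")    // subscribe to several topics
                .setGroupId("my-consumer-group")               // placeholder group id
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> stream =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

        stream.print();
        env.execute("multi-topic-kafka");
    }
}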
Thanks for sharing your views.
Our client supports only TCP stream-based traffic, which is in a
proprietary format that we need to decode. The system accepting this
traffic is Flink based, which is why we tried all this with a custom data
source. Regarding the message broker you suggested below, how it
The Apache Flink community is very happy to announce the release of Apache
Flink Kafka Connectors 3.0.1. This release is compatible with the Apache
Flink 1.17.x and 1.18.x release series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
Flink natively supports a pull-based model for sources, where the source
operators request data from the external system when they are ready to
process it. Implementing a TCP server socket operator essentially creates
a push-based source, which could lead to backpressure problems if the data
ingestion rate outpaces what the downstream pipeline can process.
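To illustrate the distinction with a sketch of my own (not from this
thread): a bounded buffer is one way to bridge a push-based feed into a
pull-based consumer, since the pushing side blocks whenever the consumer
falls behind:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PushPullBridge {
    public static void main(String[] args) throws InterruptedException {
        // Bounded buffer: put() blocks when full, so the producer is slowed
        // to the consumer's pace, which is backpressure in miniature.
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1024);

        // Push side, e.g. a thread reading from an accepted socket.
        Thread producer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    buffer.put("record");   // stands in for bytes read from TCP
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Pull side, analogous to a source asking for data when it is ready.
        for (int i = 0; i < 10; i++) {
            System.out.println(buffer.take());
        }
        producer.interrupt();
    }
}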
Hi,
I'm trying to understand where the Apache Beam dependency comes from; it's
not just a regular dependency of PyFlink, but a build system dependency.
Searching through the code, it seems like Beam is only used by PyFlink, and
not by non-Python Flink. In my (limited) understanding, it looks like
Hello,
We are writing a custom TCP server socket source function in which a TCP
server socket listener will accept connections and read data.
A single custom source function opens the server socket: ServerSocket
serverSocket = new ServerSocket();
Now, using a thread pool, we accept multiple connections in separate
threads.
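A bare-bones sketch of that setup (the port, pool size, and line-based
handler are illustrative; the real protocol decoding would replace the
println):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TcpListener {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(8);     // illustrative size
        try (ServerSocket serverSocket = new ServerSocket(9999)) {  // illustrative port
            while (true) {
                Socket client = serverSocket.accept();   // one connection at a time
                pool.submit(() -> handle(client));       // read each one on a pool thread
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);  // placeholder for the actual decoding
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}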
Please note that the SourceFunction API is deprecated and due to be
removed, possibly in the next major version of Flink.
Ideally you should not manually spawn threads in your Flink applications.
Typically you would only perform data fetching in the sources and do the
processing in the subsequent operators.
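As a rough sketch of that split (the source below is just a stand-in built
from literal frames, and the "decoding" is simplified to UTF-8 strings):

import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DecodeDownstream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for a real source that only fetches raw frames.
        DataStream<byte[]> raw = env.fromElements(
                "frame-1".getBytes(StandardCharsets.UTF_8),
                "frame-2".getBytes(StandardCharsets.UTF_8));

        // The decoding lives in a downstream operator, not in the source.
        DataStream<String> decoded = raw
                .map(bytes -> new String(bytes, StandardCharsets.UTF_8))
                .returns(Types.STRING);  // helps Flink's type extraction for the lambda

        decoded.print();
        env.execute("decode-downstream");
    }
}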
Hi Zhanghao,
Thanks for the proposal.
In general +1; this sounds like a good idea as long as it is clear that the
usage of these settings is discouraged.
Just one minor concern: the configuration page is already very long; do
you have a rough estimate of how many more options would be added with
this change?
Hi Alexis,
Sorry for the late answer … got carried away with other tasks 😊
I hope I get this right, as there is a mixture of concepts in my mind with
respect to the old and the new savepoint API. I'll try to answer for the
new API.
* If you want to patch an existing savepoint, you would load it
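For example, a rough sketch of patching via the DataStream-based state
processor API (paths and the operator uid are placeholders, and I am
assuming the Flink 1.17+ overloads that take the execution environment):

import org.apache.flink.state.api.OperatorIdentifier;
import org.apache.flink.state.api.SavepointWriter;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PatchSavepoint {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Load the existing savepoint, drop one operator's state, and write
        // the result out as a new, patched savepoint.
        SavepointWriter
                .fromExistingSavepoint(env, "file:///tmp/savepoints/savepoint-old")
                .removeOperator(OperatorIdentifier.forUid("operator-uid-to-patch"))
                .write("file:///tmp/savepoints/savepoint-patched");

        env.execute("patch-savepoint");
    }
}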