Hello Flink Community,
We are running jobs on Flink 1.12.7 that read from Kafka, apply
some rules (stored in broadcast state), and then write back to Kafka. This is
a very low-latency, high-throughput workload, and we have set up
at-least-once semantics.
Checkpoint Configuration Used
1. We ...
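For context, an at-least-once checkpoint setup on the 1.12 DataStream API
typically looks something like the sketch below; the interval and timeout
values here are illustrative, not taken from this thread.

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Checkpoint every 10s in at-least-once mode (cheaper than exactly-once,
// since no barrier alignment is needed)
env.enableCheckpointing(10_000, CheckpointingMode.AT_LEAST_ONCE);
// Leave breathing room between checkpoints and bound their duration
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5_000);
env.getCheckpointConfig().setCheckpointTimeout(60_000);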
Hi @Great
I think the two sinks in your example are equivalent and independent. If
there is some logical relationship between the two sinks, you may need to
create a new combined sink and handle it yourself.
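For illustration, such a combined sink might look roughly like the sketch
below, which persists to the database first and only then publishes to
Kafka. The JDBC URL, credentials, topic, and table are placeholders, and a
real job would likely reuse Flink's own connectors rather than a raw
KafkaProducer; this only shows the coordination idea.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DbThenKafkaSink extends RichSinkFunction<String> {
    private transient Connection conn;
    private transient PreparedStatement stmt;
    private transient KafkaProducer<String, String> producer;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Open both destinations once per subtask
        conn = DriverManager.getConnection("jdbc:postgresql://db:5432/app", "user", "pwd");
        stmt = conn.prepareStatement("INSERT INTO events (payload) VALUES (?)");
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        // 1) Persist to the database first ...
        stmt.setString(1, value);
        stmt.executeUpdate();
        // 2) ... then publish to Kafka, blocking on the broker ack
        producer.send(new ProducerRecord<>("events-out", value)).get();
    }

    @Override
    public void close() throws Exception {
        if (producer != null) producer.close();
        if (stmt != null) stmt.close();
        if (conn != null) conn.close();
    }
}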
On Thu, Jan 5, 2023 at 11:48 PM Great Info wrote:
>
> I have a stream from Kafka, after reading it and doing some ...
Hi,
I have a query on JobManager HA for Flink 1.15.
We currently run a standalone Flink cluster with a single JobManager and
multiple TaskManagers, deployed on top of a Kubernetes cluster (an EKS
cluster) in application mode (reactive mode).
The TaskManagers are deployed as a ReplicaSet and the ...
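For reference, Kubernetes-based HA in Flink 1.15 is typically enabled
through flink-conf.yaml settings along these lines; the cluster id and
storage path below are placeholders:

# flink-conf.yaml (values are placeholders)
kubernetes.cluster-id: my-flink-app
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
# Durable storage for JobManager metadata; must survive pod restarts
high-availability.storageDir: s3://my-bucket/flink-ha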
I have a stream from Kafka. After reading it and doing some
transformations/enrichment, I need to store the final data stream in the
database and also publish it to Kafka, so I am planning to add two sinks
like below:
finalInputStream.addSink(dataBaseSink);        // Sink1
finalInputStream.addSink(flinkKafkaProducer);  // Sink2
Also, it seems that sinks have to be synchronous, I think? (Unless there's
some async equivalent of SinkFunction?) Assuming they're synchronous, if I
implement the retry strategy manually in SinkFunction.invoke, that means
I'll be blocking that thread while waiting to retry.
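To make the blocking concrete, a manual retry inside invoke() would look
roughly like the sketch below; MAX_RETRIES and the backoff values are made
up, and writeExternal() stands in for the real external call:

import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class RetryingSink extends RichSinkFunction<String> {
    private static final int MAX_RETRIES = 3;

    @Override
    public void invoke(String value, Context context) throws Exception {
        long backoffMs = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                writeExternal(value);
                return;
            } catch (Exception e) {
                if (attempt >= MAX_RETRIES) {
                    throw e; // give up: fail the task and let Flink's restart strategy take over
                }
                Thread.sleep(backoffMs); // this does block the task thread, as suspected
                backoffMs *= 2;
            }
        }
    }

    private void writeExternal(String value) throws Exception {
        // placeholder for the real external write
    }
}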
I want to sink some data to a database, but the data needs to go into
multiple tables in a single transaction. Am I right in saying that I cannot
use the JDBC Connector for this, as it only handles single SQL statements?
Assuming that's right, I believe that I need to write a custom sink, so I ...
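Assuming that's the case, a custom sink wrapping two inserts in one JDBC
transaction could look roughly like this sketch; the URL, credentials, and
table/column names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class TwoTableTxSink extends RichSinkFunction<String> {
    private transient Connection conn;
    private transient PreparedStatement insertA;
    private transient PreparedStatement insertB;

    @Override
    public void open(Configuration parameters) throws Exception {
        conn = DriverManager.getConnection("jdbc:postgresql://db:5432/app", "user", "pwd");
        conn.setAutoCommit(false); // take over transaction boundaries ourselves
        insertA = conn.prepareStatement("INSERT INTO table_a (payload) VALUES (?)");
        insertB = conn.prepareStatement("INSERT INTO table_b (payload) VALUES (?)");
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        try {
            insertA.setString(1, value);
            insertA.executeUpdate();
            insertB.setString(1, value);
            insertB.executeUpdate();
            conn.commit();   // both rows become visible atomically
        } catch (Exception e) {
            conn.rollback(); // neither row is persisted on failure
            throw e;
        }
    }

    @Override
    public void close() throws Exception {
        if (conn != null) conn.close();
    }
}

Note this gives atomicity per record, but with at-least-once checkpointing
the same record may be replayed after a restart, so idempotent inserts or
unique keys are worth considering.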