I've considered a similar question before: dynamic addition of sinks based on some external configuration.
The answer I've mostly been given is: this is a bad idea. The checkpoint
state that Flink uses for job recovery depends on the topology of the
job, and dynamically adding more sinks changes that topology, which can
prevent the job from restoring from an existing checkpoint or savepoint.
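A common workaround for this constraint is to keep the topology fixed, with a single sink, and do the dynamic routing *inside* that sink based on the external configuration. The sketch below is plain Java (not Flink API) and only illustrates the routing idea; the class name `RoutingSink` and the in-memory destination map are hypothetical stand-ins for real destination systems:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: one fixed sink that routes each record to a destination chosen
// per record, so new destinations can appear without changing the job graph.
// The in-memory map stands in for real external systems.
public class RoutingSink {
    private final Map<String, List<String>> destinations = new HashMap<>();

    // Deliver one record to the destination named by the (dynamic) config.
    public void invoke(String destination, String record) {
        destinations.computeIfAbsent(destination, k -> new ArrayList<>())
                    .add(record);
    }

    // Inspect what a given destination has received (for the sketch only).
    public List<String> drain(String destination) {
        return destinations.getOrDefault(destination, List.of());
    }

    public static void main(String[] args) {
        RoutingSink sink = new RoutingSink();
        // A previously unseen destination ("B") needs no topology change:
        sink.invoke("A", "event-1");
        sink.invoke("B", "event-2");
        sink.invoke("A", "event-3");
        System.out.println(sink.drain("A"));
    }
}
```

Because the job graph never changes, checkpointed state keeps matching the topology; the price is that one operator now owns connections to every destination.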
Hello,
I am trying out Flink for a stream-processing scenario and was wondering
whether it can be achieved using Apache Flink. Any pointers on how it
could be done would be a great help.
Scenario:
A Kafka topic has the input for stream processing; multiple applications,
let's say A & B