Hi Paul,
Thanks for the response.
Can you point me to an example of how to create a dynamic client or wrapper
operator?
Thanks and Regards,
Dinesh.
On Thu, Jul 2, 2020 at 12:28 PM Paul Lam wrote:
Hi Dinesh,
I think the problem you're facing is quite common.
But with the current Flink architecture, operators must be determined at
compile time (when you submit your job). This is by design, IIUC.
If operators were changeable at runtime, Flink would need to go through the
compile-optimize-schedule process again for the running job.
Hi All,
I also have a scenario which needs a dynamic source and a dynamic sink to route
streaming data to different Kafka clusters. Is there any better way to do this at runtime?
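For what it's worth, topic-level routing (as opposed to adding a whole new cluster) can be done per record with the universal Kafka connector. Below is a minimal sketch, assuming Flink 1.9+ and a hypothetical Event type that carries its own target topic; a genuinely new cluster would still mean a new sink and a restart.

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RoutingKafkaSink {

    // Hypothetical event type: each record names its own destination topic.
    public static class Event {
        public String targetTopic;
        public String payload;
    }

    // Picks the Kafka topic per record at runtime instead of fixing it in the job graph.
    public static class TopicRoutingSchema implements KafkaSerializationSchema<Event> {
        @Override
        public ProducerRecord<byte[], byte[]> serialize(Event element, Long timestamp) {
            return new ProducerRecord<>(
                    element.targetTopic,
                    element.payload.getBytes(StandardCharsets.UTF_8));
        }
    }

    public static FlinkKafkaProducer<Event> create(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapServers);
        // The default topic is required by the constructor; actual topics come from the schema.
        return new FlinkKafkaProducer<>(
                "default-topic",
                new TopicRoutingSchema(),
                props,
                FlinkKafkaProducer.Semantic.AT_LEAST_ONCE);
    }
}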
Hi Danny,
Thanks for the response.
In short, without restarting we cannot add new sinks or sources.
For better understanding, I will explain my problem more clearly.
My scenario is that I have two topics: one is a configuration topic and the
second one is event activities.
* In the configuration topic I wi
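Based on the partial description above, the broadcast state pattern is one common way to let a configuration topic change processing behaviour at runtime without touching the job graph. A minimal sketch, assuming string-typed config and activity streams and a made-up "route" key:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class ConfigDrivenRouting {

    public static final MapStateDescriptor<String, String> CONFIG_DESCRIPTOR =
            new MapStateDescriptor<>("config",
                    BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

    public static DataStream<String> apply(DataStream<String> activities,
                                           DataStream<String> configs) {
        // The configuration topic is broadcast to every parallel instance.
        BroadcastStream<String> broadcastConfig = configs.broadcast(CONFIG_DESCRIPTOR);

        return activities
                .connect(broadcastConfig)
                .process(new BroadcastProcessFunction<String, String, String>() {
                    @Override
                    public void processElement(String event, ReadOnlyContext ctx,
                                               Collector<String> out) throws Exception {
                        // Look up the current routing rule and tag the event with it
                        // (a real job would use side outputs or keys instead).
                        String rule = ctx.getBroadcastState(CONFIG_DESCRIPTOR).get("route");
                        out.collect(rule == null ? event : rule + "|" + event);
                    }

                    @Override
                    public void processBroadcastElement(String config, Context ctx,
                                                        Collector<String> out) throws Exception {
                        // Each config record updates the shared broadcast state.
                        ctx.getBroadcastState(CONFIG_DESCRIPTOR).put("route", config);
                    }
                });
    }
}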
Sorry, a job graph is fixed once we compile it, before submitting to the
cluster; it is not dynamic in the way you want.
You could write some wrapper operators which respond to your own RPCs to run
the appended operators you want,
but then you have to maintain the consistency semantics yourself.
Best,
D
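To illustrate the kind of wrapper operator Danny describes, here is a rough sketch of a sink whose records carry their own destination and which lazily creates a plain Kafka producer per cluster. The RoutedRecord type and field names are assumptions, and, as Danny notes, this bypasses Flink's checkpointed sinks, so delivery guarantees are your own responsibility.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Hypothetical record type: destination cluster and topic travel with the data.
class RoutedRecord {
    public String bootstrapServers;
    public String topic;
    public String value;
}

public class DynamicDestinationSink extends RichSinkFunction<RoutedRecord> {

    private transient Map<String, KafkaProducer<String, String>> producers;

    @Override
    public void open(Configuration parameters) {
        producers = new HashMap<>();
    }

    @Override
    public void invoke(RoutedRecord record, Context context) {
        // Create (and cache) a producer the first time a cluster is seen.
        KafkaProducer<String, String> producer =
                producers.computeIfAbsent(record.bootstrapServers, servers -> {
                    Properties props = new Properties();
                    props.put("bootstrap.servers", servers);
                    props.put("key.serializer", StringSerializer.class.getName());
                    props.put("value.serializer", StringSerializer.class.getName());
                    return new KafkaProducer<>(props);
                });
        // Fire-and-forget send: no Flink checkpoint integration here.
        producer.send(new ProducerRecord<>(record.topic, record.value));
    }

    @Override
    public void close() {
        if (producers != null) {
            producers.values().forEach(KafkaProducer::close);
        }
    }
}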
Hi All,
In a Flink job I have a pipeline. It is consuming data from one Kafka topic
and storing data to an Elasticsearch cluster.
Without restarting the job, can we add another Kafka cluster and another
Elasticsearch sink to the job? This means I would supply the new Kafka
cluster and Elasticsearch
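For reference, a pipeline like the one described usually looks roughly like the sketch below (assuming Flink 1.10/1.11 with the universal Kafka connector and the Elasticsearch 7 connector; topic, index and host names are placeholders). Both the source topic and the sink cluster are baked into the job graph at submission time, which is why they cannot be added later without a restart.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

public class KafkaToElasticsearchJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: a single Kafka topic, fixed when the job graph is built.
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "kafka-1:9092");
        kafkaProps.setProperty("group.id", "event-activities");
        DataStream<String> events = env.addSource(
                new FlinkKafkaConsumer<>("event-activities", new SimpleStringSchema(), kafkaProps));

        // Sink: a single Elasticsearch cluster, also fixed at submission time.
        ElasticsearchSink.Builder<String> esBuilder = new ElasticsearchSink.Builder<>(
                Collections.singletonList(new HttpHost("es-1", 9200, "http")),
                new ElasticsearchSinkFunction<String>() {
                    @Override
                    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                        // Index each event as a one-field document.
                        Map<String, Object> doc = new HashMap<>();
                        doc.put("data", element);
                        indexer.add(Requests.indexRequest().index("events").source(doc));
                    }
                });
        esBuilder.setBulkFlushMaxActions(1);
        events.addSink(esBuilder.build());

        env.execute("kafka-to-elasticsearch");
    }
}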