Hi,
our use case is that the data sources are independent; we are using Flink
to ingest data from Kafka sources, do a bit of filtering, and then write it
to S3.
Since we ingest from multiple independent Kafka sources, we consider them
all optional. Even if just one Kafka is up and running, we want to keep
ingesting from it.
Hi Bariša,
The way I see it, you either:
- need data from all sources because you are doing some conjoint
processing. In that case, stopping the pipeline is usually the right thing
to do; or
- the streams consumed from the multiple servers are not combined and hence
could be processed in independent Flink jobs.
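A minimal sketch of the second option: one Flink job per Kafka cluster, so a broker outage only restarts the job reading from that cluster. This assumes the KafkaSource/FileSink connector APIs; the topic name, S3 path, and job naming here are hypothetical placeholders, not anything from the original setup.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PerClusterIngestJob {
    public static void main(String[] args) throws Exception {
        // Submit one instance of this job per Kafka cluster, passing its
        // bootstrap servers as the first argument. Each job fails and
        // restarts independently of the others.
        String brokers = args[0];

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers(brokers)
                .setTopics("events")  // hypothetical topic name
                .setGroupId("ingest-" + brokers)
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Hypothetical S3 destination for the filtered records.
        FileSink<String> sink = FileSink
                .forRowFormat(new Path("s3://my-bucket/ingest/"),
                              new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(),
                       "kafka-" + brokers)
           .filter(record -> !record.isEmpty())  // "a bit of filtering"
           .sinkTo(sink);

        env.execute("ingest-" + brokers);
    }
}
```

The trade-off is operational: N jobs means N sets of checkpoints and N deployments to manage, but an outage of one cluster no longer touches the others.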
Hi Bariša,
Could you share the reason why your data processing pipeline should keep
running when one Kafka source is down?
It sounds like any one of the multiple Kafka sources is optional to the
data processing logic, because any of them could be the one that is down.
Best regards,
Jing
Hi,
we are running a Flink job with multiple Kafka sources connected to
different Kafka servers.
The problem we are facing is that when one of the Kafka clusters is down,
the Flink job starts restarting.
Is there any way for Flink to pause processing of the Kafka source which is
down, yet continue processing from the others?
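Flink cannot currently pause a single failing source inside a running job: a source failure fails the whole job, which then restarts according to the configured restart strategy. One common mitigation (not a fix) is an exponential-delay restart strategy in flink-conf.yaml, so a prolonged broker outage backs off instead of producing a restart storm; the values below are illustrative, not recommendations:

```yaml
restart-strategy: exponential-delay
restart-strategy.exponential-delay.initial-backoff: 1 s
restart-strategy.exponential-delay.max-backoff: 5 min
restart-strategy.exponential-delay.backoff-multiplier: 2.0
```

The job still restarts as a unit, so the other sources pause briefly on each restart; only splitting the sources into separate jobs fully isolates them.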