Hi Bariša,

The way I see it, you are in one of two situations:
- You need data from all sources because you are doing some
joint processing. In that case stopping the pipeline is usually the
right thing to do.
- The streams consumed from the different servers are never combined and
hence could be processed in independent Flink jobs (see the sketch below).
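
For the second scenario, here is a minimal, hypothetical sketch of what
one of the per-cluster jobs could look like (the broker address, topic,
and group id are placeholders, and it assumes the unified KafkaSource
connector):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// One job per Kafka cluster: an outage of cluster B can then only
// restart cluster B's job, while this one keeps running.
public class ClusterAJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka-a:9092")   // placeholder
                .setTopics("topic-a")                  // placeholder
                .setGroupId("cluster-a-job")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "cluster-a")
                .print();
        env.execute("cluster-a-job");
    }
}

Deploying one such job per cluster isolates the restart behaviour, at
the cost of not being able to share state between the streams.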
Could you explain where specifically your situation does not fit one of
those two scenarios?

Best,
Alexander Fedulov


On Wed, Jun 1, 2022 at 10:57 PM Jing Ge <j...@ververica.com> wrote:

> Hi Bariša,
>
> Could you share the reason why your data processing pipeline should keep
> running when one Kafka source is down?
> That would imply that every one of the Kafka sources is optional for the
> data processing logic, since any of them could be the one that goes down.
>
> Best regards,
> Jing
>
> On Wed, Jun 1, 2022 at 5:59 PM Xuyang <xyzhong...@163.com> wrote:
>
>> I think you can try to use a custom source to do that, so that even when
>> one of the Kafka sources is down, the operator keeps running (it just does
>> nothing). The only trouble is that you then need to manage checkpointing
>> and a few other things yourself. The good news is that you can copy the
>> implementation of the existing Kafka source and change a little code.
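>>
>> A rough, untested sketch of that idea, using the legacy SourceFunction
>> API (the consumer Properties, including the deserializer settings, are
>> assumed to be supplied by the caller; checkpointing of offsets is
>> deliberately left out):
>>
>> import java.time.Duration;
>> import java.util.Collections;
>> import java.util.Properties;
>> import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
>> import org.apache.kafka.clients.consumer.ConsumerRecord;
>> import org.apache.kafka.clients.consumer.KafkaConsumer;
>>
>> // Keeps running while its Kafka cluster is unreachable, instead of
>> // failing the job. Offsets are NOT checkpointed here; that part you
>> // would have to add yourself (e.g. via CheckpointedFunction).
>> public class TolerantKafkaSource extends RichSourceFunction<String> {
>>     private final Properties props;
>>     private final String topic;
>>     private volatile boolean running = true;
>>
>>     public TolerantKafkaSource(Properties props, String topic) {
>>         this.props = props;
>>         this.topic = topic;
>>     }
>>
>>     @Override
>>     public void run(SourceContext<String> ctx) throws Exception {
>>         while (running) {
>>             try (KafkaConsumer<String, String> consumer =
>>                     new KafkaConsumer<>(props)) {
>>                 consumer.subscribe(Collections.singletonList(topic));
>>                 while (running) {
>>                     for (ConsumerRecord<String, String> rec :
>>                             consumer.poll(Duration.ofSeconds(1))) {
>>                         synchronized (ctx.getCheckpointLock()) {
>>                             ctx.collect(rec.value());
>>                         }
>>                     }
>>                 }
>>             } catch (Exception e) {
>>                 // Cluster is down: back off and retry instead of
>>                 // propagating the failure and restarting the job.
>>                 Thread.sleep(5_000);
>>             }
>>         }
>>     }
>>
>>     @Override
>>     public void cancel() {
>>         running = false;
>>     }
>> }
>>
>> Note that without offset checkpointing, a restart will re-read or skip
>> data depending on the consumer's auto-commit settings, so you lose
>> Flink's exactly-once guarantees.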
>>
>> At 2022-06-01 22:38:39, "Bariša Obradović" <bbaj...@gmail.com> wrote:
>>
>> Hi,
>> we are running a Flink job with multiple Kafka sources connected to
>> different Kafka servers.
>>
>> The problem we are facing is that when one of the Kafka clusters is down,
>> the Flink job starts restarting.
>> Is there any way for Flink to pause processing of the Kafka source that is
>> down, and yet continue processing from the other sources?
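>>
>> Roughly, the job is shaped like the following (hypothetical names and
>> addresses, simplified to a print sink):
>>
>> import org.apache.flink.api.common.eventtime.WatermarkStrategy;
>> import org.apache.flink.api.common.serialization.SimpleStringSchema;
>> import org.apache.flink.connector.kafka.source.KafkaSource;
>> import org.apache.flink.streaming.api.datastream.DataStream;
>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>
>> public class MultiClusterJob {
>>     public static void main(String[] args) throws Exception {
>>         StreamExecutionEnvironment env =
>>                 StreamExecutionEnvironment.getExecutionEnvironment();
>>
>>         DataStream<String> a = env.fromSource(
>>                 source("kafka-a:9092", "topic-a"),
>>                 WatermarkStrategy.noWatermarks(), "cluster-a");
>>         DataStream<String> b = env.fromSource(
>>                 source("kafka-b:9092", "topic-b"),
>>                 WatermarkStrategy.noWatermarks(), "cluster-b");
>>
>>         // A failure in either source (e.g. its cluster being down)
>>         // currently restarts this entire job graph.
>>         a.union(b).print();
>>
>>         env.execute("multi-cluster-job");
>>     }
>>
>>     private static KafkaSource<String> source(String servers, String topic) {
>>         return KafkaSource.<String>builder()
>>                 .setBootstrapServers(servers)
>>                 .setTopics(topic)
>>                 .setGroupId("demo")
>>                 .setValueOnlyDeserializer(new SimpleStringSchema())
>>                 .build();
>>     }
>> }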
>>
>> Cheers,
>> Barisa
>>
>>
