Hello Hang/Lee,
Thanks!
In my use case we consume from multiple topics. Occasionally one topic becomes inactive, for example when its producer shuts it down, while the other topics keep receiving data. What we observe is that when one topic becomes inactive, the entire Flink application fails with a timeout while fetching metadata. We would like the Flink job to continue consuming from the other source topics even if one topic has an issue; failing the whole application because of a single problematic topic doesn't make sense.



Regards,
Madan 

On Nov 5, 2023, at 11:29 PM, Hang Ruan <ruanhang1...@gmail.com> wrote:


Hi, Madan.

This error suggests there is a problem when the consumer tries to read the topic metadata. If you use the same source for all of these topics, the Kafka connector cannot skip just one of them. As you say, you would need to modify the connector's default behavior.
Maybe you should read the code in KafkaSourceEnumerator to see how to skip this error.
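To illustrate the idea of skipping a failing topic during partition discovery (rather than letting the whole job fail), here is a minimal, self-contained Java sketch. It does not use the real Flink or Kafka APIs; `resolvePartitions` and the fetcher function are hypothetical stand-ins for the per-topic metadata lookup that KafkaSourceEnumerator performs, which can throw when a topic is inactive.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class SkipFailedTopics {

    // Hypothetical stand-in for a per-topic metadata lookup;
    // in the real connector this call can throw for an inactive topic.
    static List<String> resolvePartitions(
            String topic, Function<String, List<String>> fetcher) {
        return fetcher.apply(topic);
    }

    // Collect partitions for all topics, logging and skipping any topic
    // whose metadata fetch fails instead of failing the whole job.
    static List<String> discoverAll(
            List<String> topics, Function<String, List<String>> fetcher) {
        List<String> partitions = new ArrayList<>();
        for (String topic : topics) {
            try {
                partitions.addAll(resolvePartitions(topic, fetcher));
            } catch (RuntimeException e) {
                System.err.println(
                        "Skipping topic " + topic + ": " + e.getMessage());
            }
        }
        return partitions;
    }

    public static void main(String[] args) {
        // Simulated fetcher: one topic always fails its metadata lookup.
        Function<String, List<String>> fetcher = t -> {
            if (t.equals("inactive-topic")) {
                throw new RuntimeException("Failed to get metadata for topics");
            }
            return List.of(t + "-0", t + "-1");
        };
        List<String> result = discoverAll(
                List.of("orders", "inactive-topic", "payments"), fetcher);
        System.out.println(result);
        // prints [orders-0, orders-1, payments-0, payments-1]
    }
}
```

In a forked connector, the analogous change would be to catch the metadata exception inside the enumerator's discovery loop per topic, rather than letting it propagate and fail the source.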

Best,
Hang

Junrui Lee <jrlee....@gmail.com> 于2023年11月6日周一 14:30写道:
Hi Madan,

Do you mean you want to restart only the failed tasks, rather than restarting the entire pipeline region? As far as I know, currently Flink does not support task-level restart, but requires restarting the pipeline region.

Best,
Junrui


Madan D via user <user@flink.apache.org> 于2023年10月11日周三 12:37写道:
Hello Team,
We are running a Flink pipeline that consumes data from multiple topics, but we recently found that if one topic has issues (e.g., with its partitions), the whole Flink pipeline fails, which affects the other topics. Is there a way to keep the Flink pipeline running even after one of the topics has an issue? We tried to handle the exception so the job wouldn't fail, but it didn't help.

Caused by: java.lang.RuntimeException: Failed to get metadata for topics 
 
Can you please provide any insights?


Regards,
Madan
