Hi, Matyas. Thanks for driving this.
I have a few questions that I hope you can help clarify:

1. The dynamic Kafka Sink currently looks like it can only be used in
DataStream jobs. Will support for the Table API be considered in the
future? (See sketch (a) after these questions for how I understand the
current usage.)
2. Even with the Kafka Metadata Service, its lag means we can still run
into situations where the target cluster is already unavailable when
SinkWriter.write() is called. A failover therefore seems inevitable, but
the job should recover and run normally after it. Is my understanding
correct? (Sketch (b) below illustrates the failure mode I mean.)
3. Since we only support at-least-once, we do not need to persist
transaction information, so I still have doubts about the necessity of
DynamicKafkaWriteState. When a job fails, can we rebuild the streamDataMap
directly from the Kafka Metadata Service? Or is the state needed because
the Kafka Metadata Service may not have finished initializing, or because
we cannot directly use the latest information obtained from it? (Sketch
(c) below shows what I am assuming the state captures.)
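
To make (1) concrete, here is a minimal sketch (a) of how I understand the
current DataStream-only usage. The DynamicKafkaSink builder below is
hypothetical, written by analogy with the existing DynamicKafkaSource
builder, so please treat the class and method names as my assumptions
rather than the FLIP's actual API:

  StreamExecutionEnvironment env =
      StreamExecutionEnvironment.getExecutionEnvironment();
  DataStream<String> records = env.fromData("a", "b", "c");

  // Hypothetical builder, mirroring DynamicKafkaSource.builder()
  DynamicKafkaSink<String> sink =
      DynamicKafkaSink.<String>builder()
          .setStreamIds(Collections.singleton("orders")) // logical stream ids
          .setKafkaMetadataService(myMetadataService)    // KafkaMetadataService impl
          .setRecordSerializer(mySerializer)             // placeholder serializer
          .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
          .build();

  // sinkTo(...) only exists on DataStream, hence my Table API question
  records.sinkTo(sink);
  env.execute();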
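
For (2), sketch (b) shows the failure mode I mean, assuming the dynamic
writer delegates to one KafkaWriter per cluster (the surrounding class and
the clusterWriters field are illustrative only):

  // Inside a hypothetical DynamicKafkaSinkWriter<InputT>
  @Override
  public void write(InputT element, Context context)
      throws IOException, InterruptedException {
    for (KafkaWriter<InputT> writer : clusterWriters.values()) {
      // If the metadata service lagged and this cluster is already
      // unavailable, the underlying producer send fails here; the
      // exception bubbles up, the task fails, and Flink restarts the
      // job. After the restart the writers are rebuilt from fresh
      // metadata, so the job runs normally again.
      writer.write(element, context);
    }
  }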
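
And for (3), sketch (c) shows what I am assuming DynamicKafkaWriteState
captures, next to the alternative of rebuilding the same mapping from the
metadata service on recovery. The streamDataMap field and the
describeStreams() call follow my reading of the source-side code, so they
are assumptions as well:

  // Checkpointing: persist the logical stream -> cluster/topic mapping
  @Override
  public List<DynamicKafkaWriteState> snapshotState(long checkpointId) {
    return Collections.singletonList(new DynamicKafkaWriteState(streamDataMap));
  }

  // The alternative I am asking about: skip the writer state entirely and
  // rebuild the mapping from the metadata service after failover. Is this
  // unsafe because the service may not have finished initializing, or
  // because its latest view may differ from what the failed job was using?
  Map<String, KafkaStream> rebuilt =
      kafkaMetadataService.describeStreams(streamIds);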

On Fri, Mar 14, 2025 at 05:04 Őrhidi Mátyás <matyas.orh...@gmail.com> wrote:

> Hi devs,
>
> I'd like to start a discussion on FLIP-515: Dynamic Kafka Sink [1]. This is
> an addition to the existing Dynamic Kafka Source [2] to make the
> functionality complete.
>
> Feel free to share your thoughts and suggestions to make this feature
> better.
>
> + Mason Chen
>
> Thanks,
> Matyas
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-515%3A+Dynamic+Kafka+Sink
>
> [2]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=217389320
>
