Hi,

From my tests, the Kafka sink in exactly-once mode running on the batch runtime never
commits the transaction, so the exactly-once semantic is not honoured. This is
likely by design, since records are acked/committed during a checkpoint,
which never happens in batch mode. Am I missing something, or should the
documentation warn users about this?
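
For reference, this is roughly what I am testing: a minimal sketch where the
broker address, topic name and transactional id prefix are placeholders, not
values from a real setup.

    import org.apache.flink.api.common.RuntimeExecutionMode;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.base.DeliveryGuarantee;
    import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
    import org.apache.flink.connector.kafka.sink.KafkaSink;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KafkaExactlyOnceBatchTest {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // Batch runtime: checkpointing is disabled, so in my tests the
            // EXACTLY_ONCE transaction opened by the sink is never committed.
            env.setRuntimeMode(RuntimeExecutionMode.BATCH);

            KafkaSink<String> sink = KafkaSink.<String>builder()
                    .setBootstrapServers("localhost:9092")       // placeholder broker
                    .setRecordSerializer(
                            KafkaRecordSerializationSchema.builder()
                                    .setTopic("output-topic")    // placeholder topic
                                    .setValueSerializationSchema(new SimpleStringSchema())
                                    .build())
                    .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                    .setTransactionalIdPrefix("batch-eo-test")   // required for EXACTLY_ONCE
                    .build();

            // Bounded source, so the job runs as a batch job.
            env.fromElements("a", "b", "c")
               .sinkTo(sink);

            env.execute("kafka-exactly-once-batch-test");
        }
    }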

Resources:

from 
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/connectors/datastream/kafka/#boundedness
> Kafka source is designed to support both streaming and batch running mode

from 
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/kafka/#fault-tolerance
> AT_LEAST_ONCE: The sink will wait for all outstanding records in the Kafka 
> buffers to be acknowledged by the Kafka producer on a checkpoint
> EXACTLY_ONCE: In this mode, the KafkaSink will write all messages in a Kafka 
> transaction that will be committed to Kafka on a checkpoint.

from
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/datastream/execution_mode/#important-considerations
> Unsupported in BATCH mode: Checkpointing and any operations that depend on 
> checkpointing do not work.

