[ https://issues.apache.org/jira/browse/FLINK-17355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091005#comment-17091005 ]

Teng Fei Liao commented on FLINK-17355:
---------------------------------------

Just wanted to flag that the existing Kafka defaults aren't ideal here. As a 
workaround, I've bumped "max.block.ms", which bounds how long 
KafkaProducer#initTransactions waits before throwing.
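
A rough sketch of what the bump looks like in the producer properties handed to 
the Kafka sink (the broker list and the 5-minute value are only illustrative, 
not recommendations):

    import java.util.Properties;

    Properties producerProps = new Properties();
    producerProps.setProperty("bootstrap.servers", "broker-1:9092,broker-2:9092"); // placeholder broker list
    // KafkaProducer#initTransactions blocks for at most max.block.ms (default 60000 ms)
    // before throwing a TimeoutException, so raising it gives the cluster time to
    // move the transaction coordinator to another broker when a node goes down.
    producerProps.setProperty("max.block.ms", "300000"); // illustrative 5-minute value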

> Exactly once kafka checkpointing sensitive to single node failures
> ------------------------------------------------------------------
>
>                 Key: FLINK-17355
>                 URL: https://issues.apache.org/jira/browse/FLINK-17355
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.10.0
>            Reporter: Teng Fei Liao
>            Priority: Major
>
> With exactly-once semantics, when checkpointing, FlinkKafkaProducer creates a 
> new KafkaProducer for each checkpoint. KafkaProducer#initTransactions can 
> time out if a Kafka node becomes unavailable, even with multiple brokers and 
> in-sync replicas (see 
> [https://stackoverflow.com/questions/55955379/enabling-exactly-once-causes-streams-shutdown-due-to-timeout-while-initializing]).
> In non-Flink cases this might be fine, since I imagine a KafkaProducer is not 
> created very often. With Flink, however, this happens once per checkpoint, 
> which means that in practice an HA Kafka cluster isn't actually HA. This makes 
> rolling a Kafka node particularly painful, even in intentional cases such as 
> config changes or upgrades.
>  
> In our specific setup, these are our settings:
> 5 Kafka nodes
> Per topic: replication factor = 3, in-sync replicas = 2, partitions = 3
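
For illustration, a minimal sketch of the kind of exactly-once sink setup the 
description refers to (broker list, topic, serialization schema, and the 
max.block.ms value are placeholders, not the reporter's actual configuration); 
the per-checkpoint transactional producer is where initTransactions() can block:

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;

    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
    import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties producerProps = new Properties();
    producerProps.setProperty("bootstrap.servers", "broker-1:9092,broker-2:9092"); // placeholder brokers
    producerProps.setProperty("max.block.ms", "300000"); // workaround from the comment above (illustrative value)

    // With Semantic.EXACTLY_ONCE the sink opens a new transactional KafkaProducer
    // per checkpoint, and that producer's initTransactions() call is what can block
    // until max.block.ms expires when a broker is unavailable.
    FlinkKafkaProducer<String> sink = new FlinkKafkaProducer<>(
        "output-topic", // placeholder topic
        (KafkaSerializationSchema<String>) (element, timestamp) ->
            new ProducerRecord<>("output-topic", element.getBytes(StandardCharsets.UTF_8)),
        producerProps,
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE);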


