[ https://issues.apache.org/jira/browse/FLINK-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225064#comment-16225064 ]
ASF GitHub Bot commented on FLINK-7784:
---------------------------------------

Github user aljoscha commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4910#discussion_r147654876

    --- Diff: flink-connectors/flink-connector-kafka-0.11/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011Tests.java ---
    @@ -83,49 +79,6 @@ public void before() {
         extraProperties.put("isolation.level", "read_committed");
       }

    -  @Test(timeout = 30000L)
    --- End diff --

    Why are these removed? Did they never actually test anything?


> Don't fail TwoPhaseCommitSinkFunction when failing to commit
> ------------------------------------------------------------
>
>                 Key: FLINK-7784
>                 URL: https://issues.apache.org/jira/browse/FLINK-7784
>             Project: Flink
>          Issue Type: Bug
>          Components: DataStream API
>    Affects Versions: 1.4.0
>            Reporter: Aljoscha Krettek
>            Assignee: Gary Yao
>            Priority: Blocker
>             Fix For: 1.4.0
>
>
> Currently, {{TwoPhaseCommitSinkFunction}} will fail if committing fails (either when doing it via the completed-checkpoint notification or when trying to commit after restoring from a failure). This means that the job will go into an infinite recovery loop because we will always keep failing.
>
> In some cases it might be better to ignore those failures and keep on processing, and this should be the default. We can provide an option that allows failing the sink on failing commits.
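To make the proposed behaviour concrete, below is a minimal sketch of the commit path described in the issue. It is not Flink's actual {{TwoPhaseCommitSinkFunction}} implementation; the class {{TolerantCommitter}}, the flag {{failOnCommitErrors}}, and the {{CommitFunction}} interface are hypothetical names used only to illustrate "log and skip failed commits by default, fail the sink only when explicitly requested".

{code:java}
import java.util.Iterator;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Hypothetical sketch (not Flink's actual implementation) of the commit path
 * proposed in FLINK-7784: by default a failed commit is logged and skipped so
 * the job keeps running; an optional flag restores the old fail-fast behaviour.
 */
class TolerantCommitter<TXN> {

    private static final Logger LOG = LoggerFactory.getLogger(TolerantCommitter.class);

    /** Hypothetical option; false means "do not fail the sink on commit errors". */
    private final boolean failOnCommitErrors;

    TolerantCommitter(boolean failOnCommitErrors) {
        this.failOnCommitErrors = failOnCommitErrors;
    }

    /** Called on the completed-checkpoint notification or after restoring from a failure. */
    void commitPending(List<TXN> pendingTransactions, CommitFunction<TXN> commitFunction) {
        Iterator<TXN> it = pendingTransactions.iterator();
        while (it.hasNext()) {
            TXN txn = it.next();
            try {
                commitFunction.commit(txn);
                it.remove();
            } catch (Exception e) {
                if (failOnCommitErrors) {
                    // Old behaviour: propagate, which fails the job and triggers recovery.
                    throw new RuntimeException("Could not commit transaction " + txn, e);
                }
                // Proposed default: log and drop the transaction, avoiding the
                // infinite recovery loop described in the issue.
                LOG.error("Could not commit transaction {}, skipping it.", txn, e);
                it.remove();
            }
        }
    }

    /** Hypothetical callback that performs the actual (e.g. Kafka) commit. */
    interface CommitFunction<TXN> {
        void commit(TXN transaction) throws Exception;
    }
}
{code}

With {{failOnCommitErrors}} left at {{false}}, a commit failure is logged and the transaction is dropped so processing continues; setting it to {{true}} restores the current fail-fast behaviour for users who prefer the job to fail on unsuccessful commits.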