> Caused by: java.lang.IllegalStateException: Writable backchannel already exists.
>         at org.apache.flink.connector.kafka.sink.internal.BackchannelImpl.createWritableBackchannel(BackchannelImpl.java:96) ~[flink-sql-connector-kafka-4.0.0-2.0.jar:4.0.0-2.0]
>         at org.apache.flink.connector.kafka.sink.internal.BackchannelFactory.getBackchannel(BackchannelFactory.java:110) ~[flink-sql-connector-kafka-4.0.0-2.0.jar:4.0.0-2.0]
>         ... 18 more
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi Fred,
ah yes, I think I understand the issue. The KafkaSink always creates a
KafkaCommitter even if you are not using EXACTLY_ONCE. It's an unfortunate
limitation of our Sink design.
When I implemented the chan
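For readers following along, the failure mode in the quoted stack trace can be illustrated with a small, self-contained sketch of a registry pattern. This is an assumption-laden simplification inferred from the exception message, not the actual Flink `BackchannelFactory` code: the idea is that at most one writable backchannel may exist per (subtask id, transactionalIdPrefix) key, so two sinks in the same job sharing a prefix collide.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified sketch of the registry behavior implied by
// "Writable backchannel already exists." -- NOT the real Flink implementation.
public class BackchannelRegistrySketch {
    // One writable backchannel per key; the key shape is an assumption.
    private static final Map<String, Object> WRITABLE = new ConcurrentHashMap<>();

    public static Object createWritableBackchannel(int subtaskId, String transactionalIdPrefix) {
        String key = subtaskId + "/" + transactionalIdPrefix;
        Object channel = new Object();
        // putIfAbsent returns the previous value if the key was taken:
        // a second registration under the same key is rejected.
        if (WRITABLE.putIfAbsent(key, channel) != null) {
            throw new IllegalStateException("Writable backchannel already exists.");
        }
        return channel;
    }

    public static void main(String[] args) {
        createWritableBackchannel(0, "my-prefix");       // first sink: fine
        try {
            createWritableBackchannel(0, "my-prefix");   // second sink, same prefix: collides
        } catch (IllegalStateException e) {
            System.out.println("Collision: " + e.getMessage());
        }
        createWritableBackchannel(0, "other-prefix");    // distinct prefix: fine
    }
}
```

Under this (assumed) model, the error is deterministic whenever two sink instances in one job end up with the same effective prefix, regardless of the delivery guarantee.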
Cc: ar...@apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi Fred,
I see. It looks like this check was added in
https://issues.apache.org/jira/browse/FLINK-37282
From: Teunissen, F.G.J. (Fred)
Date: Monday, 19 May 2025 at 17:33
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0
Kafka Sinks
Hi David,
Depending on the Flink version, we use a different Kafka connector:
* flink:2.0.0 -> flink-connector-kafka:4.
(at-least-once), so according
to the docs, the transactionalIdPrefix should not be required.
kind regards,
Fred
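If the root cause is indeed two sinks sharing one transactionalIdPrefix, one plausible workaround (my assumption, not something confirmed in this thread) is to give every sink instance in the job a distinct prefix, even under AT_LEAST_ONCE. A sketch using the public KafkaSinkBuilder API; the broker address, topic, and prefix are placeholders:

```java
// Sketch only: assumes flink-connector-kafka is on the classpath.
KafkaSink<String> sink =
    KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")                  // placeholder
        .setRecordSerializer(
            KafkaRecordSerializationSchema.builder()
                .setTopic("output-topic")                    // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        // Unique per sink in the job, to avoid the backchannel clash.
        .setTransactionalIdPrefix("sink-a")                  // placeholder
        .build();
```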
From: David Radley
Date: Monday, 19 May 2025 at 17:57
To: dev@flink.apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi,
I had a quick look at this. What version of the Flink Kafka connector are you
running?
I looked through recent commits in the Kafka connector and see
https://github.com/apache/flink-connector-kafka/commit/7c112abe8bf78e0cd8a310aaa65b57f6a70ad30a
for PR https://github.com/apache/flink-connecto