> *To: *Bünzli Dominik, INI-DNA-INF ,
> *Cc: *matthias.schwa...@viseca.ch ,
> user@flink.apache.org
> *Subject: *Re: Kafka connector exception restarting Flink 1.19 pipeline
>
> *Be aware:* This is an external email.
>
> Hi Dominic,
>
> The issue has nothing to do with DynamicKafkaSource.
>
> What happens here is clear:
>
> * At some point in time you've used
> … Thanks for mentioning!
>
> Best,
>
> *Dominik Bünzli*
> Data, Analytics & AI Engineer III
> *From: *Schwalbe Matthias
> *Date: *Tuesday, 3 September 2024 at 11:07
> *To: *Bünzli Dominik, INI-DNA-INF ,
> user@flink.apache.org
> *Subject: *RE: Kafka connector exception restarting Flink 1.19 pipeline
>
> … really hard to tell, your original error message …
To: Schwalbe Matthias ; user@flink.apache.org
Subject: [External] Re: Kafka connector exception restarting Flink 1.19 pipeline
⚠EXTERNAL MESSAGE – CAUTION: Think Before You Click ⚠
Hi Matthias,
Thank you for your reply!
There should not be a dependency for 3.0.x in my docker image; I only add 3.2.0.
Dominik Bünzli
Data, Analytics & AI Engineer III
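When a stray 3.0.x connector jar ends up in an image next to 3.2.0-1.19, the quickest way to confirm or rule that out is to list the Kafka connector jars Flink can see. A minimal sketch, assuming the stock image layout where jars live under /opt/flink/lib and /opt/flink/usrlib (these paths are an assumption; adjust them to your own image):

```shell
#!/bin/sh
# List every Kafka connector jar visible to Flink, to spot duplicate
# versions (e.g. a stray 3.0.x jar sitting next to 3.2.0-1.19).
# list_kafka_jars takes the directory to scan as its only argument.
list_kafka_jars() {
    find "$1" -name 'flink-*kafka*.jar' 2>/dev/null | sort
}

# /opt/flink/lib and /opt/flink/usrlib are the usual locations in the
# official Flink images; change these for a custom image layout.
for dir in /opt/flink/lib /opt/flink/usrlib; do
    list_kafka_jars "$dir"
done
```

If the listing shows more than one flink-connector-kafka version, the older jar is the one to hunt down in the Dockerfile or the fat-jar build.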
From: Schwalbe Matthias
Date: Tuesday, 3 September 2024 at 09:59
To: Bünzli Dominik, INI-DNA-INF ,
user@flink.apache.org
Subject: RE: Kafka connector exception restarting Flink 1.19 pipeline
Hi Dominik,
No clue why this …
From: ...@swisscom.com
Sent: Monday, September 2, 2024 1:35 PM
To: user@flink.apache.org
Subject: [External] Kafka connector exception restarting Flink 1.19 pipeline
Dear Flink community
We recently migrated our pipelines from Flink 1.17 to 1.19.0 (and subsequently
to 1.19.1). We are sourcing events from Kafka and writing enriched events back to
Kafka. I’m currently using the flink-connector-kafka (3.2.0-1.19). When
initially deploying (via k8s operator), the
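As a reference point for the version discussed in this thread: the externalized Kafka connector is pinned in the application's own build, not inherited from the base image. A Maven sketch (the org.apache.flink:flink-connector-kafka:3.2.0-1.19 coordinates match the version named above; the comment and placement are illustrative):

```xml
<!-- Pin the externalized Kafka connector explicitly so no transitive
     3.0.x version can leak onto the classpath. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.2.0-1.19</version>
</dependency>
```

Running `mvn dependency:tree -Dincludes=org.apache.flink:flink-connector-kafka` then shows whether any other module still drags in an older connector version.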