Between the two versions this [1] change came in, which might affect the
deserialization logic.
There is an obvious bug in it, namely TopicPartitionAndAssignmentStatus has
no equals and hashCode methods, which breaks the set contract (in practice
the same topic/partition can be put into the set twice), but
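To illustrate the set-contract point, here is a minimal sketch. The class
below is a simplified stand-in for TopicPartitionAndAssignmentStatus with
illustrative fields, not the actual connector code:

    import java.util.HashSet;
    import java.util.Objects;
    import java.util.Set;

    // Simplified stand-in; field names are illustrative only.
    final class PartitionAndStatus {
        final String topic;
        final int partition;
        final String status;

        PartitionAndStatus(String topic, int partition, String status) {
            this.topic = topic;
            this.partition = partition;
            this.status = status;
        }

        // Without these overrides, HashSet falls back to Object identity,
        // so two logically equal instances are both stored.
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof PartitionAndStatus)) return false;
            PartitionAndStatus that = (PartitionAndStatus) o;
            return partition == that.partition && topic.equals(that.topic);
        }

        @Override
        public int hashCode() {
            return Objects.hash(topic, partition);
        }
    }

    public class SetContractDemo {
        public static void main(String[] args) {
            Set<PartitionAndStatus> set = new HashSet<>();
            set.add(new PartitionAndStatus("orders", 0, "ASSIGNED"));
            set.add(new PartitionAndStatus("orders", 0, "ASSIGNED"));
            // With the overrides above this prints 1; without them it
            // prints 2, i.e. the same topic/partition is in the set twice.
            System.out.println(set.size());
        }
    }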
Hi Gabor,
Please find the info below.

Flink version 1.18:
org.apache.flink : flink-connector-kafka : 3.2.0-1.18

Flink version 1.20.1:
org.apache.flink : flink-connector-kafka : 3.3.0-1.20
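In pom.xml form, the 1.20 coordinate would be declared like this (a sketch,
assuming Maven is used):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka</artifactId>
        <version>3.3.0-1.20</version>
    </dependency>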
Thank you
Jasvendra
On Tue, Apr 1, 2025 at 7:45 PM Gabor Somogyi wrote:
What version of the Kafka connector are you using for 1.18 and what for 1.20?
BR,
G
On Tue, Apr 1, 2025 at 4:02 PM Gabor Somogyi wrote:
I would suggest allowNonRestoredState=true only if data loss or replay is
acceptable since it will drop the Kafka part of the state.
I see some changes in KafkaSourceEnumStateSerializer, but that said, it would
be good to have a repro app.
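For reference, a restore with that flag would look roughly like this (the
savepoint path and job jar are placeholders):

    # Resubmit from the savepoint, dropping any state that no longer maps
    # to an operator (this is what allowNonRestoredState=true means):
    ./bin/flink run -s /path/to/savepoint-xxxx --allowNonRestoredState my-job.jar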
BR,
G
On Tue, Apr 1, 2025 at 3:59 PM jasvendra kumar wrote:
Dear Yang,
Thank you for your response and suggestion.
I have already tried using allowNonRestoredState=true, but the issue still
persists. Changing the operator ID of the Kafka source is something I
haven’t tested yet. I will attempt this and see if it resolves the
partition assignment issue.
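For reference, a sketch of what changing the operator ID would look like,
assuming a standard KafkaSource pipeline (broker, topic, group and the uid
value are all placeholders):

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KafkaUidExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("broker:9092")   // placeholder
                    .setTopics("input-topic")             // placeholder
                    .setGroupId("my-group")               // placeholder
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            // A fresh uid() makes Flink treat the source as a brand new
            // operator on restore, so the old Kafka source state is not
            // mapped to it (restoring then needs allowNonRestoredState).
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
               .uid("kafka-source-v2")   // new ID; old source state is dropped
               .print();

            env.execute("kafka-uid-example");
        }
    }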
OK, so it seems we have all the libraries in place. Below is the schema
registered on my Kafka topic, followed by the payload being posted and then
the Debezium CDC error with the "after" tag problem.
Avro Schema:
{
"name": "factory_iot_value",
"doc": "Factory Telemetry/IoT measuremen
Hi Gabor,
Thank you for your response. I appreciate the clarification that savepoints
from Flink 1.18.0 should be compatible with Flink 1.20.1 and that there is
no known issue related to this.
I will put together a minimal reproducible example and provide step-by-step
details or a public repository where the issue can be reproduced.
Hi Jasvendra,
In short, a 1.18 savepoint should be compatible with 1.20.
We don't know of any such existing issue. Can you please come up with a bare
minimal step-by-step guide or a public repo where one can repro it easily?
BR,
G
On Tue, Apr 1, 2025 at 2:37 PM jasvendra kumar wrote:
Dear Flink Community,
I am currently in the process of upgrading our Flink cluster from version
1.18.0 to 1.20.1. The cluster itself is functioning correctly post-upgrade,
and I am able to deploy Flink jobs successfully. However, I have encountered
an issue when attempting to restore a job using a savepoint taken on 1.18.0.
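For context, the savepoint-based upgrade path looks roughly like this (paths
and job id are placeholders):

    # On the 1.18.0 cluster: stop the job and take a savepoint
    ./bin/flink stop --savepointPath /tmp/flink-savepoints <jobId>

    # On the upgraded 1.20.1 cluster: resubmit from that savepoint
    ./bin/flink run -s /tmp/flink-savepoints/savepoint-xxxx my-job.jar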
Yes
Hi,
Thanks for the response.
If you are interested in seeing the attendee list, I will send it to you.
Are you interested?
I’m waiting for your response.
Thank you.
From: Yuan Mei
Sent: Tuesday, March 25, 2025 8:10 AM
To: Leonard Xu
Cc: user...@flink.apache.org; dev; user; annou...@apac