Hello 👋

We're trying out Flink 1.20 with version 3.3.0 of the Kafka upsert
connector using SQL, and we're hitting some problems.

Until now we've been using connector version 3.1.0 with Flink 1.18, and it
has been working correctly for us.

After upgrading one of our jobs, without changing the SQL code, we noticed
that the upsert Kafka sink sometimes emits messages out of order.

When a row is updated, the sink used to write a tombstone to the output
topic followed by the updated row.
Now it sometimes writes the two messages in the wrong order (within the
same partition, the tombstone's offset is greater than the update
message's offset), so the tombstone deletes the row downstream.

The job is executed with a parallelism of 1 both before and after upgrading.

I've attached the SQL job definition.

To debug this, we ran the job's SELECT statement (without the `insert
into` part) in the SQL client in changelog mode, and indeed there were no
delete operations in the changelog.
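Concretely, this is how we inspected the changelog in the SQL client (the query below is a placeholder; the real one is in the attached file):

```sql
-- In the Flink SQL client, switch the result display to changelog mode:
SET 'sql-client.execution.result-mode' = 'changelog';

-- Then run the job's SELECT on its own (placeholder for the actual query):
SELECT ...;
-- The changelog output contained no -D or -U rows.
```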

Is this a bug with the new version of Flink/the Kafka connector?
Are we doing something wrong?

Any help is greatly appreciated 🙏

Attachment: bandi.sql
