Dear Flink Community,

I am currently upgrading our Flink cluster from version 1.18.0 to 1.20.1. The cluster itself is functioning correctly post-upgrade, and I am able to deploy Flink jobs successfully. However, I have encountered an issue when restoring a job from a savepoint (or retained state) taken on Flink 1.18.0.
Issue Description

   - When deploying the Flink job to the Flink 1.20.1 cluster using a
     savepoint from Flink 1.18.0, the job is assigned only one Kafka
     partition (partition 0). As a result, messages from the other
     partitions are not consumed.

   - However, if I deploy the same job without a savepoint, the job is
     correctly assigned all three partitions (0, 1, 2) and consumes
     messages as expected.

I have researched this issue extensively but have not found a clear
explanation. I would appreciate any guidance on the following questions:

   1. Is this issue related to savepoint compatibility between Flink
      1.18.0 and Flink 1.20.1?

   2. Is this behavior a known bug or an expected outcome?

   3. If this is a bug, what are the recommended steps to resolve it?

      - Are there any configuration changes required to restore the
        partition assignment properly?

      - Would fixing this require modifications in the application code?
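For context, the source is built with the KafkaSource builder roughly as in the simplified sketch below. The broker address, topic, and group id here are placeholders rather than our actual values, and the explicit partition.discovery.interval.ms property is shown only because I suspect it may be relevant to question 3:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")       // placeholder broker
                .setTopics("input-topic")                // placeholder topic (3 partitions)
                .setGroupId("my-consumer-group")         // placeholder group id
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // periodic partition discovery, per the Kafka connector docs
                .setProperty("partition.discovery.interval.ms", "60000")
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
                .print();

        env.execute("Kafka job sketch");
    }
}
```

This is only a sketch to illustrate the shape of the setup; it requires a Flink cluster and the flink-connector-kafka dependency to run.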

Your insights and assistance on this matter would be highly appreciated.

Thanks & Regards
Jasvendra Kumar
