AHeise commented on PR #111:
URL: https://github.com/apache/flink-connector-kafka/pull/111#issuecomment-2309529736

   > Thanks for the reply. I totally understand the pain points of maintaining 
compatibility with multiple Flink versions for a connector. Each Flink release 
introduces new experimental interfaces in the API or runtime.
   
   We broke connectors quite a bit in the past with these new APIs or with 
changes to existing APIs (that's also why Hudi has different modules). I think 
we just need to craft our API extensions more carefully, as outlined above: an 
API that is easy to use internally may not be suited for external repos. 
   
   In theory, we could post-process the compiled classes to strip usages of 
1.20 classes for the 1.19 release, but that's quite involved and probably hard 
to understand (too much magic).
   
   For now, I propose the following: we try to get as many other bugfixes in 
as possible, then do a release for 1.19 (I'm going to pick up the PR that you 
linked to get rid of 1.17 and 1.18). Then we merge this PR, add lineage 
support, and create a new minor (or even major) release. If we need to patch 
1.19, we fork from the 1.19 minor and do a patch release.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
