Hello Chesnay,

The overall plan sounds good! Just to double-check, is Dec 9th the proposed cutoff date for the release of those externalized connectors?
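To make sure I understand the migration path: my assumption is that for users the switch is purely a dependency change, since the externalized connectors keep the same Maven coordinates and only adopt the new <connector-version>-<flink-version> scheme. A rough sketch of what I have in mind, using Elasticsearch as an example (the exact version string is from memory, so please correct me if it is off):

    // build.gradle.kts (illustrative only)
    dependencies {
        // Before: connector artifact versioned with the Flink 1.16.0 release
        // implementation("org.apache.flink:flink-connector-elasticsearch7:1.16.0")

        // After: externalized connector, versioned as <connector-version>-<flink-version>
        implementation("org.apache.flink:flink-connector-elasticsearch7:3.0.0-1.16")
    }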
Also, will we reserve time for users to verify that the drop-in replacement from Flink 1.16 to those externalized connectors works as expected before removing their code from the master branch?

Thanks,
Dong

On Thu, Dec 1, 2022 at 11:01 PM Chesnay Schepler <ches...@apache.org> wrote:

> Hello,
>
> let me clarify the title first.
>
> In the original proposal for the connector externalization we said that
> an externalized connector has to exist in parallel with the version
> shipped in the main Flink release for one release cycle.
>
> For example, 1.16.0 shipped with the Elasticsearch connector, but at the
> same time there is the externalized variant as a drop-in replacement,
> and the 1.17.0 release will not include an ES connector.
>
> The rationale was to give users some window to update their projects.
>
> We are now about to externalize a few more connectors (Cassandra,
> Pulsar, JDBC), targeting 1.16 within the next week.
> The 1.16.0 release was only about a month ago, so not much time has
> passed since then.
> I'm now wondering whether we could/should treat these connectors as
> externalized for 1.16, meaning that we would remove them from the master
> branch now, not ship them in 1.17, and move all further development into
> the connector repos.
>
> The main benefit is that we won't have to bother with syncing changes
> across repos all the time.
>
> We would of course need some sort of cutoff date for this (December
> 9th?), to ensure there's still a reasonably large gap left for users
> to migrate.
>
> Let me know what you think.
>
> Regards,
> Chesnay