Hello,

+1

I was thinking the same. With regard to the cutoff date, I would be
inclined to be more aggressive and say the feature freeze for 1.17. Users do
not *need* to migrate for 1.16.

Thanks

On Thu, 1 Dec 2022, 15:01 Chesnay Schepler, <ches...@apache.org> wrote:

> Hello,
>
> let me clarify the title first.
>
> In the original proposal for the connector externalization we said that
> an externalized connector has to exist in parallel with the version
> shipped in the main Flink release for 1 cycle.
>
> For example, 1.16.0 shipped with the elasticsearch connector, but at the
> same time there's the externalized variant as a drop-in replacement, and
> the 1.17.0 release will not include an ES connector.
>
> The rationale was to give users some window to update their projects.
>
>
> We are now about to externalize a few more connectors (cassandra,
> pulsar, jdbc), targeting 1.16 within the next week.
> The 1.16.0 release was only about a month ago, so not a lot of time
> has passed since then.
> I'm now wondering if we could/should treat these connectors as
> externalized for 1.16, meaning that we would remove them from the master
> branch now, not ship them in 1.17, and move all further development into
> the connector repos.
>
> The main benefit is that we won't have to bother with syncing changes
> across repos all the time.
>
> We would of course need some sort of cutoff date for this (December
> 9th?), to ensure there's still a reasonably large gap left for users
> to migrate.
>
> Let me know what you think.
>
> Regards,
> Chesnay
>
>
