TBH I think you're overestimating how much work it is to create a non-Flink release. Having done most of the flink-shaded releases, I really don't see a problem with doing even weekly releases using that process.

We cannot reduce the number of votes AFAIK; the ASF seems very clear on that matter to me: https://www.apache.org/foundation/voting.html#ReleaseVotes
However, the vote duration is up to us.

Additionally, we only /need/ to vote on the /source/. This means we don't need to create the Maven artifacts for each RC, but can do that at the very end.

On 19/10/2021 14:21, Arvid Heise wrote:
Okay, I think it is clear that the majority would like to keep connectors under the Apache Flink umbrella. That means we will not be able to have per-connector repositories and project management, automatic dependency bumping with Dependabot, or semi-automatic releases.

So then I'm assuming the directory structure that @Chesnay Schepler proposed would be the most beneficial (rough sketch below):
- A root project with some convenience setup.
- Unrelated subprojects with individual versioning and releases.
- Branches for minor Flink releases. That is needed anyhow to use new features independent of API stability.
- Each connector maintains its own documentation that is accessible through the main documentation.
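
To make the layout concrete, here is a minimal sketch of the root/subproject shape. It uses Gradle Kotlin DSL purely for illustration (Flink itself builds with Maven, where an aggregator pom with independently versioned modules would play the same role), and all names below are placeholders:

    // settings.gradle.kts of the hypothetical root project
    rootProject.name = "flink-connectors"

    // Unrelated subprojects: each connector is versioned and released on its own.
    include("flink-connector-kafka")
    include("flink-connector-elasticsearch")

    // Each connector's own build.gradle.kts owns its version, e.g.
    //   version = "1.0.0"
    // independent of the root project and of the Flink version it builds against.
    // Branches per supported Flink minor version (e.g. v1.14, v1.15) would then
    // only differ in the Flink dependency they are built against.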

Any thoughts on alternatives? Do you see risks?

@Stephan Ewen mentioned offline that we could adjust the bylaws for the connectors such that we need fewer PMC votes to approve a release. Would it be enough to have one PMC vote per connector release? Do you know of other ways to tweak the release process so that it involves less manual work?

On Mon, Oct 18, 2021 at 10:22 PM Thomas Weise <t...@apache.org> wrote:

    Thanks for initiating this discussion.

    There are definitely a few things that are not optimal with our
    current management of connectors. I would not necessarily characterize
    it as a "mess" though. As the points raised so far show, it isn't easy
    to find a solution that balances competing requirements and leads to a
    net improvement.

    It would be great if we can find a setup that allows for connectors to
    be released independently of core Flink and that each connector can be
    released separately. Flink already has separate releases
    (flink-shaded), so that by itself isn't a new thing. Per-connector
    releases would need to allow for more frequent releases (without the
    baggage that a full Flink release comes with).

    Separate releases would only make sense if the core Flink surface is
    fairly stable though. As evident from Iceberg (and also Beam), that's
    not the case currently. We should probably focus on addressing the
    stability first, before splitting code. A success criterion could be
    that we are able to build Iceberg and Beam against multiple Flink
    versions w/o the need to change code. The goal would be that no
    connector breaks when we make changes to Flink core. Until that's the
    case, code separation creates a setup where 1+1 or N+1 repositories
    need to move in lock step.
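
    To make that criterion concrete: the connector sources would have to
    compile against several Flink versions without modification. A minimal
    sketch, in Gradle Kotlin DSL purely for illustration (Iceberg and Beam
    have their own build setups; the property name and artifact coordinates
    below are placeholders):

        // build.gradle.kts -- hypothetical connector build fragment
        plugins {
            `java-library`
        }

        // The Flink version is a build parameter rather than being baked
        // into the sources.
        val flinkVersion = providers.gradleProperty("flinkVersion").getOrElse("1.14.0")

        dependencies {
            // Ideally only stable (@Public / @PublicEvolving) Flink API is
            // needed at compile time.
            compileOnly("org.apache.flink:flink-streaming-java_2.12:$flinkVersion")
        }

        // CI would build the same sources against a version matrix, e.g.
        //   gradle build -PflinkVersion=1.13.2
        //   gradle build -PflinkVersion=1.14.0
        // and every cell has to pass without touching connector code.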

    Regarding some connectors being more important for Flink than others:
    That's a fact. Flink w/o the Kafka connector (and a few others) isn't
    viable. Testability of Flink was already brought up: can we really
    certify a Flink core release without the Kafka connector? Maybe those
    connectors that are used in Flink e2e tests to validate functionality
    of core Flink should not be broken out?

    Finally, I think that the connectors that move into separate repos
    should remain part of the Apache Flink project. Larger organizations
    tend to approve the use of and contribution to open source at the
    project level. Sometimes it is everything ASF. More often it is
    "Apache Foo". It would be fatal to end up with a patchwork of projects
    with potentially different licenses and governance to arrive at a
    working Flink setup. This may mean we prioritize usability over
    developer convenience, if that's in the best interest of Flink as a
    whole.

    Thanks,
    Thomas



    On Mon, Oct 18, 2021 at 6:59 AM Chesnay Schepler
    <ches...@apache.org> wrote:
    >
    > Generally, the issues are reproducibility and control.
    >
    > Stuff's completely broken on the Flink side for a week? Well then
    > so are the connector repos.
    > (As-is) You can't go back to a previous version of the snapshot.
    > Which also means that checking out older commits can be problematic,
    > because you'd still work against the latest snapshots, and they may
    > not be compatible with each other.
    >
    >
    > On 18/10/2021 15:22, Arvid Heise wrote:
    > > I was actually betting on snapshot versions. What are the limits?
    > > Obviously, we can only do a release of a 1.15 connector after 1.15
    > > is released.
    >
    >
