Hi Claire,

Thanks for reaching out. It's great that there is interest from Google
in spearheading the development of the respective Flink connectors.

As of now, there is only one GCP-specific connector developed directly as
part of ASF Flink, namely the Pub/Sub one. It has already been externalized
here [1]. Grouping further connectors under apache/flink-connectors-gcp
makes sense, but it would be nice to first understand which GCP connectors
you plan to add before we create this new umbrella project.

I do not think establishing a dedicated workgroup to help with the
GCP-specific development is a realistic goal, though. The development will
most probably take place on the usual ASF best-effort basis (mailing list
discussions, reaching out to people for reviews, etc.) until your
developers gain committer status and can work more independently.

One immediate open item where the Flink community would definitely
appreciate your help is the migration of the existing Pub/Sub connector to
the new Source API. As you can see here [2], it is one of the two remaining
connectors where we have not yet made progress, and it seems like a great
place to start the collaboration. Flink 2.0 aims to remove the
SourceFunction API, which the current Pub/Sub connector relies on. It would
be great if your colleagues could assist with this effort [3].

Best,
Alexander Fedulov

[1] https://github.com/apache/flink-connector-gcp-pubsub
[2] https://issues.apache.org/jira/browse/FLINK-28045
[3] https://issues.apache.org/jira/browse/FLINK-32673



On Tue, 13 Feb 2024 at 17:25, Claire McCarthy
<clairemccar...@google.com.invalid> wrote:

> Hi Devs!
>
> I’d like to kick off a discussion on setting up a repo for a new fleet of
> Google Cloud connectors.
>
> A bit of context:
>
>    - We have a team of Google engineers who are looking to build/maintain
>      5-10 GCP connectors for Flink.
>    - We are wondering if it would make sense to host our connectors under
>      the ASF umbrella following a similar repo structure as AWS
>      (https://github.com/apache/flink-connector-aws). In our case:
>      apache/flink-connectors-gcp.
>    - Currently, we have no Flink committers on our team. We are actively
>      involved in the Apache Beam community and have a number of ASF
>      members on the team.
>
> We saw that one of the original motivations for externalizing connectors
> was to encourage more activity and contributions around connectors by
> easing the contribution overhead. We understand that the decision was
> ultimately made to host the externalized connector repos under the ASF
> organization. For the same reasons (release infra, quality assurance,
> integration with the community, etc.), we would like all GCP connectors to
> live under the ASF organization.
>
> We want to ask the Flink community what you all think of this idea, and
> what would be the best way for us to go about contributing something like
> this. We are excited to contribute and want to learn and follow your
> practices.
>
> A specific issue we know of is that our changes need approval from Flink
> committers. Do you have a suggestion for how best to go about a new
> contribution like ours from a team that does not have committers? Is it
> possible, for example, to partner with a committer (or a small cohort) for
> tight engagement? We are also aware of the ASF voting and release
> processes, but those do not seem to be as much of a potential hurdle.
>
> Huge thanks in advance for sharing your thoughts!
>
>
> Claire
>