> would the repository ... be removed ... ?

Yes, I would remove it once it is merged into a version of Flink that is
supported by GCP Dataproc. It exists now (and I am creating releases and
Maven artifacts for it) to unblock users in the interim.

-Daniel

On Thu, Mar 16, 2023 at 3:32 PM Martijn Visser <martijnvis...@apache.org>
wrote:

> Hi Daniel,
>
> > I don't know how to get to this point; it sounds like more of an
> organizational constraint than a technical one, though. Who is responsible
> for the same role for the standard Pub/Sub connector? I'm working with the
> Pub/Sub team right now on prioritizing support for the Flink connector
> and converting it to support the recommended delivery mechanism for that
> service.
>
> It's not so much an organizational constraint; it's more a question of
> whether there are one or more committers in the Flink community who have
> the bandwidth to help with reviewing and merging a new connector. The
> PubSub connector has pretty much been unmaintained for the past couple of
> years; I have reached out to Google a couple of times, but those attempts
> were unfruitful.
>
> I'm hoping that one of the Flink committers has the bandwidth to help
> you out. @All, if you have bandwidth, please come forward.
>
> > I imagine our involvement would be similar to the support for our
> self-managed client libraries
>
> I think that sounds fine.
>
> One question I have: if the code of
> https://github.com/googleapis/java-pubsublite-flink moves to
> https://github.com/apache/flink-connector-gcp-pubsub, would the repository
> https://github.com/googleapis/java-pubsublite-flink be removed, or do you
> propose to keep both? I would be in favour of having just one, but wanted
> to check with you.
>
> Best regards,
>
> Martijn
>
> On Tue, Mar 14, 2023 at 4:09 AM Daniel Collins
> <dpcoll...@google.com.invalid>
> wrote:
>
> > Hi all,
> >
> > Thank you for the feedback. Responses inline.
> >
> > > we need feedback from a Committer who would review and help maintain it
> > going forward. Ideally, this Committer would guide one or more
> > contributors from Google to Committership so that Google could step up
> > and maintain Flink's PubSub and PubSub Lite Connector in the future.
> >
> > I don't know how to get to this point; it sounds like more of an
> > organizational constraint than a technical one, though. Who is responsible
> > for the same role for the standard Pub/Sub connector? I'm working with
> > the Pub/Sub team right now on prioritizing support for the Flink
> > connector and converting it to support the recommended delivery mechanism
> > for that service.
> >
> > > For this, it would be good to understand how you envision the
> > involvement of the PubSub Lite team at Google.
> >
> > I imagine our involvement would be similar to the support for our
> > self-managed client libraries: we would field feature requests and
> > dependency update requests as they come in, debug any bug reports, and
> > improve the library as new service features arrive. We would want to
> > work with the Flink community to support new features that our users
> > find useful. Our team did not contribute the Pub/Sub connector, but as
> > more of our customers come to use Flink, we would like to bring it up to
> > par, in both performance and supportability, with the rest of our
> > supported clients.
> >
> > > I think that describing the main architectural components and decisions
> > would help the discussion/vote.
> >
> > I will add discussion of these components to the proposal.
> >
> > > you need to replace the Google headers with the ASF v2 ones
> >
> > This is moot unless the proposal to merge is accepted.
> >
> > > when possible, try to use the equivalent Flink / JDK / ASF libs instead
> > of the Google ones
> > > look at the first commits of the Apache Beam project
> >
> > I've been an active contributor to the Apache Beam project in the past; I
> > don't think anything that I've used here would prevent merging this to
> > the Beam project, and much of it is needed to interact with the client
> > library.
> >
> > -Daniel
> >
> > On Mon, Mar 13, 2023 at 5:26 AM Etienne Chauchot <echauc...@apache.org>
> > wrote:
> >
> > > Hi all,
> > >
> > > I agree with Konstantin: mentoring is important, especially with this
> > > new connector framework. Long-term maintenance is even more important.
> > >
> > > I cannot mentor you on this topic because I'm not a committer on the
> > > Flink project and because I don't know the Pub/Sub tech. That being
> > > said, I have a blog post in the works to share what I learnt while
> > > authoring the Cassandra connector with the new source framework. I
> > > think it could be useful as a first learning step and could help you
> > > avoid some pitfalls.
> > >
> > > Regarding the FLIP, as you already developed the connector inside
> > > Google, I understand why you included the whole code in the FLIP
> > > (there is no better doc than code), but I think that describing the
> > > main architectural components and decisions would help the
> > > discussion/vote.
> > >
> > > Also, I did not review the code but just took a quick look at the
> > > coupling to Google technologies:
> > >
> > > - you need to replace the Google headers with the ASF v2 ones
> > >
> > > - when possible, try to use the equivalent Flink / JDK / ASF libs
> > > instead of the Google ones (futures, collections, safeguard
> > > annotations, AutoValue, etc.)
> > >
> > > Finally, as a hint, I think you could take a look at the first commits
> > > of the Apache Beam project, when the Dataflow SDK was donated to the
> > > ASF, and see what was done there to make the code ASF-friendly.
> > >
> > > Best
> > >
> > > Etienne
> > >
> > > On 09/03/2023 at 09:45, Konstantin Knauf wrote:
> > > > Hi Daniel,
> > > >
> > > > I think it would be great to have a PubSub Lite Connector in Flink.
> > > > Before you put this proposal up for a vote, though, we need feedback
> > > > from a Committer who would review and help maintain it going forward.
> > > > Ideally, this Committer would guide one or more contributors from
> > > > Google to Committership so that Google could step up and maintain
> > > > Flink's PubSub and PubSub Lite Connector in the future. For this, it
> > > > would be good to understand how you envision the involvement of the
> > > > PubSub Lite team at Google.
> > > >
> > > > I am particularly sensitive about this topic, because the PubSub
> > > > connector has lacked attention and maintenance for a long time. There
> > > > was also a very short-lived interest from Google in the past in
> > > > contributing a Google PubSub Connector [1].
> > > >
> > > > Best,
> > > >
> > > > Konstantin
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-22380
> > > >
> > > > On Wed, Mar 8, 2023 at 14:45 Etienne Chauchot <
> > > > echauc...@apache.org> wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> I agree with Ryan: even if the clients might be totally different,
> > > >> the backend technologies are the same, so hosting them in the same
> > > >> repo makes sense. Similar thinking made us put all the
> > > >> Cassandra-related connectors in the same Cassandra repo.
> > > >>
> > > >> Etienne
> > > >>
> > > >> On 02/03/2023 at 14:43, Daniel Collins wrote:
> > > >>> Hello Ryan,
> > > >>>
> > > >>> Unfortunately there's not much shared logic between the two: the
> > > >>> clients have to look fundamentally different, since the Pub/Sub
> > > >>> Lite client exposes partitions at the split level for repeatable
> > > >>> reads.
> > > >>>
> > > >>> I have no objection to this living in the same repo as the Pub/Sub
> > > >>> connector; if this is an easier way forward than setting up a new
> > > >>> repo, that sounds good to me. The Pub/Sub team is organizationally
> > > >>> close to us, and is looking into providing more support for the
> > > >>> Flink connector in the near future.
> > > >>>
> > > >>> -Daniel
> > > >>>
> > > >>> On Thu, Mar 2, 2023 at 3:26 AM Ryan Skraba
> > > >>> <ryan.skr...@aiven.io.invalid>
> > > >>> wrote:
> > > >>>
> > > >>>> Hello Daniel!  Quite a while ago, I started porting the Pub/Sub
> > > >>>> connector (from an existing PR) to the new source API in the new
> > > >>>> flink-connector-gcp-pubsub repository [PR2].  As Martijn mentioned,
> > > >>>> there hasn't been a lot of attention on this connector; any
> > > >>>> community involvement would be appreciated!
> > > >>>>
> > > >>>> Instead of considering this a new connector, is there an
> > > >>>> opportunity here to offer the two variants (Pub/Sub and Pub/Sub
> > > >>>> Lite) as different artifacts in that same repo?  Is there much
> > > >>>> common logic that can be shared between the two?  I'm not as
> > > >>>> familiar as I should be with Lite, but I do recall that they share
> > > >>>> many concepts and _some_ dependencies.
> > > >>>>
> > > >>>> All my best, Ryan
> > > >>>>
> > > >>>>
> > > >>>> On Wed, Mar 1, 2023 at 11:21 PM Daniel Collins
> > > >>>> <dpcoll...@google.com.invalid>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> Hello all,
> > > >>>>>
> > > >>>>> I'd like to start an official discuss thread for adding a Pub/Sub
> > > >>>>> Lite Connector to Flink. We've had requests from our users to add
> > > >>>>> Flink support, and the product team is willing to maintain and
> > > >>>>> support this connector long-term.
> > > >>>>>
> > > >>>>> The proposal is https://cwiki.apache.org/confluence/x/P51bDg;
> > > >>>>> what would people's thoughts be on adding this connector?
> > > >>>>>
> > > >>>>> -Daniel
> > > >>>>>
> > > >
> > >
> >
>
