Hi Alex,

Thanks for bringing this up for discussion. I think it's indeed important
that externalized connectors can be tested, both against released Flink
versions and against SNAPSHOT versions.

I did a quick check of the ASF Jira service and noticed there is
https://issues.apache.org/jira/browse/INFRA-20959. Looking at the Airflow
setup, I also see that they are publishing SNAPSHOT versions. I don't think
there's any issue from an ASF perspective with pushing to GHCR.

What I will do is double-check with ASF Infra whether this is indeed OK
and, if so, how it can be set up. It'll probably require a ticket with ASF
Infra to set up the credentials, but I'll check and let you know.
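
For reference, what I have in mind is roughly the same kind of nightly
publishing workflow that Airflow and flink-kubernetes-operator already use.
A minimal sketch could look like the following (the workflow name, schedule,
image name, tag and Dockerfile location below are placeholders, not a
decided setup):

# Sketch of a nightly SNAPSHOT image publishing workflow (placeholders only)
name: Publish SNAPSHOT Docker image

on:
  schedule:
    - cron: "0 2 * * *"       # nightly build
  workflow_dispatch:          # allow manual runs while trying out the setup

permissions:
  contents: read
  packages: write             # required to push to GitHub Container Registry

jobs:
  publish-snapshot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Log in to ghcr.io
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push the snapshot image
        uses: docker/build-push-action@v3
        with:
          context: .          # placeholder: would point at the Flink Dockerfile
          push: true
          tags: ghcr.io/apache/flink:master-SNAPSHOT   # placeholder tag

One nice property of this approach is that GITHUB_TOKEN is scoped to the
repository, so it should not need separate long-lived credentials; whether
that is acceptable, and whether any additional credentials or settings are
needed on the Infra side, is exactly what I want to check with them.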

Best regards,

Martijn Visser
https://twitter.com/MartijnVisser82
https://github.com/MartijnVisser


On Fri, 22 Apr 2022 at 10:21, Yang Wang <danrtsey...@gmail.com> wrote:

> The project flink-kubernetes-operator has already been using github
> packages to deliver the snapshot images[1].
>
> [1].
>
> https://github.com/apache/flink-kubernetes-operator/pkgs/container/flink-kubernetes-operator
>
> Best,
> Yang
>
> On Fri, 22 Apr 2022 at 10:43, Jingsong Li <jingsongl...@gmail.com> wrote:
>
> > +1 to a public Flink Docker image for snapshots.
> >
> > Best,
> > Jingsong
> >
> > On Fri, Apr 22, 2022 at 2:23 AM Alexander Fedulov
> > <alexan...@ververica.com> wrote:
> > >
> > > Hi everyone,
> > >
> > > in the scope of work on externalizing connectors [1], it became evident
> > > that we need to add a process for releasing SNAPSHOT (nightly) Docker
> > > images for Flink. Let me briefly explain why this is the case:
> > > - currently, our container-based E2E tests rely on building Flink
> > > Docker images on the fly from flink-dist [2]
> > > - this works fine as long as a full Flink dist is available (when
> > > working on a non-externalized connector, developed against the current
> > > Flink master)
> > > - when the connector is developed in a separate repository, flink-dist
> > > is, obviously, not directly available
> > > - the base image for such E2E tests has to be in sync with the master
> > > branch
> > >
> > > My understanding is that there are some potential hurdles in the Apache
> > > process in terms of publishing binary "releases" in an automated way.
> > > That said, there are other Apache projects that established such
> > > pipelines for the purposes of development and CI; for instance, Apache
> > > Airflow uses GHCR.IO [3]. I have two main questions:
> > > 1) What is your opinion on us following the same/similar path?
> > > 2) What is the procedure from the INFRA perspective to get this
> > > approved/set up?
> > >
> > > [1] https://lists.apache.org/thread/bywh947r2f5hfocxq598zhyh06zhksrm
> > > [2]
> > > https://github.com/apache/flink/blob/96c2500739bc5d0a0503a165daaf7549a7b6a84c/flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/container/FlinkImageBuilder.java#L210
> > > [3]
> > > https://cwiki.apache.org/confluence/display/INFRA/Github+Actions+to+DockerHub
> > >
> > > Thanks,
> > > Alexander Fedulov
> >
>
