Thanks Dian for kicking off the RC.

+1 from my side:

I tested CDC use cases heavily end-to-end, and they work well.

- checked/verified signatures and hashes
- manually diffed the pom and NOTICE files between 1.11.0 and 1.11.1
to check the dependency changes; looks good
- no missing artifacts in the release staging area compared to the 1.11.0
release
- started a cluster and ran some Table examples; verified the web UI and
log output, nothing unexpected
- started a cluster and ran end-to-end SQL queries over millions of
records, with Kafka, MySQL, and Elasticsearch as sources/lookup
tables/sinks; works well and the results are as expected
- used the SQL CLI to read Debezium data from Kafka and a MySQL binlog
source, and to write into MySQL and Elasticsearch; nothing unexpected
(a minimal sketch follows this list)
- reviewed the release PR
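
For reference, a minimal Flink SQL sketch of the kind of pipeline
exercised above. The table, field, topic names, and addresses are made
up for illustration; only the connector and format options are meant to
follow the 1.11 docs:

  -- hypothetical CDC source: Debezium-encoded changelog read from Kafka
  CREATE TABLE orders_cdc (
    order_id BIGINT,
    product  STRING,
    amount   DECIMAL(10, 2)
  ) WITH (
    'connector' = 'kafka',
    'topic' = 'orders',
    'properties.bootstrap.servers' = 'localhost:9092',
    'format' = 'debezium-json'
  );

  -- hypothetical upsert sink: an Elasticsearch index keyed by order_id
  CREATE TABLE orders_es (
    order_id BIGINT,
    product  STRING,
    amount   DECIMAL(10, 2),
    PRIMARY KEY (order_id) NOT ENFORCED
  ) WITH (
    'connector' = 'elasticsearch-7',
    'hosts' = 'http://localhost:9200',
    'index' = 'orders'
  );

  -- materialize the changelog into Elasticsearch
  INSERT INTO orders_es
  SELECT order_id, product, amount FROM orders_cdc;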

Best,
Jark

On Mon, 20 Jul 2020 at 13:55, Congxian Qiu <qcx978132...@gmail.com> wrote:

> Hi Dian,
>
> Thanks for the information.
>
> Best,
> Congxian
>
>
> On Mon, 20 Jul 2020 at 11:44, Dian Fu <dian0511...@gmail.com> wrote:
>
> > Hi Congxian,
> >
> > FLINK-18544 fixes an issue introduced in 1.11.1 itself, so it should not
> > appear in the release notes, according to the release guide [1].
> >
> > [1]
> > https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release#CreatingaFlinkRelease-ReviewReleaseNotesinJIRA
> >
> > Regards,
> > Dian
> >
> > > On Mon, 20 Jul 2020 at 11:32, Dian Fu <dian0511...@gmail.com> wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - checked the checksum and signature
> > > - installed the PyFlink package on macOS and ran some tests
> > >
> > > Regards,
> > > Dian
> > >
> > >> On Mon, 20 Jul 2020 at 11:11, Congxian Qiu <qcx978132...@gmail.com> wrote:
> > >>
> > >> +1 (non-binding)
> > >>
> > >> I found that the fix version of FLINK-18544 is 1.11.1, but the release
> > >> notes do not contain it. I think we should fix that in the release notes.
> > >>
> > >> checked:
> > >> - built from source, OK
> > >> - sha512 checksum, OK
> > >> - GPG signature, OK
> > >> - License seems OK (checked the changes to all pom.xml files between
> > >> 1.11.0 and 1.11.1:
> > >> https://github.com/apache/flink/compare/release-1.11.0..release-1.11.1-rc1)
> > >> - ran some demos locally
> > >>
> > >> Best,
> > >> Congxian
> > >>
> > >>
> > >> On Sat, 18 Jul 2020 at 19:04, Rui Li <lirui.fu...@gmail.com> wrote:
> > >> +1 (non-binding)
> > >>
> > >> - Built from source
> > >> - Verified Hive connector tests for all Hive versions
> > >> - Played with some simple cases with the Hive connector and everything
> > >> seems fine
> > >>
> > >> On Sat, Jul 18, 2020 at 12:24 AM Rui Li <lirui.fu...@gmail.com> wrote:
> > >>
> > >> > OK, I agree FLINK-18588 can wait for the next release.
> > >> >
> > >> > On Fri, Jul 17, 2020 at 11:56 PM Leonard Xu <xbjt...@gmail.com> wrote:
> > >> >
> > >> >> +1 (non-binding)
> > >> >>
> > >> >> - checked/verified signatures and hashes
> > >> >> - built from source code with Scala 2.11 successfully
> > >> >> - checked that there are no missing artifacts
> > >> >> - started a cluster; the web UI was accessible, a submitted WordCount
> > >> >> job ran successfully, and there was no suspicious log output
> > >> >> - tested submitting a job via the SQL Client; the query result is as
> > >> >> expected (a minimal sketch follows this list)
> > >> >> - went through all issues whose fix version is 1.11.1; all are closed
> > >> >> except FLINK-15794, which has been fixed in master and 1.11.1 and is
> > >> >> just waiting to be fixed in 1.10.2
> > >> >> - the web PR looks good
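> > >> >>
> > >> >> A minimal sketch of such a SQL Client smoke query, assuming the
> > >> >> datagen and print connectors that ship with 1.11; the table and
> > >> >> field names are made up:
> > >> >>
> > >> >>   -- unbounded source of randomly generated rows
> > >> >>   CREATE TABLE src (
> > >> >>     id   BIGINT,
> > >> >>     name STRING
> > >> >>   ) WITH ('connector' = 'datagen');
> > >> >>
> > >> >>   -- sink that prints every row to the TaskManager logs
> > >> >>   CREATE TABLE snk (
> > >> >>     id   BIGINT,
> > >> >>     name STRING
> > >> >>   ) WITH ('connector' = 'print');
> > >> >>
> > >> >>   INSERT INTO snk SELECT id, name FROM src;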
> > >> >>
> > >> >> For FLINK-18588, I also agree with Timo on moving it to 1.11.2,
> > >> >> because it's a `Major` bug rather than a `Blocker`.
> > >> >>
> > >> >> Best,
> > >> >> Leonard
> > >> >
> > >> >
> > >> >
> > >> > --
> > >> > Best regards!
> > >> > Rui Li
> > >> >
> > >>
> > >>
> > >> --
> > >> Best regards!
> > >> Rui Li
> > >
> >
> >
>
