@Chesnay

No. Users will have to manually build and install PyFlink themselves in
1.9.0:
https://ci.apache.org/projects/flink/flink-docs-release-1.9/flinkDev/building.html#build-pyflink
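
If I recall the doc correctly, the steps boil down to roughly the following
(see the linked page for the authoritative instructions):

    # build the Flink jars first, from the root of the source tree
    mvn clean install -DskipTests
    # then package PyFlink and install the resulting sdist
    cd flink-python
    python setup.py sdist
    pip install dist/*.tar.gz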

This is also mentioned in the announcement blog post (to-be-merged):
https://github.com/apache/flink-web/pull/244/files#diff-0cc840a590f5cab2485934278134c9baR291

On Thu, Aug 22, 2019 at 10:03 AM Chesnay Schepler <ches...@apache.org>
wrote:

> Are we also releasing python artifacts for 1.9?
>
> On 21/08/2019 19:23, Tzu-Li (Gordon) Tai wrote:
> > I'm happy to announce that we have unanimously approved this candidate as
> > the 1.9.0 release.
> >
> > There are 12 approving votes, 5 of which are binding:
> > - Yu Li
> > - Zili Chen
> > - Gordon Tai
> > - Stephan Ewen
> > - Jark Wu
> > - Vino Yang
> > - Gary Yao
> > - Bowen Li
> > - Chesnay Schepler
> > - Till Rohrmann
> > - Aljoscha Krettek
> > - David Anderson
> >
> > There are no disapproving votes.
> >
> > Thanks everyone who has contributed to this release!
> >
> > I will wait until tomorrow morning for the artifacts to be available in
> > Maven central before announcing the release in a separate thread.
> >
> > The release blog post will also be merged tomorrow along with the
> > official announcement.
> >
> > Cheers,
> > Gordon
> >
> > On Wed, Aug 21, 2019, 5:37 PM David Anderson <da...@ververica.com> wrote:
> >
> >> +1 (non-binding)
> >>
> >> I upgraded the flink-training-exercises project.
> >>
> >> I encountered a few rough edges, including problems in the docs, but
> >> nothing serious.
> >>
> >> I had to make some modifications to deal with changes in the Table API:
> >>
> >> ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
> >> TableEnvironment.getTableEnvironment became StreamTableEnvironment.create
> >> StreamTableDescriptorValidator.UPDATE_MODE() became
> >> StreamTableDescriptorValidator.UPDATE_MODE
> >> org.apache.flink.table.api.java.Slide moved to
> >> org.apache.flink.table.api.Slide
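> >>
> >> For instance, the TableEnvironment change amounted to roughly this (a
> >> minimal sketch, not the exact flink-training-exercises code):
> >>
> >>     // Flink 1.8:
> >>     // StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
> >>     // Flink 1.9 (org.apache.flink.table.api.java.StreamTableEnvironment):
> >>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> >>     StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);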
> >>
> >> I also found myself forced to change a CoProcessFunction to a
> >> KeyedCoProcessFunction (which it should have been).
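> >>
> >> In 1.9 the keyed variant looks roughly like this (a sketch with
> >> placeholder String types and a made-up class name, not the actual
> >> exercise code):
> >>
> >>     import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
> >>     import org.apache.flink.util.Collector;
> >>
> >>     // previously: extends CoProcessFunction<String, String, String>
> >>     public class JoinFunction
> >>             extends KeyedCoProcessFunction<String, String, String, String> {
> >>         @Override
> >>         public void processElement1(String left, Context ctx, Collector<String> out) {
> >>             // unlike CoProcessFunction, the keyed variant exposes the current key
> >>             out.collect(ctx.getCurrentKey() + ": " + left);
> >>         }
> >>         @Override
> >>         public void processElement2(String right, Context ctx, Collector<String> out) {
> >>             out.collect(ctx.getCurrentKey() + ": " + right);
> >>         }
> >>     }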
> >>
> >> I also tried a few complex queries in the SQL console, and wrote a
> >> simple job using the State Processor API. Everything worked.
> >>
> >> David
> >>
> >>
> >> David Anderson | Training Coordinator
> >>
> >> Follow us @VervericaData
> >>
> >> --
> >> Join Flink Forward - The Apache Flink Conference
> >> Stream Processing | Event Driven | Real Time
> >>
> >>
> >> On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek <aljos...@apache.org>
> >> wrote:
> >>> +1
> >>>
> >>> I checked the last RC on a GCE cluster and was satisfied with the
> >>> testing. The cherry-picked commits didn’t change anything related, so
> >>> I’m forwarding my vote from there.
> >>> Aljoscha
> >>>
> >>>> On 21. Aug 2019, at 13:34, Chesnay Schepler <ches...@apache.org> wrote:
> >>>> +1 (binding)
> >>>>
> >>>> On 21/08/2019 08:09, Bowen Li wrote:
> >>>>> +1 non-binding
> >>>>>
> >>>>> - built from source with default profile
> >>>>> - manually ran SQL and Table API tests for Flink's metadata
> >>>>> integration with Hive Metastore in local cluster
> >>>>> - manually ran SQL tests for batch capability with Blink planner and
> >>>>> Hive integration (source/sink/udf) in local cluster
> >>>>>      - file formats include: csv, orc, parquet
> >>>>>
> >>>>>
> >>>>>> On Tue, Aug 20, 2019 at 10:23 PM Gary Yao <g...@ververica.com> wrote:
> >>>>>
> >>>>>> +1 (non-binding)
> >>>>>>
> >>>>>> Reran Jepsen tests 10 times.
> >>>>>>
> >>>>>> On Wed, Aug 21, 2019 at 5:35 AM vino yang <yanghua1...@gmail.com> wrote:
> >>>>>>> +1 (non-binding)
> >>>>>>>
> >>>>>>> - checkout source code and build successfully
> >>>>>>> - started a local cluster and ran some example jobs successfully
> >>>>>>> - verified signatures and hashes
> >>>>>>> - checked release notes and post
> >>>>>>>
> >>>>>>> Best,
> >>>>>>> Vino
> >>>>>>>
> >>>>>>> Stephan Ewen <se...@apache.org> wrote on Wed, Aug 21, 2019 at 4:20 AM:
> >>>>>>>
> >>>>>>>> +1 (binding)
> >>>>>>>>
> >>>>>>>>   - Downloaded the binary release tarball
> >>>>>>>>   - started a standalone cluster with four nodes
> >>>>>>>>   - ran some examples through the Web UI
> >>>>>>>>   - checked the logs
> >>>>>>>>   - created a project from the Java quickstarts maven archetype
> >>>>>>>>   - ran a multi-stage DataSet job in batch mode
> >>>>>>>>   - killed a TaskManager and verified correct restart behavior,
> >>>>>>>> including failover region backtracking
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> I found a few issues, and a common theme here is confusing error
> >>>>>>>> reporting and logging.
> >>>>>>>>
> >>>>>>>> (1) When testing batch failover and killing a TaskManager, the job
> >>>>>>>> reports as the failure cause "org.apache.flink.util.FlinkException:
> >>>>>>>> The assigned slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> >>>>>>>>      I think that is a pretty bad error message; as a user, I don't
> >>>>>>>> know what that means. Some internal bookkeeping thing?
> >>>>>>>>      You need to know a lot about Flink to understand that this
> >>>>>>>> means "TaskManager failure".
> >>>>>>>>      https://issues.apache.org/jira/browse/FLINK-13805
> >>>>>>>>      I would not block the release on this, but I think this should
> >>>>>>>> get pretty urgent attention.
> >>>>>>>>
> >>>>>>>> (2) The Metric Fetcher floods the log with error messages when a
> >>>>>>>> TaskManager is lost.
> >>>>>>>>       There are many exceptions being logged by the Metric Fetcher
> >>>>>>>> due to not reaching the TM any more.
> >>>>>>>>       This pollutes the log and drowns out the original exception
> >>>>>>>> and the meaningful logs from the scheduler/execution graph.
> >>>>>>>>       https://issues.apache.org/jira/browse/FLINK-13806
> >>>>>>>>       Again, I would not block the release on this, but I think this
> >>>>>>>> should get pretty urgent attention.
> >>>>>>>>
> >>>>>>>> (3) If you put "web.submit.enable: false" into the configuration,
> >>>>>>>> the web UI will still display the "SubmitJob" page, but errors will
> >>>>>>>>      continuously pop up, stating "Unable to load requested file /jars."
> >>>>>>>>      https://issues.apache.org/jira/browse/FLINK-13799
> >>>>>>>>
> >>>>>>>> (4) REST endpoint logs ERROR level messages when selecting the
> >>>>>>>> "Checkpoints" tab for batch jobs. That does not seem correct.
> >>>>>>>>       https://issues.apache.org/jira/browse/FLINK-13795
> >>>>>>>>
> >>>>>>>> Best,
> >>>>>>>> Stephan
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:
> >>>>>>>>
> >>>>>>>>> +1
> >>>>>>>>>
> >>>>>>>>> Legal checks:
> >>>>>>>>> - verified signatures and hashes
> >>>>>>>>> - New bundled JavaScript dependencies for flink-runtime-web are
> >>>>>>>>> correctly reflected under licenses-binary and the NOTICE file.
> >>>>>>>>> - locally built from source (Scala 2.12, without Hadoop)
> >>>>>>>>> - No missing artifacts in staging repo
> >>>>>>>>> - No binaries in source release
> >>>>>>>>>
> >>>>>>>>> Functional checks:
> >>>>>>>>> - Quickstart working (both in IDE + job submission)
> >>>>>>>>> - Simple State Processor API program that performs offline key
> >>>>>>>>> schema migration (RocksDB backend). Generated savepoint is valid
> >>>>>>>>> to restore from (see the sketch after this list).
> >>>>>>>>> - All E2E tests pass locally
> >>>>>>>>> - Didn’t notice any issues with the new WebUI
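> >>>>>>>>>
> >>>>>>>>> For reference, the rough shape of that migration program (a sketch
> >>>>>>>>> with made-up uids, paths, and helper classes, not the exact test
> >>>>>>>>> code):
> >>>>>>>>>
> >>>>>>>>>     ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
> >>>>>>>>>     // read the keyed state out of the existing savepoint
> >>>>>>>>>     ExistingSavepoint old = Savepoint.load(env,
> >>>>>>>>>         "file:///tmp/old-savepoint", new RocksDBStateBackend("file:///tmp/ckpts"));
> >>>>>>>>>     DataSet<Record> records = old.readKeyedState("my-operator", new ReaderFn());
> >>>>>>>>>     // re-key and write a new savepoint under the migrated key schema
> >>>>>>>>>     BootstrapTransformation<Record> rekeyed = OperatorTransformation
> >>>>>>>>>         .bootstrapWith(records)
> >>>>>>>>>         .keyBy(r -> r.newKey)
> >>>>>>>>>         .transform(new BootstrapFn());
> >>>>>>>>>     Savepoint.create(new RocksDBStateBackend("file:///tmp/ckpts"), 128)
> >>>>>>>>>         .withOperator("my-operator", rekeyed)
> >>>>>>>>>         .write("file:///tmp/new-savepoint");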
> >>>>>>>>>
> >>>>>>>>> Cheers,
> >>>>>>>>> Gordon
> >>>>>>>>>
> >>>>>>>>> On Tue, Aug 20, 2019 at 3:53 AM Zili Chen <wander4...@gmail.com> wrote:
> >>>>>>>>>> +1 (non-binding)
> >>>>>>>>>>
> >>>>>>>>>> - build from source: OK(8u212)
> >>>>>>>>>> - check local setup tutorial works as expected
> >>>>>>>>>>
> >>>>>>>>>> Best,
> >>>>>>>>>> tison.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Yu Li <car...@gmail.com> wrote on Tue, Aug 20, 2019 at 8:24 AM:
> >>>>>>>>>>
> >>>>>>>>>>> +1 (non-binding)
> >>>>>>>>>>>
> >>>>>>>>>>> - checked release notes: OK
> >>>>>>>>>>> - checked sums and signatures: OK
> >>>>>>>>>>> - repository appears to contain all expected artifacts
> >>>>>>>>>>> - source release
> >>>>>>>>>>>       - contains no binaries: OK
> >>>>>>>>>>>       - contains no 1.9-SNAPSHOT references: OK
> >>>>>>>>>>>       - build from source: OK (8u102)
> >>>>>>>>>>> - binary release
> >>>>>>>>>>>       - no examples appear to be missing
> >>>>>>>>>>>       - started a cluster; WebUI reachable, example ran successfully
> >>>>>>>>>>> - checked README.md file and found nothing unexpected
> >>>>>>>>>>>
> >>>>>>>>>>> Best Regards,
> >>>>>>>>>>> Yu
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On Tue, 20 Aug 2019 at 01:16, Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> Hi all,
> >>>>>>>>>>>>
> >>>>>>>>>>>> Release candidate #3 for Apache Flink 1.9.0 is now ready for
> >>>>>>>>>>>> your review.
> >>>>>>>>>>>> Please review and vote on release candidate #3 for version 1.9.0,
> >>>>>>>>>>>> as follows:
> >>>>>>>>>>>> [ ] +1, Approve the release
> >>>>>>>>>>>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>>>>>>>>>> The complete staging area is available for your review, which includes:
> >>>>>>>>>>>> * JIRA release notes [1],
> >>>>>>>>>>>> * the official Apache source release and binary convenience
> >>>>>>>>>>>> releases to be deployed to dist.apache.org [2], which are signed
> >>>>>>>>>>>> with the key with fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> >>>>>>>>>>>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>>>>>>>>>>> * source code tag “release-1.9.0-rc3” [5],
> >>>>>>>>>>>> * pull requests for the release note documentation [6] and the
> >>>>>>>>>>>> announcement blog post [7].
> >>>>>>>>>>>>
> >>>>>>>>>>>> As proposed in the RC2 vote thread [8], for RC3 we are only
> >>>>>>>>>>>> cherry-picking minimal specific changes on top of RC2, so that
> >>>>>>>>>>>> previous testing efforts reasonably carry over and a shorter
> >>>>>>>>>>>> voting time is justified.
> >>>>>>>>>>>> The only extra commits in this RC, compared to RC2, are the following:
> >>>>>>>>>>>> - c2d9aeac [FLINK-13231] [pubsub] Replace Max outstanding
> >>>>>>>>>>>> acknowledgement ids limit with a FlinkConnectorRateLimiter
> >>>>>>>>>>>> - d8941711 [FLINK-13699][table-api] Fix TableFactory doesn’t
> >>>>>>>>>>>> work with DDL when containing TIMESTAMP/DATE/TIME types
> >>>>>>>>>>>> - 04e95278 [FLINK-13752] Only references necessary variables
> >>>>>>>>>>>> when bookkeeping result partitions on TM
> >>>>>>>>>>>>
> >>>>>>>>>>>> Due to the minimal set of changes, the vote for RC3 will be
> >>>>>>>>>>>> *open for only 48 hours*.
> >>>>>>>>>>>> Please cast your votes before *Aug. 21st (Wed.) 2019, 17:00 CET*.
> >>>>>>>>>>>> It is adopted by majority approval, with at least 3 PMC
> >>>>>>>>>>>> affirmative votes.
> >>>>>>>>>>>> Thanks,
> >>>>>>>>>>>> Gordon
> >>>>>>>>>>>>
> >>>>>>>>>>>> [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> >>>>>>>>>>>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc3/
> >>>>>>>>>>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>>>>>>>>>>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1236
> >>>>>>>>>>>> [5] https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc3
> >>>>>>>>>>>> [6] https://github.com/apache/flink/pull/9438
> >>>>>>>>>>>> [7] https://github.com/apache/flink-web/pull/244
> >>>>>>>>>>>> [8] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Apache-Flink-Release-1-9-0-release-candidate-2-tp31542p31933.html