The voting time has passed and I'm happy to announce that we've collected 
enough votes to release this RC as Flink 1.4.0.

+1 votes:
- Piotrek (non-binding)
- Stefan (non-binding) 
- Fabian (binding)
- Chesnay (binding)
- Timo (binding)
- Tzu-Li (binding)
- Aljoscha (binding)

That's 7 votes, 5 binding. No 0 or -1 votes.

Thanks a lot, everyone, for testing and making sure that this will be a good 
release! I'll send out a separate announcement mail and push out the release 
artefacts and update the website now.

> On 12. Dec 2017, at 10:48, Aljoscha Krettek <aljos...@apache.org> wrote:
> 
> +1
> 
> Verified:
> - NOTICE and LICENSE are correct
> - source doesn't contain binaries
> - verified signatures
> - verified hashes
> - cluster testing on AWS and Cloudera VM (with Kerberos) (see release-testing 
> doc)
> - verified "mvn clean verify" for all supported Hadoop versions (2.4.1 to 
> 2.9.0)
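> 
> For anyone re-running these checks, the signature/hash and Hadoop-version
> verification is roughly the following (archive name and Hadoop version are
> illustrative, adjust to the staged artifacts you pick):
> 
>   # import the release KEYS and check the signature of the source archive
>   gpg --import KEYS
>   gpg --verify flink-1.4.0-src.tgz.asc flink-1.4.0-src.tgz
> 
>   # recompute the checksum and compare it against the staged checksum file
>   shasum -a 512 flink-1.4.0-src.tgz
> 
>   # build and run tests against a specific Hadoop version
>   mvn clean verify -Dhadoop.version=2.9.0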
> 
>> On 11. Dec 2017, at 15:23, Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:
>> 
>> +1
>> 
>> - Staged Apache source & binary convenience releases look good
>> - Built from source (macOS w/ Scala 2.11, hadoop-free, hadoop-2.8)
>> - Locally tested topic regex subscription for the Kafka consumer
>> - Quickstart projects look good
>> 
>> Other things verified that are carried over from previous RC votes:
>> - Cluster tests on AWS with configuration detailed in [1], with special
>> focus on dynamic Kafka partition discovery
>> - Kinesis connector, Elasticsearch connector run fine with cluster
>> execution + locally in IDE, without any dependency clashes
>> 
>> [1]
>> https://docs.google.com/document/d/1cOkycJwEKVjG_onnpl3bQNTq7uebh48zDtIJxceyU2E/edit#heading=h.sintcv4ccegd
>> 
>> On Mon, Dec 11, 2017 at 9:28 PM, Timo Walther <twal...@apache.org> wrote:
>> 
>>> +1 (binding)
>>> 
>>> - built the source locally
>>> - ran various table programs
>>> - checked the resource consumption of table programs with retention
>>> enabled and disabled
>>> - built a quickstart project
>>> - tested the web UI submission (found
>>> https://issues.apache.org/jira/browse/FLINK-8187 but this is non-blocking)
>>> 
>>> 
>>> On 12/11/17 at 2:16 PM, Chesnay Schepler wrote:
>>> 
>>>> +1 (binding)
>>>> 
>>>> - checked contents of flink-dist for unshaded dependencies
>>>> - ran Python examples (with/without arguments) locally
>>>> - ran jobs on YARN on a cluster, testing the optional Hadoop dependency
>>>> - verified that quickstarts work
>>>> - checked JM/TM logs for anything suspicious
>>>> 
>>>> On 11.12.2017 11:29, Fabian Hueske wrote:
>>>> 
>>>>> +1 (binding)
>>>>> 
>>>>> - Checked hashes & signatures
>>>>> - Checked no binaries in source release
>>>>> - Checked Flink version in Quickstart pom files
>>>>> 
>>>>> Cheers, Fabian
>>>>> 
>>>>> 2017-12-11 11:26 GMT+01:00 Stefan Richter <s.rich...@data-artisans.com>:
>>>>> 
>>>>>> +1 (non-binding)
>>>>>> 
>>>>>> - did extensive cluster tests on Google Cloud with special focus on
>>>>>> checkpointing and recovery and Kafka 0.11 end-to-end exactly-once +
>>>>>> at-least-once.
>>>>>> - built from source.
>>>>>> 
>>>>>> On 11.12.2017 at 09:53, Piotr Nowojski <pi...@data-artisans.com> wrote:
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> +1 (non-binding)
>>>>>>> 
>>>>>>> I have:
>>>>>>> - verified that the Scala and Java sample projects are created and work
>>>>>>> properly and that the Quickstart docs are ok
>>>>>>> - verified that the ChildFirstClassLoader allows users to run their
>>>>>>> application with some custom Akka version
>>>>>>> - tested Kafka 0.11 end-to-end exactly-once
>>>>>>> - did some manual checks whether docs/distribution files are ok
>>>>>>> 
>>>>>>> Piotrek
>>>>>>> 
>>>>>>> On 8 Dec 2017, at 16:49, Stephan Ewen <se...@apache.org> wrote:
>>>>>>>> 
>>>>>>>> @Eron Given that this is actually an undocumented "internal" feature at
>>>>>>>> this point, I would not expect that it is used heavily beyond Pravega.
>>>>>>>> 
>>>>>>>> Unless you feel strongly that this is a major issue, I would go ahead with
>>>>>>>> the release...
>>>>>>>> 
>>>>>>>> On Fri, Dec 8, 2017 at 3:18 PM, Aljoscha Krettek <aljos...@apache.org> wrote:
>>>>>>>> 
>>>>>>>>> Thanks for the update! I would also say it's not a blocker but we should
>>>>>>>>> make sure that we don't break this after 1.4, then.
>>>>>>>>> 
>>>>>>>>> On 7. Dec 2017, at 22:37, Eron Wright <eronwri...@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>> Just discovered: the removal of Flink's Future (FLINK-7252) causes a
>>>>>>>>>> breaking change in connectors that use
>>>>>>>>>> `org.apache.flink.runtime.checkpoint.MasterTriggerRestoreHook`, because
>>>>>>>>>> `Future` is a type on one of the methods.
>>>>>>>>>> 
>>>>>>>>>> To my knowledge, this affects only the Pravega connector. Curious to know
>>>>>>>>>> whether any other connectors are affected. I don't think we (Dell EMC)
>>>>>>>>>> consider it a blocker but it will mean that the connector is Flink 1.4+.
>>>>>>>>>> 
>>>>>>>>>> Eron
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Thu, Dec 7, 2017 at 12:25 PM, Aljoscha Krettek <aljos...@apache.org> wrote:
>>>>>>>>>> 
>>>>>>>>>>> I just noticed that I made a copy-and-paste error and the last paragraph
>>>>>>>>>>> about the voting period should be this:
>>>>>>>>>>> 
>>>>>>>>>>> The vote will be open for at least 72 hours. It is adopted by majority
>>>>>>>>>>> approval, with at least 3 PMC affirmative votes.
>>>>>>>>>>> 
>>>>>>>>>>> Best,
>>>>>>>>>>> Aljoscha
>>>>>>>>>>> 
>>>>>>>>>>> On 7. Dec 2017, at 19:24, Bowen Li <bowen...@offerupnow.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> I agree that it shouldn't block the release. The doc website part is
>>>>>>>>>>>> even better!
>>>>>>>>>>>> 
>>>>>>>>>>>> On Thu, Dec 7, 2017 at 1:09 AM, Aljoscha Krettek <aljos...@apache.org> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Good catch, yes. This shouldn't block the release, though, since the
>>>>>>>>>>>>> doc is always built from the latest state of a release branch, i.e. the
>>>>>>>>>>>>> 1.4 doc on the website will update as soon as the doc on the
>>>>>>>>>>>>> release-1.4 branch is updated.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On 6. Dec 2017, at 20:47, Bowen Li <bowen...@offerupnow.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Aljoscha,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I found Flink's State doc and Javadoc are very ambiguous on what the
>>>>>>>>>>>>>> replacement of FoldingState is, which will confuse a lot of users. We
>>>>>>>>>>>>>> need to fix it in the 1.4 release.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I have submitted a PR at https://github.com/apache/flink/pull/5129
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Bowen
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Wed, Dec 6, 2017 at 5:56 AM, Aljoscha Krettek <aljos...@apache.org> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Hi everyone,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Please review and vote on release candidate #3 for the version 1.4.0,
>>>>>>>>>>>>>>> as follows:
>>>>>>>>>>>>>>> [ ] +1, Approve the release
>>>>>>>>>>>>>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> The complete staging area is available for your review, which includes:
>>>>>>>>>>>>>>> * JIRA release notes [1],
>>>>>>>>>>>>>>> * the official Apache source release and binary convenience releases to
>>>>>>>>>>>>>>> be deployed to dist.apache.org [2], which are signed with the key with
>>>>>>>>>>>>>>> fingerprint F2A67A8047499BBB3908D17AA8F4FD97121D7293 [3],
>>>>>>>>>>>>>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>>>>>>>>>>>>>> * source code tag "release-1.4.0-rc1" [5],
>>>>>>>>>>>>>>> * website pull request listing the new release [6].
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Please have a careful look at the website PR because I changed some
>>>>>>>>>>>>>>> wording and we're now also releasing a binary without Hadoop
>>>>>>>>>>>>>>> dependencies.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Please use this document for coordinating testing efforts: [7]
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> The only change between RC1 and this RC2 is that the source release
>>>>>>>>>>>>>>> package does not include the erroneously included binary Ruby dependencies
>>>>>>>>>>>>>>> of the documentation anymore. Because of this I would like to propose a
>>>>>>>>>>>>>>> shorter voting time and close the vote around the time that RC1 would have
>>>>>>>>>>>>>>> closed. This would mean closing by end of Wednesday. Please let me know if
>>>>>>>>>>>>>>> you disagree with this. The vote is adopted by majority approval, with at
>>>>>>>>>>>>>>> least 3 PMC affirmative votes.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Your friendly Release Manager
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12340533
>>>>>>>>>>>>>>> [2] http://people.apache.org/~aljoscha/flink-1.4.0-rc3/
>>>>>>>>>>>>>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>>>>>>>>>>>>>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1141
>>>>>>>>>>>>>>> [5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=8fb9635dd2e64dbb20887c84f646f02034b57cb1
>>>>>>>>>>>>>>> [6] https://github.com/apache/flink-web/pull/95
>>>>>>>>>>>>>>> [7] https://docs.google.com/document/d/1cOkycJwEKVjG_onnpl3bQNTq7uebh48zDtIJxceyU2E/edit?usp=sharing
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Pro-tip: you can create a settings.xml file with these contents:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> <settings>
>>>>>>>>>>>>>>>   <activeProfiles>
>>>>>>>>>>>>>>>     <activeProfile>flink-1.4.0</activeProfile>
>>>>>>>>>>>>>>>   </activeProfiles>
>>>>>>>>>>>>>>>   <profiles>
>>>>>>>>>>>>>>>     <profile>
>>>>>>>>>>>>>>>       <id>flink-1.4.0</id>
>>>>>>>>>>>>>>>       <repositories>
>>>>>>>>>>>>>>>         <repository>
>>>>>>>>>>>>>>>           <id>flink-1.4.0</id>
>>>>>>>>>>>>>>>           <url>https://repository.apache.org/content/repositories/orgapacheflink-1141/</url>
>>>>>>>>>>>>>>>         </repository>
>>>>>>>>>>>>>>>         <repository>
>>>>>>>>>>>>>>>           <id>archetype</id>
>>>>>>>>>>>>>>>           <url>https://repository.apache.org/content/repositories/orgapacheflink-1141/</url>
>>>>>>>>>>>>>>>         </repository>
>>>>>>>>>>>>>>>       </repositories>
>>>>>>>>>>>>>>>     </profile>
>>>>>>>>>>>>>>>   </profiles>
>>>>>>>>>>>>>>> </settings>
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> And reference that in your maven commands via --settings
>>>>>>>>>>>>>>> path/to/settings.xml. This is useful for creating a quickstart based on
>>>>>>>>>>>>>>> the staged release and for building against the staged jars.
>>>>>>>>>>>>>>> 
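>>>>>>>>>>>>>>> As a rough sketch (the archetype coordinates are the usual Flink
>>>>>>>>>>>>>>> quickstart ones and the version points at the staged artifacts; adjust
>>>>>>>>>>>>>>> as needed), generating a quickstart against the staging repository
>>>>>>>>>>>>>>> would look like:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>   mvn archetype:generate \
>>>>>>>>>>>>>>>     --settings path/to/settings.xml \
>>>>>>>>>>>>>>>     -DarchetypeGroupId=org.apache.flink \
>>>>>>>>>>>>>>>     -DarchetypeArtifactId=flink-quickstart-java \
>>>>>>>>>>>>>>>     -DarchetypeVersion=1.4.0
>>>>>>>>>>>>>>> 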
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>> 
>>>>>> 
>>> 
> 
