+1

- Verified hashes and signatures
- Ran example jobs on YARN with vanilla Hadoop versions (on 4 GCE nodes):
  * 2.7.1 with Flink Hadoop 2.7 binary, Scala 2.10 and 2.11
  * 2.6.2 with Flink Hadoop 2.6 binary, Scala 2.10
  * 2.4.1 with Flink Hadoop 2.4 binary, Scala 2.10
  * 2.3.0 with Flink Hadoop 2 binary, Scala 2.10
- Cancelled a restarting job via CLI and web interface
- Ran a simple Kafka read-write pipeline (a sketch follows below)
- Ran manual tests
- Ran examples on a local cluster
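For reference, a minimal sketch of the kind of Kafka read-write job used for this smoke test, assuming the Kafka 0.8.2 connector shipped with this release line. The connector class names and constructors are written as I recall them and may differ slightly between connector versions; topics, hosts, and the group id are illustrative placeholders, not the exact setup that was run:

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaReadWriteSmokeTest {

      public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Consumer settings for the Kafka 0.8.2 connector
        // (hosts and group id are placeholders).
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "zookeeper-host:2181");
        props.setProperty("bootstrap.servers", "kafka-host:9092");
        props.setProperty("group.id", "release-smoke-test");

        // Read strings from the input topic ...
        DataStream<String> input = env.addSource(
            new FlinkKafkaConsumer082<>("smoke-test-in", new SimpleStringSchema(), props));

        // ... and write them back unchanged to the output topic.
        input.addSink(new FlinkKafkaProducer<>(
            "kafka-host:9092", "smoke-test-out", new SimpleStringSchema()));

        env.execute("Kafka read-write smoke test");
      }
    }

Submitting a job like this to the YARN session and producing a few records into the input topic should show the same records arriving on the output topic.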
> On 25 Nov 2015, at 17:45, Aljoscha Krettek <aljos...@apache.org> wrote:
>
> +1
>
> I ran an example with a custom operator that processes high-volume Kafka
> input/output and has a large state size. I ran this on 10 GCE nodes.
>
>> On 25 Nov 2015, at 14:58, Till Rohrmann <till.rohrm...@gmail.com> wrote:
>>
>> Alright, then I withdraw my remark concerning testdata.avro.
>>
>> On Wed, Nov 25, 2015 at 2:56 PM, Stephan Ewen <se...@apache.org> wrote:
>>
>>> @Till I think the avro test data file is okay; the "no binaries" policy
>>> refers to binary executables, as far as I know.
>>>
>>> On Wed, Nov 25, 2015 at 2:54 PM, Till Rohrmann <till.rohrm...@gmail.com>
>>> wrote:
>>>
>>>> Checked checksums for the src release and the Hadoop 2.7 Scala 2.10 release
>>>>
>>>> Checked binaries in the source release
>>>> - contains ./flink-staging/flink-avro/src/test/resources/testdata.avro
>>>>
>>>> License
>>>> - no new files added which are relevant for licensing
>>>>
>>>> Built Flink and ran tests from the source release for Hadoop 2.5.1
>>>>
>>>> Checked that log files don't contain exceptions and out files are empty
>>>>
>>>> Ran all examples with the Hadoop 2.7 Scala 2.10 binaries via the FliRTT
>>>> tool on a 4-node standalone cluster and a YARN cluster
>>>>
>>>> Tested the plan visualizer
>>>>
>>>> Tested the flink command line client
>>>> - tested the info command
>>>> - tested the -p option
>>>>
>>>> Tested cluster HA in standalone mode => working
>>>>
>>>> Tested cluster HA on YARN (2.7.1) => working
>>>>
>>>> Except for the avro testdata file which is contained in the source
>>>> release, I didn't find anything.
>>>>
>>>> +1 for releasing and removing the testdata file for the next release.
>>>>
>>>> On Wed, Nov 25, 2015 at 2:33 PM, Robert Metzger <rmetz...@apache.org>
>>>> wrote:
>>>>
>>>>> +1
>>>>>
>>>>> - Built a Maven project against the staging repository
>>>>> - Started Flink on YARN on a CDH 5.4.5 / Hadoop 2.6.0-cdh5.4.5 cluster
>>>>>   with YARN and HDFS HA
>>>>> - Ran some Kafka (0.8.2.0) read/write experiments
>>>>> - Job cancellation with YARN is working ;)
>>>>>
>>>>> I found the following issue while testing:
>>>>> https://issues.apache.org/jira/browse/FLINK-3078 but it was already in
>>>>> 0.10.0 and it's not super critical because the JobManager container will
>>>>> be killed by YARN after a few minutes.
>>>>>
>>>>> I'll extend the vote until tomorrow, Thursday, November 26.
>>>>>
>>>>> On Tue, Nov 24, 2015 at 1:54 PM, Stephan Ewen <se...@apache.org>
>>>>> wrote:
>>>>>
>>>>>> @Gyula: I think it affects users, so it should definitely be fixed very
>>>>>> soon (either 0.10.1 or 0.10.2).
>>>>>>
>>>>>> Still checking whether Robert's current version fix solves it now, or
>>>>>> not...
>>>>>>
>>>>>> On Tue, Nov 24, 2015 at 1:46 PM, Vyacheslav Zholudev <
>>>>>> vyacheslav.zholu...@gmail.com> wrote:
>>>>>>
>>>>>>> I can confirm that the build works fine when increasing the maximum
>>>>>>> number of open files. Sorry for the confusion.
>>>>>>>
>>>>>>> --
>>>>>>> View this message in context:
>>>>>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Release-Apache-Flink-0-10-1-release-0-10-0-rc1-tp9296p9327.html
>>>>>>> Sent from the Apache Flink Mailing List archive mailing list archive
>>>>>>> at Nabble.com.
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>