Dear community,

I would like to vote +1, but during testing I've noticed that we
should not have reverted FLINK-4154 (correction of the murmur hash)
for this release.

We had a wrong murmur hash implementation in 1.0, which was fixed for
1.1. We reverted that fix because we thought it broke savepoint
compatibility between 1.0 and 1.1. That revert is part of RC1. It
turns out, though, that there are other savepoint compatibility
problems which are independent of the hash function. Therefore I
would like to add the fix back, create a new RC with only this extra
commit, and extend the vote by one day.

Would you be OK with this? Most testing results should be applicable
to RC2, too.
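
For context on why the hash function matters for savepoints: keyed
state is assigned to parallel operator instances by hashing the key,
so a changed hash function routes the same key to a different
instance after a restore. The following minimal Java sketch
(hypothetical names, not Flink's actual code) illustrates the effect;
fixedHash is the standard MurmurHash3 32-bit finalizer, buggyHash
merely stands in for a flawed variant.

// Hypothetical illustration, not Flink's implementation: a changed
// hash function changes the key-to-partition assignment, which is
// what makes the murmur hash correction relevant for savepoints.
public class HashCompatDemo {

    // Standard MurmurHash3 32-bit finalizer (fmix32).
    static int fixedHash(int key) {
        int h = key;
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Stand-in for a wrong variant (different shift), purely to
    // illustrate the mismatch.
    static int buggyHash(int key) {
        int h = key;
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 15; // wrong shift: different result than fixedHash
        return h;
    }

    public static void main(String[] args) {
        int parallelism = 4;
        for (int key = 0; key < 5; key++) {
            int oldPartition = Math.abs(buggyHash(key) % parallelism);
            int newPartition = Math.abs(fixedHash(key) % parallelism);
            // State written under oldPartition would be looked up
            // under newPartition after the hash change.
            System.out.printf("key=%d old=%d new=%d%n",
                    key, oldPartition, newPartition);
        }
    }
}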

I ran the following tests:

+ Checked checksums and signatures
+ Verified that there are no binaries in the source release
+ Built (clean verify) with the default Hadoop version
+ Built (clean verify) with Hadoop 2.6.1
+ Checked the build for Scala 2.11
+ Checked all POMs
+ Read README.md
+ Examined OUT and LOG files
+ Checked paths with spaces (found a non-blocking issue with the YARN CLI)
+ Checked local mode, cluster mode, and a multi-node cluster
+ Tested HDFS split assignment
+ Tested the bin/flink command line
+ Tested recovery (master and worker failure) in standalone mode with
RocksDB and HDFS
+ Tested the Scala/SBT giter8 template
+ Tested metrics (user-defined metrics, multiple JMX reporters, JM
metrics, user-defined reporter); a sketch of a user-defined metric
follows below
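
As a pointer for reproducing the metrics test, here is a minimal
sketch of a user-defined metric. The class and metric names are made
up for illustration; the metric group API itself is the one
introduced with 1.1.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Minimal sketch: register a user-defined counter on the runtime
// metric group and increment it per record.
public class CountingMapper extends RichMapFunction<String, String> {

    private transient Counter counter;

    @Override
    public void open(Configuration parameters) {
        // "myCounter" is an arbitrary name for illustration.
        this.counter = getRuntimeContext()
                .getMetricGroup()
                .counter("myCounter");
    }

    @Override
    public String map(String value) {
        counter.inc();
        return value;
    }
}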

– Ufuk


On Tue, Aug 2, 2016 at 10:13 AM, Till Rohrmann <trohrm...@apache.org> wrote:
> I can confirm Aljoscha's findings concerning building Flink with Hadoop
> version 2.6.0 using Maven 3.3.9. Aljoscha is right that it is indeed a
> Maven 3.3 issue. If you build flink-runtime twice, then everything goes
> through because the shaded curator Flink dependency is installed during
> the first run.
>
> On Tue, Aug 2, 2016 at 5:09 AM, Aljoscha Krettek <aljos...@apache.org>
> wrote:
>
>> @Ufuk: 3.3.9, that's probably it because that messes with the shading,
>> right?
>>
>> @Stephan: Yes, I even did a "rm -r .m2/repository". But the Maven
>> version is most likely the reason.
>>
>> On Mon, 1 Aug 2016 at 10:59 Stephan Ewen <se...@apache.org> wrote:
>>
>> > @Aljoscha: Have you made sure you have a clean maven cache (remove the
>> > .m2/repository/org/apache/flink folder)?
>> >
>> > On Mon, Aug 1, 2016 at 5:56 PM, Aljoscha Krettek <aljos...@apache.org>
>> > wrote:
>> >
>> > > I tried it again now. I did:
>> > >
>> > > rm -r .m2/repository
>> > > mvn clean verify -Dhadoop.version=2.6.0
>> > >
>> > > It failed again, also with Hadoop versions 2.6.1 and 2.6.3.
>> > >
>> > > On Mon, 1 Aug 2016 at 08:23 Maximilian Michels <m...@apache.org> wrote:
>> > >
>> > > > This is also a major issue for batch with off-heap memory and memory
>> > > > preallocation turned off:
>> > > > https://issues.apache.org/jira/browse/FLINK-4094
>> > > > Not hard to fix, though, as we simply need to reliably clear the
>> > > > direct memory instead of relying on garbage collection. Another
>> > > > possible fix is to maintain memory pools independently of the
>> > > > preallocation mode. I think this is fine because preallocation:false
>> > > > suggests that no memory will be preallocated, but not that memory
>> > > > will be freed once acquired.
>> > > >
>> > >
>> >
>>
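
Regarding the direct memory issue quoted above (FLINK-4094): on Java
8 a direct buffer can be freed eagerly via its internal cleaner
instead of waiting for the garbage collector. A rough sketch of that
technique follows; it relies on internal sun.* classes (not portable
to Java 9+) and is only an illustration of "reliably clear the direct
memory", not necessarily how the actual fix is implemented.

import java.nio.ByteBuffer;

// Sketch (Java 8 only): eagerly release a direct buffer's native
// memory instead of relying on GC timing. Uses internal APIs
// (sun.nio.ch.DirectBuffer, sun.misc.Cleaner).
public class DirectMemoryRelease {

    static void free(ByteBuffer buffer) {
        if (buffer.isDirect()) {
            sun.misc.Cleaner cleaner =
                    ((sun.nio.ch.DirectBuffer) buffer).cleaner();
            if (cleaner != null) { // slices/duplicates have no cleaner
                cleaner.clean(); // frees the native memory immediately
            }
        }
    }

    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(16 * 1024 * 1024);
        // ... use the buffer ...
        free(buffer); // deterministic release, no GC dependence
    }
}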
