The ScalaEnumSerializerConfigSnapshot would need a version bump regardless of 
whether the fixes are included in 1.3.1.
In other words, we still need to bump the version even if we include them in 1.3.1.
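To make the shape of that change concrete, here is a rough sketch (hypothetical, simplified code, not Flink's actual ScalaEnumSerializerConfigSnapshot): once the format version is bumped, the read path has to dispatch on the version stored in the snapshot and keep a branch that can still parse the legacy layout. The class name, version numbers, and field layout below are all illustrative assumptions.

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch of a versioned config snapshot: the write path always
// emits the current format, while the read path keeps a legacy branch.
class EnumSnapshotSketch {
    static final int CURRENT_VERSION = 2; // bumped from 1 after the format change

    // Illustrative format change: v1 stored only the enum constant names in
    // order; v2 additionally stores each name's ordinal explicitly.
    static void write(LinkedHashMap<String, Integer> valuesToOrdinals,
                      DataOutputStream out) throws IOException {
        out.writeInt(CURRENT_VERSION);
        out.writeInt(valuesToOrdinals.size());
        for (Map.Entry<String, Integer> e : valuesToOrdinals.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeInt(e.getValue());
        }
    }

    static LinkedHashMap<String, Integer> read(DataInputStream in) throws IOException {
        int version = in.readInt();
        int count = in.readInt();
        LinkedHashMap<String, Integer> result = new LinkedHashMap<>();
        if (version == 1) {
            // legacy format: names only; reconstruct ordinals from position
            for (int i = 0; i < count; i++) {
                result.put(in.readUTF(), i);
            }
        } else if (version == 2) {
            for (int i = 0; i < count; i++) {
                result.put(in.readUTF(), in.readInt());
            }
        } else {
            throw new IOException("Unknown snapshot version " + version);
        }
        return result;
    }
}
```

This is why including the fix forces a version bump either way: snapshots written before the fix carry the old version number, so the restore path has to keep understanding both.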

I’m not against including FLINK-6921 and FLINK-6948 in 1.3.1, but then, as 
usual, the argument would be that the problem has always been there and is not 
specific to this release.
I personally tend to favor delaying a release a bit more to get in fixes for 
issues we already know about.

I’ll look at the PRs for FLINK-6921 and FLINK-6948 now, and merge them soon. We 
could probably have an RC2 with a shorter vote duration?

Best,
Gordon

On 19 June 2017 at 7:10:11 PM, Till Rohrmann (trohrm...@apache.org) wrote:

I think the EnumValueSerializer [1, 2] is broken in the current RC. This
basically means that Flink programs won’t properly notice that state
migration is required, or will fail with obscure exceptions at the migration
check or at runtime.

This will definitely be reason enough for another bug fix release if we
don’t want to include the fixes in 1.3.1. If we include the fixes in 1.3.2,
then this will require a version bump for the
ScalaEnumSerializerConfigSnapshot because we have to change the format.
This also entails code for backwards compatibility.

[1] https://issues.apache.org/jira/browse/FLINK-6921  
[2] https://issues.apache.org/jira/browse/FLINK-6948  

Cheers,  
Till  

On Mon, Jun 19, 2017 at 10:34 AM, Dawid Wysakowicz <  
wysakowicz.da...@gmail.com> wrote:  

> +1  
>  
> - built from source (2.10, 2.11)  
> - checked aggregate function with AggregateFunction return type different  
> than stream type  
>  
> Z pozdrowieniami! / Cheers!  
>  
> Dawid Wysakowicz  
>  
> *Data/Software Engineer*  
>  
> Skype: dawid_wys | Twitter: @OneMoreCoder  
>  
> <http://getindata.com/>  
>  
> 2017-06-19 7:15 GMT+02:00 Tzu-Li (Gordon) Tai <tzuli...@apache.org>:  
>  
> > +1  
> >  
> > Tested the following blockers of 1.3.1:  
> >  
> > Serializers & checkpointing  
> > - Verified Scala jobs using Scala types as state (Scala collections, case  
> > classes, Either, Try, etc.) can restore from savepoints taken with Flink  
> > 1.2.1 & 1.3.1. Tested with Scala 2.10 & 2.11.  
> > - Tested restore of POJO types as state, behavior & error messages for  
> > changed POJO types consistent across different state backends  
> > - Tested stream join with checkpointing enabled  
> > - Sharing static state descriptor (w/ stateful KryoSerializer) across  
> > tasks did not reveal any issues  
> >  
> > Elasticsearch connector  
> > - ES 5 connector artifacts exist in staging repo  
> > - Tested cluster execution with ES sink (2.3.5, 2.4.1, 5.1.2), no  
> > dependency conflicts, successful  
> >  
> > Flink CEP  
> > - The out-of-order matched events issue is now resolved  
> >  
> > - Ran local build + test on MacOS (-Dscala-2.10, -Dscala-2.11), successful  
> > - LICENSES untouched since 1.3.0  
> > - No new dependencies  
> >  
> > Best,  
> > Gordon  
> >  
> > On 14 June 2017 at 10:14:39 PM, Robert Metzger (rmetz...@apache.org)  
> > wrote:  
> >  
> > Dear Flink community,  
> >  
> > Please vote on releasing the following candidate as Apache Flink version  
> > 1.3.1.  
> >  
> > The commit to be voted on:  
> > http://git-wip-us.apache.org/repos/asf/flink/commit/7cfe62b9  
> >  
> > Branch:  
> > release-1.3.1-rc1  
> >  
> > The release artifacts to be voted on can be found at:  
> > http://people.apache.org/~rmetzger/flink-1.3.1-rc1/  
> >  
> > The release artifacts are signed with the key with fingerprint D9839159:  
> > http://www.apache.org/dist/flink/KEYS  
> >  
> > The staging repository for this release can be found at:  
> > https://repository.apache.org/content/repositories/orgapacheflink-1124  
> >  
> >  
> > -------------------------------------------------------------  
> >  
> >  
> > The vote ends on Monday (5pm CEST), June 19th, 2017.  
> >  
> > [ ] +1 Release this package as Apache Flink 1.3.1  
> > [ ] -1 Do not release this package, because ...  
> >  
>  
