Re: Remote TaskManager Connection Problem

2016-03-04 Thread Stephan Ewen
Hi!

During this registration phase, the TaskManager tries to tell the
JobManager that it is available.
If that fails, there are two possible reasons:

  1) Network communication to the port is not possible
  1.1) JobManager IP really not reachable (not the case, as you
described)
  1.2) TaskManager selected the wrong network interface to work with
  2) JobManager not listening


To look into 1.2, can you check the TaskManager log at the beginning, where
it says what interface/hostname the TaskManager selected to use?
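
(In the log you posted, the TaskManager announces the container hostname
140efeb188cc, which may not be resolvable from the JobManager's side.)
One way to work around a wrongly selected interface is to pin the
announced address explicitly in the TaskManager's flink-conf.yaml. A
sketch, assuming your version exposes the taskmanager.hostname key
(please verify against the configuration docs for your release):

  # inspect which addresses the TM container actually has
  ip addr show
  # in flink-conf.yaml on the TM: announce an address that is
  # routable from the JobManager (placeholder value below)
  taskmanager.hostname: <TM address reachable from the JM>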

Thanks,
Stephan






On Fri, Mar 4, 2016 at 2:48 AM, Deepak Jha  wrote:

> Hi All,
> I've created 2 Docker containers on my local machine, one running the
> JM (192.168.99.104) and the other running the TM. I was expecting to see
> the TM in the JM UI, but it did not happen. Looking into the TM logs, I
> see the following lines:
>
>
> 01:29:50,862 DEBUG org.apache.flink.runtime.taskmanager.TaskManager
>  - Starting TaskManager process reaper
> 01:29:50,868 INFO  org.apache.flink.runtime.filecache.FileCache
>  - User file cache uses directory
> /tmp/flink-dist-cache-be63f351-2bce-48ef-bbc4-fb0f40fecd49
> 01:29:51,093 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Starting TaskManager actor at
> akka://flink/user/taskmanager#1222392284.
> 01:29:51,095 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - TaskManager data connection information: 140efeb188cc
> (dataPort=6122)
> 01:29:51,096 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - TaskManager has 1 task slot(s).
> 01:29:51,097 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Memory usage stats: [HEAP: 386/494/494 MB, NON HEAP: 30/31/-1 MB
> (used/committed/max)]
> 01:29:51,104 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Trying to register at JobManager akka.tcp://
> flink@192.168.99.104:6123/user/jobmanager (attempt 1, timeout: 500
> milliseconds)
> 01:29:51,633 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Trying to register at JobManager akka.tcp://
> flink@192.168.99.104:6123/user/jobmanager (attempt 2, timeout: 1000
> milliseconds)
> 01:29:52,652 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Trying to register at JobManager akka.tcp://
> flink@192.168.99.104:6123/user/jobmanager (attempt 3, timeout: 2000
> milliseconds)
> 01:29:54,672 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Trying to register at JobManager akka.tcp://
> flink@192.168.99.104:6123/user/jobmanager (attempt 4, timeout: 4000
> milliseconds)
> 01:29:58,693 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Trying to register at JobManager akka.tcp://
> flink@192.168.99.104:6123/user/jobmanager (attempt 5, timeout: 8000
> milliseconds)
> 01:30:06,702 INFO  org.apache.flink.runtime.taskmanager.TaskManager
>  - Trying to register at JobManager akka.tcp://
> flink@192.168.99.104:6123/user/jobmanager (attempt 6, timeout: 16000
> milliseconds)
>
>
> However, from the TM I am able to reach the JM on port 6123:
> root@140efeb188cc:/# nc -v 192.168.99.104 6123
> Connection to 192.168.99.104 6123 port [tcp/*] succeeded!
>
>
> The masters file on the TM contains:
> 192.168.99.104:8080
>
> Has anyone faced this issue with a remote JM/TM combination?
>
> --
> Thanks,
> Deepak Jha
>


Re: Remote TaskManager Connection Problem

2016-03-04 Thread Stephan Ewen
The pull request https://github.com/apache/flink/pull/1758 should improve
the TaskManager's network interface selection.




Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Ufuk Celebi
+1

- Checked checksums and signatures (command sketch below)
- Verified no binaries in source release
- Checked that source release is building properly
- Build for custom Hadoop version
- Ran start scripts
- Checked log and out files
- Tested in local mode
- Tested in cluster mode
- Tested on cluster with HDFS
- Tested recovery with StreamingStateMachine with Kafka and RocksDB in
standalone HA mode
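
For reference, a minimal way to repeat the checksum/signature check (a
sketch; the artifact names below are assumptions, adjust them to the
files actually published under the RC URL):

  # import the Flink release KEYS, then verify signature and checksum
  wget http://www.apache.org/dist/flink/KEYS
  gpg --import KEYS
  gpg --verify flink-1.0.0-src.tgz.asc flink-1.0.0-src.tgz  # names assumed
  md5sum -c flink-1.0.0-src.tgz.md5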


On Thu, Mar 3, 2016 at 5:41 PM, Till Rohrmann  wrote:
> +1
>
> Checked that the sources don't contain binaries
> Tested cluster execution with flink/run and web client job submission
> Run all examples via FliRTT
> Tested Kafka 0.9
> Verified that quickstarts work with Eclipse and IntelliJ
> Run example with RemoteEnvironment
> Verified SBT quickstarts
>
> On Thu, Mar 3, 2016 at 3:43 PM, Aljoscha Krettek 
> wrote:
>
>> +1
>>
>> I think we have a winner. :D
>>
>> The “boring” tests from the checklist should still hold for this RC and I
>> now ran a custom windowing job with state on RocksDB on Hadoop 2.7 with
>> Scala 2.11. I used the Yarn HA mode and shot down both JobManagers and
>> TaskManagers and the job restarted successfully. I also verified that
>> savepoints work in this setup.
>>
>> > On 03 Mar 2016, at 14:08, Robert Metzger  wrote:
>> >
>> > Apparently I was not careful enough when writing the email.
>> > The release branch is "release-1.0.0-rc5" and it's the fifth RC.
>> >
>> > On Thu, Mar 3, 2016 at 2:01 PM, Robert Metzger 
>> wrote:
>> >
>> >> Dear Flink community,
>> >>
>> >> Please vote on releasing the following candidate as Apache Flink version
>> >> 1.0.0.
>> >>
>> >> This is the fourth RC.
>> >> Here is a document to report on the testing and release verification:
>> >>
>> >> https://docs.google.com/document/d/1hoQ5k4WQteNj2OoPwpQPD4ZVHrCwM1pTlUVww8ld7oY/edit#heading=h.2v6zy51pgj33
>> >>
>> >>
>> >> The commit to be voted on
>> >> (http://git-wip-us.apache.org/repos/asf/flink/commit/94cd554a):
>> >> 94cd554aee39413588bd30890dc7aed886b1c91d
>> >>
>> >> Branch:
>> >> release-1.0.0-rc4 (see
>> >> https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.0.0-rc5
>> >> )
>> >>
>> >> The release artifacts to be voted on can be found at:
>> >> http://home.apache.org/~rmetzger/flink-1.0.0-rc5/
>> >>
>> >> The release artifacts are signed with the key with fingerprint D9839159:
>> >> http://www.apache.org/dist/flink/KEYS
>> >>
>> >> The staging repository for this release can be found at:
>> >> https://repository.apache.org/content/repositories/orgapacheflink-1069
>> >>
>> >> -
>> >>
>> >> The vote is open until Friday and passes if a majority of at least three
>> >> +1 PMC votes are cast.
>> >>
>> >> The vote ends on Friday, March 4, 19:00 CET.
>> >>
>> >> [ ] +1 Release this package as Apache Flink 1.0.0
>> >> [ ] -1 Do not release this package because ...
>> >>
>> >>
>> >> --
>> >>
>> >> Changes since RC4:
>> >>
>> >> commit a79521fba60407ff5a800ec78fcfeee750d826d6
>> >> Author: Robert Metzger 
>> >> Date:   Thu Mar 3 09:32:40 2016 +0100
>> >>
>> >>[hotfix] Make 'force-shading' deployable
>> >>
>> >> commit 3adc51487aaae97469fc05e511be85d0a75a21d3
>> >> Author: Maximilian Michels 
>> >> Date:   Wed Mar 2 17:52:05 2016 +0100
>> >>
>> >>[maven] add module to force execution of Shade plugin
>> >>
>> >>This ensures that all properties of the root pom are properly
>> >>resolved by running the Shade plugin. Thus, our root pom does not
>> >>have to depend on a Scala version just because it holds the Scala
>> >>version properties.
>> >>
>> >> commit b862fd0b3657d8b9026a54782bad5a1fb71c19f4
>> >> Author: Márton Balassi 
>> >> Date:   Sun Feb 21 23:01:00 2016 +0100
>> >>
>> >>[FLINK-3422][streaming] Update tests reliant on hashing
>> >>
>> >> commit a049d80e8aef7f0d23fbc06d263fb3e7a0f2f05f
>> >> Author: Gabor Horvath 
>> >> Date:   Sun Feb 21 14:54:44 2016 +0100
>> >>
>> >>[FLINK-3422][streaming][api-breaking] Scramble HashPartitioner
>> >>hashes.
>> >>
>> >>
>> >>
>>
>>


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stephan Ewen
+1

Checked LICENSE and NOTICE files
Built against Hadoop 2.6, Scala 2.10, all tests are good (build invocation
sketched below)
Run local pseudo cluster with examples
Log files look good, no exceptions
Tested File State Backend
Ran Storm Compatibility Examples
   -> minor issue, one example fails (no release blocker in my opinion)
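
For reference, the Hadoop/Scala combination above can be rebuilt with
something like this (a sketch, assuming the standard hadoop.version Maven
property; the exact 2.6.x patch level is a stand-in):

  # build against a specific Hadoop version (Scala 2.10 is the default)
  mvn clean install -DskipTests -Dhadoop.version=2.6.0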




[jira] [Created] (FLINK-3577) Display anchor links when hovering over headers.

2016-03-04 Thread Jark Wu (JIRA)
Jark Wu created FLINK-3577:
--

 Summary: Display anchor links when hovering over headers.
 Key: FLINK-3577
 URL: https://issues.apache.org/jira/browse/FLINK-3577
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Jark Wu
Priority: Minor


It would be useful for sharing URLs if anchor links were displayed when
hovering over headers. Currently we must scroll up to the TOC, find the
section, click it, and then copy the URL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stefano Baghino
I won't cast a vote, as I'm not entirely sure this is just a local problem
(and according to the document the Scala 2.11 build has been checked);
however, I've checked out the `release-1.0-rc5` branch and ran `mvn clean
install -DskipTests -Dscala.version=2.11.7`, which failed on `flink-runtime`:

[ERROR]
/Users/Stefano/Projects/flink/flink-runtime/src/test/scala/org/apache/flink/runtime/jobmanager/JobManagerITCase.scala:703:
error: can't expand macros compiled by previous versions of Scala
[ERROR]   assert(cachedGraph2.isArchived)
[ERROR]   ^
[ERROR] one error found

Is the 2.11 build still compiling successfully according to your latest
tests?
I've tried running a clean build and re-running without skipping the tests,
but the issue persists.


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stephan Ewen
Hi!

To compile with Scala 2.11, please use the "-Dscala.version=2.11" flag.
Otherwise the 2.11-specific build profiles will not get properly activated.

Can you try that again?

Thanks,
Stephan



Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stephan Ewen
Sorry, the flag is "-Dscala-2.11"


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Chiwan Park
AFAIK, you should run `tools/change-scala-version.sh 2.11` before running `mvn 
clean install -DskipTests -Dscala-2.11`.

Regards,
Chiwan Park


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stefano Baghino
I'll try it immediately; thanks for the quick feedback, and sorry for the
intrusion. Should I add this to the docs? The flag seems to be
-Dscala.version=2.11.x in them:
https://ci.apache.org/projects/flink/flink-docs-master/setup/building.html#scala-versions


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stefano Baghino
Build successful, thank you.


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Ufuk Celebi
@Stefano: Yes, it would be great to have a fix in the docs, and pointers
on how to improve the docs for this.


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stephan Ewen
Are the docs actually wrong?

In the docs, it says to run the "tools/change-scala-version.sh 2.11" script
first (which implicitly adds the "-Dscala-2.11" flag).

I thought this problem arose because neither the flag was specified nor the
script was run.


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stefano Baghino
I'll switch back to Scala 2.10 and try again; I was sure I ran the script
before running the build, so maybe something went wrong and I didn't notice.


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Ufuk Celebi
You are right. Just checked the docs. They are correct.

@Stefano: the docs say that you first change the binary version via
the script and then you can specify the language version via
scala.version.
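
So the full sequence for a Scala 2.11 build (with an explicit language
version) is, if I read the docs correctly:

  tools/change-scala-version.sh 2.11
  mvn clean install -DskipTests -Dscala.version=2.11.7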

On Fri, Mar 4, 2016 at 11:51 AM, Stephan Ewen  wrote:
> Are the docs actually wrong?
>
> In the docs, it says to run the "tools/change-scala-version.sh 2.11" script
> first (which implicitly adds the "-Dscala-2.11" flag).
>
> I thought this problem arose because neither the flag was specified, nor
> the script run.

Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stefano Baghino
Ok, I switched back to 2.10 with the script and tried again: both the
explicit call to the script back to 2.11 and the implicit call via
-Dscala-2.11 worked. I really don't know what happened before. Thank you
for the help, and sorry for disturbing the voting process.

On Fri, Mar 4, 2016 at 12:12 PM, Ufuk Celebi  wrote:

> You are right. Just checked the docs. They are correct.
>
> @Stefano: the docs say that you first change the binary version via
> the script and then you can specify the language version via
> scala.version.

[jira] [Created] (FLINK-3578) Scala DataStream API does not support Rich Window Functions

2016-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3578:
---

 Summary: Scala DataStream API does not support Rich Window 
Functions
 Key: FLINK-3578
 URL: https://issues.apache.org/jira/browse/FLINK-3578
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 1.0.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
Priority: Critical
 Fix For: 1.1.0, 1.0.1


The Scala Window functions are currently wrapped in a way that RichFunction 
method calls are not forwarded.
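
A minimal sketch of the missing forwarding (hypothetical wrapper, not the
actual Flink class; shown for the RichFunction lifecycle methods only):

{code}
import org.apache.flink.api.common.functions.{AbstractRichFunction, RichFunction, RuntimeContext}
import org.apache.flink.configuration.Configuration

// Hypothetical wrapper: forward RichFunction lifecycle calls to the
// wrapped user function if (and only if) that function is itself rich.
class ForwardingWrapper[F](val wrapped: F) extends AbstractRichFunction {

  override def setRuntimeContext(ctx: RuntimeContext): Unit = {
    super.setRuntimeContext(ctx)
    wrapped match {
      case rich: RichFunction => rich.setRuntimeContext(ctx)
      case _ => // plain functions carry no runtime context
    }
  }

  override def open(parameters: Configuration): Unit = wrapped match {
    case rich: RichFunction => rich.open(parameters)
    case _ => // nothing to open
  }

  override def close(): Unit = wrapped match {
    case rich: RichFunction => rich.close()
    case _ => // nothing to close
  }
}
{code}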



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Stephan Ewen
@Stefano: No problem. Always happy when people test releases :-)

On Fri, Mar 4, 2016 at 12:27 PM, Stefano Baghino <
stefano.bagh...@radicalbit.io> wrote:

> Ok, I switched back to 2.10 with the script and tried again: both the
> explicit call to the script back to 2.11 and the implicit call via
> -Dscala-2.11 worked. I really don't know what happened before. Thank you
> for the help, and sorry for disturbing the voting process.

[jira] [Created] (FLINK-3579) Improve String concatenation

2016-03-04 Thread Timo Walther (JIRA)
Timo Walther created FLINK-3579:
---

 Summary: Improve String concatenation
 Key: FLINK-3579
 URL: https://issues.apache.org/jira/browse/FLINK-3579
 Project: Flink
  Issue Type: Bug
  Components: Table API
Reporter: Timo Walther
Priority: Minor


Concatenation of a String and a non-String does not work properly.

e.g. {{f0 + 42}} leads to a RelBuilder exception.

The ExpressionParser does not like {{f0 + 42.cast(STRING)}} either.
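
For reference, the expected semantics (illustrated here with plain Scala;
Java string concatenation behaves the same way) would be:

{code}
// Illustration only: a non-String operand should be converted to a
// String before concatenation, as in plain Scala/Java.
val f0: String = "answer: "
val result: String = f0 + 42 // "answer: 42"
{code}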



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3580) Reintroduce Date/Time and implement scalar functions for it

2016-03-04 Thread Timo Walther (JIRA)
Timo Walther created FLINK-3580:
---

 Summary: Reintroduce Date/Time and implement scalar functions for 
it
 Key: FLINK-3580
 URL: https://issues.apache.org/jira/browse/FLINK-3580
 Project: Flink
  Issue Type: Sub-task
  Components: Table API
Reporter: Timo Walther
Assignee: Timo Walther


This task includes:

{code}
DATETIME_PLUS
EXTRACT_DATE
FLOOR
CEIL
CURRENT_TIME
CURRENT_TIMESTAMP
LOCALTIME
LOCALTIMESTAMP
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3581) Add Non-Keyed Window Trigger

2016-03-04 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-3581:
---

 Summary: Add Non-Keyed Window Trigger 
 Key: FLINK-3581
 URL: https://issues.apache.org/jira/browse/FLINK-3581
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek


The current window Trigger is per key, meaning every window has a (logical) 
Trigger for every key in the window, i.e. there will be state and time triggers 
per key per window.

For some types of windows, e.g. those based on time, it is possible to use a 
single Trigger that fires for all keys at the same time. In that case we would 
save a lot of space on state and timers, which makes state snapshots a lot 
smaller.
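
A back-of-envelope illustration of the scaling argument (all numbers are 
made up):

{code}
// Hypothetical sizes, purely to illustrate the savings:
val keys = 1000000L // distinct keys seen by the window operator
val windows = 10L   // concurrently active time windows

val perKeyTriggers = keys * windows // today: 10,000,000 trigger states/timers
val perWindowTriggers = windows     // with a non-keyed trigger: 10
{code}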



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3582) Add Iterator over State for All Keys in Partitioned State

2016-03-04 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-3582:
---

 Summary: Add Iterator over State for All Keys in Partitioned State
 Key: FLINK-3582
 URL: https://issues.apache.org/jira/browse/FLINK-3582
 Project: Flink
  Issue Type: Sub-task
  Components: Streaming
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek


Having a non-keyed trigger requires that we have a way to iterate over the 
state for all keys, so that we can emit window results.

This should only be for internal use, but maybe users also want to iterate over 
the state for all keys. 

As a corollary, we then also need a way to drop state for all keys at the same 
time.
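
One possible shape for such an iterator (purely hypothetical, not an 
existing API):

{code}
// Hypothetical sketch: iterate over the partitioned state of all keys,
// plus the corollary bulk-drop mentioned above.
trait AllKeysStateIterator[K, S] extends Iterator[(K, S)] {
  def removeAll(): Unit // drop the state for all keys at the same time
}
{code}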



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[RESULT][VOTE] Release Apache Flink 1.0.0 (RC5)

2016-03-04 Thread Robert Metzger
Thanks for voting! The vote passes.

The following votes have been cast:

+1 votes: 4

Aljoscha
Till
Ufuk
Stephan

No -1 votes.

I will now release the binaries to the mirrors and the artifacts to Maven
central.

Others from the community are working on blog posts announcing the
release next week (probably Tuesday).
Users can of course start using Flink 1.0.0, but I would suggest that we
hold off on making a lot of noise around the release until we have updated
the website and published a release announcement on the Flink blog. We also
have to wait until all mirrors are synced and the artifacts are on Maven
central.




On Fri, Mar 4, 2016 at 12:40 PM, Stephan Ewen  wrote:

> @Stefano: No problem. Always happy when people test releases :-)
>

[jira] [Created] (FLINK-3583) Configuration not visible in gui when job is running

2016-03-04 Thread JIRA
Michał Fijołek created FLINK-3583:
-

 Summary: Configuration not visible in gui when job is running
 Key: FLINK-3583
 URL: https://issues.apache.org/jira/browse/FLINK-3583
 Project: Flink
  Issue Type: Bug
  Components: Web Client, Webfrontend
Affects Versions: 1.0.0
Reporter: Michał Fijołek
Priority: Minor
 Fix For: 1.0.1


Hello.
I can see that the configuration is not visible in the frontend while a job is 
running.
screenshot: http://imgur.com/9pwlcLz
When I cancel the job, the configuration appears, but `User configuration` is 
still not visible, although the server sends `user-config`.
screenshot: http://imgur.com/GNAk0ei
The REST call http://localhost:8081/jobs/jobId/config returns
{code}
{
"jid": "71e47c0772c7d62b81f7e3385d429cca",
"name": "Flink Streaming Job",
"execution-config": {
"execution-mode": "PIPELINED",
"restart-strategy": "Restart with fixed delay (5000 ms). #3 restart 
attempts.",
"job-parallelism": 1,
"object-reuse-mode": false,
"user-config": {
"jobmanager.web.log.path": "./data/dummyLogFile.txt",
"local.start-webserver": "true"
}
}
}
{code}

So there are two problems:

_1. The user cannot see the configuration while the job is running, which is 
not really useful_
I can see that it happens because of
{code}
ExecutionConfig ec = graph.getExecutionConfig();
if (ec != null) { ... }
{code}
in {{JobConfigHandler}}. Why is {{executionConfig}} marked as "// -- Fields 
that are only relevant for archived execution graphs" and can it be 
initialized earlier, let's say in the ExecutionGraph constructor?

_2. Even when the configuration is visible, the frontend probably parses it 
badly_
This should be simple to fix.

I'm happy to implement both fixes if you don't mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Fix version

2016-03-04 Thread Greg Hogan
Hi Max,

You are right, there is no need to unlabel old fix versions.

My thought was to treat the fix version like "inbox zero". There is already
an emphasis on closing blockers, but few bugs and fewer features are that
severe. Pull requests can be long-lived and require a ready resolution.

Comparing the number of scheduled and unscheduled issues, I think this is
already the common practice.

Greg

On Tue, Mar 1, 2016 at 5:20 AM, Maximilian Michels  wrote:

> Hi Greg,
>
> I agree that we should encourage people to use the "fix version" field
> more carefully. I think we need to agree on how we use "fix version".
> How about going through the existing "fix version" tagged issues
> instead of just removing the tag? I do think that the tagged issues
> represent overall more pressing issues than the non-tagged.
>
> Cheers,
> Max
>
> On Thu, Feb 25, 2016 at 10:21 AM, Robert Metzger 
> wrote:
> > Hi Greg,
> >
> > I agree with you that the "fix version" field for unresolved issues is
> > probably used by issue creators to express their wish for fast
> resolution.
> > I also saw some cases where issues were reopened.
> >
> > I agree with your suggestion to clear the "fix version" field once 1.0.0
> > has been released.
> >
> > On Mon, Feb 22, 2016 at 4:43 PM, Greg Hogan  wrote:
> >
> >> Hi,
> >>
> >> With 1.0.0 imminent there are 112 tickets with a "fix version" of 1.0.0,
> >> the earliest from 2014. From the ticket logs it looks like we typically
> >> bump the fix version once the target release has passed. Would it be
> better
> >> to wait to assign a fix version until achieving some combination of
> >> severity, acceptance, and imminence?
> >>
> >> For example, a new feature might go unscheduled until a pull request is
> >> available, whereas a blocker is by definition intended for the next
> >> release.
> >>
> >> A corollary would be to unschedule all open / in progress / reopened
> >> tickets once their "fix version" has been released. This would present a
> >> clean slate for the next round of commits.
> >>
> >> Greg
> >>
>


Tuple performance and the curious JIT compiler

2016-03-04 Thread Greg Hogan
I am noticing what looks like the same drop-off in performance when
introducing TupleN subclasses as described in "Understanding the JIT and
tuning the implementation" [1].

I start my single-node cluster, run an algorithm which relies purely on
Tuples, and measure the runtime. I then execute a separate jar which runs
essentially the same algorithm but uses Gelly's Edge (which subclasses
Tuple3 but does not add any extra fields), and now both the Tuple and Edge
algorithms take twice as long.
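
For reference, the Edge-like shape is just this (a minimal sketch in Scala,
assuming Flink's Java Tuple3; EdgeLike is a made-up name):

import org.apache.flink.api.java.tuple.Tuple3

// A Tuple3 subclass that adds no fields, analogous to Gelly's Edge.
// Merely loading a second concrete Tuple subtype can make previously
// monomorphic call sites (e.g. the field accessors) polymorphic, which
// can defeat JIT inlining and would be consistent with the 2x slowdown.
class EdgeLike[K, V](src: K, trg: K, value: V)
    extends Tuple3[K, K, V](src, trg, value) {
  def this() = this(null.asInstanceOf[K], null.asInstanceOf[K],
                    null.asInstanceOf[V])
}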

Has this been previously discussed? If not I can work up a demonstration.

[1] https://flink.apache.org/news/2015/09/16/off-heap-memory.html

Greg