[jira] [Created] (FLINK-3530) Kafka09ITCase.testBigRecordJob fails on Travis

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3530:


 Summary: Kafka09ITCase.testBigRecordJob fails on Travis
 Key: FLINK-3530
 URL: https://issues.apache.org/jira/browse/FLINK-3530
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.0.0
Reporter: Till Rohrmann


The test case {{Kafka09ITCase.testBigRecordJob}} failed on Travis.

https://s3.amazonaws.com/archive.travis-ci.org/jobs/112049279/log.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3531) KafkaShortRetention09ITCase.testAutoOffsetReset fails on Travis

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3531:


 Summary: KafkaShortRetention09ITCase.testAutoOffsetReset fails on 
Travis
 Key: FLINK-3531
 URL: https://issues.apache.org/jira/browse/FLINK-3531
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.0.0
Reporter: Till Rohrmann


The test case {{KafkaShortRetention09ITCase.testAutoOffsetReset}} failed on 
Travis.

https://s3.amazonaws.com/archive.travis-ci.org/jobs/112049279/log.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Inconvenient (unforeseen?) consequences of PR #1683

2016-02-29 Thread Till Rohrmann
Hi Vasia,

that is because you're missing the flink-gelly dependency. If you just
build flink-gelly-examples, then it won't contain flink-gelly because it is
not a fat jar. You either have to install flink-gelly on your cluster or
package it in the final user jar.

Cheers,
Till
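
The second option Till mentions, packaging flink-gelly into the final user jar, amounts to declaring it with the default compile scope in the user project's POM so the fat-jar (shade/assembly) build includes it. A minimal sketch; group id, artifact id, and version are assumptions based on the 1.0.0 release candidate with Scala 2.10:

```xml
<!-- user project pom.xml: bundle flink-gelly into the submitted fat jar.
     Coordinates are illustrative for the 1.0.0 RC with Scala 2.10. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-gelly_2.10</artifactId>
    <version>1.0.0</version>
    <!-- default compile scope, so the fat-jar build packages it -->
</dependency>
```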

On Sat, Feb 27, 2016 at 7:19 PM, Vasiliki Kalavri  wrote:

> Hi squirrels,
>
> sorry I've been slow to respond to this, but I'm now testing RC1 and I'm a
> bit confused with this change.
>
> So far, the easiest way to run a Gelly example on a cluster was to package
> and submit the Gelly jar.
> Now, since the flink-gelly project doesn't contain the examples anymore, I
> tried running a Gelly example by packaging the flink-gelly-examples jar
> (mvn package). However, this gives me a ClassNotFoundException,
> e.g. for SingleSourceShortestPaths the following:
>
> java.lang.RuntimeException: Could not look up the main(String[]) method
> from the class org.apache.flink.graph.examples.SingleSourceShortestPaths:
> org/apache/flink/graph/spargel/MessagingFunction
> at
>
> org.apache.flink.client.program.PackagedProgram.hasMainMethod(PackagedProgram.java:478)
> at
>
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:216)
> at org.apache.flink.client.CliFrontend.buildProgram(CliFrontend.java:922)
> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:301)
> at
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1189)
> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1239)
> Caused by: java.lang.NoClassDefFoundError:
> org/apache/flink/graph/spargel/MessagingFunction
> at java.lang.Class.getDeclaredMethods0(Native Method)
> at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
> at java.lang.Class.getMethod0(Class.java:2774)
> at java.lang.Class.getMethod(Class.java:1663)
> at
>
> org.apache.flink.client.program.PackagedProgram.hasMainMethod(PackagedProgram.java:473)
> ... 5 more
> Caused by: java.lang.ClassNotFoundException:
> org.apache.flink.graph.spargel.MessagingFunction
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 10 more
>
>
> What am I missing here?
>
> Thanks!
> -Vasia.
>
> On 26 February 2016 at 14:10, Márton Balassi 
> wrote:
>
> > Thanks, +1.
> >
> > On Fri, Feb 26, 2016 at 12:35 PM, Stephan Ewen  wrote:
> >
> > > Hi!
> > >
> > > I think it is a release blocker.
> > >
> > > That was a change to make it possible to use Flink with SBT (and make
> it
> > > easier to use it with Maven).
> > > Just had an offline chat with Till, and we suggest the following
> > solution:
> > >
> > > All libraries should be split into two projects:
> > >   - library
> > >   - library-examples
> > >
> > > The "library" project has the core flink dependencies as "provided"
> > > The "library-examples" project has both the "library" and the core
> flink
> > > dependencies with scope "compile"
> > >
> > > That way the example should run in the IDE out of the box, and users
> that
> > > reference the libraries will still get the correct packaging (include
> the
> > > library in the user jar, but not additionally the core flink jars).
> > >
> > > Greetings,
> > > Stephan
> > >
> > >
> > > On Thu, Feb 25, 2016 at 3:45 PM, Márton Balassi <
> > balassi.mar...@gmail.com>
> > > wrote:
> > >
> > > > Issued JIRA ticket 3511 to make it referable in other discussions.
> [1]
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-3511
> > > >
> > > > On Thu, Feb 25, 2016 at 3:36 PM, Márton Balassi <
> > > balassi.mar...@gmail.com>
> > > > wrote:
> > > >
> > > > > Recent changes to the build [1] moved many libraries' core
> > > > > dependencies (the ones included in the flink-dist fat jar) to the
> > > > > provided scope.
> > > > >
> > > > > The reasoning was that when submitting to the Flink cluster the
> > > > > application already has these dependencies, while when a user writes
> > > > > a program against these libraries she will include the core
> > > > > dependencies explicitly anyway.
> > > > >
> > > > > There is, however, one other usage scenario: when someone tries to
> > > > > run an application defined in these libraries that depends on the
> > > > > core jars. To give an example, if you were to run the Gelly
> > > > > ConnectedComponents example [2] from an IDE after importing Flink
> > > > > (or running with java -jar without including the flink fat jar in
> > > > > the classpath), you would receive the following class-not-found
> > > > > exception as per the current master:
> > > > >
> > > > > Exception in thread "main" java.lang.NoClassDefFoundError:
> > > > > org/apache/flink/api/common/ProgramDescription
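
Stephan's proposed library/library-examples split, quoted above, can be sketched as POM fragments; coordinates and versions are illustrative assumptions, the point is only the scopes:

```xml
<!-- "library" module (e.g. flink-gelly): core Flink dependencies are provided,
     so they are not packaged again into user jars that depend on the library. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.0.0</version>
    <scope>provided</scope>
</dependency>

<!-- "library-examples" module: both the library and the core Flink
     dependencies use the default compile scope, so examples run in the IDE. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-gelly_2.10</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.0.0</version>
</dependency>
```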

[jira] [Created] (FLINK-3532) Flink-gelly-examples jar contains an underscore instead of a hyphen

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3532:


 Summary: Flink-gelly-examples jar contains an underscore instead 
of a hyphen
 Key: FLINK-3532
 URL: https://issues.apache.org/jira/browse/FLINK-3532
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The newly introduced {{flink-gelly-examples}} module uses an underscore to 
separate {{examples}} from {{flink-gelly}} in its artifact id. This is 
inconsistent and, thus, the artifact id {{flink-gelly_examples_2.10}} should be 
changed to {{flink-gelly-examples_2.10}}.
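
The rename only touches the module's artifact id, roughly (sketch, assuming the Scala suffix is spelled out as in the artifact name above):

```xml
<!-- flink-gelly-examples/pom.xml: hyphenated, consistent artifact id -->
<artifactId>flink-gelly-examples_2.10</artifactId>
```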



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Inconvenient (unforeseen?) consequences of PR #1683

2016-02-29 Thread Vasiliki Kalavri
Thanks Till! Then, we'd better update the docs, too. Currently both links
to examples and cluster execution instructions are broken.
I'll create an issue.

-V.

On 29 February 2016 at 11:54, Till Rohrmann  wrote:

> Hi Vasia,
>
> that is because you're missing the flink-gelly dependency. If you just
> build flink-gelly-examples, then it won't contain flink-gelly because it is
> not a fat jar. You either have to install flink-gelly on your cluster or
> package it in the final user jar.
>
> Cheers,
> Till

Re: Inconvenient (unforeseen?) consequences of PR #1683

2016-02-29 Thread Till Rohrmann
Good catch :-) I mean we could also change the behaviour to include
flink-gelly in the flink-gelly-examples module.

On Mon, Feb 29, 2016 at 12:13 PM, Vasiliki Kalavri <
vasilikikala...@gmail.com> wrote:

> Thanks Till! Then, we'd better update the docs, too. Currently both links
> to examples and cluster execution instructions are broken.
> I'll create an issue.
>
> -V.

Re: Inconvenient (unforeseen?) consequences of PR #1683

2016-02-29 Thread Vasiliki Kalavri
In my opinion, the fat jar solution is easier than having to copy the Gelly
jar to all task managers.
I would be in favor of including flink-gelly in the flink-examples module,
but we can also simply document the current behavior nicely.

-V.

On 29 February 2016 at 12:15, Till Rohrmann  wrote:

> Good catch :-) I mean we could also change the behaviour to include
> flink-gelly in the flink-gelly-examples module.

[jira] [Created] (FLINK-3533) Update the Gelly docs wrt examples and cluster execution

2016-02-29 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-3533:


 Summary: Update the Gelly docs wrt examples and cluster execution
 Key: FLINK-3533
 URL: https://issues.apache.org/jira/browse/FLINK-3533
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Gelly
Affects Versions: 1.0.0
Reporter: Vasia Kalavri


Links to examples and cluster execution instructions are currently broken in 
the Gelly docs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3534) Cancelling a running job can lead to restart instead of stopping

2016-02-29 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-3534:
-

 Summary: Cancelling a running job can lead to restart instead of 
stopping
 Key: FLINK-3534
 URL: https://issues.apache.org/jira/browse/FLINK-3534
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Robert Metzger
Priority: Critical


I just tried cancelling a regularly running job. Instead of the job stopping, 
it restarted.


{code}
2016-02-29 10:39:28,415 INFO  org.apache.flink.yarn.YarnJobManager  
- Trying to cancel job with ID 5c0604694c8469cfbb89daaa990068df.
2016-02-29 10:39:28,416 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- Source: Out of 
order data generator -> (Flat Map, Timestamps/Watermarks) (1/1) 
(e3b0ab0e373defb925898de9f200) switched from RUNNING to CANCELING

2016-02-29 10:39:28,488 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- 
TriggerWindow(TumblingTimeWindows(6), 
FoldingStateDescriptor{name=window-contents, 
defaultValue=(0,9223372036854775807,0), serializer=null}, EventTimeTrigger(), 
WindowedStream.apply(WindowedStream.java:397)) (19/24) 
(c1be31b0be596d2521073b2d78ffa60a) switched from CANCELING to CANCELED
2016-02-29 10:40:08,468 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- Source: Out of 
order data generator -> (Flat Map, Timestamps/Watermarks) (1/1) 
(e3b0ab0e373defb925898de9f200) switched from CANCELING to FAILED
2016-02-29 10:40:08,468 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- 
TriggerWindow(TumblingTimeWindows(6), 
FoldingStateDescriptor{name=window-contents, 
defaultValue=(0,9223372036854775807,0), serializer=null}, EventTimeTrigger(), 
WindowedStream.apply(WindowedStream.java:397)) (1/24) 
(5ad172ec9932b24d5a98377a2c82b0b3) switched from CANCELING to FAILED
2016-02-29 10:40:08,472 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- 
TriggerWindow(TumblingTimeWindows(6), 
FoldingStateDescriptor{name=window-contents, 
defaultValue=(0,9223372036854775807,0), serializer=null}, EventTimeTrigger(), 
WindowedStream.apply(WindowedStream.java:397)) (2/24) 
(5404ca28ac7cf23b67dff30ef2309078) switched from CANCELING to FAILED
2016-02-29 10:40:08,473 INFO  org.apache.flink.yarn.YarnJobManager  
- Status of job 5c0604694c8469cfbb89daaa990068df (Event counter: 
{auto.offset.reset=earliest, rocksdb=hdfs:///user/robert/rocksdb, 
generateInPlace=soTrue, parallelism=24, bootstrap.servers=cdh544-worker-0:9092, 
topic=eventsGenerator, eventsPerKeyPerGenerator=2, numKeys=10, 
zookeeper.connect=cdh544-worker-0:2181, timeSliceSize=6, eventsKerPey=1, 
genPar=1}) changed to FAILING.
java.lang.Exception: Task could not be canceled.
at 
org.apache.flink.runtime.executiongraph.Execution$5.onComplete(Execution.java:902)
at akka.dispatch.OnComplete.internal(Future.scala:246)
at akka.dispatch.OnComplete.internal(Future.scala:244)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:174)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:171)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at 
scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka.tcp://flink@10.240.242.143:50119/user/taskmanager#640539146]] after 
[1 ms]
at 
akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at 
scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
at 
scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)
at 
akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at 
akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at 
akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at 
akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:745)
2016-02-29 10:40:08,477 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- 
TriggerWindow(TumblingTimeWindows(6), 
FoldingStateDescriptor{name=window-contents, 
defaultValue=(0,9223372036854775807,0), serializer=null}, EventTimeTrigger(), 
WindowedStream.appl
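
The AskTimeoutException in the log indicates the TaskManager did not acknowledge the cancel request within the actor ask timeout, after which the job switched to FAILING and restarted. As a hedged workaround sketch (not a fix for the underlying restart-on-cancel behaviour), the timeout can be raised in flink-conf.yaml; the key name is the one used by Flink at this time, the value is illustrative:

```yaml
# flink-conf.yaml: give cancel acknowledgements more time before the ask times out
akka.ask.timeout: 100 s
```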

Re: Inconvenient (unforeseen?) consequences of PR #1683

2016-02-29 Thread Stephan Ewen
How about adding to "flink-gelly-examples" that it packages the examples in
JAR files. We can then add them even to the "examples/gelly" folder in the
binary distribution.

On Mon, Feb 29, 2016 at 12:24 PM, Vasiliki Kalavri <
vasilikikala...@gmail.com> wrote:

> In my opinion, the fat jar solution is easier than having to copy the Gelly
> jar to all task managers.
> I would be in favor of including flink-gelly in the flink-examples module,
> but we can also simply document the current behavior nicely.
>
> -V.

[jira] [Created] (FLINK-3535) Decrease logging verbosity of StackTraceSampleCoordinator

2016-02-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-3535:
--

 Summary: Decrease logging verbosity of StackTraceSampleCoordinator
 Key: FLINK-3535
 URL: https://issues.apache.org/jira/browse/FLINK-3535
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi
Priority: Minor


Logging in StackTraceSampleCoordinator is verbose and can possibly confuse 
users; for example, if a task manager dies, you see timeout messages. Let's log 
these on DEBUG level, as they are only relevant if something fishy is going on 
and the user has an interest in understanding what the issue is.
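Flink logs via SLF4J; purely as a self-contained sketch of the idea, the same demotion with java.util.logging (logger name and message text are hypothetical): moved to a debug-level call, the timeout message is suppressed unless the user explicitly opts in.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SampleCoordinatorLogging {
    private static final Logger LOG = Logger.getLogger("StackTraceSampleCoordinator");

    // After the change, timeout messages go to the debug level (FINE here),
    // which is off by default.
    static void onSampleTimeout(int sampleId, String taskManager) {
        LOG.log(Level.FINE, "Stack trace sample {0} timed out for {1}.",
                new Object[] {sampleId, taskManager});
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);
        System.out.println(LOG.isLoggable(Level.FINE)); // suppressed at default level
        LOG.setLevel(Level.FINE);
        System.out.println(LOG.isLoggable(Level.FINE)); // visible when debugging
    }
}
```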



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3536) Make clearer distinction between event time and processing time

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3536:


 Summary: Make clearer distinction between event time and 
processing time
 Key: FLINK-3536
 URL: https://issues.apache.org/jira/browse/FLINK-3536
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Till Rohrmann
Priority: Minor


If you define your own windows it is easy to mix up the time characteristic 
with the wrong set of predefined {{WindowAssigners}}. You cannot use processing 
time with a {{TumblingTimeWindows}} window assigner, for example.

Neither the name of {{TumblingTimeWindows}} nor its JavaDocs make it clearly 
obvious that this {{WindowAssigner}} can only be used with event time. 
I think it would be better to rename the event time window assigner to 
something like {{TumblingEventTimeWindows}}. Additionally, we could extend the 
JavaDocs a bit, since not everyone knows that "based on the timestamps" means 
based on event time.
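The mix-up is easy to make because both time characteristics share the same assignment arithmetic; only the timestamp that is plugged in differs, so the characteristic has to live in the assigner's name. A sketch of the standard tumbling assignment formula (not Flink's exact code):

```java
public class TumblingAssignment {
    // Standard tumbling-window arithmetic: timestamp ts falls into the window
    // starting at ts - (ts % size). The formula is identical for event time and
    // processing time; what differs is only WHICH timestamp is plugged in.
    static long windowStart(long timestamp, long windowSize) {
        return timestamp - (timestamp % windowSize);
    }

    public static void main(String[] args) {
        // With 10s windows (10000 ms), ts=12345 lands in window [10000, 20000).
        System.out.println(windowStart(12_345L, 10_000L)); // prints 10000
    }
}
```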





Re: [VOTE] Release Apache Flink 1.0.0 (RC2)

2016-02-29 Thread Robert Metzger
I think I have to cancel the vote, because the
"flink-1.0.0-bin-hadoop26-scala_2.10.tgz" binary release contains unshaded
guava classes in "lib/flink-dist.jar".

I'm currently investigating why only this version is affected.

Tested
- "mvn clean install" (without tests) from source package
- Build a Flink job from the staging repository
- Started Flink on YARN on CDH5.4.4. Flink also handles ResourceManager
fail-overs
- Ran a job using RocksDB, tumbling windows and out of order events on a
cluster.
  - TaskManager failures were properly handled
  - (!) Cancel requests to the job sometimes lead to restarts. We should
address this in the next bugfix release:
https://issues.apache.org/jira/browse/FLINK-3534
- Checked all quickstart artifacts for the right scala version (always
2.10) and flink version (1.0.0 or 1.0.0-hadoop1)




On Sat, Feb 27, 2016 at 10:54 AM, Robert Metzger 
wrote:

> Small correction: The vote ends on Friday (or if a -1 occurs), not on
> Tuesday, as stated in the email.
>
> On Sat, Feb 27, 2016 at 10:25 AM, Robert Metzger 
> wrote:
>
>> Dear Flink community,
>>
>> Please vote on releasing the following candidate as Apache Flink version
>> 1.0.0.
>>
>> This is the second RC.
>>
>>
>> The commit to be voted on 
>> (*http://git-wip-us.apache.org/repos/asf/flink/commit/6895fd92
>> *)
>> 6895fd92386ec1b6181af7dba553116b028259f2
>>
>> Branch:
>> release-1.0.0-rc2 (see
>> https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.0.0-rc2
>> )
>>
>> The release artifacts to be voted on can be found at:
>> *http://people.apache.org/~rmetzger/flink-1.0.0-rc2/
>> *
>>
>> The release artifacts are signed with the key with fingerprint D9839159:
>> http://www.apache.org/dist/flink/KEYS
>>
>> The staging repository for this release can be found at:
>> *https://repository.apache.org/content/repositories/orgapacheflink-1064
>> *
>>
>> -
>>
>> The vote is open until Tuesday and passes if a majority of at least three
>> +1 PMC votes are cast.
>>
>> The vote ends on Friday, March 4, 11:00 CET.
>>
>> [ ] +1 Release this package as Apache Flink 1.0.0
>> [ ] -1 Do not release this package because ...
>>
>
>


[jira] [Created] (FLINK-3537) Faulty code generation for disjunctions

2016-02-29 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-3537:


 Summary: Faulty code generation for disjunctions
 Key: FLINK-3537
 URL: https://issues.apache.org/jira/browse/FLINK-3537
 Project: Flink
  Issue Type: Bug
  Components: Table API
Reporter: Fabian Hueske
Assignee: Fabian Hueske
Priority: Critical


The Table API generates conjunction (&&) code for disjunctions (||).
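A hand-written illustration of the effect (not the actual generated code): if '&&' is emitted where the expression says '||', rows matching only one side of the predicate are silently dropped.

```java
public class DisjunctionCodegenBug {
    // What the expression 'a || b' should evaluate to ...
    static boolean intended(boolean a, boolean b) {
        return a || b;
    }

    // ... versus what conjunction code generated for a disjunction yields.
    static boolean faulty(boolean a, boolean b) {
        return a && b;
    }

    public static void main(String[] args) {
        // A row satisfying only one side of the '||' is wrongly filtered out.
        System.out.println(intended(true, false)); // true
        System.out.println(faulty(true, false));   // false
    }
}
```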





[jira] [Created] (FLINK-3538) DataStream join API does not enforce consistent usage

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3538:


 Summary: DataStream join API does not enforce consistent usage
 Key: FLINK-3538
 URL: https://issues.apache.org/jira/browse/FLINK-3538
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API, Scala API
Affects Versions: 1.0.0
Reporter: Till Rohrmann


In the Scala DataStream API the {{join}} operation does not enforce that the 
user has specified a {{KeySelector}} for both input sides before applying a 
window function. Moreover, the order of the {{where}} and {{equalTo}} clause is 
not fixed and it is possible to specify multiple {{where}} and {{equalTo}} 
clauses. In the latter case, it is not clear which {{KeySelector}} will 
eventually be used by the system.

So the following Flink programs compile without errors (the 
first two lines will only fail at runtime):
{code}
inputA.join(inputB).equalTo{x => 
x}.window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
  .apply(new DefaultFlatJoinFunction[String, String]()).print()

inputA.join(inputB).where{x => 
x}.window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
  .apply(new DefaultFlatJoinFunction[String, String]()).print()

inputA.join(inputB).equalTo{x => x}.where{x => x}.where{x => "1"}.equalTo{x => 
"42"}.window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
  .apply(new DefaultFlatJoinFunction[String, String]()).print()
{code}

This is unlike the Java DataStream API where a clear pattern of {{join}} then 
{{where}} and then {{equalTo}} is enforced. I would propose to do the same for 
the Scala API.
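The enforced pattern can be reproduced with staged builder types, where each stage exposes only the next legal clause; an illustrative sketch with hypothetical class names, not Flink's actual API:

```java
import java.util.function.Function;

public class StagedJoinBuilder {
    // After join(): only where(...) is available.
    static class Joined<A, B> {
        Where<A, B> where(Function<A, ?> leftKey) {
            return new Where<>();
        }
    }

    // After where(): only equalTo(...) is available; a second where(...), or an
    // equalTo(...) before where(...), simply does not type-check.
    static class Where<A, B> {
        Windowed<A, B> equalTo(Function<B, ?> rightKey) {
            return new Windowed<>();
        }
    }

    // After equalTo(): windowing and the join function would follow.
    static class Windowed<A, B> {
    }

    public static void main(String[] args) {
        Windowed<String, String> joined =
                new Joined<String, String>().where(x -> x).equalTo(x -> x);
        System.out.println(joined.getClass().getSimpleName()); // prints Windowed
    }
}
```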





[jira] [Created] (FLINK-3539) Sync running Execution and Task instances via heartbeats

2016-02-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-3539:
--

 Summary: Sync running Execution and Task instances via heartbeats
 Key: FLINK-3539
 URL: https://issues.apache.org/jira/browse/FLINK-3539
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Runtime
Reporter: Ufuk Celebi


[~StephanEwen] pointed out that it is possible for the job manager and task 
manager state to get out of sync. If for example a cancel message from the job 
manager to the task manager is not delivered, the Execution will be failed at 
the job manager, but the task will keep on running at the task manager.

A simple way to prevent such situations is the following:
- The task manager and job manager heartbeats add information about currently 
running tasks/executions
- If a task manager reports a task that is not a running Execution, that task 
is cancelled
- If a job manager reports a running execution that is not a running task, 
the execution is failed
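The last two rules are plain set differences over task/execution identifiers, roughly as follows (sketch; the identifier type and method names are hypothetical):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HeartbeatReconciliation {
    // Tasks the task manager reports that the job manager no longer tracks
    // as running Executions: these tasks should be cancelled.
    static Set<String> tasksToCancel(Set<String> reportedTasks, Set<String> runningExecutions) {
        Set<String> cancel = new HashSet<>(reportedTasks);
        cancel.removeAll(runningExecutions);
        return cancel;
    }

    // Executions the job manager tracks that no task manager reports as a
    // running task: these Executions should be failed.
    static Set<String> executionsToFail(Set<String> reportedTasks, Set<String> runningExecutions) {
        Set<String> fail = new HashSet<>(runningExecutions);
        fail.removeAll(reportedTasks);
        return fail;
    }

    public static void main(String[] args) {
        Set<String> reported = new HashSet<>(Arrays.asList("t1", "t2"));
        Set<String> running = new HashSet<>(Arrays.asList("t2", "t3"));
        System.out.println(tasksToCancel(reported, running));    // [t1]
        System.out.println(executionsToFail(reported, running)); // [t3]
    }
}
```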





[jira] [Created] (FLINK-3540) Hadoop 2.6.3 build contains /com/google/common (guava) classes in flink-dist.jar

2016-02-29 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-3540:
-

 Summary: Hadoop 2.6.3 build contains /com/google/common (guava) 
classes in flink-dist.jar
 Key: FLINK-3540
 URL: https://issues.apache.org/jira/browse/FLINK-3540
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Robert Metzger
Assignee: Robert Metzger
Priority: Blocker


While testing the 1.0.0 RC2, I found that the "flink-dist.jar" contains 
unshaded guava classes.

The dependency tree of "flink-dist" shows where the dependency is coming from
{code}
[INFO] |  \- org.apache.flink:flink-shaded-hadoop2:jar:1.1-SNAPSHOT:compile
[INFO] | +- xmlenc:xmlenc:jar:0.52:compile
[INFO] | +- commons-codec:commons-codec:jar:1.4:compile
[INFO] | +- commons-io:commons-io:jar:2.4:compile
[INFO] | +- commons-net:commons-net:jar:3.1:compile
[INFO] | +- commons-collections:commons-collections:jar:3.2.2:compile
[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile
[INFO] | +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
[INFO] | +- com.sun.jersey:jersey-core:jar:1.9:compile
[INFO] | +- commons-el:commons-el:jar:1.0:runtime
[INFO] | +- commons-logging:commons-logging:jar:1.1.3:compile
[INFO] | +- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
[INFO] | +- commons-lang:commons-lang:jar:2.6:compile
[INFO] | +- commons-configuration:commons-configuration:jar:1.7:compile
[INFO] | +- commons-digester:commons-digester:jar:1.8.1:compile
[INFO] | +- org.xerial.snappy:snappy-java:jar:1.0.5:compile
[INFO] | +- com.google.code.gson:gson:jar:2.2.4:compile
[INFO] | +- 
org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15:compile
[INFO] | +- org.apache.directory.server:apacheds-i18n:jar:2.0.0-M15:compile
[INFO] | +- org.apache.directory.api:api-asn1-api:jar:1.0.0-M20:compile
[INFO] | +- org.apache.directory.api:api-util:jar:1.0.0-M20:compile
[INFO] | +- com.jcraft:jsch:jar:0.1.42:compile
[INFO] | +- org.htrace:htrace-core:jar:3.0.4:compile
[INFO] | |  \- com.google.guava:guava:jar:12.0.1:compile
[INFO] | | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile
{code}
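The check that caught this, the equivalent of `jar tf flink-dist.jar | grep com/google/common`, can be scripted; a minimal sketch:

```java
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ShadingCheck {
    // Returns true if the jar still contains unshaded guava classes, i.e.
    // entries under com/google/common/ instead of a relocated package.
    static boolean containsUnshadedGuava(String jarPath) throws Exception {
        try (ZipFile jar = new ZipFile(jarPath)) {
            return jar.stream()
                    .map(ZipEntry::getName)
                    .anyMatch(name -> name.startsWith("com/google/common/"));
        }
    }
}
```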





[jira] [Created] (FLINK-3541) Clean up workaround in FlinkKafkaConsumer09

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3541:


 Summary: Clean up workaround in FlinkKafkaConsumer09 
 Key: FLINK-3541
 URL: https://issues.apache.org/jira/browse/FLINK-3541
 Project: Flink
  Issue Type: Improvement
  Components: Kafka Connector
Affects Versions: 1.0.0
Reporter: Till Rohrmann
Priority: Minor


In the current {{FlinkKafkaConsumer09}} implementation, we repeatedly start a 
new {{KafkaConsumer}} if the method {{KafkaConsumer.partitionsFor}} throws an 
NPE. This is due to a bug in Kafka version 0.9.0.0. See 
https://issues.apache.org/jira/browse/KAFKA-2880.

However, the problem is marked as fixed for version 0.9.0.1, which we also use 
for the flink-connector-kafka. Therefore, we should be able to get rid of the 
workaround.
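The workaround being removed boils down to a retry-on-NPE loop around the first {{partitionsFor}} call; a generic, Kafka-free sketch (names hypothetical):

```java
import java.util.function.Supplier;

public class NpeRetryWorkaround {
    // Sketch of the workaround: keep re-invoking the call (in Flink's case,
    // recreating the KafkaConsumer and calling partitionsFor again) until it
    // stops throwing the KAFKA-2880 NPE or attempts run out.
    static <T> T retryOnNpe(Supplier<T> call, int maxAttempts) {
        NullPointerException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (NullPointerException npe) {
                last = npe;
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String partitions = retryOnNpe(() -> {
            if (calls[0]++ < 2) {
                throw new NullPointerException(); // first two calls fail
            }
            return "partition metadata";
        }, 5);
        System.out.println(partitions + " after " + calls[0] + " calls");
    }
}
```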





[jira] [Created] (FLINK-3542) FlinkKafkaConsumer09 cannot handle changing number of partitions

2016-02-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-3542:


 Summary: FlinkKafkaConsumer09 cannot handle changing number of 
partitions
 Key: FLINK-3542
 URL: https://issues.apache.org/jira/browse/FLINK-3542
 Project: Flink
  Issue Type: Improvement
  Components: Kafka Connector
Affects Versions: 1.0.0
Reporter: Till Rohrmann
Priority: Minor


The current {{FlinkKafkaConsumer09}} cannot handle increasing the number of 
partitions of a topic while running. The consumer will simply leave the newly 
created partitions out and thus miss all data which is written to the new 
partitions. The reason seems to be a static assignment of partitions to 
consumer tasks when the job is started.

We should either fix this behaviour or clearly document it in the online and 
code docs.
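The static assignment described amounts to a one-time round-robin split of the partitions discovered at job start; any partition created afterwards never appears in any subtask's list (simplified sketch, not Flink's exact code):

```java
import java.util.ArrayList;
import java.util.List;

public class StaticPartitionAssignment {
    // Partitions discovered when the job starts are split round-robin across
    // consumer subtasks and never re-evaluated, so partitions added later
    // belong to no subtask and their data is silently missed.
    static List<Integer> partitionsFor(int subtaskIndex, int numSubtasks, int partitionsAtStart) {
        List<Integer> assigned = new ArrayList<>();
        for (int partition = 0; partition < partitionsAtStart; partition++) {
            if (partition % numSubtasks == subtaskIndex) {
                assigned.add(partition);
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        // 4 partitions at job start, 2 subtasks; a partition 4 added later is unread.
        System.out.println(partitionsFor(0, 2, 4)); // [0, 2]
        System.out.println(partitionsFor(1, 2, 4)); // [1, 3]
    }
}
```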





[jira] [Created] (FLINK-3543) Introduce ResourceManager component

2016-02-29 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-3543:
-

 Summary: Introduce ResourceManager component
 Key: FLINK-3543
 URL: https://issues.apache.org/jira/browse/FLINK-3543
 Project: Flink
  Issue Type: New Feature
  Components: ResourceManager, JobManager, TaskManager
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: 1.1.0


So far the JobManager has been the central instance which is responsible for 
resource management and allocation.

While thinking about how to integrate Mesos support in Flink, people from the 
Flink community realized that it would be nice to delegate resource allocation 
to a dedicated process. This process may run independently of the JobManager 
which is a requirement for proper integration of cluster allocation frameworks 
like Mesos.

This has led to the idea of creating a new component called the 
{{ResourceManager}}. Its task is to allocate and maintain resources requested 
by the {{JobManager}}. The ResourceManager has a very abstract notion of 
resources.

Initially, we thought we could make the ResourceManager deal with resource 
allocation and the registration/supervision of the TaskManagers. However, this 
approach proved to add unnecessary complexity to the runtime. Registration 
state of TaskManagers had to be kept in sync at both the JobManager and the 
ResourceManager.

That's why [~StephanEwen] and I changed the ResourceManager's role to simply 
deal with the resource acquisition. The TaskManagers still register with the 
JobManager which informs the ResourceManager about the successful registration 
of a TaskManager. The ResourceManager may inform the JobManager of failed 
TaskManagers. Due to the insight which the ResourceManager has into the resource 
health, it may detect failed TaskManagers much earlier than the heartbeat-based 
monitoring of the JobManager.

At this stage, the ResourceManager is an optional component. That means the 
JobManager doesn't depend on the ResourceManager as long as it has enough 
resources to perform the computation. All bookkeeping is performed by the 
JobManager. When the ResourceManager connects to the JobManager, it receives 
the current resources, i.e. task manager instances, and allocates more 
containers if necessary. The JobManager adjusts the number of containers 
through the {{SetWorkerPoolSize}} method.

In standalone mode, the ResourceManager may be deactivated or simply use the 
StandaloneResourceManager which does practically nothing because we don't need 
to allocate resources in standalone mode.

In YARN mode, the ResourceManager takes care of communicating with the Yarn 
resource manager. When containers fail, it informs the JobManager and tries to 
allocate new containers. The ResourceManager runs as an actor within the same 
actor system as the JobManager. It could, however, also run independently. The 
independent mode would be the default behavior for Mesos where the framework 
master is expected to just deal with resource allocation.

The attached figures depict the message flow between ResourceManager, 
JobManager, and TaskManager.
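A minimal sketch of the resulting contract: the JobManager keeps all bookkeeping and only pushes a target size to the ResourceManager via the {{SetWorkerPoolSize}} message; everything else here (class name, return value) is illustrative.

```java
public class ResourceManagerSketch {
    private int targetPoolSize;
    private int allocatedContainers;

    // Handler for the JobManager's {{SetWorkerPoolSize}} message: compute how
    // many containers to allocate (positive) or release (negative) to reach
    // the requested size. Real allocation would go to YARN/Mesos.
    int setWorkerPoolSize(int size) {
        targetPoolSize = size;
        int delta = targetPoolSize - allocatedContainers;
        allocatedContainers = targetPoolSize; // pretend allocation succeeds
        return delta;
    }

    public static void main(String[] args) {
        ResourceManagerSketch rm = new ResourceManagerSketch();
        System.out.println(rm.setWorkerPoolSize(3)); // 3 containers to allocate
        System.out.println(rm.setWorkerPoolSize(2)); // -1: one container to release
    }
}
```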






[jira] [Created] (FLINK-3544) ResourceManager runtime components

2016-02-29 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-3544:
-

 Summary: ResourceManager runtime components
 Key: FLINK-3544
 URL: https://issues.apache.org/jira/browse/FLINK-3544
 Project: Flink
  Issue Type: Sub-task
  Components: ResourceManager
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels








[jira] [Created] (FLINK-3545) ResourceManager: YARN integration

2016-02-29 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-3545:
-

 Summary: ResourceManager: YARN integration
 Key: FLINK-3545
 URL: https://issues.apache.org/jira/browse/FLINK-3545
 Project: Flink
  Issue Type: Sub-task
  Components: ResourceManager, YARN Client
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels


This integrates YARN support with the ResourceManager abstraction.





[jira] [Created] (FLINK-3546) Streaming Table API

2016-02-29 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-3546:


 Summary: Streaming Table API
 Key: FLINK-3546
 URL: https://issues.apache.org/jira/browse/FLINK-3546
 Project: Flink
  Issue Type: New Feature
  Components: Table API
Reporter: Vasia Kalavri


This is an umbrella issue for streaming support in the Table API.
[~fhueske] has already shared [this design 
document|https://docs.google.com/document/d/19kSOAOINKCSWLBCKRq2WvNtmuaA9o3AyCh2ePqr3V5E/edit#heading=h.iliuyes2h2xk]
 in the mailing list.





[jira] [Created] (FLINK-3547) Add support for streaming projection, selection, and union

2016-02-29 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-3547:


 Summary: Add support for streaming projection, selection, and union
 Key: FLINK-3547
 URL: https://issues.apache.org/jira/browse/FLINK-3547
 Project: Flink
  Issue Type: Sub-task
  Components: Table API
Reporter: Vasia Kalavri
Assignee: Vasia Kalavri








[jira] [Created] (FLINK-3548) Remove unnecessary generic parameter from SingleOutputStreamOperator

2016-02-29 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-3548:
---

 Summary: Remove unnecessary generic parameter from 
SingleOutputStreamOperator
 Key: FLINK-3548
 URL: https://issues.apache.org/jira/browse/FLINK-3548
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek








[CANCELLED][VOTE] Release Apache Flink 1.0.0 (RC2)

2016-02-29 Thread Robert Metzger
This vote has been cancelled in favor of a new RC.

I resolved the issue I've found in
https://issues.apache.org/jira/browse/FLINK-3540.

I'll now create the next RC.

On Mon, Feb 29, 2016 at 2:49 PM, Robert Metzger  wrote:

> I think I have to cancel the vote, because the
> "flink-1.0.0-bin-hadoop26-scala_2.10.tgz" binary release contains unshaded
> guava classes in "lib/flink-dist.jar".
>
> I'm currently investigating why only this version is affected.
>
> Tested
> - "mvn clean install" (without tests) from source package
> - Build a Flink job from the staging repository
> - Started Flink on YARN on CDH5.4.4. Flink also handles ResourceManager
> fail-overs
> - Ran a job using RocksDB, tumbling windows and out of order events on a
> cluster.
>   - TaskManager failures were properly handled
>   - (!) Cancel requests to the job sometimes lead to restarts. We should
> address this in the next bugfix release:
> https://issues.apache.org/jira/browse/FLINK-3534
> - Checked all quickstart artifacts for the right scala version (always
> 2.10) and flink version (1.0.0 or 1.0.0-hadoop1)
>
>
>
>
> On Sat, Feb 27, 2016 at 10:54 AM, Robert Metzger 
> wrote:
>
>> Small correction: The vote ends on Friday (or if a -1 occurs), not on
>> Tuesday, as stated in the email.
>>
>> On Sat, Feb 27, 2016 at 10:25 AM, Robert Metzger 
>> wrote:
>>
>>> Dear Flink community,
>>>
>>> Please vote on releasing the following candidate as Apache Flink
>>>  version 1.0.0.
>>>
>>> This is the second RC.
>>>
>>>
>>> The commit to be voted on 
>>> (*http://git-wip-us.apache.org/repos/asf/flink/commit/6895fd92
>>> *)
>>> 6895fd92386ec1b6181af7dba553116b028259f2
>>>
>>> Branch:
>>> release-1.0.0-rc2 (see
>>> https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.0.0-rc2
>>> )
>>>
>>> The release artifacts to be voted on can be found at:
>>> *http://people.apache.org/~rmetzger/flink-1.0.0-rc2/
>>> *
>>>
>>> The release artifacts are signed with the key with fingerprint D9839159:
>>> http://www.apache.org/dist/flink/KEYS
>>>
>>> The staging repository for this release can be found at:
>>> *https://repository.apache.org/content/repositories/orgapacheflink-1064
>>> *
>>>
>>> -
>>>
>>> The vote is open until Tuesday and passes if a majority of at least three
>>> +1 PMC votes are cast.
>>>
>>> The vote ends on Friday, March 4, 11:00 CET.
>>>
>>> [ ] +1 Release this package as Apache Flink 1.0.0
>>> [ ] -1 Do not release this package because ...
>>>
>>
>>
>


[jira] [Created] (FLINK-3549) Make streaming examples more streaming

2016-02-29 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3549:
---

 Summary: Make streaming examples more streaming
 Key: FLINK-3549
 URL: https://issues.apache.org/jira/browse/FLINK-3549
 Project: Flink
  Issue Type: Improvement
  Components: Examples
Affects Versions: 1.0.0
Reporter: Stephan Ewen
 Fix For: 1.0.1


The following choices make the examples "not streaming":
  - Many of the streaming examples currently use a small bounded data set. 
The data set is immediately consumed and the program finishes
  - Many examples read text or CSV input files

I suggest reworking the examples to use infinite streams and throttled 
generated streams (via iterators) instead of static data sets and files.

Command line parameters can be used to influence their behavior.
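A throttled generated stream of the suggested kind can be built from an infinite iterator that sleeps between elements (sketch; a real Flink example would wrap this in a SourceFunction):

```java
import java.util.Iterator;

public class ThrottledStream {
    // An infinite, throttled generated stream: a counter that emits one element
    // every delayMillis. This sketch only shows the generator itself.
    static Iterator<Long> throttledCounter(long delayMillis) {
        return new Iterator<Long>() {
            private long next = 0;

            @Override
            public boolean hasNext() {
                return true; // the stream never ends
            }

            @Override
            public Long next() {
                try {
                    Thread.sleep(delayMillis); // throttle the emission rate
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return next++;
            }
        };
    }

    public static void main(String[] args) {
        Iterator<Long> stream = throttledCounter(10);
        System.out.println(stream.next()); // 0
        System.out.println(stream.next()); // 1
    }
}
```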





[jira] [Created] (FLINK-3550) Rework stream join example

2016-02-29 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3550:
---

 Summary: Rework stream join example
 Key: FLINK-3550
 URL: https://issues.apache.org/jira/browse/FLINK-3550
 Project: Flink
  Issue Type: Sub-task
  Components: Examples
Affects Versions: 1.0.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.0.1


The example should be reworked with generated streams that show the continuous 
nature of the window join.





[jira] [Created] (FLINK-3551) Sync Scala and Java Streaming Examples

2016-02-29 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3551:
---

 Summary: Sync Scala and Java Streaming Examples
 Key: FLINK-3551
 URL: https://issues.apache.org/jira/browse/FLINK-3551
 Project: Flink
  Issue Type: Sub-task
  Components: Examples
Affects Versions: 1.0.0
Reporter: Stephan Ewen
 Fix For: 1.0.1


The Scala examples lag behind the Java examples.





[jira] [Created] (FLINK-3552) Change socket WordCount to be properly windowed

2016-02-29 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3552:
---

 Summary: Change socket WordCount to be properly windowed
 Key: FLINK-3552
 URL: https://issues.apache.org/jira/browse/FLINK-3552
 Project: Flink
  Issue Type: Sub-task
  Components: Examples
Affects Versions: 1.0.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.0.1








Re: Inconvenient (unforeseen?) consequences of PR #1683

2016-02-29 Thread Vasiliki Kalavri
/me likes this idea :)

On 29 February 2016 at 12:38, Stephan Ewen  wrote:

> How about adding to "flink-gelly-examples" that it packages the examples in
> JAR files. We can then add them even to the "examples/gelly" folder in the
> binary distribution.
>
> On Mon, Feb 29, 2016 at 12:24 PM, Vasiliki Kalavri <
> vasilikikala...@gmail.com> wrote:
>
> > In my opinion, the fat jar solution is easier than having to copy the
> Gelly
> > jar to all task managers.
> > I would be in favor of including flink-gelly in the flink-examples
> module,
> > but we can also simply document the current behavior nicely.
> >
> > -V.
> >
> > On 29 February 2016 at 12:15, Till Rohrmann 
> wrote:
> >
> > > Good catch :-) I mean we could also change the behaviour to include
> > > flink-gelly in the flink-gelly-examples module.
> > >
> > > On Mon, Feb 29, 2016 at 12:13 PM, Vasiliki Kalavri <
> > > vasilikikala...@gmail.com> wrote:
> > >
> > > > Thanks Till! Then, we'd better update the docs, too. Currently both
> > links
> > > > to examples and cluster execution instructions are broken.
> > > > I'll create an issue.
> > > >
> > > > -V.
> > > >
> > > > On 29 February 2016 at 11:54, Till Rohrmann 
> > > wrote:
> > > >
> > > > > Hi Vasia,
> > > > >
> > > > > that is because you're missing the flink-gelly dependency. If you
> > just
> > > > > build flink-gelly-examples, then it won't contain flink-gelly
> because
> > > it
> > > > is
> > > > > not a fat jar. You either have to install flink-gelly on your
> cluster
> > > or
> > > > > package it in the final user jar.
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
> > > > > On Sat, Feb 27, 2016 at 7:19 PM, Vasiliki Kalavri <
> > > > > vasilikikala...@gmail.com
> > > > > > wrote:
> > > > >
> > > > > > Hi squirrels,
> > > > > >
> > > > > > sorry I've been slow to respond to this, but I'm now testing RC1
> > and
> > > > I'm
> > > > > a
> > > > > > bit confused with this change.
> > > > > >
> > > > > > So far, the easier way to run a Gelly example on a cluster was to
> > > > package
> > > > > > and submit the Gelly jar.
> > > > > > Now, since the flink-gelly project doesn't contain the examples
> > > > anymore,
> > > > > I
> > > > > > tried running a Gelly example by the packaging
> flink-gelly-examples
> > > jar
> > > > > > (mvn package). However, this gives me a ClassNotFoundException.
> > > > > > e.g. for SingleSourceShortestPaths, the following:
> > > > > >
> > > > > > java.lang.RuntimeException: Could not look up the main(String[])
> > > method
> > > > > > from the class
> > > > org.apache.flink.graph.examples.SingleSourceShortestPaths:
> > > > > > org/apache/flink/graph/spargel/MessagingFunction
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.flink.client.program.PackagedProgram.hasMainMethod(PackagedProgram.java:478)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.flink.client.program.PackagedProgram.(PackagedProgram.java:216)
> > > > > > at
> > > >
> org.apache.flink.client.CliFrontend.buildProgram(CliFrontend.java:922)
> > > > > > at org.apache.flink.client.CliFrontend.run(CliFrontend.java:301)
> > > > > > at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1189)
> > > > > > at
> org.apache.flink.client.CliFrontend.main(CliFrontend.java:1239)
> > > > > > Caused by: java.lang.NoClassDefFoundError:
> > > > > > org/apache/flink/graph/spargel/MessagingFunction
> > > > > > at java.lang.Class.getDeclaredMethods0(Native Method)
> > > > > > at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
> > > > > > at java.lang.Class.getMethod0(Class.java:2774)
> > > > > > at java.lang.Class.getMethod(Class.java:1663)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.flink.client.program.PackagedProgram.hasMainMethod(PackagedProgram.java:473)
> > > > > > ... 5 more
> > > > > > Caused by: java.lang.ClassNotFoundException:
> > > > > > org.apache.flink.graph.spargel.MessagingFunction
> > > > > > at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> > > > > > at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> > > > > > at java.security.AccessController.doPrivileged(Native Method)
> > > > > > at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> > > > > > at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> > > > > > at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> > > > > > ... 10 more
> > > > > >
> > > > > >
> > > > > > What am I missing here?
> > > > > >
> > > > > > Thanks!
> > > > > > -Vasia.
> > > > > >
> > > > > > On 26 February 2016 at 14:10, Márton Balassi <
> > > balassi.mar...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Thanks, +1.
> > > > > > >
> > > > > > > On Fri, Feb 26, 2016 at 12:35 PM, Stephan Ewen <
> se...@apache.org
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi!
> > > > > > > >
> > > > > > > > I think it is a release blocker.
> > > > > > > >
> > > > 

Re: Running stream examples (newbie)

2016-02-29 Thread Tara Athan
This did the trick, thanks. I am able to run the streaming examples 
after invalidating caches, restarting, and rebuilding the project.


BTW I see a message that says 100 errors during compilation. Is that to 
be expected?


Tara

On 2/26/16 4:02 AM, Till Rohrmann wrote:

I just tested executing a streaming example on the current master and
everything worked. Can you try clearing the IntelliJ cache and rebuild the
project?

Cheers,
Till

On Thu, Feb 25, 2016 at 5:13 PM, Tara Athan  wrote:


Hi, I am just exploring Flink, and have run into a curious issue. I have
cloned from github, checked out the release-1.0.0-rc1 branch, and built
from command line - no errors. I am using IntelliJ. I first tried running
some of the batch examples, and those run fine. Then I tried stream
examples (flink-examples-streaming, e.g. java/WordCount or
scala/WindowJoin), and I get this error:

/Users/taraathan/Repositories/IntelliJIDEA/flink/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala
Error:(22, 35) object completeness is not a member of package org.apache.flink.api.scala
import org.apache.flink.api.scala.completeness.ScalaAPICompletenessTestBase

It is the same on every streaming example. What am I doing wrong?

Thanks, Tara