Re: [DISCUSS] Make a release to be announced at ApacheCon

2015-03-04 Thread Robert Metzger
+1 for Marton as a release manager. Thank you!


On Tue, Mar 3, 2015 at 7:56 PM, Henry Saputra 
wrote:

> Ah, thanks Márton.
>
> So we are moving toward a concept similar to Spark's RDD staged execution
> =P
> I suppose there will be a runtime configuration or hint to tell the
> Flink JobManager which execution mode is preferred?
>
>
> - Henry
>
> On Tue, Mar 3, 2015 at 2:09 AM, Márton Balassi 
> wrote:
> > Hi Henry,
> >
> > Batch mode is a new execution mode for batch Flink jobs where, instead of
> > pipelining the whole execution, the job is scheduled in stages, thus
> > materializing the intermediate results before continuing with the next
> > operators. For the implications, see [1].
> >
> > [1] http://www.slideshare.net/KostasTzoumas/flink-internals, pages 18-21.
> >
> >
> > On Mon, Mar 2, 2015 at 11:39 PM, Henry Saputra 
> > wrote:
> >
> >> HI Stephan,
> >>
> >> What is "Batch mode" feature in the list?
> >>
> >> - Henry
> >>
> >> On Mon, Mar 2, 2015 at 5:03 AM, Stephan Ewen  wrote:
> >> > Hi all!
> >> >
> >> > ApacheCon is coming up and it is the 15th anniversary of the Apache
> >> > Software Foundation.
> >> >
> >> > In the course of the conference, Apache would like to make a series of
> >> > announcements. If we manage to make a release during (or shortly
> >> > before) ApacheCon, they will announce it through their channels.
> >> >
> >> > I am very much in favor of doing this, under the strong condition that
> >> > we are very confident that the master has grown to be stable enough
> >> > (there are major changes in the distributed runtime since version 0.8
> >> > that we are still stabilizing). There is no use in a widely announced
> >> > build that does not have the quality.
> >> >
> >> > Flink now has many new features that warrant a release soon (once we
> >> > have fixed the last quirks in the new distributed runtime).
> >> >
> >> > Notable new features are:
> >> >  - Gelly
> >> >  - Streaming windows
> >> >  - Flink on Tez
> >> >  - Expression API
> >> >  - Distributed Runtime on Akka
> >> >  - Batch mode
> >> >  - Maybe even a first ML library version
> >> >  - Some streaming fault tolerance
> >> >
> >> > Robert proposed a feature freeze in mid-March for that. His key
> >> > points were:
> >> >
> >> > Feature freeze (forking off "release-0.9"): March 17
> >> > RC1 vote: March 24
> >> >
> >> > The RC1 vote is 20 days before ApacheCon (April 13).
> >> > For the last three releases, the average voting time was about 20 days:
> >> > R 0.8.0 --> 14 days
> >> > R 0.7.0 --> 22 days
> >> > R 0.6   --> 26 days
> >> >
> >> > Please share your opinion on this!
> >> >
> >> >
> >> > Greetings,
> >> > Stephan
> >>
>
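Márton's explanation of batch mode above (staged execution with materialized intermediate results, as opposed to full pipelining) can be illustrated with a small, Flink-independent sketch; the function names and data below are invented for illustration only:

```python
# Conceptual sketch of pipelined vs. staged ("batch mode") execution.
# All names are illustrative; real Flink schedules tasks across a cluster.

def pipelined(records, operators):
    """Push each record through the whole operator chain immediately."""
    out = []
    for rec in records:
        for op in operators:
            rec = op(rec)
        out.append(rec)
    return out

def staged(records, operators):
    """Run one operator over ALL records, materializing the full
    intermediate result before the next operator starts (batch mode)."""
    current = list(records)
    for op in operators:
        current = [op(rec) for rec in current]  # materialized stage
    return current

ops = [lambda x: x * 2, lambda x: x + 1]
assert pipelined([1, 2, 3], ops) == staged([1, 2, 3], ops) == [3, 5, 7]
```

Both strategies compute the same result; what changes is scheduling. Pipelining overlaps all operators, while staging runs them one at a time, which makes it possible to resume from a stage's materialized input.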


[jira] [Created] (FLINK-1641) Make projection operator chainable

2015-03-04 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-1641:
-

 Summary: Make projection operator chainable
 Key: FLINK-1641
 URL: https://issues.apache.org/jira/browse/FLINK-1641
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Gyula Fora


The ProjectInvokable currently doesn't extend the ChainableInvokable class.
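For context, chaining lets consecutive operators run inside the same task, passing records by direct method call instead of serializing them between tasks; a field projection does so little work per record that it is a natural candidate. The classes below are a rough conceptual sketch of the idea, not Flink's actual invokable hierarchy:

```python
class Invokable:
    """Base operator: emits results to a downstream consumer."""
    def __init__(self):
        self.next = None  # set when this operator is chained

    def collect(self, record):
        # If chained, hand the record to the next operator directly
        # (same thread, no serialization). In a real runtime, an
        # unchained operator would serialize and ship the record instead.
        if self.next is not None:
            self.next.invoke(record)

class ProjectInvokable(Invokable):
    """Keep only the selected fields of a tuple."""
    def __init__(self, fields):
        super().__init__()
        self.fields = fields

    def invoke(self, record):
        self.collect(tuple(record[i] for i in self.fields))

class CollectSink(Invokable):
    """Terminal operator that gathers results for inspection."""
    def __init__(self):
        super().__init__()
        self.results = []

    def invoke(self, record):
        self.results.append(record)

# Chain: project -> sink, executed in one "task"
sink = CollectSink()
proj = ProjectInvokable(fields=[0, 2])
proj.next = sink
proj.invoke(("a", "b", "c"))
assert sink.results == [("a", "c")]
```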



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: About FLINK-1537 Machine learning with Apache Flink

2015-03-04 Thread Stephan Ewen
@Till is working on bootstrapping a machine learning library; he can
probably give you some pointers...
On 04.03.2015 04:34, "Dulaj Viduranga"  wrote:

> Great. So can you assign someone to brainstorm the project idea with me and
> make a proposal that would benefit Flink the most?
>
> On Mar 04, 2015, at 02:08 AM, Stephan Ewen  wrote:
>
> Hey Dulaj!
>
> I think that the issue is only marked as "low priority", because it is not
> on the immediate roadmap for any committer and we are looking for outside
> contributors.
>
> If that is actually what you are interested in, I think you can pick this
> topic!
>
> Stephan
>
>
> On Tue, Mar 3, 2015 at 4:55 PM, Dulaj Viduranga 
> wrote:
>
> Hi,
>
> I would really like to know what your high-priority GSoC projects are and
> whether it is possible for me to take one to contribute to this summer.
> I’ve played around with and gone through the Flink system sources and I’m
> really interested in contributing to this project.
> I was interested in working on the [FLINK-1537] Machine learning with Apache
> Flink project for GSoC, because I have prior knowledge of neural networks. But
> I see it is a Minor-priority one. I'm wondering if you could help me
> select a good high-priority project.
>
> Thank you.
>
> > On Feb 20, 2015, at 10:33 PM, Till Rohrmann  wrote:
> >
> > Good to hear.
> >
> > Cheers,
> >
> > Till
> >
> > On Fri, Feb 20, 2015 at 5:31 PM, Dulaj Viduranga  wrote:
> > Great. I have already started to go through the documentation. I’ll follow
> > your directions and contact you when I’m up and ready. :)
> >
> > Best regards,
> > Dulaj
> >
> >> On Feb 19, 2015, at 10:49 PM, Till Rohrmann  wrote:
> >>
> >> Hi Dulaj,
> >>
> >> First of all, sorry for my late reply.
> >>
> >> Yes, we're definitely making "Machine learning with Apache Flink" a
> >> GSoC project, and it's great to hear that you are interested in this topic.
> >> At the moment, I'm about to set up Flink's machine learning library with
> >> some basic algorithms. There is still some work to do in order to figure
> >> out in which direction it should finally head, but we are confident that we
> >> can provide a highly usable and still performant tool for many data
> >> scientists.
> >>
> >> Implementing algorithms with Apache Flink is a little bit different
> >> from regular programming, because the programming model is a little bit
> >> more restrictive. Therefore, I'd recommend familiarizing yourself a little
> >> bit with the system by reading the documentation [1], going through the
> >> example jobs contained in the repo [2], and maybe even trying to implement a
> >> job yourself. That is the best way to understand Flink.
> >>
> >> The next step would be to try to tackle one of the starter issues in
> >> order to get to know how the community works and to become visible to the
> >> community. You can find the starter issues here [3].
> >>
> >> Afterwards we can brainstorm a little bit about what you could do in the
> >> context of the GSoC project so that we can make a proposal.
> >>
> >> It would be great, but not strictly required, to have some knowledge of
> >> Scala, because some of the machine learning library will probably be
> >> implemented in Scala.
> >>
> >> Cheers,
> >>
> >> Till
> >>
> >> [1] http://flink.apache.org/docs/0.8/index.html
> >> [2] https://github.com/apache/flink/tree/master/flink-examples
> >> [3] https://issues.apache.org/jira/browse/FLINK-1582?jql=project%20%3D%20FLINK%20AND%20labels%20%3D%20Starter
> >>
> >> On Tue, Feb 17, 2015 at 5:00 PM, Dulaj Viduranga  wrote:
> >> Hi,
> >> I'm Dulaj Viduranga and I’m a 3rd-year Computer Science and
> >> Engineering student at the University of Moratuwa, Sri Lanka.
> >> I’m really interested in the "Machine learning with Apache
> >> Flink" project and am wondering if you are planning to make this a GSoC
> >> project. I have five years of Java experience and great knowledge of, and
> >> interest in, machine learning. I haven’t used Flink or its machine learning
> >> library, but I have good experience with the pylearn2 Python neural network
> >> library, and I also built a neural network in Java SE from scratch
> >> in my 2nd year.
> >> If you think I’m up for it, please let me know.
> >> Thank you.
> >> Dulaj Viduranga.
> >>
> >
> >


Re: About FLINK-1537 Machine learning with Apache Flink

2015-03-04 Thread Dulaj Viduranga

Great. Thanks.


[jira] [Created] (FLINK-1642) Flakey YARNSessionCapacitySchedulerITCase

2015-03-04 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-1642:


 Summary: Flakey YARNSessionCapacitySchedulerITCase
 Key: FLINK-1642
 URL: https://issues.apache.org/jira/browse/FLINK-1642
 Project: Flink
  Issue Type: Bug
Reporter: Till Rohrmann
Assignee: Robert Metzger


The {{YARNSessionCapacitySchedulerITCase}} spuriously fails on Travis. The 
error is

{code}
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 13.058 sec <<< 
FAILURE! - in org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase
testClientStartup(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)  
Time elapsed: 6.669 sec  <<< FAILURE!
java.lang.AssertionError: Runner thread died before the test was finished. 
Return value = 1
at org.junit.Assert.fail(Assert.java:88)
at org.apache.flink.yarn.YarnTestBase.runWithArgs(YarnTestBase.java:311)
at 
org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.testClientStartup(YARNSessionCapacitySchedulerITCase.java:53)

testNonexistingQueue(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)  
Time elapsed: 0.504 sec  <<< FAILURE!
java.lang.AssertionError: There is at least one application on the cluster is 
not finished
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.flink.yarn.YarnTestBase.checkClusterEmpty(YarnTestBase.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

The test results are

{code}
YARNSessionCapacitySchedulerITCase.testClientStartup:53->YarnTestBase.runWithArgs:311
 Runner thread died before the test was finished. Return value = 1
  YARNSessionCapacitySchedulerITCase>YarnTestBase.checkClusterEmpty:147 There 
is at least one application on the cluster is not finished
{code}





Re: Access flink-conf.yaml data

2015-03-04 Thread Kirschnick, Johannes
Hi Stephan,

I just came across the same problem accessing the constants and, in
particular, setting custom properties.
I noticed that the mini cluster started in the local environment cannot
easily be customized, as it does not take into account any custom
environment variables; there is no way to pass them in.
I tried to fix that locally and suggested a pull request - does that make sense?
https://github.com/apache/flink/pull/448


Johannes

-----Original Message-----
From: ewenstep...@gmail.com [mailto:ewenstep...@gmail.com] On behalf of
Stephan Ewen
Sent: Tuesday, March 3, 2015 10:03
To: dev@flink.apache.org
Subject: Re: Access flink-conf.yaml data

Hey Dulaj!

As Chiwan said, the GlobalConfiguration object is used to load them initially.

You can always use that to access the values (it works as a singleton
internally) - but we are starting to move away from singletons, as they make 
test setups and embedding more difficult.
In the JobManager and TaskManager setup, we pass a Configuration object around, 
which has all the values from the global configuration.

Stephan



On Tue, Mar 3, 2015 at 6:08 AM, Chiwan Park  wrote:

> I think that you can use
> `org.apache.flink.configuration.GlobalConfiguration` to obtain the
> configuration object.
>
> Regards.
> Chiwan Park (Sent with iPhone)
>
>
> > On Mar 3, 2015, at 12:17 PM, Dulaj Viduranga 
> wrote:
> >
> > Hi,
> > Can someone help me with how to access the flink-conf.yaml configuration
> > values inside the Flink sources? Are they readily available as a map
> > somewhere?
> >
> > Thanks.
>
>
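For readers landing here: flink-conf.yaml is a flat file of `key: value` lines, and GlobalConfiguration essentially parses it into a map of string values. The sketch below illustrates both patterns mentioned in the thread, a parsed config map wrapped in a configuration object that can be passed to components instead of consulted as a global singleton; the class and method names are illustrative, not Flink's exact API:

```python
def parse_flat_yaml(text):
    """Parse flat 'key: value' lines; comments and blank lines are
    ignored. This is all a flink-conf.yaml-style file needs."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

class Configuration:
    """A plain config object that can be handed to components,
    avoiding a global singleton (easier to test and embed)."""
    def __init__(self, values):
        self._values = dict(values)

    def get_string(self, key, default):
        return self._values.get(key, default)

    def get_integer(self, key, default):
        v = self._values.get(key)
        return int(v) if v is not None else default

conf_text = """
# sample flink-conf.yaml fragment
jobmanager.rpc.address: 10.0.0.5
jobmanager.rpc.port: 6123
"""
conf = Configuration(parse_flat_yaml(conf_text))
assert conf.get_string("jobmanager.rpc.address", "localhost") == "10.0.0.5"
assert conf.get_integer("jobmanager.rpc.port", 6123) == 6123
```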


Re: Access flink-conf.yaml data

2015-03-04 Thread Robert Metzger
Hi Johannes,

This change will allow users to pass a custom configuration to the
LocalExecutor: https://github.com/apache/flink/pull/427.
Is that what you're looking for?



[jira] [Created] (FLINK-1643) Detect tumbling policies where trigger and eviction match

2015-03-04 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-1643:
-

 Summary: Detect tumbling policies where trigger and eviction match
 Key: FLINK-1643
 URL: https://issues.apache.org/jira/browse/FLINK-1643
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Gyula Fora


The windowing API should automatically detect matching trigger and eviction 
policies so it can apply optimizations for tumbling policies.
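The idea is that a trigger policy and an eviction policy "match" when every trigger also flushes the entire window buffer, so consecutive windows never overlap, which is exactly a tumbling window. A rough sketch of such a detection, with invented policy classes rather than the actual streaming API:

```python
from dataclasses import dataclass

# Illustrative policy types; Flink's real policies are Java classes.
@dataclass(frozen=True)
class CountTrigger:
    count: int

@dataclass(frozen=True)
class CountEviction:
    count: int

@dataclass(frozen=True)
class TimeTrigger:
    millis: int

@dataclass(frozen=True)
class TimeEviction:
    millis: int

def is_tumbling(trigger, eviction):
    """Trigger and eviction match when each trigger evicts the entire
    window buffer, so windows never overlap."""
    if isinstance(trigger, CountTrigger) and isinstance(eviction, CountEviction):
        return trigger.count == eviction.count
    if isinstance(trigger, TimeTrigger) and isinstance(eviction, TimeEviction):
        return trigger.millis == eviction.millis
    return False

assert is_tumbling(CountTrigger(5), CountEviction(5))
assert not is_tumbling(CountTrigger(5), CountEviction(3))  # sliding window
assert is_tumbling(TimeTrigger(1000), TimeEviction(1000))
assert not is_tumbling(CountTrigger(5), TimeEviction(1000))
```

When the check succeeds, the runtime can simply clear the whole buffer on each trigger instead of evicting elements one by one.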





Re: Access flink-conf.yaml data

2015-03-04 Thread Stephan Ewen
I think that #427 is a good way to do this. It bypasses the singleton
GlobalConfiguration that I personally hope to get rid of.



[jira] [Created] (FLINK-1644) WebClient dies when no ExecutionEnvironment in main method

2015-03-04 Thread Jonathan Hasenburg (JIRA)
Jonathan Hasenburg created FLINK-1644:
-

 Summary: WebClient dies when no ExecutionEnvironment in main method
 Key: FLINK-1644
 URL: https://issues.apache.org/jira/browse/FLINK-1644
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 0.8.1
Reporter: Jonathan Hasenburg
Priority: Minor


When clicking on the check box next to a job in the WebClient, the client dies 
if there is no ExecutionEnvironment in the main method (probably because it 
tries to generate the plan).

This can be reproduced easily if a jar unrelated to Flink is uploaded.

This is a problem when your main method only contains code to extract the 
parameters and then creates the corresponding job class for those parameters.





Re: Access flink-conf.yaml data

2015-03-04 Thread Kirschnick, Johannes
Good point, #427 looks more complete. I will wait for the merge and just use 
my workaround locally in the meantime.

Johannes



Re: Access flink-conf.yaml data

2015-03-04 Thread Robert Metzger
Cool.
I'll merge #427 as soon as https://github.com/apache/flink/pull/410 is in
master.



[jira] [Created] (FLINK-1645) Move StreamingClassloaderITCase to flink-streaming

2015-03-04 Thread JIRA
Márton Balassi created FLINK-1645:
-

 Summary: Move StreamingClassloaderITCase to flink-streaming
 Key: FLINK-1645
 URL: https://issues.apache.org/jira/browse/FLINK-1645
 Project: Flink
  Issue Type: Task
  Components: Streaming
Reporter: Márton Balassi
Priority: Minor


StreamingClassloaderITCase is contained in the flink-tests module and it should 
ideally be under flink-streaming.

Moving it requires some care: there is a StreamingProgram class that is built 
by an assembly for it.





[jira] [Created] (FLINK-1646) Add name of required configuration value into the "Insufficient number of network buffers" exception

2015-03-04 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-1646:
-

 Summary: Add name of required configuration value into the 
"Insufficient number of network buffers" exception
 Key: FLINK-1646
 URL: https://issues.apache.org/jira/browse/FLINK-1646
 Project: Flink
  Issue Type: Improvement
  Components: TaskManager
Affects Versions: 0.9
Reporter: Robert Metzger
Priority: Minor


As per 
http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Exception-Insufficient-number-of-network-buffers-required-120-but-only-2-of-2048-available-td746.html
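The linked thread shows the symptom ("required 120, but only 2 of 2048 available"). The improvement is simply to name the responsible setting in the exception text; the sketch below assumes the key is `taskmanager.network.numberOfBuffers`, the network buffer setting in flink-conf.yaml:

```python
def insufficient_buffers_error(required, available, total,
                               config_key="taskmanager.network.numberOfBuffers"):
    """Build an error message that tells the user which setting to raise,
    instead of only reporting the buffer counts."""
    return (f"Insufficient number of network buffers: required {required}, "
            f"but only {available} of {total} available. "
            f"Increase the configuration value '{config_key}' "
            f"in flink-conf.yaml.")

# Numbers taken from the thread linked above.
msg = insufficient_buffers_error(120, 2, 2048)
assert "taskmanager.network.numberOfBuffers" in msg
assert "required 120" in msg
```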





Re: Documentation in the master

2015-03-04 Thread Stephan Ewen
Great, thanks Max!

Concerning (1), in the snapshot master, we can have stubs, IMHO.

Concerning (2), how do we fix the SVG files?

On Wed, Mar 4, 2015 at 3:30 PM, Maximilian Michels  wrote:

> Hey there,
>
> Just letting you know, we now have the documentation of all releases,
> including the latest snapshot version, available on the Flink website.
>
> While checking out the latest master documentation, Marton and I were
> wondering if some of the documentation should really be available for
> the general public. In particular, [1] which is just a stub. I think
> it is ok to have work-in-progress documentation in the latest snapshot
> but maybe that is up for debate.
>
> Also, I came across some problems with SVG files in the documentation
> [2]. The files are correctly embedded and linked but are corrupted.
> Would be great if that could be fixed. Firefox complains "Attempt to
> use XML processing instruction in HTML" in the first line of the SVG.
>
> Best regards,
> Max
>
> [1]
> http://ci.apache.org/projects/flink/flink-docs-master/internal_distributed_akka.html
> [2]
> http://ci.apache.org/projects/flink/flink-docs-master/internal_general_arch.html
>


Documentation in the master

2015-03-04 Thread Maximilian Michels
Hey there,

Just letting you know, we now have the documentation of all releases,
including the latest snapshot version, available on the Flink website.

While checking out the latest master documentation, Marton and I were
wondering if some of the documentation should really be available for
the general public. In particular, [1] which is just a stub. I think
it is ok to have work-in-progress documentation in the latest snapshot
but maybe that is up for debate.

Also, I came across some problems with SVG files in the documentation
[2]. The files are correctly embedded and linked but are corrupted.
Would be great if that could be fixed. Firefox complains "Attempt to
use XML processing instruction in HTML" in the first line of the SVG.

Best regards,
Max

[1] 
http://ci.apache.org/projects/flink/flink-docs-master/internal_distributed_akka.html
[2] 
http://ci.apache.org/projects/flink/flink-docs-master/internal_general_arch.html


Re: Could not build up connection to JobManager

2015-03-04 Thread Stephan Ewen
If I recall correctly, we only hardcode "localhost" in the local mini
cluster - do you think it is problematic there as well?

Have you found any other places?

On Mon, Mar 2, 2015 at 10:26 AM, Dulaj Viduranga 
wrote:

> In some places of the code, "localhost" is hard coded. When it is resolved
> by the DNS, it is possible to be directed to an IP other than
> 127.0.0.1 (like the private range 10.0.0.0/8). I changed those places to
> 127.0.0.1 and it works like a charm.
> But hard coding 127.0.0.1 is not a good option either, because when the
> JobManager IP changes, this becomes an issue again. I'm thinking of passing
> the JobManager IP from the config.yaml to these places.
> If you have a better idea on how to do this, given your experience, please
> let me know.
>
> Best.
>
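Dulaj's idea of passing the JobManager address from the configuration could be sketched like this. This is a self-contained toy using `java.util.Properties` in place of Flink's actual configuration class; the key name `jobmanager.rpc.address` mirrors the flink-conf.yaml entry and is an assumption here:

```java
import java.util.Properties;

// Sketch of the proposed fix: resolve the JobManager address from
// configuration instead of hardcoding "localhost". A plain Properties
// object stands in for Flink's real configuration class.
public class JobManagerAddressResolver {

    static final String JOB_MANAGER_ADDRESS_KEY = "jobmanager.rpc.address";

    // Fall back to the loopback IP rather than the DNS name "localhost",
    // which may resolve to a different address (e.g. in 10.0.0.0/8).
    static final String DEFAULT_ADDRESS = "127.0.0.1";

    public static String resolveAddress(Properties config) {
        return config.getProperty(JOB_MANAGER_ADDRESS_KEY, DEFAULT_ADDRESS);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(resolveAddress(conf)); // 127.0.0.1 (fallback)

        conf.setProperty(JOB_MANAGER_ADDRESS_KEY, "10.0.0.5");
        System.out.println(resolveAddress(conf)); // 10.0.0.5 (configured)
    }
}
```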


[MultipleProgramsTestBase][Cluster vs. Collection mode] Inconsistent Behavior

2015-03-04 Thread Andra Lungu
Hello,

I have implemented a Bulk Synchronous Version of Triangle Count. The code
can be found here:
https://github.com/andralungu/gelly-partitioning/tree/triangles

In this algorithm, the messages sent differ as the superstep differs. In
order to distinguish between superstep numbers, I used the
getSuperstepNumber() function.

In order to test the overall implementation, I have extended
MultipleProgramsTestBase... nothing unusual so far. The problem is that
in CLUSTER mode, the test passes and the result is the expected one, because
the superstep number changes, as can be seen below:
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Messenger]Step 2
[Messenger]Step 2
[Messenger]Step 2
[Messenger]Step 2
[Messenger]Step 2
[Messenger]Step 2
[Update]Step 2
[Update]Step 2
[Update]Step 2
[Update]Step 2
[Update]Step 2
[Messenger]Step 3
[Messenger]Step 3
[Messenger]Step 3
[Messenger]Step 3
[Messenger]Step 3
[Update]Step 3
[Update]Step 3
[Update]Step 3

For COLLECTION, the superstep number remains 1, and the result is obviously
not the one I expected.
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Messenger]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1
[Update]Step 1

Does anyone have an idea what could have triggered this behaviour?

Thanks in advance!
Andra


Re: Documentation in the master

2015-03-04 Thread Maximilian Michels
(2) I found out it's actually not the SVGs but the web server, which
sets the wrong content type for SVG files.

I checked the ci.apache.org server. It serves SVGs with
"Content-Type: text/html", which makes the browser assume the SVG file
contains HTML, although it is actually XML. The correct content type
would be "image/svg+xml".

I'll ask Infra to fix that.
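On the server side, the fix could be a one-line MIME-type directive (assuming the docs are served by Apache httpd, which is a guess here):

```apacheconf
# Serve .svg files with the correct MIME type so browsers parse them as XML.
AddType image/svg+xml .svg
```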

On Wed, Mar 4, 2015 at 3:36 PM, Stephan Ewen  wrote:
> Great, thanks Max!
>
> Concerning (1), in the snapshot master, we can have stubs, IMHO.
>
> Concerning (2), how do we fix the SVG files?
>
> On Wed, Mar 4, 2015 at 3:30 PM, Maximilian Michels  wrote:
>
>> Hey there,
>>
>> Just letting you know, we now have the documentation of all releases,
>> including the latest snapshot version, available on the Flink website.
>>
>> While checking out the latest master documentation, Marton and I were
>> wondering if some of the documentation should really be available for
>> the general public. In particular, [1] which is just a stub. I think
>> it is ok to have work-in-progress documentation in the latest snapshot
>> but maybe that is up for debate.
>>
>> Also, I came across some problems with SVG files in the documentation
>> [2]. The files are correctly embedded and linked but are corrupted.
>> Would be great if that could be fixed. Firefox complains "Attempt to
>> use XML processing instruction in HTML" in the first line of the SVG.
>>
>> Best regards,
>> Max
>>
>> [1]
>> http://ci.apache.org/projects/flink/flink-docs-master/internal_distributed_akka.html
>> [2]
>> http://ci.apache.org/projects/flink/flink-docs-master/internal_general_arch.html
>>


[jira] [Created] (FLINK-1647) Push master documentation (latest) to Flink website

2015-03-04 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1647:


 Summary: Push master documentation (latest) to Flink website
 Key: FLINK-1647
 URL: https://issues.apache.org/jira/browse/FLINK-1647
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Max Michels


Per discussions on the dev list, we would like to push the latest (master) docs 
to the Flink website.

This will help new contributors follow what we are cooking in the master branch.





Re: Documentation in the master

2015-03-04 Thread Henry Saputra
This is great news, Max!

Thanks for picking up the task.

Sorry I was late filing an issue for this, but did it anyway: FLINK-1647.
I have assigned it to you, so feel free to mark it as Resolved.

Just a small comment: next time, please let everyone know via the dev@
list that you are working on something, to make sure there is no
duplicate effort.

Great job.

- Henry

On Wed, Mar 4, 2015 at 6:30 AM, Maximilian Michels  wrote:
> Hey there,
>
> Just letting you know, we now have the documentation of all releases,
> including the latest snapshot version, available on the Flink website.
>
> While checking out the latest master documentation, Marton and I were
> wondering if some of the documentation should really be available for
> the general public. In particular, [1] which is just a stub. I think
> it is ok to have work-in-progress documentation in the latest snapshot
> but maybe that is up for debate.
>
> Also, I came across some problems with SVG files in the documentation
> [2]. The files are correctly embedded and linked but are corrupted.
> Would be great if that could be fixed. Firefox complains "Attempt to
> use XML processing instruction in HTML" in the first line of the SVG.
>
> Best regards,
> Max
>
> [1] 
> http://ci.apache.org/projects/flink/flink-docs-master/internal_distributed_akka.html
> [2] 
> http://ci.apache.org/projects/flink/flink-docs-master/internal_general_arch.html


Re: Documentation in the master

2015-03-04 Thread Maximilian Michels
Hi Henry,

There wasn't a specific issue for pushing the latest documentation to
the website but it was covered by the issue of automatically building
the latest documentation [1].

I had assigned that issue [1] to myself, so I think it was OK to skip the
dev mailing list in this case.

Kind regards,
Max

[1] https://issues.apache.org/jira/browse/FLINK-1370

On Wed, Mar 4, 2015 at 4:57 PM, Henry Saputra  wrote:
> This is great news, Max!
>
> Thanks for picking up the task.
>
> Sorry I was late filing an issue for this, but did it anyway: FLINK-1647.
> I have assigned it to you, so feel free to mark it as Resolved.
>
> Just a small comment: next time, please let everyone know via the dev@
> list that you are working on something, to make sure there is no
> duplicate effort.
>
> Great job.
>
> - Henry
>
> On Wed, Mar 4, 2015 at 6:30 AM, Maximilian Michels  wrote:
>> Hey there,
>>
>> Just letting you know, we now have the documentation of all releases,
>> including the latest snapshot version, available on the Flink website.
>>
>> While checking out the latest master documentation, Marton and I were
>> wondering if some of the documentation should really be available for
>> the general public. In particular, [1] which is just a stub. I think
>> it is ok to have work-in-progress documentation in the latest snapshot
>> but maybe that is up for debate.
>>
>> Also, I came across some problems with SVG files in the documentation
>> [2]. The files are correctly embedded and linked but are corrupted.
>> Would be great if that could be fixed. Firefox complains "Attempt to
>> use XML processing instruction in HTML" in the first line of the SVG.
>>
>> Best regards,
>> Max
>>
>> [1] 
>> http://ci.apache.org/projects/flink/flink-docs-master/internal_distributed_akka.html
>> [2] 
>> http://ci.apache.org/projects/flink/flink-docs-master/internal_general_arch.html


Re: Documentation in the master

2015-03-04 Thread Henry Saputra
Hi Max,

Sounds good to me. Thanks again for picking up the task.

- Henry

On Wed, Mar 4, 2015 at 8:07 AM, Maximilian Michels  wrote:
> Hi Henry,
>
> There wasn't a specific issue for pushing the latest documentation to
> the website but it was covered by the issue of automatically building
> the latest documentation [1].
>
> I assigned the issue [1] to myself. So I think it's ok to skip the dev
> mailing list then.
>
> Kind regards,
> Max
>
> [1] https://issues.apache.org/jira/browse/FLINK-1370
>
> On Wed, Mar 4, 2015 at 4:57 PM, Henry Saputra  wrote:
>> This is great news, Max!
>>
>> Thanks for picking up the task.
>>
>> Sorry I was late filing an issue for this, but did it anyway: FLINK-1647.
>> I have assigned it to you, so feel free to mark it as Resolved.
>>
>> Just a small comment: next time, please let everyone know via the dev@
>> list that you are working on something, to make sure there is no
>> duplicate effort.
>>
>> Great job.
>>
>> - Henry
>>
>> On Wed, Mar 4, 2015 at 6:30 AM, Maximilian Michels  wrote:
>>> Hey there,
>>>
>>> Just letting you know, we now have the documentation of all releases,
>>> including the latest snapshot version, available on the Flink website.
>>>
>>> While checking out the latest master documentation, Marton and I were
>>> wondering if some of the documentation should really be available for
>>> the general public. In particular, [1] which is just a stub. I think
>>> it is ok to have work-in-progress documentation in the latest snapshot
>>> but maybe that is up for debate.
>>>
>>> Also, I came across some problems with SVG files in the documentation
>>> [2]. The files are correctly embedded and linked but are corrupted.
>>> Would be great if that could be fixed. Firefox complains "Attempt to
>>> use XML processing instruction in HTML" in the first line of the SVG.
>>>
>>> Best regards,
>>> Max
>>>
>>> [1] 
>>> http://ci.apache.org/projects/flink/flink-docs-master/internal_distributed_akka.html
>>> [2] 
>>> http://ci.apache.org/projects/flink/flink-docs-master/internal_general_arch.html


Re: [MultipleProgramsTestBase][Cluster vs. Collection mode] Inconsistent Behavior

2015-03-04 Thread Vasiliki Kalavri
Hi Andra,

judging from the output, it seems that all 3 supersteps are executed in the
second case as well,
but getSuperstepNumber() is returning the wrong superstep number.
I confirmed that this is also the case
in VertexCentricConnectedComponentsITCase
and SpargelConnectedComponentsITCase, i.e. the superstep number is wrong,
but the results produced are correct.
I'll try to find out what's wrong.

-V.

On 4 March 2015 at 16:31, Andra Lungu  wrote:

> Hello,
>
> I have implemented a Bulk Synchronous Version of Triangle Count. The code
> can be found here:
> https://github.com/andralungu/gelly-partitioning/tree/triangles
>
> In this algorithm, the messages sent differ as the superstep differs. In
> order to distinguish between superstep numbers, I used the
> getSuperstepNumber() function.
>
> In order to test the overall implementation, I have extended
> MultipleProgramsTestBase... nothing unusual until here. The problem is that
> in CLUSTER mode, the test passes and the result is the one expected because
> the superstep number changes, as can be seen below:
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Messenger]Step 2
> [Messenger]Step 2
> [Messenger]Step 2
> [Messenger]Step 2
> [Messenger]Step 2
> [Messenger]Step 2
> [Update]Step 2
> [Update]Step 2
> [Update]Step 2
> [Update]Step 2
> [Update]Step 2
> [Messenger]Step 3
> [Messenger]Step 3
> [Messenger]Step 3
> [Messenger]Step 3
> [Messenger]Step 3
> [Update]Step 3
> [Update]Step 3
> [Update]Step 3
>
> For COLLECTION, the superstep number remains 1, and the result is obviously
> not the one I expected.
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Messenger]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
> [Update]Step 1
>
> Does anyone have an idea what could have triggered this behaviour?
>
> Thanks in advance!
> Andra
>


Re: Documentation in the master

2015-03-04 Thread Ufuk Celebi
Nice, Max! :)

On 04 Mar 2015, at 15:36, Stephan Ewen  wrote:

> Great, thanks Max!
> 
> Concerning (1), in the snapshot master, we can have stubs, IMHO.

I agree :)


[jira] [Created] (FLINK-1648) Add a mode where the system automatically sets the parallelism to the available task slots

2015-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-1648:
---

 Summary: Add a mode where the system automatically sets the 
parallelism to the available task slots
 Key: FLINK-1648
 URL: https://issues.apache.org/jira/browse/FLINK-1648
 Project: Flink
  Issue Type: New Feature
  Components: JobManager
Affects Versions: 0.9
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.9


This is basically a port of this code from the 0.8 release:

https://github.com/apache/flink/pull/410





Re: [MultipleProgramsTestBase][Cluster vs. Collection mode] Inconsistent Behavior

2015-03-04 Thread Vasiliki Kalavri
Hi,

I have found the issue, but will need some help to resolve it :)

In CollectionExecutor, if the superstep number is > 0 (I guess this means
iteration?)
a new IterationRuntimeUDFContext is created for every unary and binary
operator, being assigned this superstep number in its constructor.

However, in VertexCentricIteration, MessagingFunction and
VertexUpdateFunction are assigned a unique IterationRuntimeContext in the
open method of their wrappers, like this:

public void open(Configuration parameters) throws Exception {
    if (getIterationRuntimeContext().getSuperstepNumber() == 1) {
        this.vertexUpdateFunction.init(getIterationRuntimeContext());
    }
}

There is no comment, but I suppose the above was written having something
in mind that I'm not aware of...
Is the IterationRuntimeContext supposed to be unique and the
CollectionExecutor has to make sure to update the superstep number
accordingly (like it does with the aggregators) or shall we assign the new
context in every superstep?

Thanks!

-Vasia.

On 4 March 2015 at 17:46, Vasiliki Kalavri 
wrote:

> Hi Andra,
>
> judging from the output, it seems that all 3 supersteps are executed in
> the second case as well,
> but getSuperstepNumber() is returning the wrong superstep number.
> I confirmed that this is also the case
> in VertexCentricConnectedComponentsITCase
> and SpargelConnectedComponentsITCase, i.e. the superstep number is wrong,
> but the results produced are correct.
> I'll try to find out what's wrong.
>
> -V.
>
> On 4 March 2015 at 16:31, Andra Lungu  wrote:
>
>> Hello,
>>
>> I have implemented a Bulk Synchronous Version of Triangle Count. The code
>> can be found here:
>> https://github.com/andralungu/gelly-partitioning/tree/triangles
>>
>> In this algorithm, the messages sent differ as the superstep differs. In
>> order to distinguish between superstep numbers, I used the
>> getSuperstepNumber() function.
>>
>> In order to test the overall implementation, I have extended
>> MultipleProgramsTestBase... nothing unusual until here. The problem is
>> that
>> in CLUSTER mode, the test passes and the result is the one expected
>> because
>> the superstep number changes, as can be seen below:
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Messenger]Step 2
>> [Messenger]Step 2
>> [Messenger]Step 2
>> [Messenger]Step 2
>> [Messenger]Step 2
>> [Messenger]Step 2
>> [Update]Step 2
>> [Update]Step 2
>> [Update]Step 2
>> [Update]Step 2
>> [Update]Step 2
>> [Messenger]Step 3
>> [Messenger]Step 3
>> [Messenger]Step 3
>> [Messenger]Step 3
>> [Messenger]Step 3
>> [Update]Step 3
>> [Update]Step 3
>> [Update]Step 3
>>
>> For COLLECTION, the superstep number remains 1, and the result is
>> obviously
>> not the one I expected.
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Messenger]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>> [Update]Step 1
>>
>> Does anyone have an idea what could have triggered this behaviour?
>>
>> Thanks in advance!
>> Andra
>>
>
>


Re: [MultipleProgramsTestBase][Cluster vs. Collection mode] Inconsistent Behavior

2015-03-04 Thread Stephan Ewen
My gut feeling would be that the IterationRuntimeContext should be the same
across all iterations.

It only needs to support returning a different superstep number in each
superstep.
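That direction could be sketched as follows. The class and method names here are illustrative, not Flink's actual API: the executor keeps one context instance alive across the whole iteration and only advances its superstep counter, so functions that captured the context in superstep 1 still observe the current number:

```java
// Illustrative sketch of a shared iteration context: rather than creating
// a fresh context per operator per superstep (leaving early-captured
// references stuck at superstep 1), a single instance is mutated in place.
public class SharedIterationContext {

    private int superstepNumber = 1;

    public int getSuperstepNumber() {
        return superstepNumber;
    }

    // Called by the executor at the start of each new superstep.
    public void nextSuperstep() {
        superstepNumber++;
    }

    public static void main(String[] args) {
        SharedIterationContext ctx = new SharedIterationContext();
        SharedIterationContext capturedInOpen = ctx; // captured in superstep 1

        ctx.nextSuperstep(); // superstep 2
        ctx.nextSuperstep(); // superstep 3

        // Because the instance is shared, the early capture is not stale.
        System.out.println(capturedInOpen.getSuperstepNumber()); // 3
    }
}
```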

On Wed, Mar 4, 2015 at 7:13 PM, Vasiliki Kalavri 
wrote:

> Hi,
>
> I have found the issue, but will need some help to resolve it :)
>
> In CollectionExecutor, if the superstep number is > 0 (I guess this means
> iteration?)
> a new IterationRuntimeUDFContext is created for every unary and binary
> operator, being assigned this superstep number in its constructor.
>
> However, in VertexCentricIteration, MessagingFunction and
> VertexUpdateFunction are assigned a unique IterationRuntimeContext in the
> open method of their wrappers, like this:
>
> public void open(Configuration parameters) throws Exception {
> if (getIterationRuntimeContext().getSuperstepNumber() == 1) {
> this.vertexUpdateFunction.init(getIterationRuntimeContext());
> }
> }
>
> There is no comment, but I suppose the above was written having something
> in mind that I'm not aware of...
> Is the IterationRuntimeContext supposed to be unique and the
> CollectionExecutor has to make sure to update the superstep number
> accordingly (like it does with the aggregators) or shall we assign the new
> context in every superstep?
>
> Thanks!
>
> -Vasia.
>
> On 4 March 2015 at 17:46, Vasiliki Kalavri 
> wrote:
>
> > Hi Andra,
> >
> > judging from the output, it seems that all 3 supersteps are executed in
> > the second case as well,
> > but getSuperstepNumber() is returning the wrong superstep number.
> > I confirmed that this is also the case
> > in VertexCentricConnectedComponentsITCase
> > and SpargelConnectedComponentsITCase, i.e. the superstep number is wrong,
> > but the results produced are correct.
> > I'll try to find out what's wrong.
> >
> > -V.
> >
> > On 4 March 2015 at 16:31, Andra Lungu  wrote:
> >
> >> Hello,
> >>
> >> I have implemented a Bulk Synchronous Version of Triangle Count. The
> code
> >> can be found here:
> >> https://github.com/andralungu/gelly-partitioning/tree/triangles
> >>
> >> In this algorithm, the messages sent differ as the superstep differs. In
> >> order to distinguish between superstep numbers, I used the
> >> getSuperstepNumber() function.
> >>
> >> In order to test the overall implementation, I have extended
> >> MultipleProgramsTestBase... nothing unusual until here. The problem is
> >> that
> >> in CLUSTER mode, the test passes and the result is the one expected
> >> because
> >> the superstep number changes, as can be seen below:
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Messenger]Step 2
> >> [Messenger]Step 2
> >> [Messenger]Step 2
> >> [Messenger]Step 2
> >> [Messenger]Step 2
> >> [Messenger]Step 2
> >> [Update]Step 2
> >> [Update]Step 2
> >> [Update]Step 2
> >> [Update]Step 2
> >> [Update]Step 2
> >> [Messenger]Step 3
> >> [Messenger]Step 3
> >> [Messenger]Step 3
> >> [Messenger]Step 3
> >> [Messenger]Step 3
> >> [Update]Step 3
> >> [Update]Step 3
> >> [Update]Step 3
> >>
> >> For COLLECTION, the superstep number remains 1, and the result is
> >> obviously
> >> not the one I expected.
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Messenger]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >> [Update]Step 1
> >>
> >> Does anyone have an idea what could have triggered this behaviour?
> >>
> >> Thanks in advance!
> >> Andra
> >>
> >
> >
>


[jira] [Created] (FLINK-1650) Suppress Akka's Netty Shutdown Errors through the log config

2015-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-1650:
---

 Summary: Suppress Akka's Netty Shutdown Errors through the log 
config
 Key: FLINK-1650
 URL: https://issues.apache.org/jira/browse/FLINK-1650
 Project: Flink
  Issue Type: Bug
  Components: other
Affects Versions: 0.9
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.9


I suggest setting the log level for 
`org.jboss.netty.channel.DefaultChannelPipeline` to ERROR, in order to get rid 
of the misleading stack trace caused by an Akka/Netty hiccup on shutdown.
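The change could be sketched as a single logger entry (assuming a log4j 1.x properties file, which Flink used at the time):

```properties
# Reduce logging from Akka's embedded Netty pipeline, per the suggestion above.
log4j.logger.org.jboss.netty.channel.DefaultChannelPipeline=ERROR
```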





[jira] [Created] (FLINK-1649) Give a good error message when a user program emits a null record

2015-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-1649:
---

 Summary: Give a good error message when a user program emits a 
null record
 Key: FLINK-1649
 URL: https://issues.apache.org/jira/browse/FLINK-1649
 Project: Flink
  Issue Type: Bug
  Components: Local Runtime
Affects Versions: 0.9
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.9








[jira] [Created] (FLINK-1651) Running mvn test got stuck

2015-03-04 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1651:


 Summary: Running mvn test got stuck
 Key: FLINK-1651
 URL: https://issues.apache.org/jira/browse/FLINK-1651
 Project: Flink
  Issue Type: Bug
  Components: test
Reporter: Henry Saputra


I keep getting my test stuck at this state:

...

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec - in 
org.apache.flink.runtime.types.TypeTest
Running org.apache.flink.runtime.util.AtomicDisposableReferenceCounterTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec - in 
org.apache.flink.runtime.util.AtomicDisposableReferenceCounterTest
Running org.apache.flink.runtime.util.DataInputOutputSerializerTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.848 sec - in 
org.apache.flink.runtime.operators.DataSourceTaskTest
Running org.apache.flink.runtime.util.DelegatingConfigurationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in 
org.apache.flink.runtime.util.DelegatingConfigurationTest
Running org.apache.flink.runtime.util.EnvironmentInformationTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.563 sec - in 
org.apache.flink.runtime.io.network.serialization.LargeRecordsTest
Running org.apache.flink.runtime.util.event.TaskEventHandlerTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec - in 
org.apache.flink.runtime.util.event.TaskEventHandlerTest
Running org.apache.flink.runtime.util.LRUCacheMapTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 sec - in 
org.apache.flink.runtime.util.LRUCacheMapTest
Running org.apache.flink.runtime.util.MathUtilTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in 
org.apache.flink.runtime.util.MathUtilTest
Running org.apache.flink.runtime.util.NonReusingKeyGroupedIteratorTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.064 sec - in 
org.apache.flink.runtime.util.NonReusingKeyGroupedIteratorTest
Running org.apache.flink.runtime.util.ReusingKeyGroupedIteratorTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in 
org.apache.flink.runtime.util.ReusingKeyGroupedIteratorTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.238 sec - in 
org.apache.flink.runtime.taskmanager.TaskManagerTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.616 sec - in 
org.apache.flink.runtime.profiling.impl.InstanceProfilerTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.303 sec - in 
org.apache.flink.runtime.util.DataInputOutputSerializerTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.488 sec - in 
org.apache.flink.runtime.util.EnvironmentInformationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.81 sec - in 
org.apache.flink.runtime.taskmanager.TaskManagerProcessReapingTest
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.653 sec - 
in org.apache.flink.runtime.operators.MatchTaskTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.071 sec - in 
org.apache.flink.runtime.operators.sort.LargeRecordHandlerTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.534 sec - in 
org.apache.flink.runtime.operators.DataSinkTaskTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.98 sec - in 
org.apache.flink.runtime.operators.sort.NormalizedKeySorterTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.017 sec - in 
org.apache.flink.runtime.io.disk.ChannelViewsTest


After this, it seemed like nothing was happening, and the program just hung.

I am using Mac OS X with Java version 1.7.0_71.







[jira] [Created] (FLINK-1652) Wrong superstep number in VertexCentricIteration in Collection mode

2015-03-04 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-1652:


 Summary: Wrong superstep number in VertexCentricIteration in 
Collection mode
 Key: FLINK-1652
 URL: https://issues.apache.org/jira/browse/FLINK-1652
 Project: Flink
  Issue Type: Bug
  Components: Spargel, Gelly, Iterations
Reporter: Vasia Kalavri


In collection execution mode, the superstep number is not correctly 
updated for Spargel's and Gelly's VertexCentricIteration. There seems to be no 
problem with DeltaIteration.

See also relevant [discussion in dev@ | 
https://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX4YiNqqSXAYrK0PAwvEDYm%2Bjakvvu8%3Dvup62H4Vwc_uMQ%40mail.gmail.com%3E].





Re: [MultipleProgramsTestBase][Cluster vs. Collection mode] Inconsistent Behavior

2015-03-04 Thread Vasiliki Kalavri
Thanks!

I created an issue [1] and will try to look into this during the weekend
(unless someone gets to it faster :))

Cheers,
V.

[1]: http://issues.apache.org/jira/browse/FLINK-1652

On 4 March 2015 at 20:29, Stephan Ewen  wrote:

> My gut feeling would be that the IterationRuntimeContext should be the same
> across all iterations.
>
> It only needs to support returning a different superstep number in each
> superstep.
>
> On Wed, Mar 4, 2015 at 7:13 PM, Vasiliki Kalavri <
> vasilikikala...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I have found the issue, but will need some help to resolve it :)
> >
> > In CollectionExecutor, if the superstep number is > 0 (I guess this means
> > iteration?)
> > a new IterationRuntimeUDFContext is created for every unary and binary
> > operator, being assigned this superstep number in its constructor.
> >
> > However, in VertexCentricIteration, MessagingFunction and
> > VertexUpdateFunction are assigned a unique IterationRuntimeContext in the
> > open method of their wrappers, like this:
> >
> > public void open(Configuration parameters) throws Exception {
> > if (getIterationRuntimeContext().getSuperstepNumber() == 1) {
> > this.vertexUpdateFunction.init(getIterationRuntimeContext());
> > }
> > }
> >
> > There is no comment, but I suppose the above was written having something
> > in mind that I'm not aware of...
> > Is the IterationRuntimeContext supposed to be unique and the
> > CollectionExecutor has to make sure to update the superstep number
> > accordingly (like it does with the aggregators) or shall we assign the
> new
> > context in every superstep?
> >
> > Thanks!
> >
> > -Vasia.
> >
> > On 4 March 2015 at 17:46, Vasiliki Kalavri 
> > wrote:
> >
> > > Hi Andra,
> > >
> > > judging from the output, it seems that all 3 supersteps are executed in
> > > the second case as well,
> > > but getSuperstepNumber() is returning the wrong superstep number.
> > > I confirmed that this is also the case
> > > in VertexCentricConnectedComponentsITCase
> > > and SpargelConnectedComponentsITCase, i.e. the superstep number is
> wrong,
> > > but the results produced are correct.
> > > I'll try to find out what's wrong.
> > >
> > > -V.
> > >
> > > On 4 March 2015 at 16:31, Andra Lungu  wrote:
> > >
> > >> Hello,
> > >>
> > >> I have implemented a Bulk Synchronous Version of Triangle Count. The
> > code
> > >> can be found here:
> > >> https://github.com/andralungu/gelly-partitioning/tree/triangles
> > >>
> > >> In this algorithm, the messages sent differ depending on the superstep.
> > >> To distinguish between supersteps, I used the getSuperstepNumber()
> > >> function.
> > >>
> > >> To test the overall implementation, I have extended
> > >> MultipleProgramsTestBase... nothing unusual so far. In CLUSTER mode, the
> > >> test passes and the result is the expected one, because the superstep
> > >> number changes, as can be seen below:
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Messenger]Step 2
> > >> [Messenger]Step 2
> > >> [Messenger]Step 2
> > >> [Messenger]Step 2
> > >> [Messenger]Step 2
> > >> [Messenger]Step 2
> > >> [Update]Step 2
> > >> [Update]Step 2
> > >> [Update]Step 2
> > >> [Update]Step 2
> > >> [Update]Step 2
> > >> [Messenger]Step 3
> > >> [Messenger]Step 3
> > >> [Messenger]Step 3
> > >> [Messenger]Step 3
> > >> [Messenger]Step 3
> > >> [Update]Step 3
> > >> [Update]Step 3
> > >> [Update]Step 3
> > >>
> > >> For COLLECTION, the superstep number remains 1, and the result is
> > >> obviously
> > >> not the one I expected.
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Messenger]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >> [Update]Step 1
> > >>
> > >> Does anyone have an idea what could have triggered this behaviour?
> > >>
> > >> Thanks in advance!
> > >> Andra
> > >>
> > >
> > >
> >
>


[jira] [Created] (FLINK-1653) Setting up Apache Jenkins CI for continuous tests

2015-03-04 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1653:


 Summary: Setting up Apache Jenkins CI for continuous tests
 Key: FLINK-1653
 URL: https://issues.apache.org/jira/browse/FLINK-1653
 Project: Flink
  Issue Type: Task
  Components: Build System
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor


We already have a Travis CI build for the Apache Flink GitHub mirror.

This task tracks the effort to set up a Flink Jenkins CI in the ASF environment
[1]


[1] https://builds.apache.org





[jira] [Created] (FLINK-1654) Wrong scala example of POJO type in documentation

2015-03-04 Thread Chiwan Park (JIRA)
Chiwan Park created FLINK-1654:
--

 Summary: Wrong scala example of POJO type in documentation
 Key: FLINK-1654
 URL: https://issues.apache.org/jira/browse/FLINK-1654
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.9
Reporter: Chiwan Park
Priority: Trivial


In the
[documentation|https://github.com/chiwanpark/flink/blob/master/docs/programming_guide.md#pojos],
there is a Scala example of a POJO:

{code}
class WordWithCount(val word: String, val count: Int) {
  def this() {
this(null, -1)
  }
}
{code}

I think this is wrong because a Flink POJO requires either public fields or
private fields with getters and setters, and fields of a Scala class are
private by default. We should change the field declarations to use the `var`
keyword, or change the class declaration to a case class.
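
Either fix could look like the following (untested sketch):

{code}
// Option 1: use var so the constructor parameters become public mutable fields
class WordWithCount(var word: String, var count: Int) {
  def this() {
    this(null, -1)
  }
}

// Option 2: declare the type as a case class instead
case class WordWithCount(word: String, count: Int)
{code}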





Re: Could not build up connection to JobManager

2015-03-04 Thread Dulaj Viduranga
Hi,
I found many other places where “localhost” is hard-coded. I changed them in
what I think is a better way and made a pull request. Please review: b7da22a
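
A minimal sketch of the config-driven approach described below (hypothetical helper, not Flink's actual configuration classes; the key name is an assumption):

```java
import java.util.Map;

// Hypothetical sketch: resolve the JobManager address from the loaded
// configuration, falling back to the explicit loopback literal rather than
// hard-coding "localhost", which DNS may resolve to a non-loopback IP.
public class JobManagerAddressSketch {

    // Assumed config key name for illustration only.
    static final String JOB_MANAGER_ADDRESS_KEY = "jobmanager.rpc.address";

    static String resolveJobManagerAddress(Map<String, String> config) {
        String configured = config.get(JOB_MANAGER_ADDRESS_KEY);
        return (configured == null || configured.isEmpty()) ? "127.0.0.1" : configured;
    }

    public static void main(String[] args) {
        System.out.println(resolveJobManagerAddress(Map.of()));                  // 127.0.0.1
        System.out.println(resolveJobManagerAddress(
                Map.of(JOB_MANAGER_ADDRESS_KEY, "10.0.0.5")));                   // 10.0.0.5
    }
}
```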


> On Mar 4, 2015, at 8:17 PM, Stephan Ewen  wrote:
> 
> If I recall correctly, we only hardcode "localhost" in the local mini
> cluster - do you think it is problematic there as well?
> 
> Have you found any other places?
> 
> On Mon, Mar 2, 2015 at 10:26 AM, Dulaj Viduranga 
> wrote:
> 
>> In some places in the code, "localhost" is hard-coded. When it is resolved
>> by DNS, it is possible to be directed to a different IP other than
>> 127.0.0.1 (like private range 10.0.0.0/8). I changed those places to
>> 127.0.0.1 and it works like a charm.
>> But hard coding 127.0.0.1 is not a good option because when the jobmanager
>> ip is changed, this becomes an issue again. I'm thinking of setting
>> jobmanager ip from the config.yaml to these places.
>> If you have a better idea on doing this with your experience, please let
>> me know.
>> 
>> Best.
>> 



Re: Could not build up connection to JobManager

2015-03-04 Thread Dulaj Viduranga
Not every change in commit b7da22a is strictly required, but I thought they
were appropriate.

> On Mar 5, 2015, at 8:11 AM, Dulaj Viduranga  wrote:
> 
> Hi,
> I found many other places “localhost” is hard coded. I changed them in a 
> better way I think. I made a pull request. Please review. b7da22a 
> 
> 
>> On Mar 4, 2015, at 8:17 PM, Stephan Ewen  wrote:
>> 
>> If I recall correctly, we only hardcode "localhost" in the local mini
>> cluster - do you think it is problematic there as well?
>> 
>> Have you found any other places?
>> 
>> On Mon, Mar 2, 2015 at 10:26 AM, Dulaj Viduranga 
>> wrote:
>> 
>>> In some places in the code, "localhost" is hard-coded. When it is resolved
>>> by DNS, it is possible to be directed to a different IP other than
>>> 127.0.0.1 (like private range 10.0.0.0/8). I changed those places to
>>> 127.0.0.1 and it works like a charm.
>>> But hard coding 127.0.0.1 is not a good option because when the jobmanager
>>> ip is changed, this becomes an issue again. I'm thinking of setting
>>> jobmanager ip from the config.yaml to these places.
>>> If you have a better idea on doing this with your experience, please let
>>> me know.
>>> 
>>> Best.
>>> 
> 



Not Flink related: IntelliJ cannot open maven projects

2015-03-04 Thread Dulaj Viduranga
Hello,

Not Flink related.
IntelliJ cannot open Maven projects; it is stuck on “Looking for available
profiles…”. I can open the project in Eclipse without any problem.
Any ideas?
No rush.

Thanks. 

Re: [jira] [Created] (FLINK-1651) Running mvn test got stuck

2015-03-04 Thread Till Rohrmann
Is this reproducible? If so, then a stack trace of the JVM would be
helpful. With the stack trace we would know which test case stalls.
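
For reference, one possible way to capture such a trace with the stock JDK tools (the PID placeholder must be substituted with what jps prints):

```shell
# List running JVMs; the forked test JVM shows up as a surefirebooter process.
jps -l | grep -i surefire

# Dump all thread stacks of that JVM (replace <pid> with the PID from above).
jstack <pid> > surefire-threads.txt
```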

On Wed, Mar 4, 2015 at 9:46 PM, Henry Saputra (JIRA) 
wrote:

> Henry Saputra created FLINK-1651:
> 
>
>  Summary: Running mvn test got stuck
>  Key: FLINK-1651
>  URL: https://issues.apache.org/jira/browse/FLINK-1651
>  Project: Flink
>   Issue Type: Bug
>   Components: test
> Reporter: Henry Saputra
>
>
> I keep getting my test stuck at this state:
>
> ...
>
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec
> - in org.apache.flink.runtime.types.TypeTest
> Running org.apache.flink.runtime.util.AtomicDisposableReferenceCounterTest
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec
> - in org.apache.flink.runtime.util.AtomicDisposableReferenceCounterTest
> Running org.apache.flink.runtime.util.DataInputOutputSerializerTest
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.848 sec
> - in org.apache.flink.runtime.operators.DataSourceTaskTest
> Running org.apache.flink.runtime.util.DelegatingConfigurationTest
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
> - in org.apache.flink.runtime.util.DelegatingConfigurationTest
> Running org.apache.flink.runtime.util.EnvironmentInformationTest
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.563 sec
> - in org.apache.flink.runtime.io.network.serialization.LargeRecordsTest
> Running org.apache.flink.runtime.util.event.TaskEventHandlerTest
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec
> - in org.apache.flink.runtime.util.event.TaskEventHandlerTest
> Running org.apache.flink.runtime.util.LRUCacheMapTest
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 sec
> - in org.apache.flink.runtime.util.LRUCacheMapTest
> Running org.apache.flink.runtime.util.MathUtilTest
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
> - in org.apache.flink.runtime.util.MathUtilTest
> Running org.apache.flink.runtime.util.NonReusingKeyGroupedIteratorTest
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.064 sec
> - in org.apache.flink.runtime.util.NonReusingKeyGroupedIteratorTest
> Running org.apache.flink.runtime.util.ReusingKeyGroupedIteratorTest
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
> - in org.apache.flink.runtime.util.ReusingKeyGroupedIteratorTest
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.238 sec
> - in org.apache.flink.runtime.taskmanager.TaskManagerTest
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.616 sec
> - in org.apache.flink.runtime.profiling.impl.InstanceProfilerTest
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.303 sec
> - in org.apache.flink.runtime.util.DataInputOutputSerializerTest
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.488 sec
> - in org.apache.flink.runtime.util.EnvironmentInformationTest
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.81 sec -
> in org.apache.flink.runtime.taskmanager.TaskManagerProcessReapingTest
> Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.653
> sec - in org.apache.flink.runtime.operators.MatchTaskTest
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.071 sec
> - in org.apache.flink.runtime.operators.sort.LargeRecordHandlerTest
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.534 sec
> - in org.apache.flink.runtime.operators.DataSinkTaskTest
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.98 sec
> - in org.apache.flink.runtime.operators.sort.NormalizedKeySorterTest
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.017 sec
> - in org.apache.flink.runtime.io.disk.ChannelViewsTest
>
>
> After this it seemed like nothing happened, and the program just hung.
>
> I am using MacOSX with Java version 1.7.0_71
>
>
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>


Re: Not Flink related: IntelliJ cannot open maven projects

2015-03-04 Thread Till Rohrmann
That is odd. Most of the committers are using IntelliJ to develop Flink.
Have you tried deleting the Flink directory and checking it out again? Make
sure that all IntelliJ-related files (*.iml) are deleted so that it is really
a fresh import.

I just cloned the flink repository and imported the project into IntelliJ.
It worked for me. I'm using IntelliJ 14.0.3 and Java 7.
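
A concrete way to do that cleanup, demonstrated here in a scratch directory (run the same two removal commands from the root of your Flink checkout):

```shell
# Set up a scratch directory that mimics stale IntelliJ project files.
demo=/tmp/flink-iml-demo
mkdir -p "$demo/.idea"
touch "$demo/flink-core.iml" "$demo/flink-java.iml"

# Remove all IntelliJ module files and the project-level settings directory.
find "$demo" -name "*.iml" -delete
rm -rf "$demo/.idea"

find "$demo" -name "*.iml"   # prints nothing: the next import starts fresh
```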

On Thu, Mar 5, 2015 at 4:54 AM, Dulaj Viduranga 
wrote:

> Hello,
>
> Not Flink related.
> IntelliJ cannot open maven projects. It is stuck on “Looking for available
> profiles….”. I can open the project in eclipse without any problem.
> Any ideas?
> No rush.
>
> Thanks.


Re: Could not build up connection to JobManager

2015-03-04 Thread Till Rohrmann
Hi Dulaj,

I looked through your commit and noticed that the JobClient might not be
listening on the right network interface. Your commit seems to fix it. I
just want to understand the problem properly and therefore I opened a
branch with a small change. Could you try out whether this change would
also fix your problem? You can find the code here [1]. Would be awesome if
you checked it out and let it run on your cluster setting. Thanks a lot
Dulaj!

[1]
https://github.com/tillrohrmann/flink/tree/fixLocalFlinkMiniClusterJobClient
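
Assuming an existing Flink clone and network access, trying out the branch could look like this (branch name taken from the URL in [1]):

```shell
# Add Till's fork, fetch it, and check out the fix branch locally.
git remote add tillrohrmann https://github.com/tillrohrmann/flink.git
git fetch tillrohrmann
git checkout -b fixLocalFlinkMiniClusterJobClient \
    tillrohrmann/fixLocalFlinkMiniClusterJobClient

# Rebuild, then rerun the job on the cluster setup.
mvn clean install -DskipTests
```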

On Thu, Mar 5, 2015 at 4:21 AM, Dulaj Viduranga 
wrote:

> Not every change in commit b7da22a is strictly required, but I thought they
> were appropriate.
>
> > On Mar 5, 2015, at 8:11 AM, Dulaj Viduranga 
> wrote:
> >
> > Hi,
> > I found many other places “localhost” is hard coded. I changed them in a
> better way I think. I made a pull request. Please review. b7da22a <
> https://github.com/viduranga/flink/commit/b7da22a562d3da5a9be2657308c0f82e4e2f80cd
> >
> >
> >> On Mar 4, 2015, at 8:17 PM, Stephan Ewen  wrote:
> >>
> >> If I recall correctly, we only hardcode "localhost" in the local mini
> >> cluster - do you think it is problematic there as well?
> >>
> >> Have you found any other places?
> >>
> >> On Mon, Mar 2, 2015 at 10:26 AM, Dulaj Viduranga 
> >> wrote:
> >>
> >>> In some places in the code, "localhost" is hard-coded. When it is
> >>> resolved by DNS, it is possible to be directed to a different IP other than
> >>> 127.0.0.1 (like private range 10.0.0.0/8). I changed those places to
> >>> 127.0.0.1 and it works like a charm.
> >>> But hard coding 127.0.0.1 is not a good option because when the
> jobmanager
> >>> ip is changed, this becomes an issue again. I'm thinking of setting
> >>> jobmanager ip from the config.yaml to these places.
> >>> If you have a better idea on doing this with your experience, please
> let
> >>> me know.
> >>>
> >>> Best.
> >>>
> >
>
>


Re: [jira] [Created] (FLINK-1651) Running mvn test got stuck

2015-03-04 Thread Henry Saputra
This consistently reproduces on my Mac work machine. Nothing really shows
up in the jstack thread dump or Java Mission Control =(

The console just hangs, but I don't see any deadlock in the thread traces.

- Henry

On Wed, Mar 4, 2015 at 11:02 PM, Till Rohrmann  wrote:
> Is this reproducible? If so, then a stack trace of the JVM would be
> helpful. With the stack trace we would know which test case stalls.
>
> On Wed, Mar 4, 2015 at 9:46 PM, Henry Saputra (JIRA) 
> wrote:
>
>> Henry Saputra created FLINK-1651:
>> 
>>
>>  Summary: Running mvn test got stuck
>>  Key: FLINK-1651
>>  URL: https://issues.apache.org/jira/browse/FLINK-1651
>>  Project: Flink
>>   Issue Type: Bug
>>   Components: test
>> Reporter: Henry Saputra
>>
>>
>> I keep getting my test stuck at this state:
>>
>> ...
>>
>> [... duplicated test output snipped; identical to the original report ...]
>>
>>
>> After this it seemed like nothing happened, and the program just hung.
>>
>> I am using MacOSX with Java version 1.7.0_71
>>
>>
>>
>>
>>
>> --
>> This message was sent by Atlassian JIRA
>> (v6.3.4#6332)
>>


Re: [jira] [Created] (FLINK-1651) Running mvn test got stuck

2015-03-04 Thread Till Rohrmann
That is odd. I just ran mvn clean verify -Dmaven.javadoc.skip=true on my
Mac work machine and it ran through successfully.

Could you post which Java processes are running when the console hangs, and
also the stack traces of all surefirebooter processes? Maybe someone can
make head or tail of it.

On Thu, Mar 5, 2015 at 8:34 AM, Henry Saputra 
wrote:

> This is consistently repro in my mac work machine. Nothing really on
> the JStack thread dump or Java Mission Control =(
>
> The console just hang but do not see any deadlock from the thread trace.
>
> - Henry
>
> On Wed, Mar 4, 2015 at 11:02 PM, Till Rohrmann 
> wrote:
> > Is this reproducible? If so, then a stack trace of the JVM would be
> > helpful. With the stack trace we would know which test case stalls.
> >
> > On Wed, Mar 4, 2015 at 9:46 PM, Henry Saputra (JIRA) 
> > wrote:
> >
> >> Henry Saputra created FLINK-1651:
> >> 
> >>
> >>  Summary: Running mvn test got stuck
> >>  Key: FLINK-1651
> >>  URL: https://issues.apache.org/jira/browse/FLINK-1651
> >>  Project: Flink
> >>   Issue Type: Bug
> >>   Components: test
> >> Reporter: Henry Saputra
> >>
> >>
> >> I keep getting my test stuck at this state:
> >>
> >> ...
> >>
> >> [... duplicated test output snipped; identical to the original report ...]
> >>
> >>
> >> After this it seemed like nothing happened, and the program just hung.
> >>
> >> I am u

Re: [jira] [Created] (FLINK-1651) Running mvn test got stuck

2015-03-04 Thread Henry Saputra
As I remember, the tests were running successfully last week. It has been a
while since I ran the full test suite in my environment.

Will try to look more into it.

On Wed, Mar 4, 2015 at 11:44 PM, Till Rohrmann  wrote:
> That is odd. I just ran mvn clean verify -Dmaven.javadoc.skip=true on my
> mac work machine and it ran through successfully.
>
> Could you post which java processes are running when the console hangs? And
> also the stack traces of all surefirebooter processes? Maybe someone can
> make head or tail of it.
>
> On Thu, Mar 5, 2015 at 8:34 AM, Henry Saputra 
> wrote:
>
>> This is consistently repro in my mac work machine. Nothing really on
>> the JStack thread dump or Java Mission Control =(
>>
>> The console just hang but do not see any deadlock from the thread trace.
>>
>> - Henry
>>
>> On Wed, Mar 4, 2015 at 11:02 PM, Till Rohrmann 
>> wrote:
>> > Is this reproducible? If so, then a stack trace of the JVM would be
>> > helpful. With the stack trace we would know which test case stalls.
>> >
>> > On Wed, Mar 4, 2015 at 9:46 PM, Henry Saputra (JIRA) 
>> > wrote:
>> >
>> >> Henry Saputra created FLINK-1651:
>> >> 
>> >>
>> >>  Summary: Running mvn test got stuck
>> >>  Key: FLINK-1651
>> >>  URL: https://issues.apache.org/jira/browse/FLINK-1651
>> >>  Project: Flink
>> >>   Issue Type: Bug
>> >>   Components: test
>> >> Reporter: Henry Saputra
>> >>
>> >>
>> >> I keep getting my test stuck at this state:
>> >>
>> >> ...
>> >>
>> >> [... duplicated test output snipped; identical to the original report ...]