[jira] [Commented] (FLINK-1637) Flink uberjar does not work with Java 6

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346605#comment-14346605
 ] 

ASF GitHub Bot commented on FLINK-1637:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/450#issuecomment-77117018
  
According to this discussion: 
https://issues.apache.org/jira/browse/SPARK-1520
Jars built by Java 6 can have more than 65k entries. If that's the case, we 
could also build our releases and nightlies with **OpenJDK 6**.


> Flink uberjar does not work with Java 6
> ---
>
> Key: FLINK-1637
> URL: https://issues.apache.org/jira/browse/FLINK-1637
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.9
> Environment: Java 6
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>
> Apparently the uberjar created by the Maven Shade plugin does not work with Java 6:
> {code}
> /jre1.6.0_45/bin/java -classpath 
> flink-0.9-SNAPSHOT/lib/flink-dist-0.9-SNAPSHOT-yarn-uberjar.jar 
> org.apache.flink.client.CliFrontend
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/client/CliFrontend
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.client.CliFrontend
>   at java.net.URLClassLoader$1.run(Unknown Source)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(Unknown Source)
>   at java.lang.ClassLoader.loadClass(Unknown Source)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
>   at java.lang.ClassLoader.loadClass(Unknown Source)
> Could not find the main class: org.apache.flink.client.CliFrontend.  Program 
> will exit.
> {code}
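For context, the 65k limit concerns the number of entries in the jar. A quick way to check whether an uberjar crosses it is to count entries with `ZipInputStream`, which streams over local entry headers rather than relying on the central directory that old `ZipFile` implementations choke on. This is a hedged, self-contained sketch: the temp jar built in `main` merely stands in for the real flink-dist uberjar.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class JarEntryCount {

    // Streams over entries one by one; unlike java.util.zip.ZipFile on Java 6,
    // this does not depend on the archive's central directory bookkeeping.
    static int countEntries(File jar) throws IOException {
        int n = 0;
        ZipInputStream in = new ZipInputStream(new FileInputStream(jar));
        try {
            while (in.getNextEntry() != null) {
                n++;
            }
        } finally {
            in.close();
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny stand-in jar; point countEntries at the real uberjar instead.
        File jar = File.createTempFile("demo", ".jar");
        ZipOutputStream out = new ZipOutputStream(new FileOutputStream(jar));
        for (int i = 0; i < 3; i++) {
            out.putNextEntry(new ZipEntry("entry" + i + ".txt"));
            out.closeEntry();
        }
        out.close();
        System.out.println("entries=" + countEntries(jar));
        jar.delete();
    }
}
```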



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-1637] Reduce number of files in uberjar...

2015-03-04 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/450#issuecomment-77117018
  
According to this discussion: 
https://issues.apache.org/jira/browse/SPARK-1520
Jars built by Java 6 can have more than 65k entries. If that's the case, we 
could also build our releases and nightlies with **OpenJDK 6**.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-1641) Make projection operator chainable

2015-03-04 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-1641:
-

 Summary: Make projection operator chainable
 Key: FLINK-1641
 URL: https://issues.apache.org/jira/browse/FLINK-1641
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Gyula Fora


The ProjectInvokable currently doesn't extend the ChainableInvokable class.





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-04 Thread chiwanpark
Github user chiwanpark commented on a diff in the pull request:

https://github.com/apache/flink/pull/426#discussion_r25760453
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
@@ -235,8 +252,21 @@ public OUT readRecord(OUT reuse, byte[] bytes, int 
offset, int numBytes) throws

if (parseRecord(parsedValues, bytes, offset, numBytes)) {
// valid parse, map values into pact record
-   for (int i = 0; i < parsedValues.length; i++) {
-   reuse.setField(parsedValues[i], i);
+   if (pojoTypeInfo == null) {
+   Tuple result = (Tuple) reuse;
+   for (int i = 0; i < parsedValues.length; i++) {
+   result.setField(parsedValues[i], i);
+   }
+   } else {
+   for (int i = 0; i < parsedValues.length; i++) {
+   try {
+   
pojoTypeInfo.getPojoFieldAt(i).field.set(reuse, parsedValues[i]);
--- End diff --

@rmetzger Thanks! I modified my implementation to make the fields accessible 
in `CsvInputFormat` and `ScalaCsvInputFormat`, and added a test case with 
private fields.
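The approach described, making private POJO fields writable once via reflection so they can be populated per record, can be sketched as follows. This is a minimal standalone illustration; `Person` and its field are hypothetical, not Flink code.

```java
import java.lang.reflect.Field;

public class PojoFieldDemo {

    // Hypothetical POJO standing in for the user type CsvInputFormat fills.
    public static class Person {
        private String name; // private on purpose: a plain field.set() would fail
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person();
        Field f = Person.class.getDeclaredField("name");
        f.setAccessible(true); // done once up front; field.set() then works per record
        f.set(p, "Alice");
        System.out.println(p.getName());
    }
}
```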




[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346626#comment-14346626
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user chiwanpark commented on a diff in the pull request:

https://github.com/apache/flink/pull/426#discussion_r25760453
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
@@ -235,8 +252,21 @@ public OUT readRecord(OUT reuse, byte[] bytes, int 
offset, int numBytes) throws

if (parseRecord(parsedValues, bytes, offset, numBytes)) {
// valid parse, map values into pact record
-   for (int i = 0; i < parsedValues.length; i++) {
-   reuse.setField(parsedValues[i], i);
+   if (pojoTypeInfo == null) {
+   Tuple result = (Tuple) reuse;
+   for (int i = 0; i < parsedValues.length; i++) {
+   result.setField(parsedValues[i], i);
+   }
+   } else {
+   for (int i = 0; i < parsedValues.length; i++) {
+   try {
+   
pojoTypeInfo.getPojoFieldAt(i).field.set(reuse, parsedValues[i]);
--- End diff --

@rmetzger Thanks! I modified my implementation to make the fields accessible 
in `CsvInputFormat` and `ScalaCsvInputFormat`, and added a test case with 
private fields.


> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[GitHub] flink pull request: [FLINK-1587][gelly] Added additional check for...

2015-03-04 Thread uce
Github user uce commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77125258
  
Hey Vasia,

1. I think the tests shouldn't print to stdout, but should use the logger for 
test output.
2. The exceptions were wrongfully logged for cancelled/failed tasks. I fixed 
that yesterday in 9255594fb3b9b7c00d9088c3b630af9ecbdf22f4. Have you already 
rebased on the current master?

– Ufuk
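The first point can be illustrated with a logger whose records are captured in memory instead of being printed. This sketch uses `java.util.logging` purely for self-containment; Flink's tests would use SLF4J with whatever backend the build configures.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class TestLogDemo {

    public static void main(String[] args) {
        Logger log = Logger.getLogger("test");
        final StringBuilder captured = new StringBuilder();

        // Route records to an in-memory handler instead of the console,
        // so test output stays out of stdout yet remains inspectable.
        log.setUseParentHandlers(false);
        log.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { captured.append(r.getMessage()); }
            @Override public void flush() {}
            @Override public void close() {}
        });

        log.log(Level.INFO, "degree computation finished");
        System.out.println("captured: " + captured);
    }
}
```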




[jira] [Commented] (FLINK-1587) coGroup throws NoSuchElementException on iterator.next()

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346654#comment-14346654
 ] 

ASF GitHub Bot commented on FLINK-1587:
---

Github user uce commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77125258
  
Hey Vasia,

1. I think the tests shouldn't print to stdout, but should use the logger for 
test output.
2. The exceptions were wrongfully logged for cancelled/failed tasks. I fixed 
that yesterday in 9255594fb3b9b7c00d9088c3b630af9ecbdf22f4. Have you already 
rebased on the current master?

– Ufuk


> coGroup throws NoSuchElementException on iterator.next()
> 
>
> Key: FLINK-1587
> URL: https://issues.apache.org/jira/browse/FLINK-1587
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
> Environment: flink-0.8.0-SNAPSHOT
>Reporter: Carsten Brandt
>Assignee: Andra Lungu
>
> I am receiving the following exception when running a simple job that 
> extracts the out-degree from a graph using Gelly. It currently fails only on 
> the cluster and I am not able to reproduce it locally. I will try again in 
> the next few days.
> {noformat}
> 02/20/2015 02:27:02:  CoGroup (CoGroup at inDegrees(Graph.java:675)) (5/64) 
> switched to FAILED
> java.util.NoSuchElementException
>   at java.util.Collections$EmptyIterator.next(Collections.java:3006)
>   at flink.graphs.Graph$CountNeighborsCoGroup.coGroup(Graph.java:665)
>   at 
> org.apache.flink.runtime.operators.CoGroupDriver.run(CoGroupDriver.java:130)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:493)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:360)
>   at 
> org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:257)
>   at java.lang.Thread.run(Thread.java:745)
> 02/20/2015 02:27:02:  Job execution switched to status FAILING
> ...
> {noformat}
> The error occurs in Gelly's Graph.java at this line: 
> https://github.com/apache/flink/blob/a51c02f6e8be948d71a00c492808115d622379a7/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/Graph.java#L636
> Is there any valid case where a coGroup iterator may be empty? As far as I 
> can see, there is a bug somewhere.
> I'd like to write a test case to reproduce the issue. Where can I 
> put such a test?
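On the question of empty coGroup iterators: one side of a coGroup can legitimately be empty, for instance a vertex with no in-edges, so code consuming the iterator needs a `hasNext()` guard rather than an unconditional `next()`. The sketch below shows the pattern under that assumption; it is not the actual Gelly code.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;

public class CoGroupGuardDemo {

    // Counts the elements of one coGroup side, treating an empty group as zero
    // instead of calling next() unconditionally and risking NoSuchElementException.
    static long countOrZero(Iterator<Long> neighbors) {
        long count = 0;
        while (neighbors.hasNext()) {
            neighbors.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Empty side (e.g. a vertex without in-edges) yields 0, not an exception.
        System.out.println(countOrZero(Collections.<Long>emptyList().iterator()));
        System.out.println(countOrZero(Arrays.asList(1L, 2L, 3L).iterator()));
    }
}
```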





[jira] [Assigned] (FLINK-1601) Sometimes the YARNSessionFIFOITCase fails on Travis

2015-03-04 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-1601:


Assignee: Robert Metzger

> Sometimes the YARNSessionFIFOITCase fails on Travis
> ---
>
> Key: FLINK-1601
> URL: https://issues.apache.org/jira/browse/FLINK-1601
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>
> Sometimes the {{YARNSessionFIFOITCase}} fails on Travis with the following 
> exception.
> {code}
> Tests run: 8, Failures: 8, Errors: 0, Skipped: 0, Time elapsed: 71.375 sec 
> <<< FAILURE! - in org.apache.flink.yarn.YARNSessionFIFOITCase
> perJobYarnCluster(org.apache.flink.yarn.YARNSessionFIFOITCase)  Time elapsed: 
> 60.707 sec  <<< FAILURE!
> java.lang.AssertionError: During the timeout period of 60 seconds the 
> expected string did not show up
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.flink.yarn.YarnTestBase.runWithArgs(YarnTestBase.java:315)
>   at 
> org.apache.flink.yarn.YARNSessionFIFOITCase.perJobYarnCluster(YARNSessionFIFOITCase.java:185)
> testQueryCluster(org.apache.flink.yarn.YARNSessionFIFOITCase)  Time elapsed: 
> 0.507 sec  <<< FAILURE!
> java.lang.AssertionError: There is at least one application on the cluster is 
> not finished
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.yarn.YarnTestBase.checkClusterEmpty(YarnTestBase.java:146)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The result is
> {code}
> Failed tests: 
>   YARNSessionFIFOITCase.perJobYarnCluster:185->YarnTestBase.runWithArgs:315 
> During the timeout period of 60 seconds the expected string did not show up
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:146 There is at least 
> one application on the cluster is not finished
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:146 There is at least 
> one application on the cluster is not finished
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:146 There is at least 
> one application on the cluster is not finished
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:146 There is at least 
> one application on the cluster is not finished
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:146 There is at least 
> one application on the cluster is not finished
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:146 There is at least 
> one application on the cluster is not finished
>   YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:14

[jira] [Commented] (FLINK-1601) Sometimes the YARNSessionFIFOITCase fails on Travis

2015-03-04 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346660#comment-14346660
 ] 

Till Rohrmann commented on FLINK-1601:
--

It happened again, but this time with another exception.

{code}
Tests run: 8, Failures: 8, Errors: 0, Skipped: 0, Time elapsed: 15.97 sec <<< 
FAILURE! - in org.apache.flink.yarn.YARNSessionFIFOITCase
perJobYarnCluster(org.apache.flink.yarn.YARNSessionFIFOITCase)  Time elapsed: 
6.696 sec  <<< FAILURE!
java.lang.AssertionError: Runner thread died before the test was finished. 
Return value = 1
at org.junit.Assert.fail(Assert.java:88)
at org.apache.flink.yarn.YarnTestBase.runWithArgs(YarnTestBase.java:311)
at 
org.apache.flink.yarn.YARNSessionFIFOITCase.perJobYarnCluster(YARNSessionFIFOITCase.java:185)

testQueryCluster(org.apache.flink.yarn.YARNSessionFIFOITCase)  Time elapsed: 
0.503 sec  <<< FAILURE!
java.lang.AssertionError: There is at least one application on the cluster is 
not finished
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.flink.yarn.YarnTestBase.checkClusterEmpty(YarnTestBase.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

The test results are

{code}
YARNSessionFIFOITCase.perJobYarnCluster:185->YarnTestBase.runWithArgs:311 
Runner thread died before the test was finished. Return value = 1
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
  YARNSessionFIFOITCase>YarnTestBase.checkClusterEmpty:147 There is at least 
one application on the cluster is not finished
{code}

> Sometimes the YARNSessionFIFOITCase fails on Travis
> ---
>
> Key: FLINK-1601
> URL: https://issues.apache.org/jira/browse/FLINK-1601
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>
> Sometimes the {{YARNSessionFIFOITCase}} fails on Travis with the following 
> exception.
> {code}
> Tests run: 8, Failures: 8, Errors: 0, Skipped: 0, Time

[jira] [Created] (FLINK-1642) Flakey YARNSessionCapacitySchedulerITCase

2015-03-04 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-1642:


 Summary: Flakey YARNSessionCapacitySchedulerITCase
 Key: FLINK-1642
 URL: https://issues.apache.org/jira/browse/FLINK-1642
 Project: Flink
  Issue Type: Bug
Reporter: Till Rohrmann
Assignee: Robert Metzger


The {{YARNSessionCapacitySchedulerITCase}} spuriously fails on Travis. The 
error is

{code}
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 13.058 sec <<< 
FAILURE! - in org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase
testClientStartup(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)  
Time elapsed: 6.669 sec  <<< FAILURE!
java.lang.AssertionError: Runner thread died before the test was finished. 
Return value = 1
at org.junit.Assert.fail(Assert.java:88)
at org.apache.flink.yarn.YarnTestBase.runWithArgs(YarnTestBase.java:311)
at 
org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.testClientStartup(YARNSessionCapacitySchedulerITCase.java:53)

testNonexistingQueue(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)  
Time elapsed: 0.504 sec  <<< FAILURE!
java.lang.AssertionError: There is at least one application on the cluster is 
not finished
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.flink.yarn.YarnTestBase.checkClusterEmpty(YarnTestBase.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

The test results are

{code}
YARNSessionCapacitySchedulerITCase.testClientStartup:53->YarnTestBase.runWithArgs:311
 Runner thread died before the test was finished. Return value = 1
  YARNSessionCapacitySchedulerITCase>YarnTestBase.checkClusterEmpty:147 There 
is at least one application on the cluster is not finished
{code}





[GitHub] flink pull request: [FLINK-377] [FLINK-671] Generic Interface / PA...

2015-03-04 Thread aljoscha
Github user aljoscha commented on the pull request:

https://github.com/apache/flink/pull/202#issuecomment-77127742
  
Thanks, I have two last requests, sorry for that.

Could you rename flink-generic to flink-language-binding-generic? The problem 
is that the package name is now flink-generic; it shows up like this in Maven 
Central and elsewhere without any indication that it is actually a sub-package 
of flink-language-binding, which could be quite confusing.

In MapFunction.py and FilterFunction.py you use map() and filter() 
respectively. These operations are not lazy, i.e. map() first applies the user 
map function to every element in the partition and then collects the results. 
This can become a problem if the input is very big. Instead, we should iterate 
over the iterator and emit each element right after mapping it; this keeps 
memory consumption low. The same applies to filter().
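The lazy variant described above can be sketched in Java as an iterator wrapper; the names here are illustrative, not the actual language-binding API. The transformation runs one element at a time, on demand, so the partition is never materialized.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class LazyMapDemo {

    // Wraps an iterator so the mapping function runs per element on demand,
    // never holding the whole partition in memory at once.
    static <A, B> Iterator<B> lazyMap(final Iterator<A> in, final Function<A, B> f) {
        return new Iterator<B>() {
            @Override public boolean hasNext() { return in.hasNext(); }
            @Override public B next() { return f.apply(in.next()); }
        };
    }

    public static void main(String[] args) {
        Iterator<Integer> mapped = lazyMap(Arrays.asList(1, 2, 3).iterator(), x -> x * 10);
        List<Integer> out = new ArrayList<>();
        while (mapped.hasNext()) {
            out.add(mapped.next()); // each element is mapped only here
        }
        System.out.println(out);
    }
}
```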




[jira] [Commented] (FLINK-377) Create a general purpose framework for language bindings

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346670#comment-14346670
 ] 

ASF GitHub Bot commented on FLINK-377:
--

Github user aljoscha commented on the pull request:

https://github.com/apache/flink/pull/202#issuecomment-77127742
  
Thanks, I have two last requests, sorry for that.

Could you rename flink-generic to flink-language-binding-generic? The problem 
is that the package name is now flink-generic; it shows up like this in Maven 
Central and elsewhere without any indication that it is actually a sub-package 
of flink-language-binding, which could be quite confusing.

In MapFunction.py and FilterFunction.py you use map() and filter() 
respectively. These operations are not lazy, i.e. map() first applies the user 
map function to every element in the partition and then collects the results. 
This can become a problem if the input is very big. Instead, we should iterate 
over the iterator and emit each element right after mapping it; this keeps 
memory consumption low. The same applies to filter().


> Create a general purpose framework for language bindings
> 
>
> Key: FLINK-377
> URL: https://issues.apache.org/jira/browse/FLINK-377
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>Assignee: Chesnay Schepler
>  Labels: github-import
> Fix For: pre-apache
>
>
> A general purpose API to run operators with arbitrary binaries. 
> This will allow running Stratosphere programs written in Python, JavaScript, 
> Ruby, Go, or whatever you like. 
> We suggest using Google Protocol Buffers for data serialization. This is the 
> list of languages that currently support ProtoBuf: 
> https://code.google.com/p/protobuf/wiki/ThirdPartyAddOns 
> Very early prototype with python: 
> https://github.com/rmetzger/scratch/tree/learn-protobuf (basically testing 
> protobuf)
> For Ruby: https://github.com/infochimps-labs/wukong
> Two new students working at Stratosphere (@skunert and @filiphaase) are 
> working on this.
> The reference binding language will be Python, but other bindings are 
> very welcome.
> The best name for this so far is "stratosphere-lang-bindings".
> I created this issue to track the progress (and give everybody a chance to 
> comment on this)
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/377
> Created by: [rmetzger|https://github.com/rmetzger]
> Labels: enhancement, 
> Assignee: [filiphaase|https://github.com/filiphaase]
> Created at: Tue Jan 07 19:47:20 CET 2014
> State: open





[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346680#comment-14346680
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/451#discussion_r25763642
  
--- Diff: 
flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java ---
@@ -601,452 +482,295 @@ public int compare(ExecutionGraph o1, 
ExecutionGraph o2) {
}

/**
-* Executes the cancel action.
+* Executes the CANCEL action.
 * 
 * @param args Command line arguments for the cancel action.
 */
protected int cancel(String[] args) {
-   // Parse command line options
-   CommandLine line;
+   LOG.info("Running 'cancel' command.");
+
+   CancelOptions options;
try {
-   line = parser.parse(CANCEL_OPTIONS, args, false);
-   evaluateGeneralOptions(line);
-   }
-   catch (MissingOptionException e) {
-   return handleArgException(e);
-   }
-   catch (MissingArgumentException e) {
-   return handleArgException(e);
+   options = CliFrontendParser.parseCancelCommand(args);
}
-   catch (UnrecognizedOptionException e) {
+   catch (CliArgsException e) {
return handleArgException(e);
}
-   catch (Exception e) {
-   return handleError(e);
+   catch (Throwable t) {
+   return handleError(t);
}
-   
-   if (printHelp) {
-   printHelpForCancel();
+
+   // evaluate help flag
+   if (options.isPrintHelp()) {
+   CliFrontendParser.printHelpForCancel();
return 0;
}

-   String[] cleanedArgs = line.getArgs();
+   String[] cleanedArgs = options.getArgs();
JobID jobId;
 
if (cleanedArgs.length > 0) {
String jobIdString = cleanedArgs[0];
try {
jobId = new 
JobID(StringUtils.hexStringToByte(jobIdString));
-   } catch (Exception e) {
+   }
+   catch (Exception e) {
+   LOG.error("Error: The value for the Job ID is 
not a valid ID.");
System.out.println("Error: The value for the 
Job ID is not a valid ID.");
return 1;
}
-   } else {
+   }
+   else {
+   LOG.error("Missing JobID in the command line 
arguments.");
System.out.println("Error: Specify a Job ID to cancel a 
job.");
return 1;
}

try {
-   ActorRef jobManager = getJobManager(line, 
getGlobalConfiguration());
-
-   if (jobManager == null) {
-   return 1;
-   }
-
-   final Future response = 
Patterns.ask(jobManager, new CancelJob(jobId),
-   new Timeout(getAkkaTimeout()));
+   ActorRef jobManager = getJobManager(options);
+   Future response = Patterns.ask(jobManager, new 
CancelJob(jobId), new Timeout(askTimeout));
 
try {
-   Await.ready(response, getAkkaTimeout());
-   } catch (Exception exception) {
-   throw new IOException("Canceling the job with 
job ID " + jobId + " failed.",
-   exception);
+   Await.result(response, askTimeout);
--- End diff --

```Await.ready``` should be sufficient, because we don't do anything with 
the return value.


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parsePar

[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346681#comment-14346681
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/451#discussion_r25763670
  
--- Diff: 
flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java ---
@@ -601,452 +482,295 @@ public int compare(ExecutionGraph o1, 
ExecutionGraph o2) {
}

/**
-* Executes the cancel action.
+* Executes the CANCEL action.
 * 
 * @param args Command line arguments for the cancel action.
 */
protected int cancel(String[] args) {
-   // Parse command line options
-   CommandLine line;
+   LOG.info("Running 'cancel' command.");
+
+   CancelOptions options;
try {
-   line = parser.parse(CANCEL_OPTIONS, args, false);
-   evaluateGeneralOptions(line);
-   }
-   catch (MissingOptionException e) {
-   return handleArgException(e);
-   }
-   catch (MissingArgumentException e) {
-   return handleArgException(e);
+   options = CliFrontendParser.parseCancelCommand(args);
}
-   catch (UnrecognizedOptionException e) {
+   catch (CliArgsException e) {
return handleArgException(e);
}
-   catch (Exception e) {
-   return handleError(e);
+   catch (Throwable t) {
+   return handleError(t);
}
-   
-   if (printHelp) {
-   printHelpForCancel();
+
+   // evaluate help flag
+   if (options.isPrintHelp()) {
+   CliFrontendParser.printHelpForCancel();
return 0;
}

-   String[] cleanedArgs = line.getArgs();
+   String[] cleanedArgs = options.getArgs();
JobID jobId;
 
if (cleanedArgs.length > 0) {
String jobIdString = cleanedArgs[0];
try {
jobId = new 
JobID(StringUtils.hexStringToByte(jobIdString));
-   } catch (Exception e) {
+   }
+   catch (Exception e) {
+   LOG.error("Error: The value for the Job ID is 
not a valid ID.");
System.out.println("Error: The value for the 
Job ID is not a valid ID.");
return 1;
}
-   } else {
+   }
+   else {
+   LOG.error("Missing JobID in the command line 
arguments.");
System.out.println("Error: Specify a Job ID to cancel a 
job.");
return 1;
}

try {
-   ActorRef jobManager = getJobManager(line, 
getGlobalConfiguration());
-
-   if (jobManager == null) {
-   return 1;
-   }
-
-   final Future response = 
Patterns.ask(jobManager, new CancelJob(jobId),
-   new Timeout(getAkkaTimeout()));
+   ActorRef jobManager = getJobManager(options);
+   Future response = Patterns.ask(jobManager, new 
CancelJob(jobId), new Timeout(askTimeout));
 
try {
-   Await.ready(response, getAkkaTimeout());
-   } catch (Exception exception) {
-   throw new IOException("Canceling the job with 
job ID " + jobId + " failed.",
-   exception);
+   Await.result(response, askTimeout);
--- End diff --

My bad, we want the exception.



[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/451#discussion_r25763642
  
--- Diff: 
flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java ---
@@ -601,452 +482,295 @@ public int compare(ExecutionGraph o1, 
ExecutionGraph o2) {
}

/**
-* Executes the cancel action.
+* Executes the CANCEL action.
 * 
 * @param args Command line arguments for the cancel action.
 */
protected int cancel(String[] args) {
-   // Parse command line options
-   CommandLine line;
+   LOG.info("Running 'cancel' command.");
+
+   CancelOptions options;
try {
-   line = parser.parse(CANCEL_OPTIONS, args, false);
-   evaluateGeneralOptions(line);
-   }
-   catch (MissingOptionException e) {
-   return handleArgException(e);
-   }
-   catch (MissingArgumentException e) {
-   return handleArgException(e);
+   options = CliFrontendParser.parseCancelCommand(args);
}
-   catch (UnrecognizedOptionException e) {
+   catch (CliArgsException e) {
return handleArgException(e);
}
-   catch (Exception e) {
-   return handleError(e);
+   catch (Throwable t) {
+   return handleError(t);
}
-   
-   if (printHelp) {
-   printHelpForCancel();
+
+   // evaluate help flag
+   if (options.isPrintHelp()) {
+   CliFrontendParser.printHelpForCancel();
return 0;
}

-   String[] cleanedArgs = line.getArgs();
+   String[] cleanedArgs = options.getArgs();
JobID jobId;
 
if (cleanedArgs.length > 0) {
String jobIdString = cleanedArgs[0];
try {
jobId = new 
JobID(StringUtils.hexStringToByte(jobIdString));
-   } catch (Exception e) {
+   }
+   catch (Exception e) {
+   LOG.error("Error: The value for the Job ID is 
not a valid ID.");
System.out.println("Error: The value for the 
Job ID is not a valid ID.");
return 1;
}
-   } else {
+   }
+   else {
+   LOG.error("Missing JobID in the command line 
arguments.");
System.out.println("Error: Specify a Job ID to cancel a 
job.");
return 1;
}

try {
-   ActorRef jobManager = getJobManager(line, 
getGlobalConfiguration());
-
-   if (jobManager == null) {
-   return 1;
-   }
-
-   final Future response = 
Patterns.ask(jobManager, new CancelJob(jobId),
-   new Timeout(getAkkaTimeout()));
+   ActorRef jobManager = getJobManager(options);
+   Future response = Patterns.ask(jobManager, new 
CancelJob(jobId), new Timeout(askTimeout));
 
try {
-   Await.ready(response, getAkkaTimeout());
-   } catch (Exception exception) {
-   throw new IOException("Canceling the job with 
job ID " + jobId + " failed.",
-   exception);
+   Await.result(response, askTimeout);
--- End diff --

```Await.ready``` should be sufficient, because we don't do anything with 
the return value.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/451#discussion_r25763670
  
--- Diff: 
flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java ---
@@ -601,452 +482,295 @@ public int compare(ExecutionGraph o1, 
ExecutionGraph o2) {
}

/**
-* Executes the cancel action.
+* Executes the CANCEL action.
 * 
 * @param args Command line arguments for the cancel action.
 */
protected int cancel(String[] args) {
-   // Parse command line options
-   CommandLine line;
+   LOG.info("Running 'cancel' command.");
+
+   CancelOptions options;
try {
-   line = parser.parse(CANCEL_OPTIONS, args, false);
-   evaluateGeneralOptions(line);
-   }
-   catch (MissingOptionException e) {
-   return handleArgException(e);
-   }
-   catch (MissingArgumentException e) {
-   return handleArgException(e);
+   options = CliFrontendParser.parseCancelCommand(args);
}
-   catch (UnrecognizedOptionException e) {
+   catch (CliArgsException e) {
return handleArgException(e);
}
-   catch (Exception e) {
-   return handleError(e);
+   catch (Throwable t) {
+   return handleError(t);
}
-   
-   if (printHelp) {
-   printHelpForCancel();
+
+   // evaluate help flag
+   if (options.isPrintHelp()) {
+   CliFrontendParser.printHelpForCancel();
return 0;
}

-   String[] cleanedArgs = line.getArgs();
+   String[] cleanedArgs = options.getArgs();
JobID jobId;
 
if (cleanedArgs.length > 0) {
String jobIdString = cleanedArgs[0];
try {
jobId = new 
JobID(StringUtils.hexStringToByte(jobIdString));
-   } catch (Exception e) {
+   }
+   catch (Exception e) {
+   LOG.error("Error: The value for the Job ID is 
not a valid ID.");
System.out.println("Error: The value for the 
Job ID is not a valid ID.");
return 1;
}
-   } else {
+   }
+   else {
+   LOG.error("Missing JobID in the command line 
arguments.");
System.out.println("Error: Specify a Job ID to cancel a 
job.");
return 1;
}

try {
-   ActorRef jobManager = getJobManager(line, 
getGlobalConfiguration());
-
-   if (jobManager == null) {
-   return 1;
-   }
-
-   final Future response = 
Patterns.ask(jobManager, new CancelJob(jobId),
-   new Timeout(getAkkaTimeout()));
+   ActorRef jobManager = getJobManager(options);
+   Future response = Patterns.ask(jobManager, new 
CancelJob(jobId), new Timeout(askTimeout));
 
try {
-   Await.ready(response, getAkkaTimeout());
-   } catch (Exception exception) {
-   throw new IOException("Canceling the job with 
job ID " + jobId + " failed.",
-   exception);
+   Await.result(response, askTimeout);
--- End diff --

My bad, we want the exception.




[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77131171
  
The last two build profiles are probably failing because the tests are 
checking on the output of the CLI frontend.
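Tests that assert on a CLI frontend's printed output typically capture `System.out` and inspect it; a minimal sketch of the pattern (the `runCancel` command here is hypothetical, not the actual Flink test code):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureStdout {
    // Hypothetical command under test: prints an error, as CliFrontend does.
    static int runCancel(String[] args) {
        if (args.length == 0) {
            System.out.println("Error: Specify a Job ID to cancel a job.");
            return 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        PrintStream original = System.out;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        System.setOut(new PrintStream(captured));

        int code = runCancel(new String[0]);

        System.setOut(original); // restore stdout before asserting
        String out = captured.toString();
        if (code != 1 || !out.contains("Specify a Job ID")) {
            throw new AssertionError("unexpected CLI output: " + out);
        }
        System.out.println("ok");
    }
}
```

Tests written this way are brittle against wording changes, which is why rewording CLI messages (as this PR does) can break otherwise unrelated build profiles.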




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346695#comment-14346695
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77131171
  
The last two build profiles are probably failing because the tests are 
checking on the output of the CLI frontend.


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1089)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1114)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka.tcp://flink@10.20.0.25:13245/user/jobmanager#-1517424081]] after 
> [10 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.run(Scheduler.scala:476)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:282)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:281)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
>   at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:688)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:641)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:808)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:811)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> If there are no running jobs, no exception is thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/451#discussion_r25765068
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala
 ---
@@ -19,11 +19,14 @@
 package org.apache.flink.runtime.messages
 
 import org.apache.flink.runtime.accumulators.AccumulatorEvent
+import org.apache.flink.runtime.client.JobStatusMessage
 import org.apache.flink.runtime.executiongraph.{ExecutionAttemptID, 
ExecutionGraph}
 import org.apache.flink.runtime.instance.{InstanceID, Instance}
 import org.apache.flink.runtime.jobgraph.{JobGraph, JobID, JobStatus, 
JobVertexID}
 import org.apache.flink.runtime.taskmanager.TaskExecutionState
 
+import scala.collection.JavaConverters._
--- End diff --

There are also explicit imports of ```JavaConverters._``` in 
```RegisteredTaskManagers``` which should then be removed if the import is 
placed here.




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346697#comment-14346697
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/451#discussion_r25765068
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/messages/JobManagerMessages.scala
 ---
@@ -19,11 +19,14 @@
 package org.apache.flink.runtime.messages
 
 import org.apache.flink.runtime.accumulators.AccumulatorEvent
+import org.apache.flink.runtime.client.JobStatusMessage
 import org.apache.flink.runtime.executiongraph.{ExecutionAttemptID, 
ExecutionGraph}
 import org.apache.flink.runtime.instance.{InstanceID, Instance}
 import org.apache.flink.runtime.jobgraph.{JobGraph, JobID, JobStatus, 
JobVertexID}
 import org.apache.flink.runtime.taskmanager.TaskExecutionState
 
+import scala.collection.JavaConverters._
--- End diff --

There are also explicit imports of ```JavaConverters._``` in 
```RegisteredTaskManagers``` which should then be removed if the import is 
placed here.



[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77132462
  
Except for the YARN test cases which fail, LGTM.




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346701#comment-14346701
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77132462
  
Except for the YARN test cases which fail, LGTM.




[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77134702
  
The `-h` option is not working for the actions (tested with `info` and `run`)

```
[root@sandbox flink-yarn-0.9-SNAPSHOT]# ./bin/flink info -h
10:28:01,440 WARN  org.apache.hadoop.util.NativeCodeLoader  
 - Unable to load native-hadoop library for your platform... using 
builtin-java classes where applicable
The program JAR file was not specified.

Use the help option (-h or --help) to get help on the command.
```
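The failure mode here is evaluation order: the required-argument check (the program JAR) runs before the help flag is honored. A minimal sketch of the expected behavior, with hypothetical names (not the actual parser), checking `-h`/`--help` before validating required arguments:

```java
public class HelpFirst {
    static int run(String[] args) {
        // Evaluate the help flag before any required-argument validation,
        // so "info -h" prints usage instead of
        // "The program JAR file was not specified."
        for (String a : args) {
            if (a.equals("-h") || a.equals("--help")) {
                System.out.println("Usage: flink info [OPTIONS] <jar-file>");
                return 0;
            }
        }
        if (args.length == 0) {
            System.out.println("The program JAR file was not specified.");
            return 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(run(new String[]{"-h"})); // prints usage, then 0
    }
}
```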




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346713#comment-14346713
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77134702
  
The `-h` option is not working for the actions (tested with `info` and `run`)

```
[root@sandbox flink-yarn-0.9-SNAPSHOT]# ./bin/flink info -h
10:28:01,440 WARN  org.apache.hadoop.util.NativeCodeLoader  
 - Unable to load native-hadoop library for your platform... using 
builtin-java classes where applicable
The program JAR file was not specified.

Use the help option (-h or --help) to get help on the command.
```



[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77135314
  
The YARN application master doesn't start anymore with the following 
exception:

```
10:31:01,526 ERROR org.apache.flink.yarn.ApplicationMaster$ 
 - Error while running the application master.
java.lang.Exception: Missing parameter '--executionMode'
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$parseArgs$1.apply(JobManager.scala:796)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$parseArgs$1.apply(JobManager.scala:790)
at scala.Option.map(Option.scala:145)
at 
org.apache.flink.runtime.jobmanager.JobManager$.parseArgs(JobManager.scala:789)
at 
org.apache.flink.yarn.ApplicationMaster$.startJobManager(ApplicationMaster.scala:200)
at 
org.apache.flink.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:88)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1471)
at 
org.apache.flink.yarn.ApplicationMaster$.main(ApplicationMaster.scala:58)
at 
org.apache.flink.yarn.ApplicationMaster.main(ApplicationMaster.scala)
```

That's probably also the reason why the YARN tests are failing.
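A newly required CLI parameter like `--executionMode` breaks launchers built against the older argument set; a common remedy is to fail fast with a descriptive message only when no safe default exists. A hedged sketch of that pattern (hypothetical names, not the JobManager parser):

```java
import java.util.HashMap;
import java.util.Map;

public class ArgParseSketch {
    // Parse "--key value" pairs; a missing key may fall back to a default
    // so that launchers built against an older argument set keep working.
    static String get(String[] args, String key, String fallback) {
        Map<String, String> parsed = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            parsed.put(args[i], args[i + 1]);
        }
        String v = parsed.get(key);
        if (v == null && fallback == null) {
            throw new IllegalArgumentException("Missing parameter '" + key + "'");
        }
        return v != null ? v : fallback;
    }

    public static void main(String[] args) {
        String[] amArgs = {"--configDir", "/opt/flink/conf"};
        // Defaulting avoids the "Missing parameter '--executionMode'" failure
        // when the launcher (here, a hypothetical YARN AM) omits the flag.
        System.out.println(get(amArgs, "--executionMode", "CLUSTER"));
    }
}
```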


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346723#comment-14346723
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77135314
  
The YARN application master doesn't start anymore with the following 
exception:

```
10:31:01,526 ERROR org.apache.flink.yarn.ApplicationMaster$ 
 - Error while running the application master.
java.lang.Exception: Missing parameter '--executionMode'
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$parseArgs$1.apply(JobManager.scala:796)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$parseArgs$1.apply(JobManager.scala:790)
at scala.Option.map(Option.scala:145)
at 
org.apache.flink.runtime.jobmanager.JobManager$.parseArgs(JobManager.scala:789)
at 
org.apache.flink.yarn.ApplicationMaster$.startJobManager(ApplicationMaster.scala:200)
at 
org.apache.flink.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:88)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1471)
at 
org.apache.flink.yarn.ApplicationMaster$.main(ApplicationMaster.scala:58)
at 
org.apache.flink.yarn.ApplicationMaster.main(ApplicationMaster.scala)
```

That's probably also the reason why the YARN tests are failing.


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1089)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1114)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka.tcp://flink@10.20.0.25:13245/user/jobmanager#-1517424081]] after 
> [10 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.run(Scheduler.scala:476)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:282)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:281)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
>   at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:688)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:641)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:808)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:811)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> If there are no running jobs, no exception is thrown.

[jira] [Created] (FLINK-1643) Detect tumbling policies where trigger and eviction match

2015-03-04 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-1643:
-

 Summary: Detect tumbling policies where trigger and eviction match
 Key: FLINK-1643
 URL: https://issues.apache.org/jira/browse/FLINK-1643
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Gyula Fora


The windowing api should automatically detect matching trigger and eviction 
policies so it can apply optimizations for tumbling policies.
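One plausible shape of such a detection, as a hedged sketch: a window is tumbling when the trigger and eviction policies are equal, because every element then leaves the buffer exactly when the window fires, so the runtime can clear the whole buffer at once instead of evicting element by element. The method and parameter names below are illustrative, not Flink's actual policy API.

```java
import java.util.Objects;

public class TumblingDetector {
    // Sketch: equal trigger and eviction policies imply a tumbling window,
    // which allows a cheaper "clear the whole buffer" discretization path.
    // The Object parameters stand in for Flink's policy types (illustrative).
    static boolean isTumbling(Object trigger, Object eviction) {
        return trigger != null && Objects.equals(trigger, eviction);
    }

    public static void main(String[] args) {
        System.out.println(isTumbling("count(5)", "count(5)")); // true
        System.out.println(isTumbling("count(5)", "count(3)")); // false
    }
}
```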



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-377] [FLINK-671] Generic Interface / PA...

2015-03-04 Thread zentol
Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/202#issuecomment-77140113
  
And here I thought I was being clever by swapping to built-in functions.

Addressed both issues.




[jira] [Commented] (FLINK-377) Create a general purpose framework for language bindings

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346751#comment-14346751
 ] 

ASF GitHub Bot commented on FLINK-377:
--

Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/202#issuecomment-77140113
  
And here I thought I was being clever by swapping to built-in functions.

Addressed both issues.


> Create a general purpose framework for language bindings
> 
>
> Key: FLINK-377
> URL: https://issues.apache.org/jira/browse/FLINK-377
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>Assignee: Chesnay Schepler
>  Labels: github-import
> Fix For: pre-apache
>
>
> A general purpose API to run operators with arbitrary binaries. 
> This will allow to run Stratosphere programs written in Python, JavaScript, 
> Ruby, Go or whatever you like. 
> We suggest using Google Protocol Buffers for data serialization. This is the 
> list of languages that currently support ProtoBuf: 
> https://code.google.com/p/protobuf/wiki/ThirdPartyAddOns 
> Very early prototype with python: 
> https://github.com/rmetzger/scratch/tree/learn-protobuf (basically testing 
> protobuf)
> For Ruby: https://github.com/infochimps-labs/wukong
> Two new students working at Stratosphere (@skunert and @filiphaase) are 
> working on this.
> The reference binding language will be for Python, but other bindings are 
> very welcome.
> The best name for this so far is "stratosphere-lang-bindings".
> I created this issue to track the progress (and give everybody a chance to 
> comment on this)
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/377
> Created by: [rmetzger|https://github.com/rmetzger]
> Labels: enhancement, 
> Assignee: [filiphaase|https://github.com/filiphaase]
> Created at: Tue Jan 07 19:47:20 CET 2014
> State: open
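A bindings layer like the one proposed here needs a wire protocol between the JVM and the external process; the serialized records (protobuf payloads, in the proposal) are typically shipped with simple length-prefixed framing. A self-contained sketch of that framing idea, assuming nothing about the eventual Flink implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Framing {
    // Write one frame: a 4-byte big-endian length followed by the payload.
    static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
    }

    // Read one frame back: length first, then exactly that many bytes.
    static byte[] readFrame(DataInputStream in) throws IOException {
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return payload;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeFrame(new DataOutputStream(buf), "hello".getBytes("UTF-8"));
        byte[] back = readFrame(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(new String(back, "UTF-8")); // hello
    }
}
```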





[jira] [Created] (FLINK-1644) WebClient dies when no ExecutionEnvironment in main method

2015-03-04 Thread Jonathan Hasenburg (JIRA)
Jonathan Hasenburg created FLINK-1644:
-

 Summary: WebClient dies when no ExecutionEnvironment in main method
 Key: FLINK-1644
 URL: https://issues.apache.org/jira/browse/FLINK-1644
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 0.8.1
Reporter: Jonathan Hasenburg
Priority: Minor


When clicking on the check box next to a job in the WebClient, the client dies 
if no ExecutionEnvironment is in the main method (probably because it tries to 
generate the plan).

This can be reproduced easily if a jar not related to Flink is uploaded.

This is a problem, when your main method only contains code to extract the 
parameters and then creates the corresponding job class for those parameters.





[GitHub] flink pull request: [FLINK-999] Configurability of LocalExecutor

2015-03-04 Thread jkirsch
Github user jkirsch closed the pull request at:

https://github.com/apache/flink/pull/448




[jira] [Commented] (FLINK-999) Configurability of LocalExecutor

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346806#comment-14346806
 ] 

ASF GitHub Bot commented on FLINK-999:
--

Github user jkirsch closed the pull request at:

https://github.com/apache/flink/pull/448


> Configurability of LocalExecutor
> 
>
> Key: FLINK-999
> URL: https://issues.apache.org/jira/browse/FLINK-999
> Project: Flink
>  Issue Type: Wish
>  Components: TaskManager
>Reporter: Till Rohrmann
>Priority: Minor
>
> I noticed when running locally some examples with the Scala API that the 
> TaskManagers ran out of network buffers when increasing the number of slots 
> on one machine. Trying to adjust the value, I stumbled across a more 
> naturally grown interface in LocalExecutor and NepheleMiniCluster. Some 
> parameters are specified as attributes of NepheleMiniCluster, others are 
> loaded from a configuration file and others use the default value specified 
> in ConfigConstants. I think it would be good to give the user the ability to 
> easily control the different parameters in a unified fashion by providing a 
> configuration object. Furthermore, the NepheleMiniCluster is not consistent 
> with the use of parameter values: For example, getJobClient sets the 
> jobmanager rpc port with respect to the attribute jobManagerRpcPort whereas 
> the start method can retrieve it from a specified configuration file.
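The "unified configuration object" wish described above can be sketched as a single object through which every knob flows, with defaults applied in one place instead of being scattered across attributes, config files, and ConfigConstants. Class and key names below are illustrative, not Flink's actual API:

```java
import java.util.HashMap;
import java.util.Map;

public class LocalExecutorConfig {
    // All parameters live in one map; defaults are applied at read time,
    // so there is exactly one resolution path for every setting.
    private final Map<String, String> values = new HashMap<>();

    public LocalExecutorConfig set(String key, String value) {
        values.put(key, value);
        return this; // fluent, so configs can be built inline
    }

    public int getInt(String key, int defaultValue) {
        String v = values.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        LocalExecutorConfig conf = new LocalExecutorConfig()
                .set("taskmanager.numberOfBuffers", "4096"); // illustrative key
        System.out.println(conf.getInt("taskmanager.numberOfBuffers", 2048)); // 4096
        System.out.println(conf.getInt("taskmanager.numberOfSlots", 1));      // 1 (default)
    }
}
```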





[jira] [Commented] (FLINK-999) Configurability of LocalExecutor

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346807#comment-14346807
 ] 

ASF GitHub Bot commented on FLINK-999:
--

Github user jkirsch commented on the pull request:

https://github.com/apache/flink/pull/448#issuecomment-77144749
  
#427 is a better approach


> Configurability of LocalExecutor
> 
>
> Key: FLINK-999
> URL: https://issues.apache.org/jira/browse/FLINK-999
> Project: Flink
>  Issue Type: Wish
>  Components: TaskManager
>Reporter: Till Rohrmann
>Priority: Minor
>
> I noticed when running locally some examples with the Scala API that the 
> TaskManagers ran out of network buffers when increasing the number of slots 
> on one machine. Trying to adjust the value, I stumbled across a more 
> naturally grown interface in LocalExecutor and NepheleMiniCluster. Some 
> parameters are specified as attributes of NepheleMiniCluster, others are 
> loaded from a configuration file and others use the default value specified 
> in ConfigConstants. I think it would be good to give the user the ability to 
> easily control the different parameters in a unified fashion by providing a 
> configuration object. Furthermore, the NepheleMiniCluster is not consistent 
> with the use of parameter values: For example, getJobClient sets the 
> jobmanager rpc port with respect to the attribute jobManagerRpcPort whereas 
> the start method can retrieve it from a specified configuration file.





[GitHub] flink pull request: [FLINK-999] Configurability of LocalExecutor

2015-03-04 Thread jkirsch
Github user jkirsch commented on the pull request:

https://github.com/apache/flink/pull/448#issuecomment-77144749
  
#427 is a better approach




[jira] [Updated] (FLINK-1628) Strange behavior of "where" function during a join

2015-03-04 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-1628:
-
Priority: Critical  (was: Major)

> Strange behavior of "where" function during a join
> --
>
> Key: FLINK-1628
> URL: https://issues.apache.org/jira/browse/FLINK-1628
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9
>Reporter: Daniel Bali
>Assignee: Fabian Hueske
>Priority: Critical
>  Labels: batch
>
> Hello!
> If I use the `where` function with a field list during a join, it exhibits 
> strange behavior.
> Here is the sample code that triggers the error: 
> https://gist.github.com/balidani/d9789b713e559d867d5c
> This example joins a DataSet with itself, then counts the number of rows. If 
> I use `.where(0, 1)` the result is (22), which is not correct. If I use 
> `EdgeKeySelector`, I get the correct result (101).
> When I pass a field list to the `equalTo` function (but not `where`), 
> everything works again.
> If I don't include the `groupBy` and `reduceGroup` parts, everything works.
> Also, when working with large DataSets, passing a field list to `where` makes 
> it incredibly slow, even though I don't see any exceptions in the log (in 
> DEBUG mode).
> Does anybody know what might cause this problem?
> Thanks!
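For reference, what `.where(0, 1).equalTo(0, 1)` is semantically supposed to mean: two records match when both key fields are equal, and a self-join produces n² pairs per group of identical keys. A plain-Java sketch of those expected composite-key semantics (not the Flink optimizer code that is apparently misbehaving here):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CompositeKeyJoin {
    // Count the pairs a self-join on fields (0, 1) should produce:
    // group records by the composite key, then sum n*n per group.
    static int selfJoinCount(List<long[]> edges) {
        Map<String, Integer> byKey = new HashMap<>();
        for (long[] e : edges) {
            String key = e[0] + "|" + e[1]; // composite of fields 0 and 1
            byKey.merge(key, 1, Integer::sum);
        }
        int pairs = 0;
        for (int n : byKey.values()) {
            pairs += n * n; // join cardinality within one key group
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<long[]> edges = Arrays.asList(
                new long[]{1, 2}, new long[]{1, 2}, new long[]{1, 3});
        System.out.println(selfJoinCount(edges)); // 2*2 + 1*1 = 5
    }
}
```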





[jira] [Assigned] (FLINK-1628) Strange behavior of "where" function during a join

2015-03-04 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske reassigned FLINK-1628:


Assignee: Fabian Hueske

> Strange behavior of "where" function during a join
> --
>
> Key: FLINK-1628
> URL: https://issues.apache.org/jira/browse/FLINK-1628
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9
>Reporter: Daniel Bali
>Assignee: Fabian Hueske
>  Labels: batch
>
> Hello!
> If I use the `where` function with a field list during a join, it exhibits 
> strange behavior.
> Here is the sample code that triggers the error: 
> https://gist.github.com/balidani/d9789b713e559d867d5c
> This example joins a DataSet with itself, then counts the number of rows. If 
> I use `.where(0, 1)` the result is (22), which is not correct. If I use 
> `EdgeKeySelector`, I get the correct result (101).
> When I pass a field list to the `equalTo` function (but not `where`), 
> everything works again.
> If I don't include the `groupBy` and `reduceGroup` parts, everything works.
> Also, when working with large DataSets, passing a field list to `where` makes 
> it incredibly slow, even though I don't see any exceptions in the log (in 
> DEBUG mode).
> Does anybody know what might cause this problem?
> Thanks!





[GitHub] flink pull request: Streaming cancellation + exception handling re...

2015-03-04 Thread mbalassi
Github user mbalassi commented on the pull request:

https://github.com/apache/flink/pull/449#issuecomment-77150831
  
Code looks good, testing usage right now.

I checked the quickstarts for API changes; no need to change anything there. I 
think some documentation is needed on the behavior, but I'm adding that.




[jira] [Updated] (FLINK-1628) Strange behavior of "where" function during a join

2015-03-04 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-1628:
-
Component/s: Optimizer

> Strange behavior of "where" function during a join
> --
>
> Key: FLINK-1628
> URL: https://issues.apache.org/jira/browse/FLINK-1628
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9
>Reporter: Daniel Bali
>Assignee: Fabian Hueske
>  Labels: batch
>
> Hello!
> If I use the `where` function with a field list during a join, it exhibits 
> strange behavior.
> Here is the sample code that triggers the error: 
> https://gist.github.com/balidani/d9789b713e559d867d5c
> This example joins a DataSet with itself, then counts the number of rows. If 
> I use `.where(0, 1)` the result is (22), which is not correct. If I use 
> `EdgeKeySelector`, I get the correct result (101).
> When I pass a field list to the `equalTo` function (but not `where`), 
> everything works again.
> If I don't include the `groupBy` and `reduceGroup` parts, everything works.
> Also, when working with large DataSets, passing a field list to `where` makes 
> it incredibly slow, even though I don't see any exceptions in the log (in 
> DEBUG mode).
> Does anybody know what might cause this problem?
> Thanks!





[jira] [Created] (FLINK-1645) Move StreamingClassloaderITCase to flink-streaming

2015-03-04 Thread JIRA
Márton Balassi created FLINK-1645:
-

 Summary: Move StreamingClassloaderITCase to flink-streaming
 Key: FLINK-1645
 URL: https://issues.apache.org/jira/browse/FLINK-1645
 Project: Flink
  Issue Type: Task
  Components: Streaming
Reporter: Márton Balassi
Priority: Minor


StreamingClassloaderITCase is contained in the flink-tests module and it should 
ideally be under flink-streaming.

Moving it requires some care: there is a StreamingProgram class that is built 
by an assembly for it.





[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346872#comment-14346872
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77156490
  
Thanks, I addressed your comments. Seems to work now.


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1089)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1114)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka.tcp://flink@10.20.0.25:13245/user/jobmanager#-1517424081]] after 
> [10 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.run(Scheduler.scala:476)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:282)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:281)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
>   at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:688)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:641)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:808)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:811)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> If there are no running jobs, no exception is thrown.





[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77156490
  
Thanks, I addressed your comments. Seems to work now.




[GitHub] flink pull request: [FLINK-1637] Reduce number of files in uberjar...

2015-03-04 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/450#issuecomment-77157047
  
Makes sense, +1 for building the releases with openjdk6 (half of the code 
is compiled by the Scala Compiler anyways)




[jira] [Commented] (FLINK-1637) Flink uberjar does not work with Java 6

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346875#comment-14346875
 ] 

ASF GitHub Bot commented on FLINK-1637:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/450#issuecomment-77157047
  
Makes sense, +1 for building the releases with openjdk6 (half of the code 
is compiled by the Scala Compiler anyways)


> Flink uberjar does not work with Java 6
> ---
>
> Key: FLINK-1637
> URL: https://issues.apache.org/jira/browse/FLINK-1637
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.9
> Environment: Java 6
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>
> Apparently the uberjar created by maven shade does not work with java 6
> {code}
> /jre1.6.0_45/bin/java -classpath 
> flink-0.9-SNAPSHOT/lib/flink-dist-0.9-SNAPSHOT-yarn-uberjar.jar 
> org.apache.flink.client.CliFrontend
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/client/CliFrontend
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.client.CliFrontend
>   at java.net.URLClassLoader$1.run(Unknown Source)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(Unknown Source)
>   at java.lang.ClassLoader.loadClass(Unknown Source)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
>   at java.lang.ClassLoader.loadClass(Unknown Source)
> Could not find the main class: org.apache.flink.client.CliFrontend.  Program 
> will exit.
> {code}
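Classic (non-zip64) zip archives are limited to 65,535 entries, which is one way a shaded uberjar can become unreadable for older tooling such as Java 6. A hedged sketch of checking the entry count with the JDK's own zip classes; the demo archive is built on the fly, and in practice `countEntries` would be pointed at the real uberjar:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class JarEntryCheck {
    // Classic (non-zip64) archives cap out at 65,535 entries.
    static final int CLASSIC_ZIP_LIMIT = 65535;

    // Return the number of entries in a jar/zip archive.
    static int countEntries(String path) throws IOException {
        try (ZipFile zip = new ZipFile(path)) {
            return zip.size();
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny demo archive; in practice point countEntries at the
        // real uberjar (e.g. the flink-dist yarn-uberjar, path illustrative).
        File demo = File.createTempFile("demo", ".jar");
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(demo))) {
            for (int i = 0; i < 3; i++) {
                out.putNextEntry(new ZipEntry("entry-" + i + ".txt"));
                out.closeEntry();
            }
        }
        int entries = countEntries(demo.getPath());
        System.out.println(entries <= CLASSIC_ZIP_LIMIT
                ? "ok: " + entries + " entries"
                : "over limit: " + entries + " entries");
        demo.delete();
    }
}
```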





[jira] [Closed] (FLINK-1162) Cannot serialize Scala classes with Avro serializer

2015-03-04 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-1162.
-
   Resolution: Fixed
Fix Version/s: 0.9

Fixed with Kryo.
I suspect there is no test for it.

> Cannot serialize Scala classes with Avro serializer
> ---
>
> Key: FLINK-1162
> URL: https://issues.apache.org/jira/browse/FLINK-1162
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime, Scala API
>Reporter: Till Rohrmann
> Fix For: 0.9
>
>
> The problem occurs for class names containing a '$' dollar sign in its name 
> how it is sometimes the case for Scala classes.





[jira] [Created] (FLINK-1646) Add name of required configuration value into the "Insufficient number of network buffers" exception

2015-03-04 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-1646:
-

 Summary: Add name of required configuration value into the 
"Insufficient number of network buffers" exception
 Key: FLINK-1646
 URL: https://issues.apache.org/jira/browse/FLINK-1646
 Project: Flink
  Issue Type: Improvement
  Components: TaskManager
Affects Versions: 0.9
Reporter: Robert Metzger
Priority: Minor


As per 
http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Exception-Insufficient-number-of-network-buffers-required-120-but-only-2-of-2048-available-td746.html
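A sketch of the requested improvement: embed the relevant configuration key directly in the exception text so users know which setting to raise. The key name `taskmanager.network.numberOfBuffers` matches the option Flink used around this release, but treat it here as an assumption, and the numbers in `main` are taken from the linked thread:

```java
public class NetworkBufferMessage {
    // Sketch: an error message that names the configuration key to change.
    // The key name is an assumption based on Flink's option around 0.9.
    static String insufficientBuffers(int required, int available, int total) {
        return String.format(
            "Insufficient number of network buffers: required %d, but only %d of %d available. "
                + "Increase 'taskmanager.network.numberOfBuffers' in the Flink configuration.",
            required, available, total);
    }

    public static void main(String[] args) {
        System.out.println(insufficientBuffers(120, 2, 2048));
    }
}
```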





[jira] [Commented] (FLINK-1587) coGroup throws NoSuchElementException on iterator.next()

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346905#comment-14346905
 ] 

ASF GitHub Bot commented on FLINK-1587:
---

Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77163740
  
Thanks Ufuk!
@andralungu, could you please change the test to use the logger?
@uce, regarding the exceptions output, I see this after having rebased on 
the latest master (double checked the commit you're referring to is in there..)


> coGroup throws NoSuchElementException on iterator.next()
> 
>
> Key: FLINK-1587
> URL: https://issues.apache.org/jira/browse/FLINK-1587
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
> Environment: flink-0.8.0-SNAPSHOT
>Reporter: Carsten Brandt
>Assignee: Andra Lungu
>
> I am receiving the following exception when running a simple job that 
> extracts the outdegree from a graph using Gelly. It currently fails only on 
> the cluster and I am not able to reproduce it locally. I will try that in the 
> next few days.
> {noformat}
> 02/20/2015 02:27:02:  CoGroup (CoGroup at inDegrees(Graph.java:675)) (5/64) 
> switched to FAILED
> java.util.NoSuchElementException
>   at java.util.Collections$EmptyIterator.next(Collections.java:3006)
>   at flink.graphs.Graph$CountNeighborsCoGroup.coGroup(Graph.java:665)
>   at 
> org.apache.flink.runtime.operators.CoGroupDriver.run(CoGroupDriver.java:130)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:493)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:360)
>   at 
> org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:257)
>   at java.lang.Thread.run(Thread.java:745)
> 02/20/2015 02:27:02:  Job execution switched to status FAILING
> ...
> {noformat}
> The error occurs in Gellys Graph.java at this line: 
> https://github.com/apache/flink/blob/a51c02f6e8be948d71a00c492808115d622379a7/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/Graph.java#L636
> Is there any valid case where a coGroup Iterator may be empty? As far as I 
> can see, there is a bug somewhere.
> I'd like to write a test case for this to reproduce the issue. Where can I 
> put such a test?
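The defensive fix under discussion reduces to checking `hasNext()` before calling `next()`, since one side of a coGroup can legitimately be empty (an outer-join-like situation). A plain-Java sketch of that pattern, not the actual Gelly code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;

public class CoGroupGuard {
    // Consume an iterator that may be empty. Calling next() unconditionally
    // on an empty iterator throws NoSuchElementException, exactly as in the
    // reported stack trace; guarding with hasNext() returns 0 instead.
    static long countOrZero(Iterator<Long> neighbors) {
        long count = 0;
        while (neighbors.hasNext()) { // guard instead of a blind next()
            neighbors.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        Iterator<Long> empty = Collections.<Long>emptyList().iterator();
        System.out.println(countOrZero(empty));                          // 0, no exception
        System.out.println(countOrZero(Arrays.asList(1L, 2L).iterator())); // 2
    }
}
```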





[GitHub] flink pull request: [FLINK-1587][gelly] Added additional check for...

2015-03-04 Thread vasia
Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77163740
  
Thanks Ufuk!
@andralungu, could you please change the test to use the logger?
@uce, regarding the exceptions output, I see this after having rebased on 
the latest master (double checked the commit you're referring to is in there..)




[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77164833
  
Thank you. I'll try it again on my YARN setup later today.




[jira] [Closed] (FLINK-839) Add GroupReduceFunction and CoGroupFunction with separate Key

2015-03-04 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-839.
---
Resolution: Duplicate

Duplicate of FLINK-553 and FLINK-1272

> Add GroupReduceFunction and CoGroupFunction with separate Key
> -
>
> Key: FLINK-839
> URL: https://issues.apache.org/jira/browse/FLINK-839
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>Priority: Minor
>  Labels: github-import
> Fix For: pre-apache
>
>
> The CoGroup and GroupReduce functions do not give access to the key. I 
> propose to add variants (as subclasses of the generic ones) that provide the 
> keys directly.
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/839
> Created by: [StephanEwen|https://github.com/StephanEwen]
> Labels: enhancement, java api, scala api, user satisfaction, 
> Milestone: Release 0.6 (unplanned)
> Created at: Tue May 20 19:47:12 CEST 2014
> State: open





[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346915#comment-14346915
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77164833
  
Thank you. I'll try it again on my YARN setup later today.


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1089)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1114)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka.tcp://flink@10.20.0.25:13245/user/jobmanager#-1517424081]] after 
> [10 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.run(Scheduler.scala:476)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:282)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:281)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
>   at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:688)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:641)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:808)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:811)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> If there are no running jobs, no exception is thrown.





[jira] [Reopened] (FLINK-1577) Misleading error messages when cancelling tasks

2015-03-04 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi reopened FLINK-1577:

  Assignee: Ufuk Celebi

I think I've only fixed half of the issue.

Both [~vkalavri] and [~bamrabi] have reported the issue after I've removed 
logging in the runtime environment in 9255594.

The problem is the following: for pipelined local subpartitions, it is possible 
that the producing task discards the partition after the local input channel 
notified the gate of the receiving task about available data, but before the 
receiving task is cancelled/failed. The gate then queries the input channel for 
data, but the channel cannot return a buffer as the subpartition has already 
been discarded.

Fix is coming up.

> Misleading error messages when cancelling tasks
> ---
>
> Key: FLINK-1577
> URL: https://issues.apache.org/jira/browse/FLINK-1577
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> A user running a Flink version before bec9c4d ran into a job manager failure 
> (fixed in bec9c4d), which led to restarting the JM and cancelling/clearing 
> all tasks on the TMs.
> The logs of the TMs were inconclusive. I think part of that has been fixed by 
> now, e.g. there is a log message when cancelAndClearEverything is called, but 
> the task thread (RuntimeEnvironment) always logs an error when interrupted 
> during the run method -- even if the task gets cancelled.
> I think these error messages are misleading and only the root cause is 
> important (i.e. non-failed tasks should be silently cancelled).





[jira] [Commented] (FLINK-1587) coGroup throws NoSuchElementException on iterator.next()

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346918#comment-14346918
 ] 

ASF GitHub Bot commented on FLINK-1587:
---

Github user uce commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77165497
  
Thanks, Vasia! :) It still seems to be a problem. I've re-opened the 
[corresponding issue](https://issues.apache.org/jira/browse/FLINK-1577) and 
will fix it. In any case, nothing should be wrong with the code here (it is a 
task cancellation race).


> coGroup throws NoSuchElementException on iterator.next()
> 
>
> Key: FLINK-1587
> URL: https://issues.apache.org/jira/browse/FLINK-1587
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
> Environment: flink-0.8.0-SNAPSHOT
>Reporter: Carsten Brandt
>Assignee: Andra Lungu
>
> I am receiving the following exception when running a simple job that 
> extracts outdegree from a graph using Gelly. It is currently only failing on 
> the cluster and I am not able to reproduce it locally. Will try that the next 
> days.
> {noformat}
> 02/20/2015 02:27:02:  CoGroup (CoGroup at inDegrees(Graph.java:675)) (5/64) 
> switched to FAILED
> java.util.NoSuchElementException
>   at java.util.Collections$EmptyIterator.next(Collections.java:3006)
>   at flink.graphs.Graph$CountNeighborsCoGroup.coGroup(Graph.java:665)
>   at 
> org.apache.flink.runtime.operators.CoGroupDriver.run(CoGroupDriver.java:130)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:493)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:360)
>   at 
> org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:257)
>   at java.lang.Thread.run(Thread.java:745)
> 02/20/2015 02:27:02:  Job execution switched to status FAILING
> ...
> {noformat}
> The error occurs in Gelly's Graph.java at this line: 
> https://github.com/apache/flink/blob/a51c02f6e8be948d71a00c492808115d622379a7/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/Graph.java#L636
> Is there any valid case where a coGroup Iterator may be empty? As far as I 
> can see, there is a bug somewhere.
> I'd like to write a test case for this to reproduce the issue. Where can I 
> put such a test?





[GitHub] flink pull request: [FLINK-1587][gelly] Added additional check for...

2015-03-04 Thread uce
Github user uce commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77165497
  
Thanks, Vasia! :) It still seems to be a problem. I've re-opened the 
[corresponding issue](https://issues.apache.org/jira/browse/FLINK-1577) and 
will fix it. In any case, nothing should be wrong with the code here (it is a 
task cancellation race).




[GitHub] flink pull request: [FLINK-1522][gelly] Added test for SSSP Exampl...

2015-03-04 Thread vasia
Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/429#issuecomment-77166964
  
This looks good! Thanks @andralungu for catching the bug ^^
I'll merge it later.




[jira] [Commented] (FLINK-1522) Add tests for the library methods and examples

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346950#comment-14346950
 ] 

ASF GitHub Bot commented on FLINK-1522:
---

Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/429#issuecomment-77166964
  
This looks good! Thanks @andralungu for catching the bug ^^
I'll merge it later.


> Add tests for the library methods and examples
> --
>
> Key: FLINK-1522
> URL: https://issues.apache.org/jira/browse/FLINK-1522
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Daniel Bali
>  Labels: easyfix, test
>
> The current tests in Gelly test one method at a time. We should have some 
> tests for complete applications. As a start, we could add one test case per 
> example and this way also make sure that our graph library methods actually 
> give correct results.
> I'm assigning this to [~andralungu] because she has already implemented the 
> test for SSSP, but I will help as well.





[GitHub] flink pull request: [FLINK-1640] Remove trailing slash from paths.

2015-03-04 Thread fhueske
GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/453

[FLINK-1640] Remove trailing slash from paths. 

- Remove trailing slash from paths. 
- Add tests for Path and FileOutputFormat.

Trailing slashes were previously not removed from paths, which caused 
functions that depend on `Path.getName()` and `Path.getParent()` to behave 
inconsistently for paths with and without trailing slashes.
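The normalization itself is simple; a stand-alone sketch (hypothetical helper names, not the actual `Path` code from the pull request) of why stripping the trailing slash makes name/parent logic consistent:

```java
public class PathNormalize {
    // Sketch of the idea behind the fix: strip trailing slashes so
    // that getName()/getParent()-style logic behaves the same for
    // "/home/user/out" and "/home/user/out/".
    static String stripTrailingSlash(String path) {
        // Keep the root path "/" intact.
        while (path.length() > 1 && path.endsWith("/")) {
            path = path.substring(0, path.length() - 1);
        }
        return path;
    }

    // Last path component, computed on the normalized path. Without
    // normalization, getName("/a/b/") would return an empty string.
    static String getName(String path) {
        String p = stripTrailingSlash(path);
        return p.substring(p.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) {
        System.out.println(getName("/home/myuser/outputPath/")); // outputPath
        System.out.println(getName("/home/myuser/outputPath"));  // outputPath
    }
}
```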

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink pathBug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/453.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #453


commit fd1320792cbabfd04b151011bc19430e2b63d3ec
Author: Fabian Hueske 
Date:   2015-03-04T14:24:26Z

[FLINK-1640] Remove trailing slash from paths. Add tests for Path and 
FileOutputFormat.






[jira] [Commented] (FLINK-1640) FileOutputFormat writes to wrong path if path ends with '/'

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346962#comment-14346962
 ] 

ASF GitHub Bot commented on FLINK-1640:
---

GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/453

[FLINK-1640] Remove trailing slash from paths. 

- Remove trailing slash from paths. 
- Add tests for Path and FileOutputFormat.

Trailing slashes were previously not removed from paths, which caused 
functions that depend on `Path.getName()` and `Path.getParent()` to behave 
inconsistently for paths with and without trailing slashes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink pathBug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/453.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #453


commit fd1320792cbabfd04b151011bc19430e2b63d3ec
Author: Fabian Hueske 
Date:   2015-03-04T14:24:26Z

[FLINK-1640] Remove trailing slash from paths. Add tests for Path and 
FileOutputFormat.




> FileOutputFormat writes to wrong path if path ends with '/'
> ---
>
> Key: FLINK-1640
> URL: https://issues.apache.org/jira/browse/FLINK-1640
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Scala API
>Affects Versions: 0.9, 0.8.1
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>
> The FileOutputFormat duplicates the last directory of a path, if the path 
> ends  with a slash '/'.
> For example, if the output path is specified as {{/home/myuser/outputPath/}} 
> the output is written to {{/home/myuser/outputPath/outputPath/}}.
> This bug was introduced by commit 8fc04e4da8a36866e10564205c3f900894f4f6e0





[jira] [Created] (FLINK-1647) Push master documentation (latest) to Flink website

2015-03-04 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1647:


 Summary: Push master documentation (latest) to Flink website
 Key: FLINK-1647
 URL: https://issues.apache.org/jira/browse/FLINK-1647
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Max Michels


Per discussions on the dev list, we would like to push the latest (master) docs 
to the Flink website.

This will help new contributors follow what we are cooking in the master branch.





[jira] [Closed] (FLINK-1647) Push master documentation (latest) to Flink website

2015-03-04 Thread Max Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Michels closed FLINK-1647.
--
Resolution: Fixed

Done :) 

http://flink.apache.org/

Under Documentation -> Current Snapshot -> 0.9

> Push master documentation (latest) to Flink website
> ---
>
> Key: FLINK-1647
> URL: https://issues.apache.org/jira/browse/FLINK-1647
> Project: Flink
>  Issue Type: Task
>Reporter: Henry Saputra
>Assignee: Max Michels
>
> Per discussions on the dev list, we would like to push the latest (master) 
> docs to the Flink website.
> This will help new contributors follow what we are cooking in the master 
> branch.





[jira] [Commented] (FLINK-1647) Push master documentation (latest) to Flink website

2015-03-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347066#comment-14347066
 ] 

Henry Saputra commented on FLINK-1647:
--

That was quick =P LOL

> Push master documentation (latest) to Flink website
> ---
>
> Key: FLINK-1647
> URL: https://issues.apache.org/jira/browse/FLINK-1647
> Project: Flink
>  Issue Type: Task
>Reporter: Henry Saputra
>Assignee: Max Michels
>
> Per discussions on the dev list, we would like to push the latest (master) 
> docs to the Flink website.
> This will help new contributors follow what we are cooking in the master 
> branch.





[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77194681
  
Alright. I was able to start a YARN session from the frontend.
Also, the `-h` option is working now.




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347119#comment-14347119
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77194681
  
Alright. I was able to start a YARN session from the frontend.
Also, the `-h` option is working now.


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1089)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1114)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka.tcp://flink@10.20.0.25:13245/user/jobmanager#-1517424081]] after 
> [10 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.run(Scheduler.scala:476)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:282)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:281)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
>   at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:688)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:641)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:808)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:811)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> If there are no running jobs, no exception is thrown.





[jira] [Resolved] (FLINK-1627) Netty channel connect deadlock

2015-03-04 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi resolved FLINK-1627.

Resolution: Fixed

Fixed in 0333109.

It was not a connect deadlock, but the network I/O thread was blocked in an 
infinite loop. That's why the connect calls never returned.

> Netty channel connect deadlock 
> ---
>
> Key: FLINK-1627
> URL: https://issues.apache.org/jira/browse/FLINK-1627
> Project: Flink
>  Issue Type: Bug
>Reporter: Ufuk Celebi
>
> [~StephanEwen] reports the following deadlock 
> (https://travis-ci.org/StephanEwen/incubator-flink/jobs/52755844, logs: 
> https://s3.amazonaws.com/flink.a.o.uce.east/travis-artifacts/StephanEwen/incubator-flink/477/477.2.tar.gz).
> {code}
> "CHAIN Partition -> Map (Map at 
> testRestartMultipleTimes(SimpleRecoveryITCase.java:200)) (2/4)" daemon 
> prio=10 tid=0x7f5fdc008800 nid=0xe230 in Object.wait() 
> [0x7f5fca8f2000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xf2a13530> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.waitForChannel(PartitionRequestClientFactory.java:179)
>   - locked <0xf2a13530> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.access$000(PartitionRequestClientFactory.java:125)
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory.createPartitionRequestClient(PartitionRequestClientFactory.java:64)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConnectionManager.createPartitionRequestClient(NettyConnectionManager.java:53)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestIntermediateResultPartition(RemoteInputChannel.java:92)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.requestPartitions(SingleInputGate.java:287)
>   - locked <0xf29dbcd8> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:306)
>   at 
> org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(AbstractRecordReader.java:75)
>   at 
> org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:34)
>   at 
> org.apache.flink.runtime.operators.util.ReaderIterator.next(ReaderIterator.java:59)
>   at org.apache.flink.runtime.operators.NoOpDriver.run(NoOpDriver.java:91)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:496)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
>   at 
> org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:205)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> "CHAIN Partition -> Map (Map at 
> testRestartMultipleTimes(SimpleRecoveryITCase.java:200)) (3/4)" daemon 
> prio=10 tid=0x7f5fdc005000 nid=0xe22f in Object.wait() 
> [0x7f5fca9f3000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xf2a13530> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.waitForChannel(PartitionRequestClientFactory.java:179)
>   - locked <0xf2a13530> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.access$000(PartitionRequestClientFactory.java:125)
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory.createPartitionRequestClient(PartitionRequestClientFactory.java:79)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConnectionManager.createPartitionRequestClient(NettyConnectionManager.java:53)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestIntermediateResultPartition(RemoteInputChannel.java:92)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.requestPartitions(SingleInputGate.java:287)
>   - locked <0xf2896f88> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:306)
>   at 
> org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(AbstractRecordReader.java:75)
>   at 
> org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:34)
>   at 
> org.apache.flink.runtime.operators.util.ReaderIterator.next(Reade

[jira] [Created] (FLINK-1648) Add a mode where the system automatically sets the parallelism to the available task slots

2015-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-1648:
---

 Summary: Add a mode where the system automatically sets the 
parallelism to the available task slots
 Key: FLINK-1648
 URL: https://issues.apache.org/jira/browse/FLINK-1648
 Project: Flink
  Issue Type: New Feature
  Components: JobManager
Affects Versions: 0.9
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.9


This is basically a port of this code from the 0.8 release:

https://github.com/apache/flink/pull/410





[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/451




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347224#comment-14347224
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/451


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>
> Here's the full exception:
> java.io.IOException: Could not retrieve running jobs from job manager.
>   at org.apache.flink.client.CliFrontend.list(CliFrontend.java:528)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1089)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1114)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka.tcp://flink@10.20.0.25:13245/user/jobmanager#-1517424081]] after 
> [10 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.run(Scheduler.scala:476)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:282)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anonfun$close$1.apply(Scheduler.scala:281)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
>   at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:688)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:641)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:808)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:811)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:804)
>   at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
>   at 
> akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:804)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at 
> akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:638)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> If there are no running jobs, no exception is thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-1616] [client] Overhaul of the client.

2015-03-04 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77206172
  
Manually merged in 5385e48d94a2df81c8fd6102a889cf42dd93fe2f




[jira] [Commented] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347223#comment-14347223
 ] 

ASF GitHub Bot commented on FLINK-1616:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/451#issuecomment-77206172
  
Manually merged in 5385e48d94a2df81c8fd6102a889cf42dd93fe2f


> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Priority: Minor
>





[jira] [Resolved] (FLINK-1616) Action "list -r" gives IOException when there are running jobs

2015-03-04 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-1616.
-
   Resolution: Fixed
Fix Version/s: 0.9
 Assignee: Stephan Ewen

Fixed in 5385e48d94a2df81c8fd6102a889cf42dd93fe2f

> Action "list -r" gives IOException when there are running jobs
> --
>
> Key: FLINK-1616
> URL: https://issues.apache.org/jira/browse/FLINK-1616
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9
>Reporter: Vasia Kalavri
>Assignee: Stephan Ewen
>Priority: Minor
> Fix For: 0.9
>
>





[jira] [Resolved] (FLINK-1646) Add name of required configuration value into the "Insufficient number of network buffers" exception

2015-03-04 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi resolved FLINK-1646.

Resolution: Fixed

Fixed in 9c0d634.

> Add name of required configuration value into the "Insufficient number of 
> network buffers" exception
> 
>
> Key: FLINK-1646
> URL: https://issues.apache.org/jira/browse/FLINK-1646
> Project: Flink
>  Issue Type: Improvement
>  Components: TaskManager
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Ufuk Celebi
>Priority: Minor
>  Labels: starter
>
> As per 
> http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Exception-Insufficient-number-of-network-buffers-required-120-but-only-2-of-2048-available-td746.html
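The improvement the ticket asks for can be sketched as follows. The configuration key name used here is an assumption for illustration, not a verified constant from the Flink source:

```java
public class BufferCheck {
    // Sketch of the improved error message FLINK-1646 asks for: include the
    // name of the configuration value the user should increase.
    // ASSUMPTION: "taskmanager.network.numberOfBuffers" is used only as an
    // illustrative key name, not verified against the Flink codebase.
    static String insufficientBuffersMessage(int required, int available, int total) {
        return "Insufficient number of network buffers: required " + required
                + ", but only " + available + " of " + total + " available."
                + " Increase the config value 'taskmanager.network.numberOfBuffers'.";
    }

    public static void main(String[] args) {
        System.out.println(insufficientBuffersMessage(120, 2, 2048));
    }
}
```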





[GitHub] flink pull request: Add auto-parallelism to Jobs (0.8 branch)

2015-03-04 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/410




[jira] [Assigned] (FLINK-1646) Add name of required configuration value into the "Insufficient number of network buffers" exception

2015-03-04 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi reassigned FLINK-1646:
--

Assignee: Ufuk Celebi

> Add name of required configuration value into the "Insufficient number of 
> network buffers" exception
> 
>
> Key: FLINK-1646
> URL: https://issues.apache.org/jira/browse/FLINK-1646
> Project: Flink
>  Issue Type: Improvement
>  Components: TaskManager
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Ufuk Celebi
>Priority: Minor
>  Labels: starter
>
> As per 
> http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Exception-Insufficient-number-of-network-buffers-required-120-but-only-2-of-2048-available-td746.html





[GitHub] flink pull request: Add auto-parallelism to Jobs (0.8 branch)

2015-03-04 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/410#issuecomment-77205854
  
Manually merged into `release-0.8` in 
a6f9f9939ca03026baeefb3bd0876b90068b7682




[GitHub] flink pull request: [FLINK-1640] Remove tailing slash from paths.

2015-03-04 Thread uce
Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25795767
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/common/io/FileOutputFormatTest.java
 ---
@@ -44,7 +44,7 @@ public void testCreateNoneParallelLocalFS() {
} catch (IOException e) {
throw new RuntimeException("Test in error", e);
--- End diff --

We do this in other tests as well, but the test will fail if it throws an 
Exception anyways, so it's OK to just declare it in the signature instead of 
rethrowing.
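The two styles the review comment contrasts can be sketched like this (method and helper names are hypothetical, not taken from the actual test):

```java
import java.io.IOException;

public class ExceptionStyleDemo {
    // Hypothetical stand-in for test logic that may throw a checked exception.
    static String readSomething() throws IOException {
        return "data";
    }

    // Style in the diff under review: catch the checked exception and wrap it.
    static String withRethrow() {
        try {
            return readSomething();
        } catch (IOException e) {
            throw new RuntimeException("Test in error", e);
        }
    }

    // Reviewer's suggestion: declare the exception in the signature. A test
    // framework fails the test on any thrown exception, so wrapping adds nothing.
    static String withThrows() throws IOException {
        return readSomething();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(withRethrow() + " " + withThrows());
    }
}
```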




[jira] [Commented] (FLINK-1640) FileOutputFormat writes to wrong path if path ends with '/'

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347238#comment-14347238
 ] 

ASF GitHub Bot commented on FLINK-1640:
---

Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25795767
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/common/io/FileOutputFormatTest.java
 ---
@@ -44,7 +44,7 @@ public void testCreateNoneParallelLocalFS() {
} catch (IOException e) {
throw new RuntimeException("Test in error", e);
--- End diff --

We do this in other tests as well, but the test will fail if it throws an 
Exception anyways, so it's OK to just declare it in the signature instead of 
rethrowing.


> FileOutputFormat writes to wrong path if path ends with '/'
> ---
>
> Key: FLINK-1640
> URL: https://issues.apache.org/jira/browse/FLINK-1640
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Scala API
>Affects Versions: 0.9, 0.8.1
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>
> The FileOutputFormat duplicates the last directory of a path, if the path 
> ends  with a slash '/'.
> For example, if the output path is specified as {{/home/myuser/outputPath/}} 
> the output is written to {{/home/myuser/outputPath/outputPath/}}.
> This bug was introduced by commit 8fc04e4da8a36866e10564205c3f900894f4f6e0
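A minimal sketch of the trailing-slash normalization the bug report implies; the helper name is hypothetical and this is not Flink's actual fix:

```java
public class TrailingSlash {
    // Strip trailing slashes so "/home/myuser/outputPath/" and
    // "/home/myuser/outputPath" name the same directory, avoiding the
    // duplicated-last-directory behavior described in FLINK-1640.
    static String stripTrailingSlashes(String path) {
        int end = path.length();
        while (end > 1 && path.charAt(end - 1) == '/') {
            end--;
        }
        return path.substring(0, end);
    }

    public static void main(String[] args) {
        System.out.println(stripTrailingSlashes("/home/myuser/outputPath/"));
    }
}
```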





[jira] [Commented] (FLINK-1388) POJO support for writeAsCsv

2015-03-04 Thread Adnan Khan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347239#comment-14347239
 ] 

Adnan Khan commented on FLINK-1388:
---

So I've made changes according to the suggestions you guys made. Here's the 
latest 
[version|https://github.com/khnd/flink/commit/bb657254e690ef8fff9fd244174bdfb47938f88e].


> POJO support for writeAsCsv
> ---
>
> Key: FLINK-1388
> URL: https://issues.apache.org/jira/browse/FLINK-1388
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Reporter: Timo Walther
>Assignee: Adnan Khan
>Priority: Minor
>
> It would be great if one could simply write out POJOs in CSV format.
> {code}
> public class MyPojo {
>String a;
>int b;
> }
> {code}
> to:
> {code}
> # CSV file of org.apache.flink.MyPojo: String a, int b
> "Hello World", 42
> "Hello World 2", 47
> ...
> {code}
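One way the requested behavior could be sketched is reflection over the POJO's declared fields; this is an illustrative sketch only, not Flink's `writeAsCsv` implementation:

```java
import java.lang.reflect.Field;

public class PojoCsv {
    // Illustrative sketch: render a POJO's declared fields as one CSV line
    // via reflection. Strings are quoted, other values printed verbatim.
    static String toCsvLine(Object pojo) {
        StringBuilder sb = new StringBuilder();
        Field[] fields = pojo.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            fields[i].setAccessible(true);
            Object value;
            try {
                value = fields[i].get(pojo);
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
            if (i > 0) {
                sb.append(", ");
            }
            sb.append(value instanceof String ? "\"" + value + "\"" : String.valueOf(value));
        }
        return sb.toString();
    }

    static class MyPojo {
        String a;
        int b;
        MyPojo(String a, int b) { this.a = a; this.b = b; }
    }

    public static void main(String[] args) {
        System.out.println(toCsvLine(new MyPojo("Hello World", 42)));
    }
}
```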





[jira] [Commented] (FLINK-1640) FileOutputFormat writes to wrong path if path ends with '/'

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347242#comment-14347242
 ] 

ASF GitHub Bot commented on FLINK-1640:
---

Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25795975
  
--- Diff: flink-core/src/test/java/org/apache/flink/core/fs/PathTest.java 
---
@@ -23,6 +23,152 @@
 import static org.junit.Assert.*;
 
 public class PathTest {
+
+   @Test
+   public void testPathFromString() {
+
+   Path p = new Path("/my/path");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my/path/");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my//path/");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my//path//a///");
+   assertEquals("/my/path/a", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("\\my\\path\\a\\");
+   assertEquals("/my/path/a", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my/path/ ");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("hdfs:///my/path");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertEquals("hdfs", p.toUri().getScheme());
+
+   p = new Path("hdfs:///my/path/");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertEquals("hdfs", p.toUri().getScheme());
+
+   p = new Path("file:///my/path");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertEquals("file", p.toUri().getScheme());
+
+   boolean exception;
+   try {
+   new Path((String)null);
+   exception = false;
+   } catch(Exception e) {
+   exception = true;
+   }
+   assertTrue(exception);
+
+   try {
+   new Path("");
+   exception = false;
--- End diff --

We can also do a Assert.fail() in these cases
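The `Assert.fail()` pattern the reviewer suggests can be sketched in plain Java as follows; the stand-in constructor and method names are hypothetical, and an `AssertionError` plays the role of JUnit's `Assert.fail()`:

```java
public class FailPatternDemo {
    // Hypothetical stand-in for the Path constructor under test.
    static void newPath(String s) {
        if (s == null || s.isEmpty()) {
            throw new IllegalArgumentException("path must be non-empty");
        }
    }

    // Reviewer's suggestion: fail immediately after the call that is
    // expected to throw, instead of tracking a boolean flag.
    static boolean rejects(String s) {
        try {
            newPath(s);
            return false; // reached only if no exception was thrown
        } catch (IllegalArgumentException e) {
            return true;  // the expected outcome
        }
    }

    public static void main(String[] args) {
        if (!rejects(null) || !rejects("")) {
            throw new AssertionError("constructor accepted an invalid path");
        }
        System.out.println("ok");
    }
}
```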


> FileOutputFormat writes to wrong path if path ends with '/'
> ---
>
> Key: FLINK-1640
> URL: https://issues.apache.org/jira/browse/FLINK-1640
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Scala API
>Affects Versions: 0.9, 0.8.1
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>
> The FileOutputFormat duplicates the last directory of a path, if the path 
> ends  with a slash '/'.
> For example, if the output path is specified as {{/home/myuser/outputPath/}} 
> the output is written to {{/home/myuser/outputPath/outputPath/}}.
> This bug was introduced by commit 8fc04e4da8a36866e10564205c3f900894f4f6e0





[GitHub] flink pull request: [FLINK-1640] Remove tailing slash from paths.

2015-03-04 Thread uce
Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25795975
  
--- Diff: flink-core/src/test/java/org/apache/flink/core/fs/PathTest.java 
---
@@ -23,6 +23,152 @@
 import static org.junit.Assert.*;
 
 public class PathTest {
+
+   @Test
+   public void testPathFromString() {
+
+   Path p = new Path("/my/path");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my/path/");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my//path/");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my//path//a///");
+   assertEquals("/my/path/a", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("\\my\\path\\a\\");
+   assertEquals("/my/path/a", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("/my/path/ ");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertNull(p.toUri().getScheme());
+
+   p = new Path("hdfs:///my/path");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertEquals("hdfs", p.toUri().getScheme());
+
+   p = new Path("hdfs:///my/path/");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertEquals("hdfs", p.toUri().getScheme());
+
+   p = new Path("file:///my/path");
+   assertEquals("/my/path", p.toUri().getPath());
+   assertEquals("file", p.toUri().getScheme());
+
+   boolean exception;
+   try {
+   new Path((String)null);
+   exception = false;
+   } catch(Exception e) {
+   exception = true;
+   }
+   assertTrue(exception);
+
+   try {
+   new Path("");
+   exception = false;
--- End diff --

We can also do a Assert.fail() in these cases




[jira] [Commented] (FLINK-1605) Create a shaded Hadoop fat jar to resolve library version conflicts

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347252#comment-14347252
 ] 

ASF GitHub Bot commented on FLINK-1605:
---

GitHub user rmetzger opened a pull request:

https://github.com/apache/flink/pull/454

[FLINK-1605] Bundle all hadoop dependencies and shade guava away

I don't have a working eclipse setup right now, so I didn't test this 
change with eclipse.

I would be very interested in some feedback from there.
It's working with IntelliJ.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rmetzger/flink flink1605-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/454.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #454


commit f1b291246d50afcd20a696e6e0f883193e79a9db
Author: Robert Metzger 
Date:   2015-02-24T15:30:02Z

[FLINK-1605] Bundle all hadoop dependencies and shade guava away




> Create a shaded Hadoop fat jar to resolve library version conflicts
> ---
>
> Key: FLINK-1605
> URL: https://issues.apache.org/jira/browse/FLINK-1605
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> As per mailing list discussion: 
> http://apache-flink-incubator-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Create-a-shaded-Hadoop-fat-jar-to-resolve-library-version-conflicts-td3881.html





[GitHub] flink pull request: [FLINK-1640] Remove tailing slash from paths.

2015-03-04 Thread uce
Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25796503
  
--- Diff: flink-core/src/main/java/org/apache/flink/core/fs/Path.java ---
@@ -217,7 +211,17 @@ public Path(String pathString) {
 *the path string
 */
public Path(String scheme, String authority, String path) {
-   checkPathArg(path);
+
+   if(path == null) {
--- End diff --

I think it would be better to keep the check in a method as before. I think 
it's a good practice to trim the path :-)
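The reviewer's suggestion, keeping the validation in a helper method that also trims the argument, can be sketched as follows; the method name is hypothetical, not the actual one in `Path.java`:

```java
public class PathArgCheck {
    // Hypothetical helper in the spirit of the review comment: validate the
    // path argument in one place and trim surrounding whitespace.
    static String checkAndTrimPathArg(String path) {
        if (path == null) {
            throw new IllegalArgumentException("path may not be null");
        }
        String trimmed = path.trim();
        if (trimmed.isEmpty()) {
            throw new IllegalArgumentException("path may not be empty");
        }
        return trimmed;
    }

    public static void main(String[] args) {
        System.out.println(checkAndTrimPathArg("/my/path/ "));
    }
}
```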




[jira] [Commented] (FLINK-1640) FileOutputFormat writes to wrong path if path ends with '/'

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347255#comment-14347255
 ] 

ASF GitHub Bot commented on FLINK-1640:
---

Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25796503
  
--- Diff: flink-core/src/main/java/org/apache/flink/core/fs/Path.java ---
@@ -217,7 +211,17 @@ public Path(String pathString) {
 *the path string
 */
public Path(String scheme, String authority, String path) {
-   checkPathArg(path);
+
+   if(path == null) {
--- End diff --

I think it would be better to keep the check in a method as before. I think 
it's a good practice to trim the path :-)


> FileOutputFormat writes to wrong path if path ends with '/'
> ---
>
> Key: FLINK-1640
> URL: https://issues.apache.org/jira/browse/FLINK-1640
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Scala API
>Affects Versions: 0.9, 0.8.1
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>
> The FileOutputFormat duplicates the last directory of a path, if the path 
> ends  with a slash '/'.
> For example, if the output path is specified as {{/home/myuser/outputPath/}} 
> the output is written to {{/home/myuser/outputPath/outputPath/}}.
> This bug was introduced by commit 8fc04e4da8a36866e10564205c3f900894f4f6e0





[GitHub] flink pull request: [FLINK-1605] Bundle all hadoop dependencies an...

2015-03-04 Thread rmetzger
GitHub user rmetzger opened a pull request:

https://github.com/apache/flink/pull/454

[FLINK-1605] Bundle all hadoop dependencies and shade guava away

I don't have a working eclipse setup right now, so I didn't test this 
change with eclipse.

I would be very interested in some feedback from there.
It's working with IntelliJ.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rmetzger/flink flink1605-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/454.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #454


commit f1b291246d50afcd20a696e6e0f883193e79a9db
Author: Robert Metzger 
Date:   2015-02-24T15:30:02Z

[FLINK-1605] Bundle all hadoop dependencies and shade guava away






[jira] [Commented] (FLINK-1646) Add name of required configuration value into the "Insufficient number of network buffers" exception

2015-03-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347297#comment-14347297
 ] 

Henry Saputra commented on FLINK-1646:
--

That was super quick =P

> Add name of required configuration value into the "Insufficient number of 
> network buffers" exception
> 
>
> Key: FLINK-1646
> URL: https://issues.apache.org/jira/browse/FLINK-1646
> Project: Flink
>  Issue Type: Improvement
>  Components: TaskManager
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Ufuk Celebi
>Priority: Minor
>  Labels: starter
>
> As per 
> http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Exception-Insufficient-number-of-network-buffers-required-120-but-only-2-of-2048-available-td746.html





[jira] [Updated] (FLINK-1602) Remove 0.6-incubating from Flink website SVN

2015-03-04 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1602:
-
Assignee: Max Michels

> Remove 0.6-incubating from Flink website SVN
> 
>
> Key: FLINK-1602
> URL: https://issues.apache.org/jira/browse/FLINK-1602
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Henry Saputra
>Assignee: Max Michels
>Priority: Minor
>
> We have old docs from Flink website SVN repo [1] from older versions.
> I am proposing to remove 0.6-incubating from the repo.
> [1] https://svn.apache.org/repos/asf/flink/.





[jira] [Commented] (FLINK-1602) Remove 0.6-incubating from Flink website SVN

2015-03-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347318#comment-14347318
 ] 

Henry Saputra commented on FLINK-1602:
--

[~maxmichaels], seemed like you already took care of this?

Feel free to close it.

> Remove 0.6-incubating from Flink website SVN
> 
>
> Key: FLINK-1602
> URL: https://issues.apache.org/jira/browse/FLINK-1602
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Henry Saputra
>Assignee: Max Michels
>Priority: Minor
>
> We have old docs from Flink website SVN repo [1] from older versions.
> I am proposing to remove 0.6-incubating from the repo.
> [1] https://svn.apache.org/repos/asf/flink/.





[jira] [Commented] (FLINK-1602) Remove 0.6-incubating from Flink website SVN

2015-03-04 Thread Max Michels (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347324#comment-14347324
 ] 

Max Michels commented on FLINK-1602:


Yes, I took care of it. The old docs have been deleted from the SVN, and users 
accessing the old doc URLs are now redirected.

> Remove 0.6-incubating from Flink website SVN
> 
>
> Key: FLINK-1602
> URL: https://issues.apache.org/jira/browse/FLINK-1602
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Henry Saputra
>Assignee: Max Michels
>Priority: Minor
>
> We have old docs from Flink website SVN repo [1] from older versions.
> I am proposing to remove 0.6-incubating from the repo.
> [1] https://svn.apache.org/repos/asf/flink/.





[jira] [Resolved] (FLINK-1602) Remove 0.6-incubating from Flink website SVN

2015-03-04 Thread Max Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Michels resolved FLINK-1602.

Resolution: Fixed

> Remove 0.6-incubating from Flink website SVN
> 
>
> Key: FLINK-1602
> URL: https://issues.apache.org/jira/browse/FLINK-1602
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Henry Saputra
>Assignee: Max Michels
>Priority: Minor
>
> We have old docs from Flink website SVN repo [1] from older versions.
> I am proposing to remove 0.6-incubating from the repo.
> [1] https://svn.apache.org/repos/asf/flink/.





[jira] [Closed] (FLINK-1602) Remove 0.6-incubating from Flink website SVN

2015-03-04 Thread Max Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Michels closed FLINK-1602.
--

> Remove 0.6-incubating from Flink website SVN
> 
>
> Key: FLINK-1602
> URL: https://issues.apache.org/jira/browse/FLINK-1602
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Henry Saputra
>Assignee: Max Michels
>Priority: Minor
>
> We have old docs from Flink website SVN repo [1] from older versions.
> I am proposing to remove 0.6-incubating from the repo.
> [1] https://svn.apache.org/repos/asf/flink/.





[GitHub] flink pull request: [FLINK-1640] Remove tailing slash from paths.

2015-03-04 Thread uce
Github user uce commented on the pull request:

https://github.com/apache/flink/pull/453#issuecomment-77218259
  
:+1: Looks good to me. I've tried it out locally. Nice that you added tests 
:) 

My inline comments regarding the tests are very general and don't really 
need to be addressed. We should merge this asap into `master` and `release-0.8`.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-1640] Remove tailing slash from paths.

2015-03-04 Thread hsaputra
Github user hsaputra commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25801300
  
--- Diff: flink-core/src/main/java/org/apache/flink/core/fs/Path.java ---
@@ -217,7 +211,17 @@ public Path(String pathString) {
 *the path string
 */
public Path(String scheme, String authority, String path) {
-   checkPathArg(path);
+
+   if(path == null) {
--- End diff --

+1 to what Ufuk recommends. This would reduce code duplication.


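The de-duplication suggested in the review above can be sketched as follows: both `Path` constructors route their validation through one shared `checkPathArg` helper instead of repeating the null/empty check inline. This is an illustrative standalone class, not the actual `flink-core` `Path` implementation; the constructor shapes mirror the diff, but the body is an assumption.

```java
// Sketch of routing both constructors through one shared argument check
// (illustrative; not the real org.apache.flink.core.fs.Path).
public class PathSketch {
    private final String path;

    /** Shared validation used by every constructor, avoiding duplication. */
    private static void checkPathArg(String path) {
        if (path == null) {
            throw new IllegalArgumentException("Can not create a Path from a null string");
        }
        if (path.length() == 0) {
            throw new IllegalArgumentException("Can not create a Path from an empty string");
        }
    }

    public PathSketch(String pathString) {
        checkPathArg(pathString);
        this.path = pathString;
    }

    public PathSketch(String scheme, String authority, String path) {
        checkPathArg(path);   // same check, no inline duplication
        this.path = scheme + "://" + authority + path;
    }

    @Override
    public String toString() {
        return path;
    }

    public static void main(String[] args) {
        System.out.println(new PathSketch("hdfs", "host:9000", "/a"));  // prints hdfs://host:9000/a
    }
}
```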


[jira] [Commented] (FLINK-1640) FileOutputFormat writes to wrong path if path ends with '/'

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347345#comment-14347345
 ] 

ASF GitHub Bot commented on FLINK-1640:
---

Github user uce commented on the pull request:

https://github.com/apache/flink/pull/453#issuecomment-77218259
  
:+1: Looks good to me. I've tried it out locally. Nice that you added tests 
:) 

My inline comments regarding the tests are very general and don't really 
need to be addressed. We should merge this asap into `master` and `release-0.8`.




> FileOutputFormat writes to wrong path if path ends with '/'
> ---
>
> Key: FLINK-1640
> URL: https://issues.apache.org/jira/browse/FLINK-1640
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Scala API
>Affects Versions: 0.9, 0.8.1
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>
> The FileOutputFormat duplicates the last directory of a path if the path 
> ends with a slash '/'.
> For example, if the output path is specified as {{/home/myuser/outputPath/}} 
> the output is written to {{/home/myuser/outputPath/outputPath/}}.
> This bug was introduced by commit 8fc04e4da8a36866e10564205c3f900894f4f6e0



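The trailing-slash normalization implied by the bug description can be sketched like this: strip trailing slashes before the parent directory name is derived, so that `/home/myuser/outputPath/` no longer yields `.../outputPath/outputPath`. This is a minimal illustration with assumed names, not the actual fix in the pull request.

```java
// Minimal sketch of trailing-slash normalization (illustrative helper,
// not the actual Flink Path/FileOutputFormat code).
public class PathNormalizer {

    /** Strips trailing slashes, keeping a lone "/" (the root) intact. */
    static String stripTrailingSlash(String path) {
        while (path.length() > 1 && path.endsWith("/")) {
            path = path.substring(0, path.length() - 1);
        }
        return path;
    }

    public static void main(String[] args) {
        System.out.println(stripTrailingSlash("/home/myuser/outputPath/")); // prints /home/myuser/outputPath
        System.out.println(stripTrailingSlash("/"));                        // prints /
    }
}
```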


[jira] [Commented] (FLINK-1640) FileOutputFormat writes to wrong path if path ends with '/'

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347365#comment-14347365
 ] 

ASF GitHub Bot commented on FLINK-1640:
---

Github user hsaputra commented on a diff in the pull request:

https://github.com/apache/flink/pull/453#discussion_r25801300
  
--- Diff: flink-core/src/main/java/org/apache/flink/core/fs/Path.java ---
@@ -217,7 +211,17 @@ public Path(String pathString) {
 *the path string
 */
public Path(String scheme, String authority, String path) {
-   checkPathArg(path);
+
+   if(path == null) {
--- End diff --

+1 to what Ufuk recommends. This would reduce code duplication.


> FileOutputFormat writes to wrong path if path ends with '/'
> ---
>
> Key: FLINK-1640
> URL: https://issues.apache.org/jira/browse/FLINK-1640
> Project: Flink
>  Issue Type: Bug
>  Components: Java API, Scala API
>Affects Versions: 0.9, 0.8.1
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>
> The FileOutputFormat duplicates the last directory of a path if the path 
> ends with a slash '/'.
> For example, if the output path is specified as {{/home/myuser/outputPath/}} 
> the output is written to {{/home/myuser/outputPath/outputPath/}}.
> This bug was introduced by commit 8fc04e4da8a36866e10564205c3f900894f4f6e0





[jira] [Commented] (FLINK-1587) coGroup throws NoSuchElementException on iterator.next()

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347396#comment-14347396
 ] 

ASF GitHub Bot commented on FLINK-1587:
---

Github user andralungu commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77224411
  
Hi @uce!

I am using a data set's print() method in that particular test suite. I 
tried several combinations with the logger, but none were successful. 

If, for example, you call log.info(), that requires a string. How can you pass 
the DataSet there? I tried appending .toString(), but the problem is that 
env.execute(), which expects a print or write sink to consume the data, will 
no longer work.

Could you give me more details?
Thanks :)
Andra  




> coGroup throws NoSuchElementException on iterator.next()
> 
>
> Key: FLINK-1587
> URL: https://issues.apache.org/jira/browse/FLINK-1587
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
> Environment: flink-0.8.0-SNAPSHOT
>Reporter: Carsten Brandt
>Assignee: Andra Lungu
>
> I am receiving the following exception when running a simple job that 
> extracts outdegree from a graph using Gelly. It is currently only failing on 
> the cluster and I am not able to reproduce it locally. Will try that in the 
> next few days.
> {noformat}
> 02/20/2015 02:27:02:  CoGroup (CoGroup at inDegrees(Graph.java:675)) (5/64) 
> switched to FAILED
> java.util.NoSuchElementException
>   at java.util.Collections$EmptyIterator.next(Collections.java:3006)
>   at flink.graphs.Graph$CountNeighborsCoGroup.coGroup(Graph.java:665)
>   at 
> org.apache.flink.runtime.operators.CoGroupDriver.run(CoGroupDriver.java:130)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:493)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:360)
>   at 
> org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:257)
>   at java.lang.Thread.run(Thread.java:745)
> 02/20/2015 02:27:02:  Job execution switched to status FAILING
> ...
> {noformat}
> The error occurs in Gellys Graph.java at this line: 
> https://github.com/apache/flink/blob/a51c02f6e8be948d71a00c492808115d622379a7/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/Graph.java#L636
> Is there any valid case where a coGroup Iterator may be empty? As far as I 
> see there is a bug somewhere.
> I'd like to write a test case for this to reproduce the issue. Where can I 
> put such a test?



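The failure mode asked about in the report can be illustrated in isolation: in a coGroup, one side's group may legitimately be empty (e.g. a vertex with no incident edges), so its iterator must be guarded with hasNext() rather than consumed with a blind next(). The class below is a self-contained toy, not the actual Gelly `CountNeighborsCoGroup`.

```java
import java.util.Collections;
import java.util.Iterator;

// Toy illustration of guarding a possibly-empty coGroup side
// (not the real Gelly code in Graph.java).
public class CoGroupGuard {

    /** Counts the elements of one coGroup side, treating an empty group as zero. */
    static long countNeighbors(Iterator<String> edges) {
        long count = 0;
        while (edges.hasNext()) {   // guard instead of an unconditional next()
            edges.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // A vertex with no incident edges yields an empty iterator: no exception.
        System.out.println(countNeighbors(Collections.<String>emptyIterator())); // prints 0
    }
}
```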


[GitHub] flink pull request: [FLINK-1587][gelly] Added additional check for...

2015-03-04 Thread andralungu
Github user andralungu commented on the pull request:

https://github.com/apache/flink/pull/440#issuecomment-77224411
  
Hi @uce!

I am using a data set's print() method in that particular test suite. I 
tried several combinations with the logger, but none were successful. 

If, for example, you call log.info(), that requires a string. How can you pass 
the DataSet there? I tried appending .toString(), but the problem is that 
env.execute(), which expects a print or write sink to consume the data, will 
no longer work.

Could you give me more details?
Thanks :)
Andra  






[jira] [Created] (FLINK-1649) Give a good error message when a user program emits a null record

2015-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-1649:
---

 Summary: Give a good error message when a user program emits a 
null record
 Key: FLINK-1649
 URL: https://issues.apache.org/jira/browse/FLINK-1649
 Project: Flink
  Issue Type: Bug
  Components: Local Runtime
Affects Versions: 0.9
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.9








[jira] [Created] (FLINK-1650) Suppress Akka's Netty Shutdown Errors through the log config

2015-03-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-1650:
---

 Summary: Suppress Akka's Netty Shutdown Errors through the log 
config
 Key: FLINK-1650
 URL: https://issues.apache.org/jira/browse/FLINK-1650
 Project: Flink
  Issue Type: Bug
  Components: other
Affects Versions: 0.9
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.9


I suggest setting the log level for 
`org.jboss.netty.channel.DefaultChannelPipeline` to ERROR, in order to get rid 
of the misleading stack trace caused by an akka/netty hiccup on shutdown.





[jira] [Commented] (FLINK-1650) Suppress Akka's Netty Shutdown Errors through the log config

2015-03-04 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347450#comment-14347450
 ] 

Stephan Ewen commented on FLINK-1650:
-

The logged error is below.
Setting the log level to "ERROR" should allow us to see critical messages and 
suppress the confusing warnings that seem to be an akka/netty bug.

{code}
Feb 18, 2015 5:25:18 PM org.jboss.netty.channel.DefaultChannelPipeline
WARNING: An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been 
shutdown
at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
at 
org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
at 
org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
at 
org.jboss.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
at 
org.jboss.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
at 
org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
at 
org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
at org.jboss.netty.channel.Channels.disconnect(Channels.java:781)
at 
org.jboss.netty.channel.AbstractChannel.disconnect(AbstractChannel.java:211)
at 
akka.remote.transport.netty.NettyTransport$$anonfun$gracefulClose$1.apply(NettyTransport.scala:223)
at 
akka.remote.transport.netty.NettyTransport$$anonfun$gracefulClose$1.apply(NettyTransport.scala:222)
at scala.util.Success.foreach(Try.scala:205)
at scala.concurrent.Future$$anonfun$foreach$1.apply(Future.scala:204)
at scala.concurrent.Future$$anonfun$foreach$1.apply(Future.scala:204)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{code}

> Suppress Akka's Netty Shutdown Errors through the log config
> 
>
> Key: FLINK-1650
> URL: https://issues.apache.org/jira/browse/FLINK-1650
> Project: Flink
>  Issue Type: Bug
>  Components: other
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 0.9
>
>
> I suggest setting the log level for 
> `org.jboss.netty.channel.DefaultChannelPipeline` to ERROR, in order to get 
> rid of the misleading stack trace caused by an akka/netty hiccup on shutdown.



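The suppression described above can be expressed in a `log4j.properties` file. The logger name is taken verbatim from the comment; the appender setup around it is an assumed minimal log4j 1.x configuration, not Flink's shipped config:

```properties
# Assumed minimal log4j 1.x setup (illustrative)
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n

# Raise this logger to ERROR to hide the spurious shutdown WARNINGs
log4j.logger.org.jboss.netty.channel.DefaultChannelPipeline=ERROR
```

With this in place, the RejectedExecutionException warning shown above is filtered out, while genuine ERROR-level messages from the same logger still appear.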


[jira] [Commented] (FLINK-1650) Suppress Akka's Netty Shutdown Errors through the log config

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347461#comment-14347461
 ] 

ASF GitHub Bot commented on FLINK-1650:
---

GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/455

[FLINK-1650] [logging] Suppress wrong netty warnings on akka shutdown



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink 
netty_error_message

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/455.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #455


commit 877f87f65b0179341081adda20081a818eaffaf3
Author: Stephan Ewen 
Date:   2015-03-04T19:52:50Z

[FLINK-1650] [logging] Suppress wrong netty warnings on akka shutdown




> Suppress Akka's Netty Shutdown Errors through the log config
> 
>
> Key: FLINK-1650
> URL: https://issues.apache.org/jira/browse/FLINK-1650
> Project: Flink
>  Issue Type: Bug
>  Components: other
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 0.9
>
>
> I suggest setting the log level for 
> `org.jboss.netty.channel.DefaultChannelPipeline` to ERROR, in order to get 
> rid of the misleading stack trace caused by an akka/netty hiccup on shutdown.





[GitHub] flink pull request: [FLINK-1650] [logging] Suppress wrong netty wa...

2015-03-04 Thread StephanEwen
GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/455

[FLINK-1650] [logging] Suppress wrong netty warnings on akka shutdown



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink 
netty_error_message

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/455.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #455


commit 877f87f65b0179341081adda20081a818eaffaf3
Author: Stephan Ewen 
Date:   2015-03-04T19:52:50Z

[FLINK-1650] [logging] Suppress wrong netty warnings on akka shutdown






[jira] [Commented] (FLINK-1649) Give a good error message when a user program emits a null record

2015-03-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347465#comment-14347465
 ] 

ASF GitHub Bot commented on FLINK-1649:
---

GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/456

[FLINK-1649] [runtime] Give a good error message for null values

Contains also a test and code cleanup in the OutputCollector

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink null_values

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/456.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #456


commit a02632ec8f3b1d0295b2d6d0b425aa017f1ed6fa
Author: Stephan Ewen 
Date:   2015-03-04T19:36:33Z

[FLINK-1649] [runtime] Give a good error message when a user emits an 
unsupported null value




> Give a good error message when a user program emits a null record
> -
>
> Key: FLINK-1649
> URL: https://issues.apache.org/jira/browse/FLINK-1649
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 0.9
>
>




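The idea behind the pull request above can be sketched as failing fast with a descriptive message when a user function emits a null record, instead of letting a bare NullPointerException surface deep inside serialization. The class and message text below are a hedged illustration, not the actual Flink `OutputCollector`.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative collector that rejects null records with a clear message
// (not the real org.apache.flink.runtime OutputCollector).
public class NullCheckingCollector<T> {
    private final List<T> buffer = new ArrayList<T>();

    public void collect(T record) {
        if (record == null) {
            // Fail fast with an explanatory message instead of a cryptic NPE later.
            throw new NullPointerException(
                "The collector does not support null records. "
                + "Null values are only supported as fields inside other objects.");
        }
        buffer.add(record);
    }

    public int size() {
        return buffer.size();
    }

    public static void main(String[] args) {
        NullCheckingCollector<String> c = new NullCheckingCollector<String>();
        c.collect("a");
        boolean rejected = false;
        try {
            c.collect(null);
        } catch (NullPointerException e) {
            rejected = true;
        }
        System.out.println("records=" + c.size() + " rejectedNull=" + rejected); // prints records=1 rejectedNull=true
    }
}
```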

