I think this is related to the Yarn bug with the YarnSessionCli we
just fixed. The problem is that forked processes of the Surefire
plugin communicate via STDIN. The Scala Shell also reads from STDIN
which results in a deadlock from time to time...
Created an issue for that: https://issues.apache.
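To illustrate the contention (only a sketch, not the actual fix; the helper and the scripted input are made up): a test can drive shell-style input from an explicit stream instead of System.in, so the forked Surefire JVM keeps its stdin channel to itself.

import java.io.{BufferedReader, ByteArrayInputStream, InputStreamReader}

object ShellInputSketch {
  // Hypothetical stand-in for the shell under test: it reads commands only
  // from the reader it is given, never from System.in.
  def runCommands(in: BufferedReader): Seq[String] =
    Iterator.continually(in.readLine()).takeWhile(_ != null).toSeq

  def main(args: Array[String]): Unit = {
    // Scripted input replaces System.in, which the Surefire fork uses to
    // communicate with the forked JVM.
    val scripted = new BufferedReader(new InputStreamReader(
      new ByteArrayInputStream("val x = 1\n:quit\n".getBytes("UTF-8"))))
    runCommands(scripted).foreach(println)
  }
}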
On Thu, Jun 2, 2016 at 1:26 PM, Maximilian Michels wrote:
> I thought this had been fixed by Chiwan in the meantime. Could you
Chiwan fixed the ML issues IMO. You can pick any of the recent builds
from https://travis-ci.org/apache/flink/builds
For example:
https://s3.amazonaws.com/archive.travi
I thought this had been fixed by Chiwan in the meantime. Could you
post a build log?
On Thu, Jun 2, 2016 at 1:14 PM, Ufuk Celebi wrote:
> With the recent fixes, the builds are more stable, but I still see
> many failing, because of the Scala shell tests, which lead to the JVMs
> crashing. I've re
With the recent fixes, the builds are more stable, but I still see
many failing, because of the Scala shell tests, which lead to the JVMs
crashing. I've researched this a little bit, but didn't find an
obvious solution to the problem.
Does it make sense to disable the tests until someone has time
You are right, Chiwan.
I think that this pattern you use should be supported, though. It would be
good to check whether the job executes more often than necessary at the
point of the "collect()" calls.
That would explain the network buffer issue then...
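To make the concern concrete (just a sketch of the DataSet API behaviour; the object name and data are made up): every collect() is an eager action, so each call builds and runs a separate job and re-executes the upstream operators.

import org.apache.flink.api.scala._

object CollectSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val data = env.fromElements(1, 2, 3).map(_ * 2)

    // Each collect() triggers its own job execution; the map above runs twice.
    val all      = data.collect()
    val filtered = data.filter(_ > 2).collect()
    println(s"$all / $filtered")
  }
}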
On Tue, May 31, 2016 at 12:18 PM, Chiwan Park wrote:
Hi Stephan,
Yes, right. But KNNITSuite calls ExecutionEnvironment.getExecutionEnvironment
only once [1]. I’m testing a change that moves the getExecutionEnvironment call
into each test case.
[1]:
https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/test/scala/org/apache/flink/m
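For reference, this is roughly the pattern I’m testing (only a sketch; the suite and the data are made up), with the environment acquired inside each test case instead of once for the whole suite:

import org.apache.flink.api.scala._
import org.scalatest.{FlatSpec, Matchers}

class PerTestEnvSketch extends FlatSpec with Matchers {

  "the first test case" should "use its own environment" in {
    val env = ExecutionEnvironment.getExecutionEnvironment  // per test case
    env.fromElements(1, 2, 3).collect() should have size 3
  }

  "the second test case" should "ask for a fresh environment again" in {
    val env = ExecutionEnvironment.getExecutionEnvironment
    env.fromElements(4, 5).collect() should contain (5)
  }
}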
Hi Chiwan!
I think the ExecutionEnvironment is not shared, because what the
TestEnvironment sets is a context environment factory. Every time you call
"ExecutionEnvironment.getExecutionEnvironment()", you get a new environment.
Stephan
On Tue, May 31, 2016 at 11:53 AM, Chiwan Park wrote:
> I
I’ve created a JIRA issue [1] related to KNN test cases. I will send a PR for
it.
From my investigation [2], the cluster for the ML tests has only one taskmanager
with 4 slots. Is 2048 insufficient as the total number of network buffers? I still
think the problem is sharing the ExecutionEnvironment between test cases.
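For context, a sketch of how that buffer pool is sized (assuming the old per-TaskManager setting taskmanager.network.numberOfBuffers; the numbers are just the ones from my investigation):

import org.apache.flink.configuration.Configuration

object BufferConfigSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // One TaskManager with 4 slots and a pool of 2048 network buffers.
    conf.setInteger("taskmanager.numberOfTaskSlots", 4)
    conf.setInteger("taskmanager.network.numberOfBuffers", 2048)
    println(conf)
  }
}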
Thanks Stephan for the synopsis of our last weeks' test instability
madness. It's sad to see the shortcomings of the Maven test plugins, but
another lesson learned is that our testing infrastructure should get a
bit more attention. We have reached a point several times where our
tests were inherently in
I think that the tests fail because of sharing the ExecutionEnvironment between
test cases. I’m not sure why it is a problem, but it is the only difference from
the other ML tests.
I created a hotfix and pushed it to my repository. Once it seems to fix the
issue [1], I’ll merge the hotfix to the master branch.
[1]: htt
It is probably about the KNN test case which was merged yesterday. I’ll look
into the ML test.
Regards,
Chiwan Park
> On May 31, 2016, at 5:38 PM, Ufuk Celebi wrote:
>
> Currently, an ML test is reliably failing and occasionally some HA
> tests. Is someone looking into the ML test?
>
> For HA, I
Currently, an ML test is reliably failing and occasionally some HA
tests. Is someone looking into the ML test?
For HA, I will revert a commit, which might cause the HA
instabilities. Till is working on a proper fix as far as I know.
On Tue, May 31, 2016 at 3:50 AM, Chiwan Park wrote:
> Thanks fo
Thanks for the great work! :-)
Regards,
Chiwan Park
> On May 31, 2016, at 7:47 AM, Flavio Pompermaier wrote:
>
> Awesome work guys!
> And even more thanks for the detailed report...This troubleshooting summary
> will be undoubtedly useful for all our maven projects!
>
> Best,
> Flavio
> On 30
Awesome work guys!
And even more thanks for the detailed report... This troubleshooting summary
will undoubtedly be useful for all our Maven projects!
Best,
Flavio
On 30 May 2016 23:47, "Ufuk Celebi" wrote:
> Thanks for the effort, Max and Stephan! Happy to see the green light again.
>
> On Mon,
Thanks for the effort, Max and Stephan! Happy to see the green light again.
On Mon, May 30, 2016 at 11:03 PM, Stephan Ewen wrote:
> Hi all!
>
> After a few weeks of terrible build issues, I am happy to announce that the
> build works again properly, and we actually get meaningful CI results.
>
>
Hi all!
After a few weeks of terrible build issues, I am happy to announce that the
build works again properly, and we actually get meaningful CI results.
Here is a story in many acts, from builds deep red to bright green joy.
Kudos to Max, who did most of this troubleshooting. This evening, Max