Robert or Stephan knows the Travis setup quite well.
They might know whether we can give a bit more than 80 MB, but at some point
there will be a hard limit.
Once we have dynamic memory management, (most of) such problems should be
solved.
2015-03-30 23:46 GMT+02:00 Andra Lungu :
> Oh! In that case, who
Oh! In that case, who should I refer to? :D
[It's kind of ugly to split this kind of test. I mean, if a person is
counting the degrees, then that's the result that should be tested - at
least in my opinion.]
In any case, thanks for the help :)
On Mon, Mar 30, 2015 at 11:37 PM, Fabian Hueske wrote:
Well, each combiner, reducer, join, coGroup, and solution set needs a share
of memory (maps & filters don't).
In your case it was pretty much at the edge: the hash joins require 33
buffers but got only 32. So one memory-consuming operator less might fix it.
I did not look in detail at the other job, but
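To make the arithmetic concrete, here is a back-of-the-envelope sketch in plain Java. This is not Flink's actual memory accounting: the 80 MB limit and the 33-buffer hash-join requirement come from the thread, while the 32 KB segment size, the equal split across consumers, and the consumer counts are illustrative assumptions chosen to land on the same 32-vs-33 shortfall.

```java
// Back-of-the-envelope model of the buffer shortfall described above.
// Assumptions (NOT Flink's exact accounting): 80 MB of managed memory,
// 32 KB memory segments, equal split across memory consumers.
public class BufferShare {
    static long buffersPerConsumer(long totalBytes, int consumers, int pageBytes) {
        return (totalBytes / consumers) / pageBytes;
    }

    public static void main(String[] args) {
        long total = 80L * 1024 * 1024; // 80 MB test-environment limit
        int page = 32 * 1024;           // assumed segment size
        // Hypothetical consumer counts: with one consumer more, each share
        // falls one buffer short of the 33 a hash join needs.
        System.out.println(buffersPerConsumer(total, 78, page)); // 32
        System.out.println(buffersPerConsumer(total, 77, page)); // 33
    }
}
```

The point of the sketch is only that the per-operator share shrinks with every additional memory consumer, so removing one operator can tip a job from failing to working.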
Hi Fabian,
I'll see what I can do :).
I am just a bit shocked. If this set of coGroups and joins was too much for
a test case, how come the following worked?
https://github.com/andralungu/flink/commit/f60b022de056ac259459b68eee6ff0ae9993f0f8
400 lines of complex computations :) And I have an eve
Hi,
I'm back looking into this and I tried out the for-loop approach that was
suggested above.
I implemented a simple algorithm, k-core, which computes the k-core of a
graph by iteratively filtering out vertices with degree less than k.
You can find the code in [1].
Unfortunately, this is giving
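For readers unfamiliar with the algorithm: the k-core computation described above can be sketched in a few lines of plain Java (a toy in-memory version, not the Gelly/Flink implementation in [1]) — repeatedly drop vertices of degree less than k until the remaining graph is stable.

```java
import java.util.*;

// Plain-Java sketch of the k-core idea: iteratively filter out vertices
// with degree < k; removing a vertex lowers its neighbors' degrees, so
// the loop repeats until nothing changes.
public class KCore {
    static Set<Integer> kCore(Map<Integer, Set<Integer>> adjacency, int k) {
        // Work on a deep copy so the caller's graph is untouched.
        Map<Integer, Set<Integer>> g = new HashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : adjacency.entrySet()) {
            g.put(e.getKey(), new HashSet<>(e.getValue()));
        }
        boolean changed = true;
        while (changed) {
            changed = false;
            Iterator<Map.Entry<Integer, Set<Integer>>> it = g.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<Integer, Set<Integer>> e = it.next();
                if (e.getValue().size() < k) {
                    for (int nb : e.getValue()) {        // detach from neighbors
                        Set<Integer> s = g.get(nb);
                        if (s != null) s.remove(e.getKey());
                    }
                    it.remove();                          // drop the vertex
                    changed = true;
                }
            }
        }
        return g.keySet();
    }

    public static void main(String[] args) {
        // Triangle 1-2-3 plus a pendant vertex 4 attached to 1.
        Map<Integer, Set<Integer>> adj = new HashMap<>();
        adj.put(1, new HashSet<>(Arrays.asList(2, 3, 4)));
        adj.put(2, new HashSet<>(Arrays.asList(1, 3)));
        adj.put(3, new HashSet<>(Arrays.asList(1, 2)));
        adj.put(4, new HashSet<>(Arrays.asList(1)));
        System.out.println(kCore(adj, 2)); // the triangle {1, 2, 3} survives
    }
}
```

In the distributed version, the "remove and re-check" loop is exactly what the for-loop/iteration approach has to express, which is where the operator count grows.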
It seems that the issue is fixed. I've just pushed twice to a pull
request and both builds started immediately.
I think the "apache" user has many more parallel builds available now (we
don't have any builds queuing up anymore).
On Thu, Mar 26, 2015 at 4:06 PM, Henry Saputra
wrote:
> Aw
Hi Matthias,
the streaming folks can probably answer the questions better. But I'll
write something to bring this message back to their attention ;)
1) Which exceptions are you seeing? Flink should be able to cleanly shut
down.
2) As far as I saw it, the execute() method (of the Streaming API) go
+1
It would be good to have a well-documented release process, with all the
black magic scripts we have =)
Thanks for driving this, Robert.
- Henry
On Mon, Mar 30, 2015 at 11:15 AM, Robert Metzger wrote:
> Okay, I think we have reached consensus on this.
>
> I'll create a "RC0" non-voting, preview re
Okay, I think we have reached consensus on this.
I'll create a "RC0" non-voting, preview release candidate for
0.9.0-milestone-1 on Thursday (April 2) this week so that we have a version
to test against.
Once all issues of RC0 have been resolved, we'll start voting in the week
of April 6. (The vo
Hi Amit!
The DataSet API is basically a fluent builder for the internal DAG of
operations, the "Plan". This plan is built when you call "env.execute()".
You can directly get the Plan by calling
ExecutionEnvironment#createProgramPlan()
The JSON plan has in addition the information inserted by the
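The lazy-plan idea Fabian describes can be illustrated with a toy model in plain Java. This is not Flink code — class and method names here (`LazyPlan`, `op`, `createPlan`) are invented for the sketch; it only mirrors the shape of a fluent API that records operations and materializes the DAG on demand, the way `ExecutionEnvironment#createProgramPlan()` does.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a fluent, lazy plan builder: calls only RECORD operations;
// the plan representation is produced later, on demand.
public class LazyPlan {
    private final List<String> ops = new ArrayList<>();

    // Each fluent call appends to the DAG (a simple chain here) and
    // returns `this` so calls can be chained.
    LazyPlan op(String name) {
        ops.add(name);
        return this;
    }

    // Analogous in spirit to creating the program plan: turn the recorded
    // operations into an explicit representation.
    String createPlan() {
        return String.join(" -> ", ops);
    }

    public static void main(String[] args) {
        LazyPlan env = new LazyPlan();
        env.op("source").op("map").op("groupReduce").op("sink");
        System.out.println(env.createPlan()); // source -> map -> groupReduce -> sink
    }
}
```

The real Plan is of course a DAG of typed operator objects rather than a string, but the control flow — build lazily, materialize at execute/plan-dump time — is the same.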
Hi
I am trying to extract/retrieve the Flink execution plan. I managed to get
it as a JSON string in the following ways:
1. Using JAR - via PackagedProgram using getPreviewPlan() ; or
2. Directly in program - via ExecutionEnvironment's getExecutionPlan()
My question is - Is it possible to retrieve dire
Sibao Hong created FLINK-1805:
Summary: The class IOManagerAsync (in
org.apache.flink.runtime.io.disk.iomanager) should use its own Log
Key: FLINK-1805
URL: https://issues.apache.org/jira/browse/FLINK-1805
Robert Metzger created FLINK-1804:
Summary: flink-quickstart-scala tests fail on scala-2.11 build
profile on travis
Key: FLINK-1804
URL: https://issues.apache.org/jira/browse/FLINK-1804
Project: Flin
Gyula Fora created FLINK-1803:
Summary: Add type hints to the streaming api
Key: FLINK-1803
URL: https://issues.apache.org/jira/browse/FLINK-1803
Project: Flink
Issue Type: Improvement
Stephan Ewen created FLINK-1802:
Summary: BlobManager directories should be checked before
TaskManager startup
Key: FLINK-1802
URL: https://issues.apache.org/jira/browse/FLINK-1802
Project: Flink
Hi Andra,
I found the cause for the exception. Your test case is simply too complex
for our testing environment.
We restrict the TM memory for test cases to 80 MB in order to execute
multiple tests in parallel on Travis.
I counted the memory consumers in your job and got:
- 2 Combine
- 4 GroupReduc
Stephan Ewen created FLINK-1801:
Summary: NetworkEnvironment should start without JobManager
association
Key: FLINK-1801
URL: https://issues.apache.org/jira/browse/FLINK-1801
Project: Flink
I
I filed the JIRA for the beta badge:
https://issues.apache.org/jira/browse/FLINK-1800
On Mon, Mar 30, 2015 at 12:34 PM, Maximilian Michels wrote:
> +1 for using annotations to mark the status of API classes/methods. I think
> that is very good practice to manage backwards-compatibility.
>
> On S
Robert Metzger created FLINK-1800:
Summary: Add a "Beta" badge in the documentation to components in
flink-staging
Key: FLINK-1800
URL: https://issues.apache.org/jira/browse/FLINK-1800
Project: Flink
Till Rohrmann created FLINK-1799:
Summary: Scala API does not support generic arrays
Key: FLINK-1799
URL: https://issues.apache.org/jira/browse/FLINK-1799
Project: Flink
Issue Type: Bug
+1 for using annotations to mark the status of API classes/methods. I think
that is very good practice to manage backwards-compatibility.
On Sun, Mar 29, 2015 at 8:20 PM, Henry Saputra
wrote:
> +1 to this.
>
> Was thinking about the same thing.
>
> - Henry
>
>
>
> On Sun, Mar 29, 2015 at 7:38 AM
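A minimal sketch of the annotation idea discussed above, in plain Java. The name `@Beta` and its placement are hypothetical — whatever annotation Flink actually adds may differ — but the mechanics (a documented marker annotation with runtime retention, checkable via reflection) would look roughly like this:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation for API stability status; shown nested
// here only to keep the sketch in one file.
public class BetaDemo {
    @Documented                              // shows up in Javadoc
    @Retention(RetentionPolicy.RUNTIME)      // inspectable via reflection
    @Target({ElementType.TYPE, ElementType.METHOD})
    @interface Beta {}

    // Example API class flagged as not yet stable.
    @Beta
    static class ExperimentalOperator {}

    public static void main(String[] args) {
        // Tooling (or tests) can enforce/track status via reflection.
        System.out.println(
            ExperimentalOperator.class.isAnnotationPresent(Beta.class)); // true
    }
}
```

With runtime retention, a build-time or test-time check can flag uses of beta API, which is one way to make the backwards-compatibility promise enforceable.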
Great :)
On Sun, Mar 29, 2015 at 7:49 PM, Henry Saputra
wrote:
> Thanks for driving the resolution, Aljoscha
>
> On Sun, Mar 29, 2015 at 3:26 AM, Aljoscha Krettek
> wrote:
> > I hereby close the vote. Thanks for all your votes!
> >
> > We have 15 votes:
> > +Relation: 4
> > +DataTable: 3
>
Sure,
It was in the first mail but that was sent a while ago :)
This is the code:
https://github.com/andralungu/gelly-partitioning/tree/alphaSplit
I also added the log4j file in case it helps!
The error is totally reproducible: 2 out of 2 people got the same error.
Steps to reproduce:
1). Clone the co
Hmm, that is really weird.
Can you point me to a branch in your repository and the test case that
gives the error?
Then I'll have a look at it and try to figure out what's going wrong.
Cheers, Fabian
2015-03-30 10:43 GMT+02:00 Andra Lungu :
> Hello,
>
> I went on and did some further debugging on
I am reworking the web frontend a bit (PR may come in a bit) and was
thinking the same thing.
It would also allow us to vastly reduce the runtime dependencies, since all
the webserver stuff would be out...
On Sun, Mar 29, 2015 at 9:11 PM, Henry Saputra
wrote:
> Also looks like runtime getting too big
Hello,
I went on and did some further debugging on this issue. Even though the
exception said that the problem comes from here:
4837 [Join(Join at weighEdges(NodeSplitting.java:117)) (1/4)] ERROR
org.apache.flink.runtime.operators.RegularPactTask - Error in task code:
Join(Join at weighEdges(No
Péter Szabó created FLINK-1798:
Summary: Bug in IterateExample while running with parallelism > 1:
broker slot is already occupied
Key: FLINK-1798
URL: https://issues.apache.org/jira/browse/FLINK-1798
P