Hi Stephan,

I tried the workaround with DeltaIteration#setSolutionSetUnManaged(); unfortunately, the error is still there, even when I run it with just one iteration. Also, I am not sure that the job can be broken into subparts in this particular case.
Any other suggestions would be appreciated :)

Thanks!
Andra

On Mon, Feb 9, 2015 at 10:36 AM, Stephan Ewen <se...@apache.org> wrote:

> This is actually a problem of the number of memory segments available to
> the hash table for the solution set.
>
> For complex pipelines, memory currently gets too fragmented.
>
> There are two workarounds until we do the dynamic memory management:
> break the job up into subparts (shorter pipelines), or move the solution
> set to the user-code part of the heap. There is a flag on the delta
> iteration for that, see "DeltaIteration#setSolutionSetUnManaged()".
>
> Greetings,
> Stephan
>
>
> On Mon, Feb 9, 2015 at 10:32 AM, Till Rohrmann <trohrm...@apache.org> wrote:
>
> > Hi Andra,
> >
> > have you tried increasing the number of network buffers in your cluster?
> > You can control it via the configuration value:
> >
> > taskmanager.network.numberOfBuffers: #numberBuffers
> >
> > Greets,
> > Till
> >
> > On Mon, Feb 9, 2015 at 9:56 AM, Andra Lungu <lungu.an...@gmail.com> wrote:
> >
> > > Hello everyone,
> > >
> > > I am implementing a graph algorithm as part of a course, and I will
> > > also add it to the Flink Gelly examples.
> > > My problem is that I started developing it in the Gelly repository,
> > > which runs on Flink 0.9. It works like a charm there, but in order to
> > > test it on a cluster and see its real capabilities, I need to move it
> > > to the course repository, which runs on Flink 0.8.
> > >
> > > Initially, I thought this migration would go without incident, since
> > > Flink 0.8 is more stable. Instead, I got the following exception:
> > >
> > > java.lang.IllegalArgumentException: Too few memory segments provided.
> > > Hash Table needs at least 33 memory segments.
> > >     at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:238)
> > >     at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:227)
> > >     at org.apache.flink.runtime.iterative.task.IterationHeadPactTask.initCompactingHashTable(IterationHeadPactTask.java:177)
> > >     at org.apache.flink.runtime.iterative.task.IterationHeadPactTask.run(IterationHeadPactTask.java:279)
> > >     at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:360)
> > >     at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:257)
> > >     at java.lang.Thread.run(Thread.java:745)
> > >
> > > This is the code for Gelly, where all tests pass:
> > > https://github.com/andralungu/flink-graph/tree/minSpanningTree
> > > Unfortunately, the code for the course is private, so you cannot
> > > actually see it... maybe @aalexandrov can do something about the
> > > privacy settings of this repo:
> > > https://github.com/andralungu/IMPRO-3.WS14/tree/dmst_algorithm
> > >
> > > Thank you!
> > > Andra
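
For completeness, below is a minimal, self-contained sketch of how the flag Stephan mentions is attached to a delta iteration. The data sets, types, and step function are hypothetical placeholders, not taken from the actual job; on Flink 0.8, print() is a lazy sink, which is why the explicit env.execute() call is included.

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DeltaIteration;
import org.apache.flink.api.java.tuple.Tuple2;

public class UnmanagedSolutionSetExample {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical initial solution set and workset, keyed on field 0.
        DataSet<Tuple2<Long, Double>> initialSolutionSet = env.fromElements(
                new Tuple2<Long, Double>(1L, 10.0),
                new Tuple2<Long, Double>(2L, 20.0));
        DataSet<Tuple2<Long, Double>> initialWorkset = env.fromElements(
                new Tuple2<Long, Double>(1L, 1.0),
                new Tuple2<Long, Double>(2L, 2.0));

        int maxIterations = 10;

        // Key position 0 identifies elements of the solution set.
        DeltaIteration<Tuple2<Long, Double>, Tuple2<Long, Double>> iteration =
                initialSolutionSet.iterateDelta(initialWorkset, maxIterations, 0);

        // The workaround discussed above: keep the solution set on the
        // user-code heap instead of Flink's managed memory segments.
        iteration.setSolutionSetUnManaged(true);

        // Trivial step function: join the workset with the solution set and
        // emit updated values; it only illustrates where the flag is set.
        DataSet<Tuple2<Long, Double>> delta = iteration.getWorkset()
                .join(iteration.getSolutionSet())
                .where(0).equalTo(0)
                .with(new JoinFunction<Tuple2<Long, Double>, Tuple2<Long, Double>,
                        Tuple2<Long, Double>>() {
                    @Override
                    public Tuple2<Long, Double> join(Tuple2<Long, Double> workset,
                            Tuple2<Long, Double> solution) {
                        return new Tuple2<Long, Double>(workset.f0, solution.f1 + workset.f1);
                    }
                });

        // The delta updates the solution set and also feeds the next workset.
        DataSet<Tuple2<Long, Double>> result = iteration.closeWith(delta, delta);

        result.print();
        env.execute("Unmanaged solution set example");
    }
}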
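
And regarding Till's suggestion: taskmanager.network.numberOfBuffers is a cluster-wide setting in conf/flink-conf.yaml, so the task managers have to be restarted for it to take effect. The number below is only an illustrative value, not a recommendation for this particular job or cluster.

# conf/flink-conf.yaml (example value only; size it to the cluster and job)
taskmanager.network.numberOfBuffers: 4096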