Hi,
I think this is the same issue we had before on the list [1]. Stephan
recommended the following workaround:
> A possible workaround is to use the option "setSolutionSetUnmanaged(true)"
> on the iteration. That will eliminate the fragmentation issue, at least.
Unfortunately, you cannot set this option when using graph.run(new PageRank(...)).
I created a Gist that shows how to set it when running PageRank:
https://gist.github.com/s1ck/801a8ef97ce374b358df
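In case the Gist is unavailable, here is a rough sketch of the idea: instead of graph.run(new PageRank(...)), build the vertex-centric iteration yourself and pass a configuration that keeps the solution set in unmanaged (heap) memory. Note this is a sketch against the Gelly API of that Flink version (VertexCentricConfiguration, setSolutionSetUnmanagedMemory, and the PageRank inner classes VertexRankUpdater/RankMessenger); constructor signatures and class names may differ in your version, so please treat the Gist as authoritative.

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.graph.Graph;
import org.apache.flink.graph.Vertex;
import org.apache.flink.graph.library.PageRank;
import org.apache.flink.graph.spargel.VertexCentricConfiguration;

public class UnmanagedPageRank {

    // beta and maxIterations are the same values you would pass to
    // new PageRank<>(beta, maxIterations); the graph is your input graph.
    public static DataSet<Vertex<Long, Double>> run(
            Graph<Long, Double, Double> graph, double beta, int maxIterations) {

        VertexCentricConfiguration parameters = new VertexCentricConfiguration();
        // Keep the solution set on the heap instead of in Flink's managed
        // memory, which avoids the CompactingHashTable fragmentation issue.
        parameters.setSolutionSetUnmanagedMemory(true);

        // Run the same vertex-centric iteration that the PageRank library
        // class runs internally, but with our configuration applied.
        // (If these inner classes are not public in your Flink version,
        // copy their implementations as done in the Gist.)
        return graph.runVertexCentricIteration(
                new PageRank.VertexRankUpdater<Long>(beta),
                new PageRank.RankMessenger<Long>(),
                maxIterations,
                parameters)
            .getVertices();
    }
}
```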
Please let us know if it worked out for you.
Cheers,
Martin
[1]
http://mail-archives.apache.org/mod_mbox/flink-user/201508.mbox/%3CCAELUF_ByPAB%2BPXWLemPzRH%3D-awATeSz4sGz4v9TmnvFku3%3Dx3A%40mail.gmail.com%3E
On 14.03.2016 16:55, Ovidiu-Cristian MARCU wrote:
Hi,
While running PageRank on a synthetic graph I ran into the problem below.
Any advice on how I should proceed to overcome this memory issue?
IterationHead(Vertex-centric iteration
(org.apache.flink.graph.library.PageRank$VertexRankUpdater@7712cae0 | org.apache.flink.graph.library.PageRank$RankMesseng$
java.lang.RuntimeException: Memory ran out. Compaction failed. numPartitions: 32 minPartition: 24 maxPartition: 25 number of overflow segments: 328 bucketSize: 638 Overall memory: 115539968 Partition memory: 50659328 Message: Index: 25, Size: 24
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.insertRecordIntoPartition(CompactingHashTable.java:469)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.insertOrReplaceRecord(CompactingHashTable.java:414)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.buildTableWithUniqueKey(CompactingHashTable.java:325)
    at org.apache.flink.runtime.iterative.task.IterationHeadTask.readInitialSolutionSet(IterationHeadTask.java:212)
    at org.apache.flink.runtime.iterative.task.IterationHeadTask.run(IterationHeadTask.java:273)
    at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:354)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
    at java.lang.Thread.run(Thread.java:745)
Thanks!
Best,
Ovidiu