In case it helps anyone else: I had a similar issue when running my unit
tests. I was able to solve it by increasing sbt's memory:
export SBT_OPTS="-Xmx3G -XX:+UseConcMarkSweepGC
-XX:+CMSClassUnloadingEnabled -Xss1G"
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
BTW, we should add an entry for this to the FAQ and point to the
configuration or FAQ entry in the exception message.
On 20 Jul 2015, at 15:15, Vasiliki Kalavri wrote:
> Hi Shivani,
>
> why are you using a vertex-centric iteration to compute the approximate
> Adamic-Adar?
> It's not an iterative computation :)
Hi Shivani,
the Jaccard example is implemented in Giraph, and therefore uses iterations.
However, in Gelly we are not forced to do that for non-iterative
computations.
I see that there is some confusion about the implementation specifics.
Let me try to write down some skeleton code / a detailed description
But it will need to build a BloomFilter for each vertex for each edge, so I
don't know how efficient that would be.
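As a rough sketch of what a per-vertex neighborhood Bloom filter could look like, using only the JDK (all class, method, and parameter names here are made up for illustration and are not Gelly or Flink API):

```java
import java.util.BitSet;

// Sketch of a per-vertex neighborhood Bloom filter using only the JDK.
// All names and parameters are illustrative, not Gelly API.
public class Main {
    static final int M = 1024; // bits in the filter
    static final int K = 3;    // hash probes per element

    // Derive the i-th probe position for a vertex id (toy mixing function).
    static int probe(long vertexId, int i) {
        long h = vertexId * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3DL;
        h ^= (h >>> 31);
        return (int) Math.floorMod(h, (long) M);
    }

    static void add(BitSet filter, long vertexId) {
        for (int i = 0; i < K; i++) filter.set(probe(vertexId, i));
    }

    static boolean mightContain(BitSet filter, long vertexId) {
        for (int i = 0; i < K; i++) {
            if (!filter.get(probe(vertexId, i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // One filter per vertex: here, vertex v with neighbors {2, 5, 7}.
        BitSet neighborsOfV = new BitSet(M);
        add(neighborsOfV, 2);
        add(neighborsOfV, 5);
        add(neighborsOfV, 7);
        // Bloom filters have no false negatives, so this is always true:
        System.out.println(mightContain(neighborsOfV, 5));
    }
}
```

The appeal for the approximate algorithm is that the filter's size is fixed up front, independent of the neighborhood's cardinality; the cost is a tunable false-positive rate.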
On Mon, Jul 20, 2015 at 4:02 PM, Shivani Ghatge wrote:
> Hello Vasia,
>
> I will adapt the exact method for BloomFilter. (I think it can be done.
> Sorry. My mistake).
>
>
> On Mon, Jul 20, 2015 at 3:45 PM, Shivani Ghatge wrote:
Hello Vasia,
I will adapt the exact method for BloomFilter. (I think it can be done.
Sorry. My mistake).
On Mon, Jul 20, 2015 at 3:45 PM, Shivani Ghatge wrote:
> Also the example of Jaccard that you had linked me to used VertexCentric
> configuration which I understand is because that api only uses
> VertexCentricIteration for all the operations?
Also, the example of Jaccard that you had linked me to used the VertexCentric
configuration, which I understand is because that API only uses
VertexCentricIteration for all the operations? But I think that is the best
way to know which neighbors belong to the BloomFilter?
On Mon, Jul 20, 2015 at
Hello Vasia,
As I had mentioned before, I need a BloomFilter as well as a HashSet for
the approximation to work. In the exact solution I am getting two HashSets
and comparing them. In the approximate version, if we get two BloomFilters then
we have no way to compare the neighborhood sets.
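To illustrate the point being made here with plain JDK types (all names are made up for the sketch, not Gelly API): the exact version intersects two HashSets, while the approximate version keeps one side as an exact set and probes it against the other side's Bloom filter, since two Bloom filters cannot be enumerated and intersected element by element.

```java
import java.util.Arrays;
import java.util.BitSet;
import java.util.HashSet;
import java.util.Set;

public class Main {
    // Tiny illustrative Bloom filter (names and constants are made up).
    static class Bloom {
        final BitSet bits = new BitSet(1024);
        void add(long x) { for (int i = 0; i < 3; i++) bits.set(pos(x, i)); }
        boolean mightContain(long x) {
            for (int i = 0; i < 3; i++) if (!bits.get(pos(x, i))) return false;
            return true;
        }
        static int pos(long x, int i) {
            long h = x * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3DL;
            return (int) Math.floorMod(h ^ (h >>> 31), 1024L);
        }
    }

    public static void main(String[] args) {
        Set<Long> nbrsU = new HashSet<>(Arrays.asList(1L, 2L, 3L));
        Set<Long> nbrsV = new HashSet<>(Arrays.asList(2L, 3L, 4L));

        // Exact: intersect two HashSets.
        Set<Long> common = new HashSet<>(nbrsU);
        common.retainAll(nbrsV);
        System.out.println("exact common = " + common.size());

        // Approximate: keep v's neighbors in a Bloom filter and probe it
        // with u's exact set. No false negatives, so the count can only
        // overestimate the true intersection size.
        Bloom bloomV = new Bloom();
        for (long n : nbrsV) bloomV.add(n);
        int approx = 0;
        for (long n : nbrsU) if (bloomV.mightContain(n)) approx++;
        System.out.println("approx common >= " + approx);
    }
}
```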
I thought w
I believe there was some work in progress to reduce memory fragmentation
and solve similar problems.
Does anyone know what's happening with that?
On 20 July 2015 at 16:29, Andra Lungu wrote:
> I also questioned the vertex-centric approach before. The exact
> computation does not throw this exception so I guess adapting the
> approximate version will do the trick
I also questioned the vertex-centric approach before. The exact computation
does not throw this exception, so I guess adapting the approximate version
will do the trick [I also suggested improving the algorithm to use fewer
operators, offline].
However, the issue still persists. We saw it in Affinity
Hi Shivani,
why are you using a vertex-centric iteration to compute the approximate
Adamic-Adar?
It's not an iterative computation :)
In fact, it should be as complex (in terms of operators) as the exact
Adamic-Adar, only more efficient because of the different neighborhood
representation. Are yo
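For reference, the (standard) Adamic-Adar similarity being discussed sums over the common neighbors of the two vertices:

```latex
% Adamic-Adar similarity of vertices u and v;
% \Gamma(w) denotes the neighbor set of vertex w.
AA(u, v) = \sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{\log |\Gamma(w)|}
```

This is why the computation is not inherently iterative: each pair's score only needs the two neighborhoods and the degrees of the common neighbors.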
Hi Shivani,
The issue is that by the time the Hash Join is executed, the
MutableHashTable cannot allocate enough memory segments. That means that
your other operators are occupying them. It is expected that this also occurs
on Travis, because the workers there have limited memory as well.
Till suggest
Hi,
I am afraid this is a known issue:
http://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX7_-Wxg9pr7CkkkG4CzA+yNCNMvmea5L2i2iZZV=2c...@mail.gmail.com%3E
The behavior back then seems to be exactly what Shivani is experiencing at
the moment. At that point I remember Fabian sug
Hello Maximilian,
Thanks for the suggestion. I will use it to check the program. But when I
create a PR for the same implementation with a test, I get the same error
on the Travis build as well. So what would be the solution for that?
Here is my PR https://github.com/apache/flink/pull/923
An
You can also set taskmanager.memory.fraction from within the IDE by
passing the corresponding configuration object to the LocalEnvironment via
its setConfiguration method. However, taskmanager.heap.mb is basically
the -Xmx value with which you start your JVM. Usually, you can set this in
y
Hi Shivani,
Flink doesn't have enough memory to perform a hash join. You need to
provide Flink with more memory. You can either increase the
"taskmanager.heap.mb" config variable or set "taskmanager.memory.fraction"
to some value greater than 0.7 and smaller than 1.0. The first config
variable all
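Taken together, the corresponding flink-conf.yaml entries would look roughly like this (the 2048 heap value is only an illustrative example, not a number from this thread; the fraction is within the 0.7 to 1.0 range suggested above):

```
taskmanager.heap.mb: 2048
taskmanager.memory.fraction: 0.8
```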