We actually have work in progress to reduce the memory fragmentation, which
should solve this issue.
I hope it will be ready for the 0.9 release.
On Thu, May 14, 2015 at 8:46 AM, Andra Lungu wrote:
Hi Yi,
The problem here, as Stephan already suggested, is that you have a very
large job. Each complex operation (join, coGroup, etc.) needs its own
share of memory.
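To see why a join needs its own memory budget, here is a toy, pure-Java sketch (not Flink code) of a hash join: the entire build side must sit in memory before the probe side can be streamed against it. Flink's real join uses managed memory segments and can spill, but the memory pressure comes from the same place.

```java
import java.util.*;

// Toy in-memory hash join, illustrative only. The build phase holds one
// whole input in memory, which is why Flink assigns each such operator
// a share of the TaskManager's managed memory.
public class ToyHashJoin {
    // Inputs are (key, value) pairs encoded as int[2]; output is
    // "key:leftValue,rightValue" strings for every matching pair.
    public static List<String> join(List<int[]> left, List<int[]> right) {
        // Build phase: buffer the whole left input in a hash table.
        Map<Integer, List<Integer>> build = new HashMap<>();
        for (int[] kv : left) {
            build.computeIfAbsent(kv[0], k -> new ArrayList<>()).add(kv[1]);
        }
        // Probe phase: stream the right input against the hash table.
        List<String> out = new ArrayList<>();
        for (int[] kv : right) {
            for (Integer v : build.getOrDefault(kv[0], Collections.emptyList())) {
                out.add(kv[0] + ":" + v + "," + kv[1]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<int[]> left = Arrays.asList(new int[]{1, 10}, new int[]{2, 20});
        List<int[]> right = Arrays.asList(new int[]{1, 100}, new int[]{3, 300});
        System.out.println(join(left, right)); // only key 1 matches: [1:10,100]
    }
}
```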
In Flink, for the test cases at least, they restrict the TaskManagers'
memory to just 80MB in order to run multiple tests in parallel on Travis.
You are probably starting the system with very little memory, or you have
an immensely large job.
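If the TaskManagers really are starting with too little memory, raising the heap and managed-memory settings in flink-conf.yaml should help. A sketch with the 0.9-era configuration keys (values are illustrative, please check them against your Flink version):

```yaml
# flink-conf.yaml (illustrative values)
taskmanager.heap.mb: 2048          # JVM heap size per TaskManager, in MB
taskmanager.memory.fraction: 0.7   # fraction of the heap given to Flink's managed memory
```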
Have a look here; I think this discussion on the user mailing list a few
days ago is about the same issue:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Memory-exception-td1206.h
Hello,
Thanks @Stephan for the explanations. Even with this information, though, I
still have no clue how to trace the error.
Now, the exception stack in the *cluster mode* always looks like this
(even if I set env.setParallelism(1)):
org.apache.flink.runtime.client.JobExecutionException: Job execu
Hi!
The *collection execution* runs the program simply as functions over Java
collections. It is single threaded, always local, and does not use any
Flink memory management, serialization, or the like. It is designed to be very
lightweight and is tailored towards very small problems.
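Conceptually, collection execution just applies the user functions one after another to in-memory Java collections. A pure-Java sketch of the idea (no Flink APIs involved, single-threaded, no serialization):

```java
import java.util.*;
import java.util.stream.*;

// Illustrative sketch, not Flink code: collection execution runs the
// user functions directly over Java collections, like plain method calls.
public class CollectionExecutionSketch {
    public static List<Integer> run(List<Integer> input) {
        return input.stream()
                .map(x -> x * 2)     // stands in for a user "map" function
                .filter(x -> x > 2)  // stands in for a user "filter" function
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList(1, 2, 3))); // [4, 6]
    }
}
```

Because nothing is serialized and no managed memory is involved, errors that only show up in cluster mode (like memory-segment exhaustion) will not reproduce here.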
The *cluster mode*
Hello,
Thanks Andra for the Gaussian sequence generation. It is a little
tricky, so I will just leave this part for future work.
I ran into another problem in the AffinityPropagation algorithm. I wrote
some test code for it:
https://github.com/joey001/flink/blob/ap_add/flink-staging/flink-gelly/src/test/ja