Hi Konstantin,

Flink uses managed memory only for its internal processing (sorting, hash
tables, etc.). If your user code allocates too much memory, the job can
still fail with an OOME. This can also happen for large broadcast sets.
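
If the heap turns out to be the bottleneck, you can shift the balance
between managed memory and the heap that is left for user code. A sketch
of the relevant flink-conf.yaml entries (the values are illustrative and
need to be adapted to your setup):

  # flink-conf.yaml -- illustrative values, adapt to your instances
  taskmanager.heap.mb: 12288         # total JVM heap per TaskManager
  taskmanager.memory.fraction: 0.5   # share of the free heap used as managed memory
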
Can you check how much memory the JVM allocated and how much of it was
allocated as managed memory? You'll find that information in the
TaskManager log file.
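
The TaskManager logs how its memory is split up when it starts. Something
like the following should surface those lines (the log file name pattern
is an assumption, adapt it to your installation):

  # hypothetical log path and name, adjust to your setup
  grep -i "memory" log/flink-*-taskmanager-*.log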

Cheers,
Fabian

2015-10-08 17:58 GMT+02:00 KOSTIANTYN Kudriavtsev <
kudryavtsev.konstan...@gmail.com>:

> Hi guys,
>
> I'm running Flink on EMR with 2 m3.xlarge instances (16 GB RAM each) and
> trying to process 3.8 GB of CSV data from S3. I'm surprised by the fact
> that Flink failed with an OutOfMemoryError: Java heap space.
>
> I tried to find the reason:
> 1) identified the TaskManager process with: ps aux | grep TaskManager
> 2) then built a heap histogram:
> $ jmap -histo:live 19648 | head -n23
>  num     #instances         #bytes  class name
> ----------------------------------------------
>    1:        131018     3763501304  [B
>    2:         61022        7820352  <methodKlass>
>    3:         61022        7688456  <constMethodKlass>
>    4:          4971        5454408  <constantPoolKlass>
>    5:          4966        4582232  <instanceKlassKlass>
>    6:          4169        3003104  <constantPoolCacheKlass>
>    7:         15696        1447168  [C
>    8:          1291         638824  [Ljava.lang.Object;
>    9:          5318         506000  java.lang.Class
>
>
> Do you have any idea what the reason could be and how it can be fixed?
> Does Flink use off-heap memory?
>
>
> Thank you,
> Konstantin Kudryavtsev
>
