Hi Paul,

when you run your Flink cluster on YARN, we cannot give the full amount of the allocated container memory to Flink. The reason is that YARN itself needs some of the memory as well. Since YARN is quite strict with containers that exceed their memory limit (the container is killed instantly), by default we reserve 0.25 of the container's memory for YARN.
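As a quick sanity check (assuming the default 0.25 cutoff and your -tm 10240 setting): 10240 MB * (1 - 0.25) = 7680 MB would be left for Flink. With a cutoff of 0.5 you would instead get 10240 MB * 0.5 = 5120 MB, which would roughly match the 5 GB you see in the dashboard.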
I cannot tell why in your case it is 0.5. Maybe you're using an old version of Flink. But you can control the memory fraction that is given to YARN using the configuration parameter yarn.heap-cutoff-ratio (see the sketch below the quoted message).

Cheers,
Till

On Tue, Jul 14, 2015 at 10:47 AM, Pa Rö <paul.roewer1...@googlemail.com> wrote:

> hello community,
>
> i want to run my flink app on a cluster (cloudera 5.4.4) with 3 nodes (each
> node has an i7 8-core with 16GB RAM). now i want to submit my flink job on
> yarn (20GB RAM).
>
> my script to deploy the flink cluster on yarn:
>
> export HADOOP_CONF_DIR=/etc/hadoop/conf/
> ./flink-0.9.0/bin/yarn-session.sh -n 1 -jm 10240 -tm 10240
>
> my script to submit the job is currently the following:
>
> ./flink-0.9.0/bin/flink run /home/marcel/Desktop/ma-flink.jar
>
> the flink dashboard shows only 5GB of memory being used for my job?
>
> maybe my configuration is not optimal??
>
> best regards,
> paul
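For completeness, here is a sketch of how you could lower the cutoff yourself (the 0.15 below is only an illustrative value, not a recommendation; pick whatever still leaves YARN enough headroom):

# in conf/flink-conf.yaml, set before starting the YARN session
yarn.heap-cutoff-ratio: 0.15

# then deploy the session as before
export HADOOP_CONF_DIR=/etc/hadoop/conf/
./flink-0.9.0/bin/yarn-session.sh -n 1 -jm 10240 -tm 10240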