Hello community,

I want to run my Flink app on a cluster (Cloudera 5.4.4) with 3 nodes (each node is a PC
with an i7, 8 cores, and 16 GB RAM). Now I want to submit my Flink job on YARN (20 GB
RAM).

My script to deploy the Flink cluster on YARN:

export HADOOP_CONF_DIR=/etc/hadoop/conf/
./flink-0.9.0/bin/yarn-session.sh -n 1 -jm 10240 -tm 10240
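
For reference, my understanding of the yarn-session.sh flags (please correct me if I am
wrong): -n is the number of TaskManager containers, -jm the JobManager memory in MB, and
-tm the memory per TaskManager in MB. If that is right, a variant that spreads the
TaskManagers over all 3 nodes could look something like this (numbers only as a sketch):

# sketch: 3 TaskManagers with 5 GB each, smaller JobManager
./flink-0.9.0/bin/yarn-session.sh -n 3 -jm 1024 -tm 5120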

My script to submit the job is the following:

./flink-0.9.0/bin/flink run /home/marcel/Desktop/ma-flink.jar
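
I am not passing any options to flink run, so everything stays at its default. If the
per-job parallelism matters here, my understanding is that it could also be set
explicitly, for example something like:

# sketch: same job submitted with an explicit parallelism of 8
./flink-0.9.0/bin/flink run -p 8 /home/marcel/Desktop/ma-flink.jar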

Why does the Flink dashboard show only 5 GB of memory used for computing my job?

Maybe my configuration is not optimal?

Best regards,
Paul
