Hi, I'm running the JavaKafkaWordCount example on a standalone cluster. I want to allocate 1600 MB of memory to each slave node, so I added the following line to spark/conf/spark-env.sh:
SPARK_WORKER_MEMORY=1600m

But the logs on the slave nodes look like this:

Spark Executor Command: "/usr/java/latest/bin/java" "-cp" ":/~path/spark/conf:/~path/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend"

The executor heap (-Xms512M -Xmx512M) is still the default 512 MB, not 1600 MB. I don't know how to make SPARK_WORKER_MEMORY take effect. Can anyone help me? Many thanks in advance.

Yunmeng
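In case it helps, here is a minimal sketch of the spark-env.sh change I'm describing. The comments are my own understanding from the standalone-mode docs; only the SPARK_WORKER_MEMORY line is what I actually added:

```shell
# spark/conf/spark-env.sh -- sourced by the standalone worker start scripts.
# As I understand it, SPARK_WORKER_MEMORY caps the total memory a worker
# may hand out to executors on that node; it may not be the same thing as
# the per-executor JVM heap (-Xmx) that shows up in the log above.
SPARK_WORKER_MEMORY=1600m
```

The web UI does show the worker with 1600 MB available, so the variable itself seems to be read; it's only the executor heap that stays at 512 MB.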