If you want to use 2g of memory on each worker, you can simply export
SPARK_WORKER_MEMORY=2g inside your spark-env.sh on all machines in the
cluster.
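
For example, a minimal spark-env.sh might look like the sketch below (the
2g value is just an assumption here; size it to what your instances
actually have free):

    # spark-env.sh -- place on every machine in the cluster
    # Total memory the Worker daemon may hand out to executors on this node.
    export SPARK_WORKER_MEMORY=2g
    # Memory each executor requests; must not exceed SPARK_WORKER_MEMORY.
    export SPARK_EXECUTOR_MEMORY=2g

Note that the worker daemons only read spark-env.sh at startup, so you
need to restart them (e.g. sbin/stop-all.sh followed by sbin/start-all.sh
on the master) before the new value shows up in the web UI.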

Thanks
Best Regards

On Wed, Apr 8, 2015 at 7:27 AM, Jia Yu <jia...@asu.edu> wrote:

> Hi guys,
>
> Currently I am running a Spark program on Amazon EC2. Each worker has
> slightly less than 2 GB of memory.
>
> By default, each worker is allocated 976 MB of memory, as the table below
> from the Spark web UI shows. I know this value comes from (total memory
> minus 1 GB), but I want more than 1 GB on each of my workers.
>
> Address   State   Cores        Memory
> ...       ALIVE   1 (0 Used)   976.0 MB (0.0 B Used)
>
> Based on the instructions on the Spark website, I put "export
> SPARK_WORKER_MEMORY=1g" in spark-env.sh, but it doesn't work. BTW, I can
> set "SPARK_EXECUTOR_MEMORY=1g" and that works.
>
> Can anyone help me? Is there a requirement that each worker must reserve
> 1 GB of memory for itself, aside from the memory available to Spark?
>
> Thanks,
> Jia
>
