Thanks Aaron !!
On Mon, Mar 24, 2014 at 10:58 PM, Aaron Davidson wrote:
> 1. Not sure on this, I don't believe we change the defaults from Java.
>
> 2. SPARK_JAVA_OPTS can be used to set the various Java properties (other
> than memory heap size itself)
>
> 3. If you want to have 8 GB executor
1. Not sure on this, I don't believe we change the defaults from Java.
2. SPARK_JAVA_OPTS can be used to set the various Java properties (other
than memory heap size itself); see the sketch after this list.
3. If you want to have 8 GB executors then, yes, only two can run on each
16 GB node. (In fact, you should also keep a significant amount of memory
free for the OS.)
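Since item 2 is about where JVM properties go, here is a minimal
conf/spark-env.sh sketch for the Spark 0.8.x-era variables discussed in this
thread; the flags and sizes are hypothetical examples, not recommendations:

    # conf/spark-env.sh (Spark 0.8.x) -- example values are hypothetical
    # SPARK_JAVA_OPTS carries JVM properties and flags, but not the heap size:
    export SPARK_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails"
    # Heap given to Spark's JVMs; with 8g here, a 16 GB node has room for
    # only two such executors, plus headroom for the OS:
    export SPARK_MEM=8g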
Thanks Aaron and Sean...
Setting SPARK_MEM finally worked. But I have a small doubt.
1) What is the default value allocated for the JVM and for the heap space
used by the garbage collector?
2) Usually we set 1/3 of the total memory for the heap. So what should the
practice be for Spark processes? Where & how should it be set?
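For reference, a hedged sketch of what "setting SPARK_MEM" looks like, plus
one way to answer question 1 by asking the JVM for its own defaults (this
assumes a HotSpot JVM; the value below is hypothetical):

    # conf/spark-env.sh -- hypothetical value
    export SPARK_MEM=4g    # heap for Spark's JVMs in 0.8.x

    # Question 1: the JVM picks its own default heap sizes; print them with:
    java -XX:+PrintFlagsFinal -version | grep -i heapsize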
PS you have a typo in "DEAMON" - it's DAEMON. Thanks, Latin.
On Mar 24, 2014 7:25 AM, "Sai Prasanna" wrote:
> Hi All !! I am getting the following error in interactive spark-shell
> [0.8.1]
>
>
> *org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed more
> than 0 times; aborting job java.lang.OutOfMemoryError: GC overhead limit
> exceeded*
To be clear on what your configuration will do:
- SPARK_DAEMON_MEMORY=8g will make your standalone master and worker
schedulers have a lot of memory. These do not impact the actual amount of
useful memory given to executors or your driver, however, so you probably
don't need to set this.
- SPARK_W…
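The second bullet is cut off above; it presumably continues with
SPARK_WORKER_MEMORY. A minimal sketch of the distinction Aaron is drawing,
with hypothetical values:

    # conf/spark-env.sh -- hypothetical values
    # Heap for the standalone master/worker daemons themselves (the
    # schedulers); the default is small and usually fine, so 8g is wasted:
    export SPARK_DAEMON_MEMORY=512m
    # Total memory a worker may hand out to executors on its node
    # (assumed continuation of the truncated bullet):
    export SPARK_WORKER_MEMORY=14g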
Hi All !! I am getting the following error in interactive spark-shell
[0.8.1]
*org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed more than
0 times; aborting job java.lang.OutOfMemoryError: GC overhead limit
exceeded*
But I had set the following in the spark-env.sh and hadoop-env.sh…
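Given the "DEAMON" typo called out earlier in the thread, one plausible cause
(an assumption, since the poster's actual settings are cut off): a misspelled
variable in spark-env.sh is silently ignored, so the intended memory never
takes effect:

    # spark-env.sh is just sourced shell, so a misspelled variable raises
    # no error -- it simply has no effect:
    export SPARK_DEAMON_MEMORY=8g    # typo ("DEAMON"): ignored by Spark
    export SPARK_DAEMON_MEMORY=8g    # correct spelling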