On Mon, Jan 19, 2015 at 6:29 AM, Akhil Das wrote:
> It's the executor memory (spark.executor.memory) which you can set while
> creating the spark context. By default it uses 0.6% of the executor memory
(I think you mean it uses 0.6, i.e. 60%, of the executor memory, not 0.6%.)
Akhil,
Ah, very good point. I guess "SET spark.sql.shuffle.partitions=1024" should
do it.
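Something like this, roughly (untested; assumes a Spark 1.x SQLContext called
sqlContext, and "my_table" is just a placeholder for whatever table is
registered):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("groupby-oom"))
    val sqlContext = new SQLContext(sc)

    // Bump the number of post-shuffle partitions before running the GROUP BY,
    // so each reduce task handles a smaller slice of the data.
    sqlContext.sql("SET spark.sql.shuffle.partitions=1024")
    // equivalent: sqlContext.setConf("spark.sql.shuffle.partitions", "1024")

    val grouped = sqlContext.sql(
      "SELECT key, COUNT(*) AS cnt FROM my_table GROUP BY key")
    grouped.collect()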
Alex
On Sun, Jan 18, 2015 at 10:29 PM, Akhil Das wrote:
> It's the executor memory (spark.executor.memory) which you can set while
> creating the spark context. By default it uses 0.6% of the executor memory
It's the executor memory (spark.executor.memory) which you can set while
creating the spark context. By default it uses 0.6% of the executor memory
for Storage. Now, to show some memory usage, you need to cache (persist)
the RDD. Regarding the OOM Exception, you can increase the level of
parallelism.
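For example, something along these lines (illustrative only; the input path
and the partition count are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setAppName("cache-example"))

    // Persisting the RDD is what makes it show up under Storage in the web UI.
    val lines = sc.textFile("hdfs:///path/to/data")
    lines.persist(StorageLevel.MEMORY_ONLY)

    // Higher parallelism means more, smaller shuffle partitions, so each
    // task needs less memory and is less likely to hit the OOM.
    val counts = lines.map(l => (l, 1)).reduceByKey(_ + _, 1024)
    counts.count()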
All,
I'm getting out of memory exceptions in SparkSQL GROUP BY queries. I have
plenty of RAM, so I should be able to brute-force my way through, but I
can't quite figure out what memory option affects what process.
My current memory configuration is the following:
export SPARK_WORKER_MEMORY=8397
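For reference, a minimal sketch of how SPARK_WORKER_MEMORY relates to the
per-application options mentioned in the replies above (the 8g figure is
assumed, not taken from this thread):

    // SPARK_WORKER_MEMORY (spark-env.sh) only caps how much a standalone worker
    // may hand out to executors; the heap each executor actually gets is set by
    // the application when the SparkContext is created:
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("sparksql-groupby")
      .set("spark.executor.memory", "8g")          // per-executor JVM heap, must fit under SPARK_WORKER_MEMORY
      .set("spark.storage.memoryFraction", "0.6")  // the 0.6 (60%) share reserved for cached RDDs
    val sc = new SparkContext(conf)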