Thank you, Patrick!
I am planning to switch to 1.0 now.
By way of feedback - I used Andrew's suggestion and found that it does
exactly that: it sets the executor JVM heap and nothing else. The workers
have already been started by the time the shell starts, and yet the shell is
still able to control the executor JVM heap.
Thank you again,
Oleg
In 1.0+ you can just pass the --executor-memory flag to ./bin/spark-shell.
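For example, an invocation along these lines (the master URL and 8g are
illustrative placeholders):

./bin/spark-shell --master spark://mymaster:7077 --executor-memory 8g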
On Fri, Jun 6, 2014 at 12:32 AM, Oleg Proudnikov wrote:
> Thank you, Hassan!
>
>
> On 6 June 2014 03:23, hassan wrote:
>>
>> just use -Dspark.executor.memory=
Thank you, Hassan!
On 6 June 2014 03:23, hassan wrote:
> just use -Dspark.executor.memory=
Thank you, Andrew!
On 5 June 2014 23:14, Andrew Ash wrote:
> Oh my apologies that was for 1.0
>
> For Spark 0.9 I did it like this:
>
> MASTER=spark://mymaster:7077 SPARK_MEM=8g ./bin/spark-shell -c $CORES_ACROSS_CLUSTER
>
> The downside of this, though, is that SPARK_MEM also sets the driver's JVM
> to be 8g, rather than just the executors.
just use -Dspark.executor.memory=
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Setting-executor-memory-when-using-spark-shell-tp7082p7103.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
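For context, on 0.9 that system property would be supplied to the JVM that
launches the shell, for example via the SPARK_JAVA_OPTS environment variable;
a minimal sketch, with 8g as an illustrative placeholder value:

SPARK_JAVA_OPTS="-Dspark.executor.memory=8g" ./bin/spark-shell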
Oh my apologies that was for 1.0
For Spark 0.9 I did it like this:
MASTER=spark://mymaster:7077 SPARK_MEM=8g ./bin/spark-shell -c $CORES_ACROSS_CLUSTER
The downside of this, though, is that SPARK_MEM also sets the driver's JVM to
be 8g, rather than just the executors. I think this is the reason SPARK_MEM
was deprecated.
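In 1.0+ the two heaps can be sized independently, which avoids this downside;
a sketch with placeholder values:

./bin/spark-shell --driver-memory 2g --executor-memory 8g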
Thank you, Andrew,
I am using Spark 0.9.1 and tried your approach like this:
bin/spark-shell --driver-java-options "-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
I get
bad option: '--driver-java-options'
There must be something different in my setup. Any ideas?
Thank you again,
Oleg
Hi Oleg,
I set the size of my executors on a standalone cluster when using the shell
like this:
./bin/spark-shell --master $MASTER --total-executor-cores $CORES_ACROSS_CLUSTER --driver-java-options "-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
It doesn't seem particularly clean, but it works.
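For concreteness, here is roughly what that looks like with illustrative
values substituted for the variables (cluster URL, core count, and memory
size are placeholders):

./bin/spark-shell --master spark://mymaster:7077 --total-executor-cores 16 --driver-java-options "-Dspark.executor.memory=4g"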