Also, still for 1), in conf/spark-defaults.conf you can set the following
properties to tune the driver's resources:
spark.driver.cores
spark.driver.memory
You can also pass them at submit time with --driver-memory and
--driver-cores (the latter only applies in cluster deploy mode).
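As a sketch, the driver properties could be set in conf/spark-defaults.conf like this (the values are placeholders, not recommendations):

```shell
# conf/spark-defaults.conf -- whitespace-separated key/value pairs
spark.driver.cores    2
spark.driver.memory   4g
```

These defaults apply to every application submitted from that installation unless overridden on the command line.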
--
For 1)
In standalone mode, you can increase each worker's resource allocation in
its local conf/spark-env.sh with the following variables:
SPARK_WORKER_CORES,
SPARK_WORKER_MEMORY
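A minimal conf/spark-env.sh sketch (the values are examples; size them to your machines):

```shell
# conf/spark-env.sh on each worker node
SPARK_WORKER_CORES=8      # total cores this worker offers to executors
SPARK_WORKER_MEMORY=16g   # total memory this worker offers to executors
```

Restart the worker after editing the file so the new limits are picked up.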
At application submit time, you can tune the resources allocated to each
executor with --executor-cores and --executor-memory.
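For example, a sketch of a submit command (the master URL, application class, and jar name are placeholders):

```shell
spark-submit \
  --master spark://master:7077 \
  --executor-cores 2 \
  --executor-memory 4g \
  --class com.example.MyApp \
  myapp.jar
```

The per-executor values must fit within the SPARK_WORKER_CORES / SPARK_WORKER_MEMORY limits configured on the workers, or the executors will not be scheduled.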