Hi Nick and Abel,
Looks like you are requesting 8g for your executors, but only allowing 2g
on the workers. You should set SPARK_WORKER_MEMORY to at least 8g if you
intend to use that much memory in your application. Also, you shouldn't
have to set SPARK_DAEMON_JAVA_OPTS; you can just set
"spark.e
Thank you Abel,
It seems that your advice worked. Even though I get a warning that this is a
deprecated way of setting Spark memory (it says I should set
spark.driver.memory instead), the memory is increased.
Again, thank you,
Nick
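For reference, the non-deprecated equivalents look roughly like this (a sketch
only; the 4g value mirrors the SPARK_MEM figure below, and spark.driver.memory
only takes effect if it is in place before the driver JVM starts, e.g. via
spark-submit --driver-memory, so setting it from inside an already-running
driver process will not enlarge that JVM's heap):

import org.apache.spark.SparkConf;

public class MemorySettingsExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("MemorySettingsExample")
        // Per-executor heap on the workers; replaces the executor side of SPARK_MEM.
        .set("spark.executor.memory", "4g")
        // Driver heap; replaces the driver side of SPARK_MEM. Only effective if the
        // driver JVM has not started yet (e.g. pass spark-submit --driver-memory 4g).
        .set("spark.driver.memory", "4g");

    // Print the resulting configuration.
    System.out.println(conf.toDebugString());
  }
}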
On Mon, Jul 21, 2014 at 9:42 AM, Abel Coronado Irue wrote:
Hi Nick
Maybe if you use:
export SPARK_MEM=4g
On Mon, Jul 21, 2014 at 11:35 AM, Nick R. Katsipoulakis wrote:
> Hello,
>
> Currently I work on a project in which:
>
> I spawn an Apache Spark MLlib job, in standalone mode, from a running
> Java process.
>
> In the code of the Spark job I have the following code: [...]
Hello,
Currently I work on a project in which:
I spawn an Apache Spark MLlib job, in standalone mode, from a running
Java process.
In the code of the Spark job I have the following code:
SparkConf sparkConf = new SparkConf().setAppName("SparkParallelLoad");
sparkConf.set("spark.executor.memory", "8g");