Oh, I also forgot to mention:
I start the master and the workers (by calling ./sbin/start-all.sh), and then start the shell:
MASTER=spark://localhost:7077 ./bin/spark-shell
Then I get the exceptions...
Thanks
Hi,
I'm running my program on a single large-memory, many-core machine (64 cores, 1TB RAM). But to avoid having huge JVMs, I want to use several processes / worker instances, each using 8 cores (i.e. using SPARK_WORKER_INSTANCES), roughly as in the sketch below.
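For reference, these standalone-worker settings would live in conf/spark-env.sh; a minimal sketch (the per-worker memory value is only an assumption):

SPARK_WORKER_INSTANCES=2   # number of worker processes on this machine
SPARK_WORKER_CORES=8       # cores each worker may use
SPARK_WORKER_MEMORY=64g    # memory each worker can hand out to executors

The master and workers need to be restarted (./sbin/stop-all.sh, then ./sbin/start-all.sh) for changes here to take effect.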
When I use 2 worker instances, everything works fine, but when I try
Thanks for your answer, yxzhao, but setting SPARK_MEM doesn't solve the problem.
I also understand that setting SPARK_MEM is equivalent to calling SparkConf.set("spark.executor.memory", ...), which I already do.
Any additional advice would be highly appreciated.
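For what it's worth, the driver-side configuration referred to above would look roughly like this (a sketch only; the app name, master URL, and the memory and core figures are assumptions):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("LoadTest")                 // hypothetical app name
  .setMaster("spark://localhost:7077")
  .set("spark.executor.memory", "64g")    // assumed per-executor memory
  .set("spark.cores.max", "16")           // assumed cap on total cores claimed
val sc = new SparkContext(conf)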
Hello,
I'm trying to run a simple test program (sketched below) that loads a large file (~12.4GB) into the memory of a single many-core machine.
The machine I'm using has more than enough memory (1TB RAM) and 64 cores
(of which I want to use 16 for worker threads).
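A minimal sketch of such a test, run from the spark-shell (which already provides sc); the file path is only a placeholder:

import org.apache.spark.storage.StorageLevel

// Read the ~12.4GB text file and pin it in memory; count() forces the read.
val lines = sc.textFile("/data/big-file.txt").persist(StorageLevel.MEMORY_ONLY)
println("lines: " + lines.count())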
Even though I set both the executor memory (spark.executor.memory)