vm.swappiness=0? Some vendors recommend setting it to 0 (zero), but I've
seen that cause even the kernel to fail to allocate memory, which can
reboot the node. If that's what is happening here, set vm.swappiness to
5-10 and decrease spark.*.memory: your spark.driver.memory +
spark.executor.memory + OS overhead, etc. is likely far greater than the
amount of memory the node has.
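Rough numbers, assuming the Spark 1.3 on YARN defaults (the off-heap
overhead factor is about 7-10% with a 384MB floor, depending on version):
a 4G executor actually requests a ~4.4GB container, so even one executor
plus the OS, the HDFS DataNode, and the YARN NodeManager is already
pushing an 8GB node to its limit. Note too that
yarn.nodemanager.resource.memory-mb defaults to 8192MB, so YARN will
happily promise containers up to the node's entire physical memory. A
more conservative starting point for 8GB nodes might be something like:

  spark.driver.memory      2g
  spark.executor.memory    2g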
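For reference, checking and changing the setting is standard sysctl
usage (persist it in /etc/sysctl.conf so it survives reboots):

  sysctl vm.swappiness                 # show the current value
  sudo sysctl -w vm.swappiness=10      # apply immediately
  echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf    # persist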

-- 
Ruslan Dautkhanov

On Thu, Jun 4, 2015 at 8:59 AM, Chao Chen <kandy...@gmail.com> wrote:

> Hi all,
> I am new to Spark. I am trying to deploy HDFS (hadoop-2.6.0) and
> Spark-1.3.1 on four nodes, each with 8 cores and 8GB of memory. One
> node is configured as the head node running the masters, and the other
> three are workers.
>
> But when I try to run the PageRank workload from HiBench, it always
> causes a node to reboot in the middle of the job, for the Scala, Java,
> and Python versions alike. The MapReduce version from the same
> benchmark works fine.
>
> I also tried standalone deployment, got the same issue.
>
> My spark-defaults.conf:
> spark.master             yarn-client
> spark.driver.memory      4g
> spark.executor.memory    4g
> spark.rdd.compress       false
>
>
> The job submit script is:
>
> bin/spark-submit  --properties-file
> HiBench/report/pagerank/spark/scala/conf/sparkbench/spark.conf --class
> org.apache.spark.examples.SparkPageRank --master yarn-client
> --num-executors 2 --executor-cores 4 --executor-memory 4G --driver-memory
> 4G
> HiBench/src/sparkbench/target/sparkbench-4.0-SNAPSHOT-MR2-spark1.3-jar-with-dependencies.jar
> hdfs://discfarm:9000/HiBench/Pagerank/Input/edges
> hdfs://discfarm:9000/HiBench/Pagerank/Output 3
>
> What is the problem with my configuration, and how can I find the
> cause?
>
> Any help is welcome!
>
