Sure, thanks Akhil.
A further question: is the local file system (file:///) not supported in a
standalone cluster?



bit1...@163.com
 
From: Akhil Das
Date: 2015-02-18 17:35
To: bit1...@163.com
CC: user
Subject: Re: Problem with 1 master + 2 slaves cluster
Since the cluster is standalone, you are better off reading/writing to HDFS 
instead of the local filesystem.
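
For example, here is a minimal sketch of the same word count run against HDFS. 
The NameNode URL hdfs://192.168.26.131:9000 and the /user/hadoop paths are 
assumptions to adjust for your cluster, and the input file would first be 
copied in with something like: hdfs dfs -put /home/hadoop/history.txt.used.byspark /user/hadoop/ 
(With a file:/// output path, each executor presumably writes its partition to 
its own local disk, so the final job commit on the driver never sees the task 
files and only _temporary is left behind.)

// Assumed NameNode address and paths; adjust to your setup.
val input = "hdfs://192.168.26.131:9000/user/hadoop/history.txt.used.byspark"
val output = "hdfs://192.168.26.131:9000/user/hadoop/output"
// Split lines into words and count occurrences across 5 reduce partitions.
val counts = sc.textFile(input, 7).
  flatMap(_.split(" ")).
  map((_, 1)).
  reduceByKey(_ + _, 5)
// Order by descending count, then write (word, count) pairs to HDFS.
counts.map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).
  saveAsTextFile(output)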

Thanks
Best Regards

On Wed, Feb 18, 2015 at 2:32 PM, bit1...@163.com <bit1...@163.com> wrote:
But I am able to run the SparkPi example:
./run-example SparkPi 1000 --master spark://192.168.26.131:7077

Result: Pi is roughly 3.14173708



bit1...@163.com
 
From: bit1...@163.com
Date: 2015-02-18 16:29
To: user
Subject: Problem with 1 master + 2 slaves cluster
Hi sparkers,
I set up a Spark (1.2.1) cluster with 1 master and 2 slaves and then started 
them up; everything looks to be running normally.
On the master node, I run spark-shell with the following steps:

bin/spark-shell --master spark://192.168.26.131:7077
scala> val rdd = sc.textFile("file:///home/hadoop/history.txt.used.byspark", 7)
scala> rdd.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _, 5).
         map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).
         saveAsTextFile("file:///home/hadoop/output")
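
(As a side note, the swap/sortByKey/swap sequence above can be written more 
directly with RDD.sortBy, which has been available since Spark 1.0; a minimal 
equivalent sketch:)

// Same result: (word, count) pairs ordered by descending count.
rdd.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _, 5).
  sortBy(_._2, ascending = false).
  saveAsTextFile("file:///home/hadoop/output")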

After the application finishes running, there is no word-count output. An 
output directory does appear on each slave node, but it contains only a 
"_temporary" subdirectory.

Any ideas? Thanks!




