Hi all,
I am new to Spark and have a problem: no computations run on the
workers/slave servers in standalone cluster mode.
The Spark version is 1.6.0 and the environment is CentOS. I am running the
example code, e.g.
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/mllib/LinearRegression.scala#L117.
What I did, following
http://spark.apache.org/docs/latest/spark-standalone.html:
1. set up the slaves in ./conf/slaves
2. set up the spark-env.sh file
3. run sbin/start-all.sh
4. run the test program with spark-submit
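In case it helps pin down the problem, a common cause of this symptom is submitting without pointing spark-submit at the cluster master, so the job runs locally on the driver instead. Below is a sketch of what step 4 might look like; the host name "master-host" and the examples jar path are placeholders that depend on your cluster and build:

```shell
# Submit to the standalone cluster master instead of running locally.
# "master-host" is a placeholder: use the spark://... URL shown at the
# top of the master's web UI (http://master-host:8080 by default).
# The examples jar name varies with the Hadoop build you downloaded.
./bin/spark-submit \
  --class org.apache.spark.examples.mllib.LinearRegression \
  --master spark://master-host:7077 \
  lib/spark-examples-1.6.0-hadoop2.6.0.jar \
  data/mllib/sample_linear_regression_data.txt
```

If the workers still show no activity, the master web UI should at least list them as ALIVE and show whether the application was accepted and given executors.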
Could anyone give some suggestions on this, or a link explaining how to set
this up?
Many thanks,
Junjie Qian