I want to look at porting a Hadoop problem to Spark. Eventually I want to
run on a Hadoop 2.0 cluster, but while I am learning and porting I want to
run small problems on my Windows box.
I installed Scala and sbt.
I downloaded Spark, and in the Spark directory I can run
mvn -Phadoop-0.23 -Dhadoop.version=0.23.7 -DskipTests clean package
which succeeds.
I tried
sbt/sbt assembly
which fails with errors.
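(A guess on my part: sbt/sbt looks like a bash wrapper script, so it would
not run in a plain Windows shell anyway. Since I already have sbt installed,
perhaps the equivalent on Windows is to run it directly from the Spark
directory:

sbt assembly

Is that the intended approach on Windows?)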

In the documentation
<https://spark.apache.org/docs/latest/spark-standalone.html> it says

*Note:* The launch scripts do not currently support Windows. To run a Spark
cluster on Windows, start the master and workers by hand.

but gives no indication of how to do this.
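From reading the Linux scripts, my best guess at "by hand" is to invoke the
launcher classes directly via the spark-class script that ships in bin (the
class names and the default master port 7077 come from the standalone docs;
whether bin\spark-class.cmd exists and works on Windows in my build is
exactly what I cannot confirm):

bin\spark-class.cmd org.apache.spark.deploy.master.Master
bin\spark-class.cmd org.apache.spark.deploy.worker.Worker spark://<master-host>:7077

where <master-host> would be my own machine. Is that right?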

I can build and run samples (say JavaWordCount) up to the point where they
fail because a master cannot be found (none is running).
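In the meantime I assume I can sidestep the cluster entirely by passing a
local master URL as the example's first argument, something like (local[2]
is the documented in-process master URL with two worker threads;
run-example.cmd and the argument order are my guesses for this version):

bin\run-example.cmd org.apache.spark.examples.JavaWordCount local[2] README.md

That might let the samples run, but it doesn't answer the standalone-cluster
question.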

I want to know how to get a Spark master and a slave or two running on my
Windows box so I can look at the samples and start playing with Spark.

Does anyone have a Windows instance running?
Please DON'T SAY I SHOULD RUN LINUX! If it is supposed to work on Windows,
someone should have tested it and be willing to state how.
