Those pages include instructions for running locally:

"Note that all of the sample programs take a <master> parameter specifying the cluster URL to connect to. This can be a URL for a distributed cluster, or local to run locally with one thread, or local[N] to run locally with N threads. You should start by using local for testing."

Pass "local[N]" as the master and it will run locally with N threads. Alternatively, set up a standalone "cluster" as you normally would, but run the master and one slave locally.
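For example, a minimal self-contained sketch in Scala (the app name and data here are illustrative, not from the original thread):

```scala
import org.apache.spark.SparkContext

object LocalDemo {
  def main(args: Array[String]): Unit = {
    // "local[4]" runs the driver and all computation inside one JVM
    // using 4 worker threads; plain "local" uses a single thread.
    val sc = new SparkContext("local[4]", "LocalDemo")

    val nums  = sc.parallelize(1 to 1000)
    val evens = nums.filter(_ % 2 == 0).count()
    println("even count: " + evens)

    sc.stop()
  }
}
```

Nothing else needs to change for local mode; the same code runs against a real cluster if you swap the master URL.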

-Ewen

March 16, 2014 at 11:42 PM
Sorry, I did not explain myself correctly.
I know how to run Spark; the question is how to instruct Spark to do all of the computation on a single machine.
I was trying to convert the code to Scala, but I am missing some of Spark's methods, such as reduceByKey.
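(For the archives: a common reason reduceByKey seems to be "missing" in the Scala API is that it is defined on PairRDDFunctions, and only appears on an RDD of pairs via an implicit conversion, which in Spark of this era requires importing SparkContext._. A minimal sketch, assuming that is the issue; names and data are illustrative:)

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // implicit conversion that adds reduceByKey to pair RDDs

object ReduceByKeyDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "ReduceByKeyDemo")

    // reduceByKey is available here because this is an RDD[(String, Int)]
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    val sums  = pairs.reduceByKey(_ + _)

    sums.collect().foreach(println)
    sc.stop()
  }
}
```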

Eran

--
Eran | CTO 
March 16, 2014 at 10:25 PM
Please follow the instructions at http://spark.apache.org/docs/latest/index.html and http://spark.apache.org/docs/latest/quick-start.html to get started on a local machine.






March 16, 2014 at 2:38 PM
Hi,

I know it is probably not the intended purpose of Spark, but the syntax is easy and cool...
I need to run some Spark-like code in memory on a single machine. Any pointers on how to optimize it to run on only one machine?


--
Eran | CTO 
