When you start the master with start-master.sh, it starts only the master
daemon. It does not start the slaves/workers!

You need to start the slaves with start-slaves.sh.

start-slaves.sh reads the file $SPARK_HOME/conf/slaves to get the list of
worker nodes and then starts a worker on each of those nodes. You can see
all of this in the Spark web UI.
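For example (a minimal sketch; the worker hostnames are placeholders):

    # on the master node
    $SPARK_HOME/sbin/start-master.sh    # starts only the master daemon

    # start a worker on every host listed in $SPARK_HOME/conf/slaves
    $SPARK_HOME/sbin/start-slaves.sh

with a $SPARK_HOME/conf/slaves file such as:

    # one worker host per line
    worker1.example.com
    worker2.example.com

The master web UI (http://<master-host>:8080 by default) will then list
each registered worker.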


Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 30 May 2016 at 10:15, Ian <psilonl...@gmail.com> wrote:

> Normally, when you start the master, the slaves should also be started
> automatically. This, however, presupposes that you've configured the
> slaves.
> In the $SPARK_HOME/conf directory there should be a slaves or
> slaves.template file. If it only contains localhost, then you have not set
> up any worker nodes.
>
> Also note that SSH from the master to the slaves must be enabled for the
> user that runs the Thrift server (a sketch of setting this up follows the
> quoted message).
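
On the SSH point above, a minimal sketch of enabling passwordless SSH from
the master to each worker, using the standard OpenSSH tools and the
placeholder hostnames from the earlier example:

    # run on the master, as the user that starts the Spark daemons
    ssh-keygen -t rsa                # accept the defaults, empty passphrase
    ssh-copy-id worker1.example.com
    ssh-copy-id worker2.example.com

    # verify: should print the worker's hostname with no password prompt
    ssh worker1.example.com hostname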
