1. Up to you; you can add either the internal IP or the external IP. It won't
be a problem as long as they are on the same network.

2. If you only want to start a particular slave, you can run:

sbin/start-slave.sh <worker#> <master-spark-URL>
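
For example, a minimal invocation could look like this (the worker number and
master host below are placeholders, not values from this thread):

sbin/start-slave.sh 1 spark://<master-internal-ip>:7077

Run it on the machine that should host the new worker; 7077 is the master's
default port.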



Thanks
Best Regards

On Thu, May 28, 2015 at 1:52 PM, Nizan Grauer <ni...@windward.eu> wrote:

> hi,
>
> thanks for your answer!
>
> I have a few more:
>
> 1) The file /root/spark/conf/slaves has the full DNS names of the servers
> (ec2-52-26-7-137.us-west-2.compute.amazonaws.com); did you add the
> internal IPs there?
> 2) You call start-all. Isn't that too aggressive? Say I have 20 slaves up
> and I want to add one more; why should we stop the entire cluster for
> that?
>
> thanks, nizan
>
> On Thu, May 28, 2015 at 10:19 AM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
>
>> I do it this way:
>>
>> - Launch a new instance by clicking on an existing slave instance and
>> choosing *launch more like this*
>> - Once it's launched, ssh into it and add the master's public key to
>> .ssh/authorized_keys
>> - Add the slave's internal IP to the master's conf/slaves file
>> - Run sbin/start-all.sh and the new slave will show up along with the
>> other slaves (a rough sketch of these steps follows).
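>>
>> A minimal sketch of those steps, assuming the new slave's internal IP is
>> 10.0.0.21 and the master's public key was copied over as
>> master_id_rsa.pub (both are placeholder assumptions):
>>
>> # on the new slave: allow the master to ssh in
>> cat master_id_rsa.pub >> ~/.ssh/authorized_keys
>>
>> # on the master: register the slave and start the workers
>> echo "10.0.0.21" >> /root/spark/conf/slaves
>> /root/spark/sbin/start-all.sh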
>>
>>
>>
>> Thanks
>> Best Regards
>>
>> On Thu, May 28, 2015 at 12:29 PM, nizang <ni...@windward.eu> wrote:
>>
>>> hi,
>>>
>>> I'm working with a Spark standalone cluster on EC2, and I'm having
>>> problems resizing the cluster (meaning adding or removing slaves).
>>>
>>> In the basic EC2 scripts
>>> (http://spark.apache.org/docs/latest/ec2-scripts.html), there's only a
>>> script for launching the cluster, not for adding slaves to it. On the
>>> spark-standalone page
>>> (http://spark.apache.org/docs/latest/spark-standalone.html#cluster-launch-scripts),
>>> I can see only options for stopping and starting slaves, not adding
>>> them.
>>>
>>> What I do now (as a bad workaround...) is the following (sketched after
>>> the steps):
>>>
>>> 1) Go to the EC2 UI and create an image from the current slave
>>> 2) Launch a new instance based on this image
>>> 3) Copy the public DNS of this slave
>>> 4) SSH to the master and edit the file "/root/spark-ec2/ec2-variables.sh",
>>> adding the DNS to the "export SLAVES" variable
>>> 5) Run the script /root/spark-ec2/setup.sh
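>>>
>>> Roughly, steps 4-5 come down to something like this (the DNS name is a
>>> placeholder, and whether SLAVES entries are space- or newline-separated
>>> should be checked against the existing file first):
>>>
>>> # on the master: extend the SLAVES list with the new slave's DNS name
>>> echo 'export SLAVES="$SLAVES <new-slave-public-dns>"' >> /root/spark-ec2/ec2-variables.sh
>>> # re-run setup so the cluster picks up the new slave
>>> /root/spark-ec2/setup.sh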
>>>
>>> After doing the above steps, I can see the new slave in the master's UI
>>> (port 8080). However, this solution is bad for many reasons:
>>>
>>> 1) It requires many manual steps
>>> 2) It requires stopping and starting the cluster
>>> 3) There's no auto-detection in case a slave stops
>>>
>>> and many other reasons...
>>>
>>> Does anybody have another idea on how to add/remove slaves for
>>> standalone in a simple and safe way?
>>>
>>> thanks, nizan
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/Adding-slaves-on-spark-standalone-on-ec2-tp23064.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>
