Hello,

I found that I could dynamically add/remove new workers to a running
standalone Spark cluster by simply triggering:

start-slave.sh spark://<master-host>:7077

and

stop-slave.sh

E.g., I could spin up a new AWS instance and just add it to the running
cluster without needing to add it to the slaves file or restart the whole
cluster.
It seems there's no need for me to stop the running cluster at all.
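
Concretely, on the new instance I run something like this (a rough sketch,
assuming Spark is already installed under $SPARK_HOME on that instance and
the master is listening on the default port 7077):

    # on the freshly launched instance: register it as a worker with the master
    $SPARK_HOME/sbin/start-slave.sh spark://<master-host>:7077

    # later, to take the node out of the cluster again
    $SPARK_HOME/sbin/stop-slave.sh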

Is this a valid way of dynamically resizing a standalone Spark cluster (for
now, I'm not concerned about HDFS)? Or are there unforeseen problems with
adding/removing nodes this way?
