I have a workaround for the issue.
As you can see from the log, there are about 15 seconds between worker start and
shutdown.
The workaround might be to sleep for 30 seconds, check whether the worker is
running, and if not, try start-slave again (a sketch follows the snippet below).
Part of the EMR Spark bootstrap Python script:
spark_master = "spark://...:7077"
...
curl
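A minimal sketch of that retry logic, not the actual bootstrap code: it assumes the worker can be detected via jps, that start-slave.sh under /usr/lib/spark/sbin accepts the master URL as its only argument (the Spark 1.4 form), and that three attempts with a 30-second wait are enough; paths and limits are placeholders to adjust.

# Sketch of the workaround: start the worker, wait, check that it is
# still alive, and retry if it has already shut down.
import subprocess
import time

SPARK_HOME = "/usr/lib/spark"        # assumption: EMR install location
spark_master = "spark://...:7077"    # master URL, elided as in the snippet above

def worker_is_running():
    # Assumption: a live worker appears in `jps -l` under its main class name.
    jps_output = subprocess.check_output(["jps", "-l"]).decode()
    return "org.apache.spark.deploy.worker.Worker" in jps_output

def start_worker_with_retry(retries=3, wait_sec=30):
    for _ in range(retries):
        # Assumption: in Spark 1.4 start-slave.sh takes the master URL directly.
        subprocess.call([SPARK_HOME + "/sbin/start-slave.sh", spark_master])
        time.sleep(wait_sec)         # log shows ~15 s between start and shutdown
        if worker_is_running():
            return True
    return False

The 30-second wait leaves headroom over the roughly 15-second window between start and shutdown seen in the log, so a worker that died is caught before retrying.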
I see the following error from time to time when trying to start slaves on Spark
1.4.0:
[hadoop@ip-10-0-27-240 apps]$ pwd
/mnt/var/log/apps
[hadoop@ip-10-0-27-240 apps]$ cat
spark-hadoop-org.apache.spark.deploy.worker.Worker-1-ip-10-0-27-240.ec2.internal.out
Spark Command: /usr/java/latest/bin/java -cp
/