What version of the script are you running? What did you see in the EC2 web console when this happened?
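As far as I know, the 'ssh-ready' wait just keeps trying to reach the instances over SSH, so one thing you can do while it hangs is poll port 22 on the master yourself. Here's a rough sketch in Python (the hostname below is a placeholder; use the public DNS or IP that the EC2 console shows for your master):

    import socket

    def ssh_port_open(host, port=22, timeout=5):
        # Returns True if something is accepting connections on the SSH port.
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            sock.close()
            return True
        except (socket.timeout, socket.error):
            return False

    print(ssh_port_open("ec2-XX-XX-XX-XX.compute-1.amazonaws.com"))

If that never returns True, the instance isn't reachable from your machine at all, which in a VPC usually points at the subnet or security group setup rather than at spark-ec2 itself.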
Sometimes instances just don't come up in a reasonable amount of time and you have to kill and restart the process. Does this always happen, or was it just once?

Nick

On Thu, Dec 18, 2014 at 9:42 AM, Eduardo Cusa <
eduardo.c...@usmediaconsulting.com> wrote:

> Hi guys,
>
> I ran the following command to launch a new cluster:
>
>   ./spark-ec2 -k test -i test.pem -s 1 --vpc-id vpc-XXXXX --subnet-id
>   subnet-XXXXX launch vpc_spark
>
> The instances started OK, but the command never ended, with the following
> output:
>
>   Setting up security groups...
>   Searching for existing cluster vpc_spark...
>   Spark AMI: ami-5bb18832
>   Launching instances...
>   Launched 1 slaves in us-east-1a, regid = r-e9d603c4
>   Launched master in us-east-1a, regid = r-89d104a4
>   Waiting for cluster to enter 'ssh-ready' state...............
>
> Any ideas what happened?
>
> Regards,
> Eduardo