I also ran into this issue. I have reported it as a bug
(https://issues.apache.org/jira/browse/SPARK-5242) and submitted a fix. You
can find a link to the fixed fork in the comments on the issue page. Please
vote on the issue; hopefully the pull request will be accepted faster that way :)
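For anyone who can't wait for the pull request, here is a minimal sketch of the
kind of change involved. This is a hypothetical helper, not the actual code
from spark_ec2.py or the fork; it assumes the instance objects expose
`public_dns_name` and `private_ip_address` attributes, as boto EC2 instances do:

```python
def get_address(instance, private_ips=False):
    """Return the address to reach an EC2 instance.

    Prefer the public DNS name, but fall back to the private IP when
    the instance is in a VPC without a public DNS name (in which case
    public_dns_name is empty), or when private IPs are requested.
    """
    if private_ips or not instance.public_dns_name:
        return instance.private_ip_address
    return instance.public_dns_name
```

With a fallback like this, the script can handle both classic EC2 and
VPC-only instances instead of requiring a blanket search-and-replace.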

Regards, Vladimir

On Mon, Dec 29, 2014 at 7:48 PM, Eduardo Cusa <
[email protected]> wrote:

> I'm running the master branch.
>
> Finally I got it to work by changing all occurrences of the "
> *public_dns_name*" property to "*private_ip_address*" in the
> spark_ec2.py script.
>
> My VPC instances always have a null value in the "*public_dns_name*" property.
>
> Now my script only works for VPC instances.
>
> Regards
> Eduardo
>
>
> On Sat, Dec 20, 2014 at 7:53 PM, Nicholas Chammas <
> [email protected]> wrote:
>
>> What version of the script are you running? What did you see in the EC2
>> web console when this happened?
>>
>> Sometimes instances just don't come up in a reasonable amount of time and
>> you have to kill and restart the process.
>>
>> Does this always happen, or was it just once?
>>
>> Nick
>>
>> On Thu, Dec 18, 2014 at 9:42 AM, Eduardo Cusa <
>> [email protected]> wrote:
>>
>>> Hi guys.
>>>
>>> I ran the following command to launch a new cluster:
>>>
>>> ./spark-ec2 -k test -i test.pem -s 1  --vpc-id vpc-XXXXX --subnet-id
>>> subnet-XXXXX launch  vpc_spark
>>>
>>> The instances started OK, but the command never finished, ending with
>>> the following output:
>>>
>>>
>>> Setting up security groups...
>>> Searching for existing cluster vpc_spark...
>>> Spark AMI: ami-5bb18832
>>> Launching instances...
>>> Launched 1 slaves in us-east-1a, regid = r-e9d603c4
>>> Launched master in us-east-1a, regid = r-89d104a4
>>> Waiting for cluster to enter 'ssh-ready' state...............
>>>
>>>
>>> Any ideas what happened?
>>>
>>>
>>> regards
>>> Eduardo
>>>
>>>
>>>
>>
>
