As requested, here is the script I am running. It is a simple shell script
which calls the spark-ec2 wrapper script. I execute it from the 'ec2'
directory of Spark, as usual. The AMI used is the raw one from the AWS
Quick Start section; it is the first option (an Amazon Linux paravirtual
image). Any ideas or confirmation would be GREATLY appreciated. Please and
thank you.


#!/bin/sh

export AWS_ACCESS_KEY_ID=MyCensoredKey
export AWS_SECRET_ACCESS_KEY=MyCensoredKey

AMI_ID=ami-2f726546

./spark-ec2 -k gds-generic -i ~/.ssh/gds-generic.pem -u ec2-user \
  -s 10 -v 0.9.0 -w 300 --no-ganglia -a ${AMI_ID} \
  -m m3.2xlarge -t m3.2xlarge launch marcotest
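For what it's worth, the PermitRootLogin change Shivaram suggests below can be scripted. Here is a minimal, hedged sketch of the edit; it is demonstrated against a throwaway copy of sshd_config (the real change would be run with sudo against /etc/ssh/sshd_config on the instance, followed by an sshd restart), so only the sed pattern itself matters:

```shell
#!/bin/sh
# Sketch only: shows the sed edit that forces PermitRootLogin to "yes".
# On a real Amazon Linux instance you would run the sed line with sudo
# against /etc/ssh/sshd_config and then restart sshd; here we operate on
# a temporary copy so the script is safe to run anywhere.
cfg=$(mktemp)
printf '#PermitRootLogin forced-commands-only\n' > "$cfg"

# Uncomment the directive (if commented) and set it to "yes".
sed -i 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin yes/' "$cfg"

# Print the resulting directive to confirm the edit took effect.
grep '^PermitRootLogin yes$' "$cfg"
rm -f "$cfg"
```

Whether this survives a reboot of a paravirtual AMI is exactly the question in this thread, so treat it as a starting point rather than a fix.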



On Mon, Apr 7, 2014 at 6:21 PM, Shivaram Venkataraman <
shivaram.venkatara...@gmail.com> wrote:

> Hmm -- That is strange. Can you paste the command you are using to launch
> the instances ? The typical workflow is to use the spark-ec2 wrapper script
> using the guidelines at
> http://spark.apache.org/docs/latest/ec2-scripts.html
>
> Shivaram
>
>
> On Mon, Apr 7, 2014 at 1:53 PM, Marco Costantini <
> silvio.costant...@granatads.com> wrote:
>
>> Hi Shivaram,
>>
>> OK, so let's assume the script CANNOT take a different user and that it
>> must be 'root'. The typical workaround is, as you said, to allow ssh for
>> the root user. Now, don't laugh, but this worked last Friday, and today
>> (Monday) it no longer works. :D Why? ...
>>
>> ...It seems that NOW, when you launch a 'paravirtual' AMI, the root
>> user's 'authorized_keys' file is always overwritten. This means the
>> workaround no longer works! I would LOVE for someone to verify this.
>>
>> Just to point out, I am trying to make this work with a paravirtual
>> instance and not an HVM instance.
>>
>> Please and thanks,
>> Marco.
>>
>>
>> On Mon, Apr 7, 2014 at 4:40 PM, Shivaram Venkataraman <
>> shivaram.venkatara...@gmail.com> wrote:
>>
>>> Right now the spark-ec2 scripts assume that you have root access, and a
>>> lot of the internal scripts assume that the user's home directory is
>>> hard-coded as /root. However, all the Spark AMIs we build should have
>>> root ssh access -- do you find this not to be the case?
>>>
>>> You can also enable root ssh access in a vanilla AMI by editing
>>> /etc/ssh/sshd_config and setting "PermitRootLogin" to "yes".
>>>
>>> Thanks
>>> Shivaram
>>>
>>>
>>>
>>> On Mon, Apr 7, 2014 at 11:14 AM, Marco Costantini <
>>> silvio.costant...@granatads.com> wrote:
>>>
>>>> Hi all,
>>>> On the old Amazon Linux EC2 images, the user 'root' was enabled for
>>>> ssh. Also, it is the default user for the Spark-EC2 script.
>>>>
>>>> Currently, the Amazon Linux images have an 'ec2-user' set up for ssh
>>>> instead of 'root'.
>>>>
>>>> I can see that the Spark-EC2 script allows you to specify which user to
>>>> log in with, but even when I change this, the script fails for various
>>>> reasons. And the output SUGGESTS that the script still assumes the
>>>> specified user's home directory is '/root'.
>>>>
>>>> Am I using this script wrong?
>>>> Has anyone had success with this 'ec2-user' user?
>>>> Any ideas?
>>>>
>>>> Please and thank you,
>>>> Marco.
>>>>
>>>
>>>
>>
>
