I am also running into the "modules/mod_authn_alias.so" issue on r3.8xlarge
when launching a cluster with ./spark-ec2, so ganglia is not accessible. From
the posts it seems that Patrick suggested using Ubuntu 12.04. Could you please
provide the name of an AMI, usable with the -a flag, that will not have this
issue?
On Wed, Jun 4, 2014 at 9:35 AM, Jeremy Lee wrote:
> Oh, I went back to m1.large while those issues get sorted out.
Random side note, Amazon is deprecating the m1 instances in favor of m3
instances, which have SSDs and more ECUs than their m1 counterparts.
m3.2xlarge has 30GB of RAM and may be a
On Wed, Jun 4, 2014 at 12:31 PM, Matei Zaharia wrote:
> Ah, sorry to hear you had more problems. Some thoughts on them:
>
There will always be more problems, 'tis the nature of coding. :-) I try
not to bother the list until I've smacked my head against them for a few
hours, so it's only the "mos
Ah, sorry to hear you had more problems. Some thoughts on them:
> Thanks for that, Matei! I'll look at that once I get a spare moment. :-)
>
> If you like, I'll keep documenting my newbie problems and frustrations...
> perhaps it might make things easier for others.
>
> Another issue I seem to
Thanks for that, Matei! I'll look at that once I get a spare moment. :-)
If you like, I'll keep documenting my newbie problems and frustrations...
perhaps it might make things easier for others.
Another issue I seem to have found (now that I can get small clusters up):
some of the examples (the s
FYI, I opened https://issues.apache.org/jira/browse/SPARK-1990 to track this.
Matei
On Jun 1, 2014, at 6:14 PM, Jeremy Lee wrote:
> Sort of.. there were two separate issues, but both related to AWS..
>
> I've sorted the confusion about the Master/Worker AMI ... use the version
> chosen by the
Sort of... there were two separate issues, but both related to AWS.
I've sorted out the confusion about the Master/Worker AMI: use the version
chosen by the scripts (and use the right instance type so the script can
choose wisely).
But yes, one also needs a "launch machine" to kick off the cluster.
More specifically with the -a flag, you *can* set your own AMI, but you’ll need
to base it off ours. This is because spark-ec2 assumes that some packages (e.g.
java, Python 2.6) are already available on the AMI.
Matei
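A minimal sketch of what that looks like on the command line (the key pair,
identity file, AMI ID, and cluster name below are all placeholders, not values
from this thread):

```shell
# Launch with a custom AMI via -a; per Matei's note, the AMI must be based
# on the Spark AMI so that expected packages (Java, Python 2.6) are present.
./spark-ec2 -k my-key -i my-key.pem -s 2 \
  -t r3.large \
  -a ami-xxxxxxxx \
  launch my-cluster
```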
On Jun 1, 2014, at 11:01 AM, Patrick Wendell wrote:
> Hey just to clarify t
Ah yes, looking back at the first email in the thread, indeed that was the
case. For the record, I too launch clusters from my laptop, where I have
Python 2.7 installed.
On Sun, Jun 1, 2014 at 2:01 PM, Patrick Wendell wrote:
> Hey just to clarify this - my understanding is that the poster
> (Je
Hey just to clarify this - my understanding is that the poster
(Jeremey) was using a custom AMI to *launch* spark-ec2. I normally
launch spark-ec2 from my laptop. And he was looking for an AMI that
had a high enough version of python.
Spark-ec2 itself has a flag "-a" that allows you to give a spec
*sigh* OK, I figured it out. (Thank you Nick, for the hint)
"m1.large" works (I swear I tested that earlier and had similar issues...)
It was my obsession with starting "r3.*large" instances. Clearly I hadn't
patched the script in all the places.. which I think caused it to default
to the Amazo
If you are explicitly specifying the AMI in your invocation of spark-ec2,
may I suggest simply removing any explicit mention of AMI from your
invocation? spark-ec2 automatically selects an appropriate AMI based on the
specified instance type.
On Sunday, June 1, 2014, Nicholas Chammas wrote:
> Could y
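For instance, an invocation with no -a flag at all (names below are
placeholders) lets spark-ec2 pick the AMI itself:

```shell
# No -a flag: spark-ec2 selects an AMI matching the instance type's
# virtualization requirements automatically.
./spark-ec2 -k my-key -i my-key.pem -s 2 -t m1.large launch my-cluster
```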
Could you post how exactly you are invoking spark-ec2? And are you having
trouble just with r3 instances, or with any instance type?
On Sunday, June 1, 2014, Jeremy Lee wrote:
> It's been another day of spinning up dead clusters...
>
> I thought I'd finally worked out what everyone else knew - don't
It's been another day of spinning up dead clusters...
I thought I'd finally worked out what everyone else knew - don't use the
default AMI - but I've now run through all of the "official" quick-start
linux releases and I'm none the wiser:
Amazon Linux AMI 2014.03.1 - ami-7aba833f (64-bit)
Provisi
Oh, sorry, I forgot to add: here are the extra lines in my spark_ec2.py, at
line 205:
"r3.large": "hvm",
"r3.xlarge": "hvm",
"r3.2xlarge": "hvm",
"r3.4xlarge": "hvm",
"r3.8xlarge": "hvm"
Clearly a masterpiece of hacking. :-) I haven't tested all of them. The r3
set seems to act
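For context, those r3 lines slot into spark_ec2.py's table mapping instance
types to EC2 virtualization types. A hypothetical sketch of that table (only
the r3 entries come from the patch above; the other entries and the helper
function are illustrative):

```python
# Sketch of the instance-type -> virtualization-type table in spark_ec2.py.
# r3 instances are HVM-only on EC2, hence "hvm" for every r3 size.
EC2_INSTANCE_TYPES = {
    "m1.large": "pvm",     # illustrative entry
    "m3.2xlarge": "hvm",   # illustrative entry
    "r3.large": "hvm",
    "r3.xlarge": "hvm",
    "r3.2xlarge": "hvm",
    "r3.4xlarge": "hvm",
    "r3.8xlarge": "hvm",
}

def virt_type(instance_type):
    # Fall back to "pvm" for unknown types; an unpatched table is why the
    # script could pick the wrong (paravirtual) AMI for r3 instances.
    return EC2_INSTANCE_TYPES.get(instance_type, "pvm")
```

An instance type missing from this table silently gets the fallback, which
matches the symptom in the thread: r3 clusters launching with an AMI that
does not boot.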
Hi there, Patrick. Thanks for the reply...
It wouldn't surprise me that AWS Ubuntu has Python 2.7. Ubuntu is cool like
that. :-)
Alas, the Amazon Linux AMI (2014.03.1) does not, and it's the very first
one on the recommended instance list. (Ubuntu is #4, after Amazon, RedHat,
SUSE) So, users such
We are migrating our scripts to r3. The lineage is in spark-ec2; we would be
happy to migrate those too.
Having trouble with the ganglia setup currently :)
Regards
Mayur
On 31 May 2014 09:07, "Patrick Wendell" wrote:
> Hi Jeremy,
>
> That's interesting, I don't think anyone has ever reported an issue
> r
Hi Jeremy,
That's interesting, I don't think anyone has ever reported an issue running
these scripts due to Python incompatibility, but they may require Python
2.7+. I regularly run them from the AWS Ubuntu 12.04 AMI... that might be a
good place to start. But if there is a straightforward way to