Hello all,
There is a bug in the spark-ec2 script (perhaps due to a change in the
Amazon AMI).
The --ebs-vol-size option directs the spark-ec2 script to add an EBS volume
of the specified size and mount it at /vol for a persistent HDFS. To do
this, it uses mkfs.xfs, which is not available (though …
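For anyone who wants to poke at this by hand, here is a rough sketch of the
format-and-mount step the script attempts. The device name /dev/xvdv is my
assumption (check lsblk on your own master); installing xfsprogs should
supply the missing mkfs.xfs:

    yum install -y xfsprogs   # provides mkfs.xfs on the Amazon AMI
    mkfs.xfs /dev/xvdv        # format the attached EBS volume (device name assumed)
    mkdir -p /vol
    mount /dev/xvdv /vol      # mount point the persistent HDFS expects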
I've just tried it again, with the same results.
When you say "it worked", what does the NameNode page list as your
"Configured Capacity"?
For me, apparently regardless of what I specify in the --ebs-vol-size
parameter, I get a persistent HDFS capacity of 31.5 GB.
I also used df -h to see what devices were mounted …
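For comparison, this is roughly how I'm checking; the dfsadmin command is
the hadoop1-era CLI, and /root/persistent-hdfs is the layout spark-ec2
normally sets up (an assumption about your cluster):

    # On the master: what does HDFS think its capacity is?
    /root/persistent-hdfs/bin/hadoop dfsadmin -report | grep 'Configured Capacity'
    # Did the EBS volume actually get mounted at /vol?
    df -h /vol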
From: "Ben Horner [via Apache Spark User List]"
mailto:ml-node+s1001560n9934...@n3.nabble.com>>
Date: Wednesday, July 16, 2014 at 8:47 AM
To: Ben Horner mailto:ben.hor...@atigeo.com>>
Subject: Re: Trouble with spark-ec2 script: --ebs-vol-size
Should I take it from the lack of replies that the --ebs-vol-size feature
doesn't work?
-Ben
Hello,
I'm using the spark-0.9.1-bin-hadoop1 distribution, and the ec2/spark-ec2
script within it to spin up a cluster. I tried running my processing just
using the default (ephemeral) HDFS configuration, but my job errored out,
saying that there was no space left. So now I'm trying to increase the
available storage with the --ebs-vol-size option.
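For reference, the launch command I'm running looks roughly like this (key
pair, identity file, slave count, and cluster name are placeholders):

    ./spark-ec2 -k my-keypair -i my-keypair.pem -s 4 \
      --ebs-vol-size=100 launch my-spark-cluster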