e "launch_cluster" in ec2/spark_ec2.py, where the
>> ports seem to be configured.
>>
>>
>> On Thu, Jul 17, 2014 at 1:29 PM, Matt Work Coarr
>> wrote:
>> > Thanks Marcelo! This is a huge help!!
>> >
>> > Looking at the executor logs...
Thanks Marcelo! This is a huge help!!
Looking at the executor logs (in a vanilla spark install, I'm finding them
in $SPARK_HOME/work/*)...
It launches the executor, but it looks like the
CoarseGrainedExecutorBackend is having trouble talking to the driver
(exactly what you said!!!).
Do you know...
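
For anyone else hitting the same symptom, here's a minimal PySpark sketch of the usual fix, assuming the problem is that executors can't reach back to the driver's default address (the master URL and IP below are placeholders, not values from this thread):

    # Minimal sketch, assuming the executors cannot resolve or reach the
    # driver's default address. "spark.driver.host" is a standard Spark
    # setting; the master URL and IP here are placeholders.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("spark://master-host:7077")   # placeholder master URL
            .setAppName("driver-reachability-check")
            .set("spark.driver.host", "10.0.0.5"))   # address reachable from slaves
    sc = SparkContext(conf=conf)
    # trivial job: if this returns, the executors registered with the driver
    print sc.parallelize(range(100)).count()
    sc.stop()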
> Have you looked at the slave machine to see if the process has
> actually launched? If it has, have you tried peeking into its log
> file?
>
> (That error is printed whenever the executors fail to report back to
> the driver. Insufficient resources to launch the executor is the most...
Hello spark folks,
I have a simple spark cluster setup but I can't get jobs to run on it. I
am using standalone mode.
One master, one slave. Both machines have 32GB ram and 8 cores.
The slave is setup with one worker that has 8 cores and 24GB memory
allocated.
My application requires 2 cores...
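
For context, a minimal sketch of how an application typically requests those resources in standalone mode (the conf names are standard Spark settings; the master URL and values are illustrative):

    # Minimal sketch of standalone-mode resource requests. "spark.cores.max"
    # caps the total cores the app takes across the cluster;
    # "spark.executor.memory" sizes each executor. Values are illustrative.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("spark://master-host:7077")
            .setAppName("resource-request-demo")
            .set("spark.cores.max", "2")           # the 2 cores mentioned above
            .set("spark.executor.memory", "4g"))   # well under the worker's 24g
    sc = SparkContext(conf=conf)

If the worker has fewer free cores or less memory than requested, the app sits in WAITING and the driver eventually logs the "Initial job has not accepted any resources" warning, which matches the error discussed earlier in this thread.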
Thanks Akhil! I'll give that a try!
gt; "c3.4xlarge": "pvm",
> "c3.8xlarge": "pvm"
> }
> if opts.instance_type in instance_types:
> instance_type = instance_types[opts.instance_type]
> else:
> instance_type = "pvm"
> print >> stderr
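
And here is a self-contained version of that quoted fallback, for anyone who wants to try it outside spark_ec2.py (the helper name is mine, the dict is abridged, and the stderr message is a guess at the truncated original):

    # Self-contained sketch of the quoted patch: map instance types to a
    # virtualization type, defaulting to paravirtual ("pvm").
    import sys

    instance_types = {
        "c3.4xlarge": "pvm",
        "c3.8xlarge": "pvm",
    }

    def virtualization_for(instance_type):
        if instance_type in instance_types:
            return instance_types[instance_type]
        # fall back to pvm when the type isn't in the table
        print >> sys.stderr, "Don't recognize %s, assuming pvm" % instance_type
        return "pvm"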
How would I go about creating a new AMI image that I can use with the spark
ec2 commands? I can't seem to find any documentation. I'm looking for a
list of steps that I'd need to perform to make an Amazon Linux image ready
to be used by the spark ec2 tools.
I've been reading through the spark 1.0...
Hi, I'm attempting to run "spark-ec2 launch" on AWS. My AWS instances
would be in our EC2 VPC (which seems to be causing a problem).
The two security groups MyClusterName-master and MyClusterName-slaves have
already been set up with the same ports open as the security group that
spark-ec2 tries to...
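
For reference, a rough sketch of the security-group calls that launch_cluster in spark_ec2.py makes, using boto (the library the script itself uses); the region, group names, and port list are illustrative and much shorter than what the script actually opens:

    # Rough sketch of launch_cluster's security-group setup, using boto.
    # Region, group names, and ports are illustrative, not the full list.
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")

    master_group = conn.create_security_group(
        "MyClusterName-master", "Spark EC2 master group")
    slave_group = conn.create_security_group(
        "MyClusterName-slaves", "Spark EC2 slave group")

    # let the master and slave groups talk to each other freely
    master_group.authorize(src_group=slave_group)
    slave_group.authorize(src_group=master_group)

    # a couple of the inbound rules spark-ec2 opens on the master
    master_group.authorize("tcp", 22, 22, "0.0.0.0/0")      # SSH
    master_group.authorize("tcp", 8080, 8081, "0.0.0.0/0")  # standalone web UI

Note that in a VPC, boto's create_security_group also needs a vpc_id argument, which may be part of why the stock script trips up when the instances live in a VPC.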