I am not manually setting spark.mesos.coarse to true, either in the code or in any configuration file, so it should be taking the default value and running in fine-grained mode.
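To take the default out of the equation, here is a minimal sketch of pinning the mode explicitly in the SparkConf instead of relying on it; the app name is just a placeholder:

    import org.apache.spark.{SparkConf, SparkContext}

    // Pin the scheduling mode explicitly instead of relying on the default.
    val conf = new SparkConf()
      .setAppName("my-app")               // placeholder app name
      .set("spark.mesos.coarse", "false") // false = fine-grained mode
    val sc = new SparkContext(conf)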
When I try to log conf.get("spark.mesos.coarse"), my application exits with this error:
Exception in ...
"Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources".
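One thing to note while logging it: SparkConf.get(key) with no default throws NoSuchElementException when the key was never set explicitly. A minimal sketch of reading it safely, reusing the sc from the sketch above (the "false" fallback is my assumption about the default):

    // conf.get("spark.mesos.coarse") throws NoSuchElementException if the key was never set;
    // read it with a default, or as an Option, instead.
    val coarse = sc.getConf.get("spark.mesos.coarse", "false")
    val coarseOpt: Option[String] = sc.getConf.getOption("spark.mesos.coarse")
    println(s"spark.mesos.coarse = $coarse (explicitly set: ${coarseOpt.isDefined})")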
I'm assuming you are submitting the job in coarse-grained mode; in that case, make sure you are asking for the available resources.
If you want to submit multiple ...
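In coarse-grained mode an application holds on to whatever cores it is offered unless you cap it, so when several jobs share the cluster each one is usually given an explicit limit. A minimal sketch; the property names are the standard spark.cores.max and spark.executor.memory, but the values here are purely illustrative, not tuned for m3.large slaves:

    import org.apache.spark.SparkConf

    // Cap what each application asks for so two jobs can run side by side.
    val conf = new SparkConf()
      .set("spark.cores.max", "2")         // upper bound on cores this app will take
      .set("spark.executor.memory", "2g")  // memory per executor

The same properties can also be passed on the spark-submit command line with --conf.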
Hi,
I am using a Mesos cluster to run my Spark jobs.
I have one mesos-master and two mesos-slaves set up on 2 machines.
On one machine, both the master and a slave are set up; on the second machine, only a mesos-slave is set up.
I run these on m3.large EC2 instances.
1. When I try to submit two jobs using spark-submit ...
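For reference, a minimal sketch of how a driver is pointed at this topology; <master-host> is a placeholder for the machine running mesos-master, 5050 is just the default mesos-master port, and the app name is made up:

    import org.apache.spark.{SparkConf, SparkContext}

    // Point the driver at the mesos-master; both slaves then offer it resources.
    val conf = new SparkConf()
      .setAppName("job-1")                      // placeholder app name
      .setMaster("mesos://<master-host>:5050")  // 5050 is the default mesos-master port
    val sc = new SparkContext(conf)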