This used to work. The only thing that has changed is that the Mesos installed
on the Spark executor is a different version than before. My Spark executors
run in a container whose image has Mesos installed, and that Mesos version is
actually different from the version of the Mesos master. Not su
Sounds a little like the driver got one offer while it was using zero
resources, and then stopped getting any more. How many frameworks (and which)
are running on the cluster? The Mesos Master log should say which
frameworks are getting offers, and should help diagnose the problem.
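To pull that out of the master log quickly, a grep along these lines can summarize which frameworks are receiving offers. The exact log line format varies by Mesos version, and the sample log below is made up for illustration; on a real master you would point the grep at the actual master log (e.g. under /var/log/mesos/) instead:

```shell
# Fabricated sample of Mesos master log lines (format varies by version);
# on a real cluster, grep the actual master log instead of this file.
cat > /tmp/mesos-master.sample.log <<'EOF'
I1207 10:00:01 master.cpp:1234 Sending 1 offers to framework abc-0001 (Spark) at scheduler@10.0.0.5:7077
I1207 10:00:02 master.cpp:1234 Sending 2 offers to framework def-0002 (Marathon) at scheduler@10.0.0.6:8080
I1207 10:00:03 master.cpp:5678 Processing DECLINE call for offers: [ abc-0001-O1 ] for framework abc-0001 (Spark)
EOF

# Which frameworks are receiving offers, and how often?
grep -o 'Sending [0-9]* offers to framework [^ ]* ([^)]*)' /tmp/mesos-master.sample.log \
  | sort | uniq -c
```

If Spark shows up there but executors still don't launch elsewhere, the DECLINE lines are the next thing to look at.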
A
On Thu, Dec 7, 2
Sounds strange. Maybe it has to do with the job itself? What kind of job is
it? Have you gotten it to run on more than one node before? What's in the
spark-submit command?
Susan
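In case it helps frame the question, a typical coarse-grained spark-submit against Mesos looks something like the following; the master URL, Docker image, and resource numbers are placeholders, and `spark.executor.cores` / `spark.cores.max` are the settings that usually govern how executors spread across agents:

```shell
# Master URL, image name, and resource numbers below are placeholders.
spark-submit \
  --master mesos://zk://10.0.0.1:2181/mesos \
  --deploy-mode client \
  --conf spark.executor.cores=2 \
  --conf spark.cores.max=16 \
  --conf spark.executor.memory=4g \
  --conf spark.mesos.executor.docker.image=my-org/spark-executor:latest \
  my_job.py
```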
On Wed, Dec 6, 2017 at 11:21 AM, Ji Yan wrote:
I am sure that the other agents have plentiful enough resources, but I
don't know why Spark only scheduled executors on one single node, up to
that node's capacity (it is a different node every time I run, btw).
I checked the DEBUG log from the Spark driver and didn't see any mention of
declines. But from
Hello Ji,
Spark will launch Executors round-robin on offers, so when the resources on
an agent get broken into multiple resource offers it's possible that many
Executors get placed on a single agent. However, from your description,
it's not clear why your other agents do not get Executors schedul
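For intuition, that round-robin-over-offers behavior can be sketched like this. This is a toy model, not Spark's actual MesosCoarseGrainedSchedulerBackend logic, and the offer and executor sizes are made up:

```python
# Toy sketch of round-robin executor placement over resource offers.
# Not Spark's real scheduler; it only illustrates how one agent whose
# resources arrive as several offers can end up with many executors.

EXECUTOR_CORES = 2  # cores each executor needs (assumed)

def place_executors(offers, executors_wanted):
    """Walk the offer list round-robin, launching one executor per pass
    on every offer that still has enough cores."""
    placement = {}  # agent -> number of executors launched
    remaining = executors_wanted
    progress = True
    while remaining > 0 and progress:
        progress = False
        for offer in offers:
            if remaining == 0:
                break
            if offer["cores"] >= EXECUTOR_CORES:
                offer["cores"] -= EXECUTOR_CORES
                placement[offer["agent"]] = placement.get(offer["agent"], 0) + 1
                remaining -= 1
                progress = True
    return placement

# If agent-1's resources arrive as three separate offers, round-robin
# over *offers* still piles executors onto that single agent:
offers = [
    {"agent": "agent-1", "cores": 4},
    {"agent": "agent-1", "cores": 4},
    {"agent": "agent-1", "cores": 4},
    {"agent": "agent-2", "cores": 4},
]
print(place_executors(offers, 6))  # -> {'agent-1': 5, 'agent-2': 1}
```

The takeaway is that "round-robin" is per offer, not per agent, so fragmented offers from one agent skew placement toward it.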