Hi Martin,
Tim suggested that you pastebin the mesos logs -- can you share those for
the list?
Cheers,
Andrew
On Thu, May 15, 2014 at 5:02 PM, Martin Weindel wrote:
Andrew,
thanks for your response. When using the coarse mode, the jobs run fine.
My problem is the fine-grained mode. Here the parallel jobs nearly
always end in a deadlock. It seems to have something to do with
resource allocation, as Mesos shows neither used nor idle CPU resources
in this
I have a similar issue (but with Spark 0.9.1) when a shell is active.
Multiple jobs run fine, but when the shell is active (even if it is not
using any CPU at the moment) I encounter the exact same behaviour.
At the moment I don't know what happens or how to solve it, but I was
planning to have a lo
Are you setting a core limit with spark.cores.max? If you don't, then in
coarse mode each Spark job takes all available cores on Mesos and doesn't
release them until the job terminates, at which point the other job can
access the cores.
https://spark.apache.org/docs/latest/running-on-mesos.html -- "
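For what it's worth, a minimal sketch of how such a cap could be set in
spark-defaults.conf (the core count of 4 is just a placeholder; pick a
value that leaves room for your other jobs):

```
# spark-defaults.conf (sketch; values are placeholders)
spark.mesos.coarse   true
spark.cores.max      4
```

The same settings can also be passed per-job, e.g. with
--conf spark.cores.max=4 on spark-submit, which may be more convenient
when different jobs need different caps.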