> What we use: --master
> mesos://zk://prodMesosMaster01:2181,prodMesosMaster02:2181,prodMesosMaster03:2181/mesos
>
> And we followed the instructions here:
> https://spark.apache.org/docs/1.2.0/running-on-mesos.html
>
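For context, a spark-submit invocation against that ZooKeeper-backed Mesos master would look roughly like this (the application class and jar path are placeholders, not from the thread):

```
# Sketch only: class name and jar path are hypothetical.
./bin/spark-submit \
  --master mesos://zk://prodMesosMaster01:2181,prodMesosMaster02:2181,prodMesosMaster03:2181/mesos \
  --class com.example.MyApp \
  /path/to/my-app.jar
```

The zk:// form lets the driver discover the currently elected Mesos master via ZooKeeper rather than pinning to a single host.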
> On 23 September 2015 at 08:22, Dick Davies wrote:
>>
>> I'm really excited to try out the new Docker executor support on 1.4.1; I'm
>> making progress but feel like I'm missing something.
>> (Versions:
>> spark-1.4.1-hadoop2.6 - not using Hadoop yet
>> Mac OS X Yosemite, Java 8, spark-shell
>> Mesos 0.22.1: 2 slaves, 1 master + ZK, all on CentOS 6.x
>> Docker 1.8.x
>> )
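For what it's worth, the 1.4 "Running on Mesos" docs expose the Docker executor through spark-defaults.conf properties; a sketch along those lines (the image name and install path below are placeholders, not from the thread):

```
# spark-defaults.conf sketch; image name and path are hypothetical.
spark.mesos.executor.docker.image   example/spark-mesos:1.4.1
spark.mesos.executor.home           /opt/spark
```

The image is expected to contain a Spark distribution itself (hence spark.mesos.executor.home); otherwise spark.executor.uri can point at a tarball to fetch.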
You'd need the jar file (holding class definitions etc.) to do the
deserialisation on the executor.
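The point above can be seen with plain Java serialization: the stream records only the class *name*, not its bytecode, so whoever deserializes (here, the executor) must already have the defining jar on its classpath. A small self-contained demo (class and field names are mine, purely illustrative):

```java
import java.io.*;

// Java serialization writes the class descriptor (its name), not the class bytes.
public class SerDemo implements Serializable {
    int answer = 42;

    public static void main(String[] args) throws Exception {
        // Serialize an instance, as the driver does with task closures.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new SerDemo());
        }
        byte[] bytes = bos.toByteArray();

        // The stream embeds the fully-qualified class name as text...
        String asText = new String(bytes, java.nio.charset.StandardCharsets.ISO_8859_1);
        if (!asText.contains("SerDemo")) throw new AssertionError("class name not in stream");

        // ...and deserialization resolves that name against the local classpath.
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            SerDemo back = (SerDemo) ois.readObject();
            if (back.answer != 42) throw new AssertionError("round trip failed");
        }
        System.out.println("round trip ok");
    }
}
```

If the reading side lacked the class definition, readObject would throw ClassNotFoundException — which is exactly why the application jar has to reach each executor.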
On 18 June 2015 at 03:48, Shiyao Ma wrote:
> Hi,
>
> Looking at my executor logs, it seems the submitted application jar is
> transmitted to each executor?
>
> Why does Spark do the above? To my understanding
>> On Wednesday, December 3, 2014, Matei Zaharia
>> wrote:
>>>
>>> I'd suggest asking about this on the Mesos list (CCed). As far as I know,
>>> there was actually some ongoing work for this.
>>>
>>> Matei
>>>
>>>
Just wondered if anyone had managed to start Spark
jobs on Mesos wrapped in a Docker container?
At present (i.e. very early testing) I'm able to submit executors
to Mesos via spark-submit easily enough, but they fall over
because we don't have a JVM on our slaves out of the box.
I can push one out via
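One way around slaves lacking a JVM is to bake it into the executor image itself; a minimal Dockerfile sketch, where the base image, tarball name, and paths are all assumptions rather than a tested recipe:

```
# Hypothetical executor image: a JRE plus a Spark distribution baked in.
FROM java:8-jre
ADD spark-1.4.1-bin-hadoop2.6.tgz /opt/
ENV SPARK_HOME /opt/spark-1.4.1-bin-hadoop2.6
```

Pointing spark.mesos.executor.docker.image at an image like this means the Mesos slaves only need Docker installed, not Java.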