Hi Andrew,

Thanks for the current doc.

> I'd almost gotten to the point where I thought that my custom code needed
> to be included in the SPARK_EXECUTOR_URI, but that can't possibly be
> correct. The Spark workers that are launched on Mesos slaves should start
> with the Spark core jars and then transparently get classes from custom
> code over the network, or at least that's how I thought it should work.
> For those who have been using Mesos in previous releases, you've never had
> to do that before, have you?


Regarding the delivery of the custom job code to Mesos, we have been using
'ADD_JARS' (on the command line) or 'SparkConf.setJars(Seq[String])' with
a fat jar packing all dependencies.
That works on the Spark 'standalone' cluster as well, but we deploy mostly
on Mesos, so I can't speak to classloading differences between the two.
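
For reference, a minimal sketch of that setup (the app name, master URL,
and jar path are placeholders, not from our actual deployment):

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch: ship a pre-built fat jar to the executors via setJars.
    // The master URL and jar path below are placeholders.
    object JobWithFatJar {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("my-job")                             // placeholder app name
          .setMaster("mesos://zk://zk-host:2181/mesos")     // placeholder Mesos master
          .setJars(Seq("/path/to/my-job-assembly.jar"))     // fat jar with all job dependencies

        val sc = new SparkContext(conf)
        // ... job logic; executors fetch the listed jar over the network ...
        sc.stop()
      }
    }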

-greetz, Gerard.
