The Spark-on-Mesos guide says this:

"To use Mesos from Spark, you need a Spark binary package available in a
place accessible by Mesos, and a Spark driver program configured to connect
to Mesos."

For example, putting the binary package in HDFS, or copying it to the same
path on every node, should do the trick.

https://spark.apache.org/docs/latest/running-on-mesos.html
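
Concretely, that usually means setting spark.executor.uri in
spark-defaults.conf. A minimal sketch (the HDFS path below is just an
example; point it at wherever you actually uploaded the package):

    # spark-defaults.conf
    spark.executor.uri   hdfs://namenode:8020/dist/spark-1.1.0-bin.tgz

Each Mesos slave then fetches and unpacks that tarball before launching
an executor, rather than expecting a pre-built Spark at a fixed local path.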



Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Mon, Sep 22, 2014 at 2:35 PM, John Omernik <j...@omernik.com> wrote:

> Any thoughts on this?
>
> On Sat, Sep 20, 2014 at 12:16 PM, John Omernik <j...@omernik.com> wrote:
>
>> I am running the Thrift server in SparkSQL, and running it on the node I
>> compiled Spark on.  When I run it, tasks only work if they land on that
>> node; executors started on nodes I didn't compile Spark on (and thus
>> don't have that build directory) fail.  Shouldn't Spark be distributed
>> automatically via the executor URI in my spark-defaults for Mesos?
>>
>> Here is the error on nodes with Lost executors:
>>
>> sh: 1: /opt/mapr/spark/spark-1.1.0-SNAPSHOT/sbin/spark-executor: not found
