I am running the SparkSQL Thrift server on the same node I compiled Spark on.
When I run it, tasks only succeed if they land on that node; executors started
on other nodes, which don't have the build directory, fail. Shouldn't Spark be
distributed to the executors automatically via the executor URI I set in my
spark-defaults for Mesos?
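For reference, my understanding is that the setting should look roughly like
this in spark-defaults.conf (the HDFS URL below is just a placeholder, not my
actual value):

    # spark-defaults.conf
    # Point Mesos executors at a Spark distribution tarball they can all fetch,
    # rather than assuming a local install path exists on every node.
    spark.executor.uri   hdfs://namenode:8020/dist/spark-1.1.0-SNAPSHOT.tgz

My expectation was that each Mesos slave would download and unpack that
tarball itself, so no node would need a local copy of my build tree.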

Here is the error on the nodes with lost executors:

sh: 1: /opt/mapr/spark/spark-1.1.0-SNAPSHOT/sbin/spark-executor: not found
