I assume you've added HADOOP_HOME to your environment variables. Otherwise,
you could fill in the actual path to Hadoop on your cluster. Also, did you
reload your bash configuration afterwards?
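For example, something along these lines (a minimal sketch; the Hadoop path
below is a placeholder, not taken from this thread -- use your actual install
location):

    # add to ~/.bashrc (placeholder path -- adjust to your cluster)
    export HADOOP_HOME=/usr/lib/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin

    # then reload it so the current shell picks up the change
    source ~/.bashrc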
Since $HADOOP_HOME is deprecated, try adding it to the Mesos configuration
instead: add `export MESOS_HADOOP_HOME=$HADOOP_HOME` to ~/.bashrc, and that
should solve your error.
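Concretely, that would look something like this (a sketch; it assumes
HADOOP_HOME is already set in your environment as discussed above):

    # add to ~/.bashrc: point Mesos at the same Hadoop installation
    export MESOS_HADOOP_HOME=$HADOOP_HOME

    # apply it to the current shell
    source ~/.bashrc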
...java:590)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:471)
... 6 more
If you can suggest what is causing this error, that would be great.
--
Thanks
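For reference, the Spark-on-Mesos documentation of this era has you point
Spark at the Mesos native library and an executor distribution in
conf/spark-env.sh. A minimal sketch (every path and URL below is a
placeholder, not taken from this thread):

    # conf/spark-env.sh -- placeholders, adjust to your cluster
    export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so            # Mesos native library
    export SPARK_EXECUTOR_URI=hdfs://namenode:9000/spark-1.0.0.tar.gz # Spark build the executors fetch
    export HADOOP_HOME=/usr/lib/hadoop                                # so executors can find Hadoop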
> ...ctory on the workers (which are otherwise printed if I run without using mesos).
>
> --
> Thanks
>
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
...20-10 successfully in removeExecutor
> 14/06/03 19:55:00 INFO DAGScheduler: Host gained which was in lost list earlier: IMPETUS-DSRV04.impetus.co.in
I've checked my Spark configuration many times and it looks fine to me.
Any ideas what might have gone wrong?
--
Thanks