Thanks, Ted, for the reply.

But this is not what I want. That would tell Spark to pull the Hadoop
dependency from the Maven repository, which is the stock version of
Hadoop. I myself am modifying the Hadoop code, and I want my changes
included inside the Spark fat jar. "spark-class" runs the slaves with
the fat jar created in the assembly folder, and that jar does not
contain my modified classes.
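For what it's worth, the only route I can see is to install the modified
Hadoop into the local ~/.m2 repository under a distinct version and
rebuild the Spark assembly against that. A rough sketch (the version
string 2.4.0-modified, the hadoop-2.4 profile, and MyPatchedClass are
all placeholders for my actual setup):

    # in the modified Hadoop source tree: give the build a distinct
    # version and install it into the local Maven repository
    mvn versions:set -DnewVersion=2.4.0-modified
    mvn install -DskipTests

    # in the Spark source tree: build the assembly against that version
    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0-modified -DskipTests clean package

    # sanity check: confirm one of my changed classes made it into the fat jar
    jar tf assembly/target/scala-2.10/spark-assembly-*.jar | grep MyPatchedClass

But that seems roundabout, which is why I was asking about a local
hadoop profile in the first place.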

Something that also confuses me is: why does Spark include the Hadoop
classes in its built jar output at all? Isn't it supposed to read them
from the Hadoop folder on each worker node?
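(I did notice a hadoop-provided profile in the Spark pom. Assuming it
works the way the name suggests, it should mark Hadoop as a "provided"
dependency, keep it out of the assembly, and let the classes come from
the cluster's own Hadoop install at runtime, e.g.:

    # sketch, assuming -Phadoop-provided excludes Hadoop from the assembly
    mvn -Phadoop-provided -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

Is that the intended way to do what I'm describing?)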



