In the root pom.xml:
    <hadoop.version>2.2.0</hadoop.version>

You can override the Hadoop version with a command similar to:
-Phadoop-2.4 -Dhadoop.version=2.7.0
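As a sketch, the full build invocation would look something like the following (the `build/mvn` wrapper and `-DskipTests` flag are standard for Spark builds, but adjust to your setup):

```shell
# Build Spark against a custom Hadoop version:
# -Phadoop-2.4 selects the Hadoop profile, -Dhadoop.version overrides
# the <hadoop.version> property from the root pom.xml.
./build/mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -DskipTests clean package
```

If you have locally modified Hadoop artifacts, install them into your local Maven repository first (e.g. `mvn install` in your Hadoop tree) so the override resolves to your build rather than the published one.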

Cheers

On Thu, Oct 8, 2015 at 11:22 AM, sbiookag <sbioo...@asu.edu> wrote:

> I'm modifying the hdfs module inside Hadoop, and would like to see my
> changes reflected when I run Spark on top of it, but I still see the native
> Hadoop behaviour. I've checked and found that Spark builds a really fat jar
> file, which contains all the Hadoop classes (using the Hadoop profile
> defined in Maven), and deploys it to all workers. I also tried bigtop-dist
> to exclude the Hadoop classes, but it had no effect.
>
> Is it possible to do this easily, for example with small modifications
> to the Maven file?
>
>
>
> --
> View this message in context:
> http://apache-spark-developers-list.1001551.n3.nabble.com/Compiling-Spark-with-a-local-hadoop-profile-tp14517.html
> Sent from the Apache Spark Developers List mailing list archive at
> Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>
>
