Hi,

On Wed, Aug 20, 2014 at 11:59 AM, Matt Narrell <matt.narr...@gmail.com> wrote:
> Specifying the driver-class-path yields behavior like
> https://issues.apache.org/jira/browse/SPARK-2420 and
> https://issues.apache.org/jira/browse/SPARK-2848.  It feels like opening a
> can of worms here if I also need to replace the Guava dependencies.

There's a PR open for the Guava dependency. I'm not sure at the moment
if there will be a need to shade any other dependencies.
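In case it helps in the meantime, here's a rough sketch of what relocating
Guava in an application's pom.xml could look like with the maven-shade-plugin
(the "myapp.shaded" prefix is just a placeholder, not anything Spark-specific):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <!-- Rewrite Guava's packages in your jar so they can't
                   clash with the Guava bundled in the Spark assembly -->
              <relocation>
                <pattern>com.google.common</pattern>
                <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>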

> Wouldn't calling "./make-distribution.sh --skip-java-test --hadoop 2.4.1
> --with-yarn --tgz" include the appropriate versions of the Hadoop libs into
> the Spark jar?

Yes. I was suggesting a workaround that just excludes any Hadoop
classes from your Spark jar; that way you're guaranteed to use
whatever Hadoop classes you have in your cluster. :-)
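The mechanism is essentially Maven's "provided" scope: dependencies marked
provided are compiled against but left out of the packaged jar. As a sketch
(coordinates here are just illustrative):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.4.1</version>
      <!-- compile against Hadoop, but don't bundle it;
           the cluster supplies its own copies at runtime -->
      <scope>provided</scope>
    </dependency>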

> I'm trying to rebuild using the hadoop-provided profile, but I'm getting
> several build errors.  Is this sufficient:  mvn -Phadoop-provided clean
> package -Phadoop-2.4 -Pyarn -Dyarn.version=2.4.1 -Dhadoop.version=2.4.1
> -DskipTests

Hmm, I'm pretty sure that's worked for me in the past, but then I
haven't tried that in a while.
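
For what it's worth, a quick way to sanity-check a hadoop-provided build is
to list the assembly's contents (the jar name below is just an example;
adjust it for your build):

    jar tf spark-assembly-*.jar | grep 'org/apache/hadoop/'

If the profile took effect, that should print little or nothing, since the
Hadoop classes are expected to come from the cluster at runtime.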

-- 
Marcelo
