On 8 Mar 2016, at 07:23, Lu, Yingqi
mailto:yingqi...@intel.com>> wrote:
Thank you for the quick reply. I am very new to Maven and always use the
default settings. Can you please be a little more specific in the instructions?
I think all the jar files from the Hadoop build are located at
Hadoop-3. Which ones should I use to compile Spark,
and how can I change the pom.xml?
Thanks,
Lucy
From: fightf...@163.com [mailto:fightf...@163.com]
Sent: Monday, March 07, 2016 11:15 PM
To: Lu, Yingqi ; user
Subject: Re: How to compile Spark with private build of Hadoop
I think you can establish your own Maven repository and deploy your modified
Hadoop jars there. The first step is to publish your in-house-built Hadoop-related
jars to your local Maven or Ivy repo, and then change the Spark build
profiles, e.g. -Phadoop-2.x (you could use 2.7, or you may have to change the pom
file if you hit jar conflicts) with -Dhadoop.version=3.0.0-SNAPSHOT, to build
against your private Hadoop.
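[Editor's note: for concreteness, the two steps described above might look roughly like this. The paths are placeholders for your own source trees, and the exact -Phadoop profile name depends on the Spark release you are building.]

```shell
# Step 1: publish the in-house Hadoop jars to the local Maven repo (~/.m2).
# /path/to/hadoop is a placeholder for your modified Hadoop source tree.
cd /path/to/hadoop
mvn install -DskipTests

# Step 2: build Spark against that snapshot version.
# The hadoop-2.7 profile name is an example; if the build hits jar
# conflicts, the hadoop.version property in Spark's pom.xml may need editing.
cd /path/to/spark
./build/mvn -Phadoop-2.7 -Dhadoop.version=3.0.0-SNAPSHOT -DskipTests clean package
```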
To: user@spark.apache.org
Subject: How to compile Spark with private build of Hadoop
Hi All,
I am new to Spark and have a question about compiling it. I modified the
trunk version of the Hadoop source code. How can I compile Spark (standalone mode)
against my modified version of Hadoop (HDFS, Hadoop-common, etc.)?
Thanks a lot for your help!
Thanks,
Lucy