Hi Shiyao,
> From the same page you referred: "Maven is the official recommendation for
> packaging Spark, and is the 'build of reference'. But SBT is supported for
> day-to-day development since it can provide much faster iterative
> compilation. More advanced developers may wish to use SBT."

For Maven, pom.xml is the main and most important file.

-P stands for Profile. Search for 'profile' in spark/pom.xml. More on it:
http://maven.apache.org/guides/introduction/introduction-to-profiles.html
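As an illustration, a profile in a pom.xml looks roughly like this (the id and property here are sketched from what Spark's build exposes, so check spark/pom.xml for the exact definitions); it is activated with `-P<id>`:

```xml
<profiles>
  <profile>
    <id>hadoop-2.3</id>
    <properties>
      <hadoop.version>2.3.0</hadoop.version>
    </properties>
  </profile>
</profiles>
```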
-D stands for Define. Maven takes it from Java (java -Dkey=value) and earlier
tools. It is a way to pass system properties and/or override existing
properties from the build file.
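For example (hadoop.version is a property Spark's pom.xml defines; overriding it on the command line works the same way as setting any Java system property):

```shell
# Override the hadoop.version property from pom.xml on the command line,
# just as a system property would be set with plain java:
#   java -Dhadoop.version=2.3.0 ...
mvn -Dhadoop.version=2.3.0 -DskipTests package
```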
Core build: spark/core/pom.xml is your build file for building only Spark Core.
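A sketch of a core-only Maven build run from the repository root (`-pl` selects the module; adding `-am`, i.e. --also-make, additionally builds the in-repo modules core depends on):

```shell
# Build only the core module, plus the in-repo modules it depends on.
mvn -DskipTests -pl core -am clean package
```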

Thanking you.

With Regards
Sree 


On Tuesday, April 21, 2015 12:12 AM, Akhil Das
<[email protected]> wrote:

With Maven you could do:

mvn -Dhadoop.version=2.3.0 -DskipTests clean package -pl core

Thanks
Best Regards
On Mon, Apr 20, 2015 at 8:10 PM, Shiyao Ma <[email protected]> wrote:

Hi.

My usage is only about Spark core and HDFS, so no Spark SQL,
MLlib, or other components are involved.


I saw the hint on
http://spark.apache.org/docs/latest/building-spark.html, with a sample
like:
build/sbt -Pyarn -Phadoop-2.3 assembly  (what's the -P for?)


Fundamentally, I'd like sbt to compile and package only the core
and the Hadoop parts.

Meanwhile, I would appreciate it if you could tell me which
Scala file controls the logic of "-Pyarn", so that I can dig into
the build source and have finer control.



Thanks.

--

I am a cat. My homepage is http://introo.me.

