What is confusing is that:

spark-0.9.2-bin-hadoop1.tgz
<http://s3.amazonaws.com/spark-related-packages/spark-0.9.2-bin-hadoop1.tgz> =>
contains source code and sbt
spark-1.0.1-bin-hadoop1.tgz
<http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-hadoop1.tgz> =>
does not

According to their names, both are binary packages.

Every time I need a cluster with the source code, I have to pass a
git commit hash to the spark-ec2 script.
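For reference, the workaround looks roughly like this (a sketch only: the
--spark-version option of the spark-ec2 script accepts a git hash in place of
a release version, but double-check the flag names against your copy of the
script; keypair, key file, and hash are placeholders):

```shell
# Launch a cluster built from a specific commit instead of a prebuilt release.
# <keypair>, <key-file>, <commit-hash> are placeholders to fill in.
./spark-ec2 -k <keypair> -i <key-file> -s 2 \
    --spark-version=<commit-hash> launch my-cluster
```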

Is this intentional?

Thank you.



2014-07-28 23:30 GMT+02:00 redocpot <julien19890...@gmail.com>:

> Thank you for your reply.
>
> I need sbt for packaging my project and then submit it.
>
> Could you tell me how to run a spark project on 1.0 AMI without sbt?
>
> I don't understand why the 1.0 AMI only contains the prebuilt packages. I
> don't
> think it makes sense, since sbt is essential.
>
> The user has to download sbt or clone the GitHub repo, whereas on the 0.9
> AMI, sbt is
> pre-installed.
>
> A command like:
> $ sbt/sbt package run
> could do the job.
>
> Thanks. =)
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/sbt-directory-missed-tp10783p10812.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>



-- 
REN Hao

Data Engineer @ ClaraVista

Paris, France

Tel:  +33 06 14 54 57 24
