...en.apache.org/maven2/org/apache/hadoop/hadoop-yarn-server/2.2.0/hadoop-yarn-server-2.2.0.jar

-Ken

From: Shivaram Venkataraman [mailto:shiva...@eecs.berkeley.edu]
Sent: Friday, April 25, 2014 4:31 PM
To: user@spark.apache.org
Subject: Re: Build times for Spark

Are you by any chance building this
No, I haven’t done any config for SBT. Is there somewhere you might be able to
point me toward for how to do that?
-Ken
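One common way to do this is through the SBT_OPTS environment variable, which most sbt launcher scripts pass to the JVM. This is a sketch, not from this thread; the flag values are illustrative and the exact variable your launcher honors is an assumption, so check your local sbt script:

```shell
# Raise the sbt JVM's heap and code-cache limits before building.
# The values below are illustrative, not tuned recommendations.
export SBT_OPTS="-Xmx4g -XX:ReservedCodeCacheSize=512m"
echo "$SBT_OPTS"
# Then run the build as usual, e.g.: sbt/sbt assembly
```

If the export has no visible effect, many sbt launchers also accept `-J`-prefixed JVM flags on the command line (e.g. `-J-Xmx4g`).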
From: Josh Rosen [mailto:rosenvi...@gmail.com]
Sent: Friday, April 25, 2014 3:27 PM
To: user@spark.apache.org
Subject: Re: Build times for Spark
Did you configure SBT to use the extra memory?
On Fri, Apr 25, 2014 at 12:53 PM, Williams, Ken wrote:

> I’ve cloned the github repo and I’m building Spark on a pretty beefy
> machine (24 CPUs, 78GB of RAM) and it takes a pretty long time.
>
> For instance, today I did a ‘git pull’ for the
Are you using an SSD? We found that the bottleneck is not computational but
disk I/O. During assembly, sbt moves lots of class files and jars and
packages them into a single flat jar. I can do the assembly on my MacBook in
10 minutes; before upgrading to an SSD, it took 30-40 minutes.
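A quick way to test whether disk throughput is the limiter is to time a large sequential write on the same filesystem the build uses. This is a sketch; the path and file size are arbitrary, and `conv=fdatasync` is a GNU dd option (omit it on macOS):

```shell
# Write a 256 MB file and let dd report the achieved throughput.
# Point TESTFILE at the filesystem your build directory lives on.
TESTFILE=/tmp/sbt-io-test
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync
rm -f "$TESTFILE"
```

If the reported rate is tens of MB/s rather than hundreds, the jar shuffling that assembly does will likely dominate build time, which matches the SSD-vs-spinning-disk difference described above.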
Sincerely,
DB Tsai
---