Re: JAVA_HOME problem with upgrade to 1.3.0

2015-03-23 Thread Williams, Ken
> From: Williams, Ken <ken.willi...@windlogics.com>
> Date: Thursday, March 19, 2015 at 10:59 AM
> To: Spark list <user@spark.apache.org>
> Subject: JAVA_HOME problem with upgrade to 1.3.0
>
> […]
> Finally, I go and check the YARN app master’s web interface (since the job is

Re: JAVA_HOME problem with upgrade to 1.3.0

2015-03-19 Thread Williams, Ken
> From: Ted Yu <yuzhih...@gmail.com>
> Date: Thursday, March 19, 2015 at 11:05 AM
>
> JAVA_HOME, an environment variable, should be defined on the node where
> appattempt_1420225286501_4699_02 ran.

Has this behavior changed in 1.3.0 since 1.2.1 though? Using 1.2.1 and making no othe
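One way to pin JAVA_HOME for the YARN containers, as a sketch: Spark on YARN supports per-application environment variables through `spark.yarn.appMasterEnv.*` and `spark.executorEnv.*`. The JDK path below is an assumption; substitute whatever is installed on your cluster nodes.

```
# conf/spark-defaults.conf  (JDK path is illustrative)
spark.yarn.appMasterEnv.JAVA_HOME   /usr/java/default
spark.executorEnv.JAVA_HOME         /usr/java/default
```

This sets the variable for the application master and the executors without needing to touch each node's global environment.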

JAVA_HOME problem with upgrade to 1.3.0

2015-03-19 Thread Williams, Ken
I’m trying to upgrade a Spark project, written in Scala, from Spark 1.2.1 to 1.3.0, so I changed my `build.sbt` like so:

-libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"
+libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0" % "provided"

the
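For anyone making the same change, a minimal sketch of the relevant `build.sbt` lines (the version constant is my own convention, not from the post; any other Spark modules in use should be bumped in lockstep to avoid mixed-version classpaths):

```scala
// build.sbt — keep every Spark artifact on the same version
val sparkVersion = "1.3.0"

libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
```

After editing, re-run `sbt update` (or just a clean build) so the new artifacts are resolved.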

RE: Build times for Spark

2014-04-25 Thread Williams, Ken
nks Shivaram

On Fri, Apr 25, 2014 at 2:09 PM, Akhil Das <ak...@sigmoidanalytics.com> wrote:

You can always increase the sbt memory by setting

export JAVA_OPTS="-Xmx10g"

Thanks
Best Regards

On Sat, Apr 26, 2014 at 2:17 AM, Williams, Ken <ken.willi...@windlogics
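A sketch of the suggestion above (the 10g heap is just the value from the thread; pick whatever your machine can spare):

```shell
# Raise the heap for the JVM that the sbt launcher starts.
# The sbt/sbt script in the Spark tree picks JAVA_OPTS up from the environment.
export JAVA_OPTS="-Xmx10g"
echo "$JAVA_OPTS"   # confirm the setting before running sbt/sbt assembly
```

This only affects shells where the variable is exported, so it's easy to experiment with different sizes per build.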

RE: Build times for Spark

2014-04-25 Thread Williams, Ken
the extra memory?

On Fri, Apr 25, 2014 at 12:53 PM, Williams, Ken <ken.willi...@windlogics.com> wrote:

I’ve cloned the github repo and I’m building Spark on a pretty beefy machine (24 CPUs, 78GB of RAM) and it takes a pretty long time. For instance, today I did a ‘git pull’ for the

Build times for Spark

2014-04-25 Thread Williams, Ken
I've cloned the github repo and I'm building Spark on a pretty beefy machine (24 CPUs, 78GB of RAM) and it takes a pretty long time. For instance, today I did a 'git pull' for the first time in a week or two, and then doing 'sbt/sbt assembly' took 43 minutes of wallclock time (88 minutes of CPU

RE: Problem connecting to HDFS in Spark shell

2014-04-21 Thread Williams, Ken
> -----Original Message-----
> From: Marcelo Vanzin [mailto:van...@cloudera.com]
>
> Hi Ken,
>
> On Mon, Apr 21, 2014 at 1:39 PM, Williams, Ken wrote:
> > I haven't figured out how to let the hostname default to the host
> > mentioned in our /etc/hadoop/conf/hdfs-si
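A common way to get the default namenode picked up, as a sketch: point Spark at the Hadoop client configuration directory. The default filesystem is normally `fs.defaultFS` (or `fs.default.name` on older Hadoop) in `core-site.xml`; the `/etc/hadoop/conf` path below matches the layout mentioned in the thread but may differ on your install.

```shell
# Point the Spark shell at the Hadoop client config so bare hdfs:// URIs
# (and plain paths) resolve against the cluster's default filesystem.
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "$HADOOP_CONF_DIR"
```

With this set before launching the shell, `sc.textFile("/user/...")` resolves against the configured namenode, the same way the Hadoop command-line tools do.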

RE: Problem connecting to HDFS in Spark shell

2014-04-21 Thread Williams, Ken
e the Hadoop command-line tools do, but that's not so important.

-Ken

> -----Original Message-----
> From: Williams, Ken [mailto:ken.willi...@windlogics.com]
> Sent: Monday, April 21, 2014 2:04 PM
> To: Spark list
> Subject: Problem connecting to HDFS in Spark shell
>

Problem connecting to HDFS in Spark shell

2014-04-21 Thread Williams, Ken
I'm trying to get my feet wet with Spark. I've done some simple stuff in the shell in standalone mode, and now I'm trying to connect to HDFS resources, but I'm running into a problem. I synced to git's master branch (c399baa - "SPARK-1456 Remove view bounds on Ordered in favor of a context bou
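For reference, a spark-shell sketch of reading an HDFS resource with a fully qualified URI (hostname, port, and path are placeholders, not values from the thread; the port is typically the namenode's `fs.defaultFS` port, often 8020):

```scala
// In spark-shell, `sc` is the pre-built SparkContext.
// Replace host, port, and path with your cluster's values.
val lines = sc.textFile("hdfs://namenode.example.com:8020/user/ken/input.txt")
println(lines.count())
```

If the URI's host/port don't match the cluster configuration, this typically fails at action time (the `count()`) with a connection or `UnknownHostException` error rather than when `textFile` is called.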