> From: Williams, Ken <ken.willi...@windlogics.com>
> Date: Thursday, March 19, 2015 at 10:59 AM
> To: Spark list <user@spark.apache.org>
> Subject: JAVA_HOME problem with upgrade to 1.3.0
>
> […]
> Finally, I go and check the YARN app master’s web interface (since the job is […]
>
> From: Ted Yu <yuzhih...@gmail.com>
> Date: Thursday, March 19, 2015 at 11:05 AM
>
> JAVA_HOME, an environment variable, should be defined on the node where
> appattempt_1420225286501_4699_02 ran.
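>
> For example, instead of relying on each node's shell profile, it can be
> supplied per-application through Spark's YARN environment properties in
> spark-defaults.conf (the JDK path below is only an illustration; use
> whatever is installed on your cluster):
>
> # spark-defaults.conf -- JDK path is illustrative
> spark.yarn.appMasterEnv.JAVA_HOME  /usr/java/latest
> spark.executorEnv.JAVA_HOME        /usr/java/latest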
Has this behavior changed in 1.3.0 since 1.2.1 though? Using 1.2.1 and making
no other changes […]
I’m trying to upgrade a Spark project, written in Scala, from Spark 1.2.1 to
1.3.0, so I changed my `build.sbt` like so:
-libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"
+libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0" % "provided"
[…]
Thanks
Shivaram
On Fri, Apr 25, 2014 at 2:09 PM, Akhil Das
<ak...@sigmoidanalytics.com> wrote:
You can always increase the sbt memory by setting
export JAVA_OPTS="-Xmx10g"
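The sbt/sbt script passes JAVA_OPTS through to the JVM it launches, so a full
build with the bigger heap would look something like this (10g is just an
example; size it to your machine):

export JAVA_OPTS="-Xmx10g"   # heap for the JVM that runs sbt
sbt/sbt assembly             # rebuild the Spark assembly with the larger heap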
Thanks
Best Regards
On Sat, Apr 26, 2014 at 2:17 AM, Williams, Ken
<ken.willi...@windlogics.com> wrote:
[…] the extra memory?
On Fri, Apr 25, 2014 at 12:53 PM, Williams, Ken
<ken.willi...@windlogics.com> wrote:
I've cloned the github repo and I'm building Spark on a pretty beefy machine
(24 CPUs, 78GB of RAM) and it takes a pretty long time.
For instance, today I did a 'git pull' for the first time in a week or two, and
then doing 'sbt/sbt assembly' took 43 minutes of wallclock time (88 minutes of
CPU time). […]
> -----Original Message-----
> From: Marcelo Vanzin [mailto:van...@cloudera.com]
> Hi Ken,
>
> On Mon, Apr 21, 2014 at 1:39 PM, Williams, Ken
> wrote:
> > I haven't figured out how to let the hostname default to the host
> > mentioned in our /etc/hadoop/conf/hdfs-site.xml […]
[…] like the Hadoop command-line tools do, but
that's not so important.
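For reference, this is the kind of thing I mean -- the Hadoop CLI reads the
default filesystem from the client config, and I gather Spark will do the same
if it can find that config (the path below is our local layout; adjust as
needed):

export HADOOP_CONF_DIR=/etc/hadoop/conf   # Spark picks up fs.defaultFS from core-site.xml
bin/spark-shell                           # paths without a scheme now resolve against HDFS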
-Ken
> -----Original Message-----
> From: Williams, Ken [mailto:ken.willi...@windlogics.com]
> Sent: Monday, April 21, 2014 2:04 PM
> To: Spark list
> Subject: Problem connecting to HDFS in Spark shell
>
>
I'm trying to get my feet wet with Spark. I've done some simple stuff in the
shell in standalone mode, and now I'm trying to connect to HDFS resources, but
I'm running into a problem.
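A minimal example of the kind of access I'm attempting (the namenode host,
port, and path are placeholders for our real ones):

// in spark-shell: count the lines of a file addressed by an explicit HDFS URI
val lines = sc.textFile("hdfs://namenode.example.com:8020/user/ken/some-file.txt")
println(lines.count())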
I synced to git's master branch (c399baa - "SPARK-1456 Remove view bounds on
Ordered in favor of a context bound") […]