Running Hadoop and HDFS on an unsupported JVM runtime sounds a little
adventurous. But as long as Spark can run in a separate Java 8 runtime, it's
all good. I think having lambdas and type inference is huge when writing
these jobs and using Scala (paying the price of complexity, poor tooling,
etc.).
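For what it's worth, here is a minimal sketch (plain java.util.stream, not Spark's API, with a made-up word-count task) of what Java 8 lambdas and type inference buy over the anonymous-class style you'd write on Java 7:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList("to be", "or not", "to be");

        // Java 8: a one-line lambda; the element type of the stream is inferred.
        long words = lines.stream()
                .mapToInt(line -> line.split(" ").length)
                .sum();
        System.out.println(words); // 6

        // Pre-lambda style: the same per-line logic as a verbose anonymous class.
        Function<String, Integer> wordCount = new Function<String, Integer>() {
            @Override
            public Integer apply(String line) {
                return line.split(" ").length;
            }
        };
        System.out.println(wordCount.apply("to be")); // 2
    }
}
```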
Cloudera customers will need to put pressure on them to support Java 8.
They only officially supported Java 7 when Oracle stopped supporting Java 6.
dean
On Wed, May 7, 2014 at 5:05 AM, Matei Zaharia wrote:
Java 8 support is a feature in Spark, but vendors need to decide for themselves
when they’d like to support Java 8 commercially. You can still run Spark on Java 7
or 6 without taking advantage of the new features (indeed our builds are always
against Java 6).
Matei
On May 6, 2014, at 8:59 AM, Ian
I think the distinction there might be that they never said they ran that code
under CDH5, just that Spark supports it and Spark runs under CDH5. Not that
you can use these features while running under CDH5.
They could use Mesos or the standalone scheduler to run them.
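For example (a sketch; the host names, ports, class name, and jar are placeholders), the same job can be pointed at either cluster manager just by changing the master URL passed to spark-submit:

```shell
# Standalone scheduler (hypothetical master host/port):
spark-submit --master spark://master-host:7077 \
  --class com.example.MyJob my-job.jar

# Mesos (hypothetical ZooKeeper address):
spark-submit --master mesos://zk://zk-host:2181/mesos \
  --class com.example.MyJob my-job.jar
```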
On Tue, May 6, 2014 at 6:16 AM,
Hi Kristoffer,
You're correct that CDH5 only supports up to Java 7 at the moment. But
YARN apps do not run in the same JVM as YARN itself (and I believe MR1
doesn't either), so it might be possible to pass arguments in a way
that tells YARN to launch the application master / executors with the
Java 8 JVM.
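One way that could look (a sketch, not tested against CDH5: the paths, class name, and jar are hypothetical, it assumes a Java 8 JDK is already installed at the same path on every NodeManager host, and it assumes a Spark version that honors the spark.yarn.appMasterEnv.* and spark.executorEnv.* settings) is to point the YARN containers at a different JAVA_HOME:

```shell
# Hypothetical Java 8 install path, present on all NodeManager hosts.
JAVA8_HOME=/usr/lib/jvm/java-8-oracle

spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.appMasterEnv.JAVA_HOME="$JAVA8_HOME" \
  --conf spark.executorEnv.JAVA_HOME="$JAVA8_HOME" \
  --class com.example.MyJob \
  my-job.jar
```

The cluster's own services keep running on their supported Java 7 JVM; only the containers launched for this application pick up the Java 8 runtime.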
Hi
I just read an article [1] about Spark, CDH5 and Java 8 but did not get
exactly how Spark can run Java 8 on a YARN cluster at runtime. Is Spark
using a separate JVM that runs on the data nodes, or is it reusing the YARN
JVM runtime somehow, like Hadoop 1?
CDH5 only supports Java 7 [2] as far as I know.