IIRC Scala 2.11 doesn't work on Java 9, and full support will come in 2.13.
I think that may be the biggest gating factor for Spark. At least, we can
get going on 2.12 support now that 2.10 support is dropped.
On Fri, Jul 14, 2017 at 5:23 PM Matei Zaharia wrote:
FYI, the JDK group at Oracle is reaching out to see whether anyone
wants to test with JDK 9 and give them feedback. Just contact them
directly if you'd like to.
-- Forwarded message --
From: dalibor topic
Date: Wed, Jul 12, 2017 at 3:16 AM
Subject: Testing Apache Spark with JDK 9
Try to replace your UDF with Spark built-in expressions; it should be as simple
as `$"x" * (lit(1) - $"y")`.
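For illustration, a minimal self-contained sketch of that replacement; the
column names x and y and the sample data are made up for the example, not
taken from the original job:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, udf}

object BuiltinVsUdf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("builtin-vs-udf").getOrCreate()
    import spark.implicits._

    // Hypothetical input with the two columns used in the expression.
    val df = Seq((2.0, 0.25), (3.0, 0.5)).toDF("x", "y")

    // UDF version: a black box that Catalyst cannot optimize or code-generate.
    val f = udf((x: Double, y: Double) => x * (1 - y))
    df.select(f($"x", $"y").as("res")).show()

    // Built-in expression version: the same result, visible to the optimizer.
    df.select(($"x" * (lit(1) - $"y")).as("res")).show()

    spark.stop()
  }
}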
> On 14 Jul 2017, at 5:46 PM, 163 wrote:
>
> I modified TPC-H Query 5 to the DataFrame API:
>
> val forders = spark.read
>   .parquet("hdfs://dell127:20500/SparkParquetDoubleTimestamp100G/orders")
>   .filter("o_orderdate < '1995-01-01' and o_orderdate >= '1994-01-01'")
>   .select("o_custkey", "o_orderkey")
> val flineitem = spark.read.parquet("hdfs://dell127:20500/Spa
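One detail worth noting: in the string form of filter, the date literals need
single quotes, otherwise 1995-01-01 is parsed as the arithmetic expression
1995 - 1 - 1. The same predicate can also be written with Column expressions;
a minimal sketch reusing the path and column names quoted above:

import org.apache.spark.sql.functions.col

// Column-expression form of the date-range filter; literals are lifted
// automatically, so no quoting pitfall applies here.
val forders = spark.read
  .parquet("hdfs://dell127:20500/SparkParquetDoubleTimestamp100G/orders")
  .filter(col("o_orderdate") >= "1994-01-01" && col("o_orderdate") < "1995-01-01")
  .select("o_custkey", "o_orderkey")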
A possible workaround is to add the rand column to tbl1 with a projection
before the join:
SELECT a.col1
FROM (
  SELECT col1,
         CASE
           WHEN col2 IS NULL THEN cast(rand(9) * 1000 - 99 AS string)
           ELSE col2
         END AS col2
  FROM tbl1) a
LEFT OUTER
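For completeness, the same workaround expressed in the DataFrame API; a
minimal self-contained sketch with made-up tbl1/tbl2 data, since the real
schema is not in the thread. The salted random keys match nothing in tbl2,
so NULL rows still produce no match in the left outer join, but they no
longer all hash to a single partition:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, rand, when}

object SaltNullKeys {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("salt-null-keys").getOrCreate()
    import spark.implicits._

    // Hypothetical data: many NULL join keys in tbl1 cause the skew.
    val tbl1 = Seq(("a", "k1"), ("b", null), ("c", null)).toDF("col1", "col2")
    val tbl2 = Seq(("k1", 1)).toDF("col2", "v")

    // Project NULL keys to random strings, mirroring the CASE expression above.
    val salted = tbl1.withColumn(
      "col2",
      when(col("col2").isNull, (rand(9) * 1000 - 99).cast("string"))
        .otherwise(col("col2")))

    // NULL rows still match nothing, but are now spread across partitions.
    salted.join(tbl2, Seq("col2"), "left_outer").show()

    spark.stop()
  }
}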