Hi Jungtaek Lim,
Thanks for the response. So we have no option but to wait until Hadoop
officially supports Java 11.
Thanks and regards,
kaki mahesh raja
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
Hmm... I read the page again, and it looks like we are in a gray area.
The Hadoop community supports JDK 11 starting from Hadoop 3.3, while we
haven't reached the point of adding Hadoop 3.3 as a dependency. It may not
cause a real issue at runtime with Hadoop 3.x, as Spark only uses a part of
Hadoop (the client layer).
Hadoop 2.x doesn't support JDK 11. See Hadoop Java version compatibility
with JDK:
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions
That said, you'll need to use Spark 3.x with the Hadoop 3.1 profile to make
Spark work with JDK 11.
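For illustration, a minimal sketch of the kind of build invocation meant
here; the profile names below (-Phadoop-3.2, -Phive, -Phive-thriftserver)
are an assumption and vary by Spark version, so check the "Building Spark"
docs for your release:

    # Build Spark against a Hadoop 3.x profile, with the Thrift server included.
    # Run this with JAVA_HOME pointing at a JDK 11 install.
    ./build/mvn -Phadoop-3.2 -Phive -Phive-thriftserver -DskipTests clean package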
On Tue, Mar 16, 2021 at 10:06 PM Sean Owen wrote:
That looks like you didn't compile with Java 11 actually. How did you try
to do so?
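For what it's worth, a minimal sketch of one way to check that both the
build and the runtime actually use JDK 11 (the JDK path is an assumption;
adjust it to your install):

    # Point the build at JDK 11 before compiling (hypothetical path).
    export JAVA_HOME=/path/to/jdk-11
    ./build/mvn -version        # Maven should report "Java version: 11.x" here

    # After building, the spark-shell welcome banner shows the JVM in use,
    # e.g. "Using Scala version 2.12.x (OpenJDK 64-Bit Server VM, Java 11.0.x)".
    ./bin/spark-shell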
On Tue, Mar 16, 2021, 7:50 AM kaki mahesh raja wrote:
> Hi All,
>
> We have compiled Spark with Java 11 ("11.0.9.1"), and when testing the
> Thrift server we are seeing that an insert query from the operator using
> beeline ...