Jason, in case you need a pointer on how to run Spark with a version of Java
different from the version used by the Hadoop processes, as indicated by
Dongjoon, this is an example of what we do on our Hadoop clusters:
https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_Set_J
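For reference, a minimal sketch of that approach on YARN (the JDK path below
is hypothetical; point it at wherever JDK 17 is installed on your nodes):

    # Executors and the YARN application master pick up their own JAVA_HOME,
    # independent of the Java version the Hadoop daemons run on.
    spark-submit \
      --master yarn \
      --conf spark.executorEnv.JAVA_HOME=/usr/lib/jvm/jdk-17 \
      --conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/lib/jvm/jdk-17 \
      ...

In client mode the driver JVM is launched locally, so you would also export
JAVA_HOME to the JDK 17 install in your shell before invoking spark-submit.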
Please simply try Apache Spark 3.3+ (SPARK-33772) with Java 17 on your
cluster, Jason.
I believe you can set up your Spark 3.3+ jobs to run with Java 17 while
your cluster (DataNode/NameNode/ResourceManager/NodeManager) is still
sitting on Java 8.
Dongjoon.
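One quick way to verify that split (a sketch, assuming the JAVA_HOME settings
above are in place) is to ask the executors which JVM they are actually
running, e.g. from spark-shell:

    // Each task reports the JVM version of its executor; with the settings
    // above this should print 17.x while the Hadoop daemons stay on Java 8.
    sc.parallelize(1 to 100)
      .map(_ => System.getProperty("java.version"))
      .distinct()
      .collect()
      .foreach(println)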