Hi,

Spark version 2.3.3 on Google Dataproc


I am trying to use "JDBC To Other Databases"
(https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html) to read
from a Hive table on-prem using Spark in the cloud.


This works OK without a Try enclosure.


import spark.implicits._
import scala.util.{Try, Success, Failure}

val HiveDF = Try(spark.read.
    format("jdbc").
    option("url", jdbcUrl).
    option("dbtable", HiveSchema + "." + HiveTable).
    option("user", HybridServerUserName).
    option("password", HybridServerPassword).
    load()) match {
      case Success(HiveDF) => HiveDF
      case Failure(e) => throw new Exception("Error encountered reading Hive table")
    }

However, with Try I am getting the following error:


<console>:66: error: recursive value HiveDF needs type
                     case Success(HiveDF) => HiveDF

Wondering what is causing this. I have used this pattern before (say, when
reading from an XML file) and it worked then.
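For comparison, a minimal Spark-free sketch of the same match shape that does compile for me, with a plain Try("rows") standing in for the JDBC read and a lowercase pattern binder (`df` is a placeholder name of my own):

```scala
import scala.util.{Try, Success, Failure}

// Try("rows") stands in for the spark.read...load() call.
// The difference from my failing code is the lowercase binder `df`:
// a capitalised name like HiveDF in a pattern is treated as a
// reference to an existing stable identifier rather than a fresh
// binding, so `case Success(HiveDF)` inside the definition of
// `val HiveDF` refers back to the val being defined.
val hiveDF = Try("rows") match {
  case Success(df) => df
  case Failure(e)  => throw new Exception("Error encountered reading Hive table", e)
}
```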

Thanks





