Thanks Sean. I guess I was being pedantic. In any case, if the source table
does not exist, then spark.read is going to fall over one way or another!
On Fri, 2 Oct 2020 at 15:55, Sean Owen wrote:
It would be quite trivial. None of that affects any of the Spark execution.
It doesn't seem like it helps though - you are just swallowing the cause.
Just let it fly?
On Fri, Oct 2, 2020 at 9:34 AM Mich Talebzadeh wrote:
As a side question, consider the following JDBC read:
val lowerBound = 1L
val upperBound = 100L
val numPartitions = 10
val partitionColumn = "id"
val HiveDF = Try(spark.read.
format("jdbc").
option("url", jdbcUrl).
option("driver", HybridServerDriverName).
// the remaining options were truncated in the original message;
// reconstructed here with the variables defined above, and a
// placeholder tableName for the dbtable value, which was not shown
option("dbtable", tableName).
option("partitionColumn", partitionColumn).
option("lowerBound", lowerBound).
option("upperBound", upperBound).
option("numPartitions", numPartitions).
load())
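A minimal sketch of Sean's point about swallowing the cause, using a stand-in function (readTable is hypothetical, not from the original thread): wrapping a call in Try converts the exception into a Failure value, and the underlying cause is only visible if you explicitly surface it; letting the exception fly preserves the full stack trace for free.

```scala
import scala.util.{Try, Success, Failure}

object TrySketch {
  // Illustrative stand-in for a JDBC read against a missing table
  def readTable(): String =
    throw new IllegalArgumentException("table not found")

  def main(args: Array[String]): Unit = {
    // Wrapped: the exception becomes a Failure; nothing propagates
    // unless we inspect it ourselves.
    Try(readTable()) match {
      case Success(df) => println(s"loaded: $df")
      case Failure(e)  => println(s"read failed: ${e.getMessage}")
    }

    // Unwrapped ("just let it fly"): the caller sees the original
    // exception and its stack trace directly.
    // readTable()  // would throw IllegalArgumentException here
  }
}
```

The trade-off is that Try is useful when the caller genuinely wants to handle the failure as a value; if it is only going to rethrow or log, letting the exception propagate keeps the cause intact.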