Hi,

This JDBC connection works against an Oracle table with primary key ID:

val s = HiveContext.read.format("jdbc").options(
  Map("url" -> _ORACLEserver,
      "dbtable" -> "(SELECT ID, CLUSTERED, SCATTERED, RANDOMISED, RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
      "partitionColumn" -> "ID",
      "lowerBound" -> "1",
      "upperBound" -> "100000000",
      "numPartitions" -> "4",
      "user" -> _username,
      "password" -> _password)).load

Note that both lowerBound and upperBound for the ID column are fixed. However, I tried to work out upperBound dynamically as follows:

//
// Get maxID first
//
scala> val maxID = HiveContext.read.format("jdbc").options(
     |   Map("url" -> _ORACLEserver,
     |       "dbtable" -> "(SELECT MAX(ID) AS maxID FROM scratchpad.dummy)",
     |       "user" -> _username,
     |       "password" -> _password)).load().collect.apply(0).getDecimal(0)
maxID: java.math.BigDecimal = 100000000.0000000000

and this fails:

scala> val s = HiveContext.read.format("jdbc").options(
     |   Map("url" -> _ORACLEserver,
     |       "dbtable" -> "(SELECT ID, CLUSTERED, SCATTERED, RANDOMISED, RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
     |       "partitionColumn" -> "ID",
     |       "lowerBound" -> "1",
     |       "upperBound" -> "maxID",
     |       "numPartitions" -> "4",
     |       "user" -> _username,
     |       "password" -> _password)).load
java.lang.NumberFormatException: For input string: "maxID"
  at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
  at java.lang.Long.parseLong(Long.java:589)
  at java.lang.Long.parseLong(Long.java:631)
  at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:276)
  at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:42)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 56 elided

Any ideas how this can be made to work?

Thanks
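For what it's worth, the NumberFormatException in the trace shows the likely cause: `"upperBound" -> "maxID"` passes the four-character literal string "maxID" to Spark, which then tries `Long.parseLong("maxID")` inside JdbcRelationProvider. A sketch of one way to fix it, assuming maxID was fetched as a java.math.BigDecimal exactly as in the snippet above: convert the value to a whole-number string and put that string into the options map.

```scala
// Minimal sketch. The BigDecimal literal below stands in for the value
// returned by the MAX(ID) query; the point is the conversion, not the fetch.
val maxID: java.math.BigDecimal = new java.math.BigDecimal("100000000.0000000000")

// Spark parses lowerBound/upperBound with Long.parseLong, so the option
// value must be a plain integer string with no fractional part.
// BigDecimal.toBigInteger discards the fraction.
val upperBound: String = maxID.toBigInteger.toString

println(upperBound)  // prints 100000000
```

The options map would then use `"upperBound" -> upperBound` (or `maxID.toBigInteger.toString` inline) instead of the quoted name "maxID".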