brkyvz commented on code in PR #49238:
URL: https://github.com/apache/spark/pull/49238#discussion_r1892870362


##########
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala:
##########
@@ -162,12 +130,7 @@ class DataFrameReader private[sql](sparkSession: SparkSession)
     assertNoSpecifiedSchema("jdbc")
     // connectionProperties should override settings in extraOptions.
     val params = extraOptions ++ connectionProperties.asScala
-    val options = new JDBCOptions(url, table, params)
-    val parts: Array[Partition] = predicates.zipWithIndex.map { case (part, i) =>
-      JDBCPartition(part, i) : Partition
-    }
-    val relation = JDBCRelation(parts, options)(sparkSession)
-    sparkSession.baseRelationToDataFrame(relation)
+    Dataset.ofRows(sparkSession, UnresolvedJDBCRelation(url, table, predicates, params))

Review Comment:
   Having this available now can help us unify it in SQL afterwards :) This seems to be the only edge case; it appears to be a new API added since I last looked at Spark. Do you want me to leave it as is and just migrate the file-based and generic data sources?
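
   For context, a minimal, self-contained sketch of what the removed lines did: map each predicate string to an indexed JDBC partition. The `Partition` and `JDBCPartition` definitions below are simplified stand-ins for illustration, not Spark's real classes.

   ```scala
   // Simplified stand-in for org.apache.spark.Partition (assumption, not the real trait).
   trait Partition { def index: Int }

   // Simplified stand-in for JDBCPartition: one WHERE-clause predicate per partition.
   case class JDBCPartition(whereClause: String, idx: Int) extends Partition {
     def index: Int = idx
   }

   object PartitionSketch {
     // Mirrors the removed code: each predicate becomes one partition,
     // with its position in the array used as the partition index.
     def toPartitions(predicates: Array[String]): Array[Partition] =
       predicates.zipWithIndex.map { case (pred, i) =>
         JDBCPartition(pred, i): Partition
       }
   }
   ```

   The new `UnresolvedJDBCRelation` path defers this construction to analysis time instead of building the relation eagerly in `DataFrameReader`.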



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-h...@spark.apache.org
