cloud-fan commented on code in PR #52334:
URL: https://github.com/apache/spark/pull/52334#discussion_r2429735625


##########
sql/core/src/main/scala/org/apache/spark/sql/classic/SparkSession.scala:
##########
@@ -448,16 +448,24 @@ class SparkSession private(
  private[sql] def sql(sqlText: String, args: Array[_], tracker: QueryPlanningTracker): DataFrame =
    withActive {
      val plan = tracker.measurePhase(QueryPlanningTracker.PARSING) {
-        val parsedPlan = sessionState.sqlParser.parsePlan(sqlText)
-        if (args.nonEmpty) {
-          // Check for SQL scripting with positional parameters before creating parameterized query
-          if (parsedPlan.isInstanceOf[CompoundBody]) {
+        val parsedPlan = if (args.nonEmpty) {
+          // Use parameter context directly for parsing
+          val paramContext = PositionalParameterContext(args.map(lit(_).expr).toSeq)

Review Comment:
   To simplify the pre-parser, shall we use a fake plan to resolve and constant-fold the parameter values first? Then we don't need `LiteralToSqlConverter`, as the parameter value is always a `Literal` and we can simply use `Literal#sql`.
   
   We can build a fake plan `Project(parameters, OneRowRelation)`, analyze and optimize it, and read back the project list. The resulting expressions must all be literals; otherwise we should report an error for the parameters.
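   
   A minimal, self-contained sketch of the idea, using toy stand-ins for Catalyst's `Expression`/`Literal` classes rather than Spark's real API (the actual implementation would run the analyzer and optimizer on the fake `Project` plan):
   
   ```scala
   // Toy sketch: constant-fold parameter expressions as the optimizer would,
   // then require that every parameter folds down to a Literal.
   sealed trait Expression
   case class Literal(value: Any) extends Expression {
     // Mirrors the role of Literal#sql: render the folded value as SQL text.
     def sql: String = value match {
       case s: String => s"'$s'"
       case other     => other.toString
     }
   }
   case class Add(left: Expression, right: Expression) extends Expression
   
   // Stand-in for the analyze + optimize pass over the fake plan:
   // fold whatever can be folded to constants.
   def constantFold(e: Expression): Expression = e match {
     case Add(l, r) =>
       (constantFold(l), constantFold(r)) match {
         case (Literal(a: Int), Literal(b: Int)) => Literal(a + b)
         case (fl, fr)                           => Add(fl, fr)
       }
     case other => other
   }
   
   // The "project list" check: every parameter must fold to a Literal,
   // otherwise we report an error for that parameter.
   def foldParameters(params: Seq[Expression]): Seq[Literal] =
     params.map { p =>
       constantFold(p) match {
         case lit: Literal => lit
         case other => sys.error(s"Parameter did not fold to a literal: $other")
       }
     }
   
   val folded = foldParameters(Seq(Literal(1), Add(Literal(2), Literal(3)), Literal("x")))
   println(folded.map(_.sql).mkString(", ")) // 1, 5, 'x'
   ```
   
   With this shape, the pre-parser never needs to convert arbitrary expressions to SQL text: only already-folded literals reach the rendering step.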



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

