miland-db commented on code in PR #49427:
URL: https://github.com/apache/spark/pull/49427#discussion_r1926044733


##########
sql/core/src/main/scala/org/apache/spark/sql/scripting/SqlScriptingInterpreter.scala:
##########
@@ -63,6 +67,79 @@ case class SqlScriptingInterpreter(session: SparkSession) {
       case _ => None
     }
 
+  /**
+   * Transform [[CompoundBody]] into [[CompoundBodyExec]].
+   *
+   * @param compoundBody
+   *   CompoundBody to be transformed into CompoundBodyExec.
+   * @param args
+   *   A map of parameter names to SQL literal expressions.
+   * @param context
+   *   SqlScriptingExecutionContext of the current script execution.
+   * @return
+   *   Executable version of the CompoundBody.
+   */
+  private def transformBodyIntoExec(
+      compoundBody: CompoundBody,
+      args: Map[String, Expression],
+      context: SqlScriptingExecutionContext): CompoundBodyExec = {
+    // Add drop variables to the end of the body.
+    val variables = compoundBody.collection.flatMap {
+      case st: SingleStatement => getDeclareVarNameFromPlan(st.parsedPlan)
+      case _ => None
+    }
+    val dropVariables = variables
+      .map(varName => DropVariable(varName, ifExists = true))
+      .map(new SingleStatementExec(_, Origin(), args, isInternal = true, context))
+      .reverse
+
+    // Create a map of conditions (SqlStates) to their respective handlers.
+    val conditionHandlerMap = HashMap[String, ErrorHandlerExec]()
+    compoundBody.handlers.foreach(handler => {
+      val handlerBodyExec =
+        transformBodyIntoExec(
+          handler.body,
+          args,
+          context)
+
+      // Execution node of handler.
+      val handlerScopeLabel = if (handler.handlerType == HandlerType.EXIT) {
+        Some(compoundBody.label.get)
+      } else {
+        None
+      }
+
+      val handlerExec = new ErrorHandlerExec(
+        handlerBodyExec,
+        handler.handlerType,
+        handlerScopeLabel)
+
+      // For each condition the handler is defined for, add the corresponding
+      // key-value pair to the conditionHandlerMap.
+      handler.conditions.foreach(condition => {
+        // A condition can either be a key in the conditions map or a SQLSTATE.

Review Comment:
   Sorry for jumping in before @srielau answers. That is correct: given the 
current implementation, the situation you are describing can happen. From the 
user's perspective, however, condition names are usually meaningful 
identifiers, typically longer than 5 characters and using `_` (underscore) as 
a delimiter between words. That means that in the vast majority of cases 
condition names will not be confused with SQLSTATEs, and it is reasonable to 
assume that Spark users will not make such a mistake when declaring custom 
conditions.
   
   I definitely agree that as a **follow-up** we should build a more robust 
mechanism around this. It is impossible to introduce everything this feature 
should have in a single PR; for the first iteration the priority is to cover 
all reasonable use cases and create follow-up tasks. I propose to leave this 
as is and open a follow-up task to revisit it after this PR is merged.
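
   For illustration only, here is a minimal Scala sketch of what such a 
follow-up disambiguation mechanism could look like. `resolveToSqlState`, 
`declaredConditions`, and the 5-character alphanumeric SQLSTATE check are 
assumptions made for this sketch, not APIs that exist in the PR:

```scala
// Hypothetical helper (not part of the PR): resolve a handler condition
// string to a SQLSTATE, preferring declared condition names over raw
// SQLSTATEs so that a short condition name is never silently reinterpreted
// as a SQLSTATE.
object ConditionResolution {
  // Simplified check: treat any 5-character alphanumeric string as a SQLSTATE.
  private def looksLikeSqlState(s: String): Boolean = s.matches("[A-Za-z0-9]{5}")

  def resolveToSqlState(
      condition: String,
      declaredConditions: Map[String, String]): String = {
    declaredConditions.get(condition) match {
      // An explicitly declared condition name always wins.
      case Some(sqlState) => sqlState
      // Otherwise fall back to interpreting the string as a raw SQLSTATE.
      case None if looksLikeSqlState(condition) => condition
      case None => throw new IllegalArgumentException(
        s"'$condition' is neither a declared condition nor a valid SQLSTATE")
    }
  }
}
```

   With a check like this, a 5-character condition name such as `ERR01` would 
still be resolved through the declarations first instead of being silently 
treated as a SQLSTATE, which is exactly the ambiguity discussed above.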




