miland-db commented on code in PR #47403:
URL: https://github.com/apache/spark/pull/47403#discussion_r1691277611


##########
sql/core/src/main/scala/org/apache/spark/sql/scripting/SqlScriptingInterpreter.scala:
##########
@@ -73,11 +74,19 @@ case class SqlScriptingInterpreter() {
           .map(new SingleStatementExec(_, Origin(), isInternal = true))
           .reverse
         new CompoundBodyExec(
-          body.collection.map(st => transformTreeIntoExecutable(st)) ++ dropVariables)
+          body.collection.map(st => transformTreeIntoExecutable(st)) ++ dropVariables, session)
       case sparkStatement: SingleStatement =>
         new SingleStatementExec(
           sparkStatement.parsedPlan,
           sparkStatement.origin,
-          isInternal = false)
+          shouldCollectResult = true)
     }
+
+  def execute(compoundBody: CompoundBody): Iterator[Array[Row]] = {

Review Comment:
   For now, we decided to `collect()` the result of every statement: since we don't know in advance which statement will be the last one, we can't run a single `collect()` for it and route the other statements to a noop sink.
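   
   As a rough sketch of what collecting for all statements looks like (the `runAll` helper and the raw-SQL statement model below are hypothetical simplifications, not the interpreter's actual API):
   
   ```scala
   import org.apache.spark.sql.{Row, SparkSession}
   
   // Hypothetical simplified driver; the real interpreter walks
   // SingleStatementExec / CompoundBodyExec nodes, not raw SQL strings.
   def runAll(session: SparkSession, statements: Seq[String]): Iterator[Array[Row]] =
     statements.iterator.map { sql =>
       // collect() runs for every statement as the iterator advances,
       // since we cannot tell up front which statement is the last one
       // (the only result the caller actually needs).
       session.sql(sql).collect()
     }
   ```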
   
   Either way, it is important to execute each statement as soon as we encounter it so that errors can be handled properly. The [PR introducing handlers](https://github.com/apache/spark/pull/47423) is currently a work in progress and will likely explain why we did things the way we did in this PR.
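   
   A minimal sketch of that point, with a hypothetical `findHandler` standing in for the mechanism the linked PR introduces (none of these names come from this PR):
   
   ```scala
   import org.apache.spark.sql.{Row, SparkSession}
   
   // Hypothetical lookup: maps a raised error to a handler body, if any.
   def findHandler(e: Throwable): Option[String] = None
   
   def runWithHandlers(session: SparkSession, statements: Seq[String]): Seq[Array[Row]] =
     statements.map { sql =>
       // Executing a statement as soon as it is encountered means an error
       // surfaces at the statement that caused it, so a matching handler
       // can run before the script moves on to the next statement.
       try session.sql(sql).collect()
       catch {
         case e: Exception => findHandler(e) match {
           case Some(handlerSql) => session.sql(handlerSql).collect()
           case None             => throw e
         }
       }
     }
   ```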




