cloud-fan commented on code in PR #54554:
URL: https://github.com/apache/spark/pull/54554#discussion_r2867242604


##########
sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:
##########
@@ -84,6 +84,16 @@ import org.apache.spark.util.Utils
  *     times, each time picks one config set from each dimension, until all the combinations are
  *     tried. For example, if dimension 1 has 2 lines, dimension 2 has 3 lines, this testing file
  *     will be run 6 times (cartesian product).
+ *  6. A line with --DEBUG (on its own line before a query) marks that query for debug mode.
+ *     When any --DEBUG marker is present, the test enters debug mode:
+ *     - Commands (CREATE TABLE, INSERT, DROP, SET, etc.) are always executed automatically.
+ *     - Only --DEBUG-marked non-command queries are executed; all others are skipped.
+ *     - For failed queries, the full error stacktrace is printed to the console.
+ *     - Query results are still compared against the golden file.
+ *     - The test always fails at the end with a reminder to remove --DEBUG markers.
+ *     To inspect the DataFrame interactively, set a breakpoint in `runDebugQueries` at the
+ *     line where `localSparkSession.sql(sql)` is called, then evaluate the DataFrame in the

Review Comment:
   It's less useful in non-debug mode, since all queries run there, so I didn't put it in the classdoc. People can find the DataFrame in `runQueries` the same way.
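   For illustration, a golden test file using the marker described above might look like the following sketch (the table name `t` and the queries are made up for this example, not taken from the PR):

   ```sql
   -- Commands (CREATE TABLE, INSERT, DROP, SET, ...) always run in debug mode.
   CREATE TABLE t(a INT) USING parquet;
   INSERT INTO t VALUES (1), (2);

   -- Not marked: this non-command query is skipped while any --DEBUG marker exists.
   SELECT a FROM t;

   -- Marked: only this query runs; its result is still checked against the
   -- golden file, and a failure prints the full stacktrace to the console.
   --DEBUG
   SELECT sum(a) FROM t;

   DROP TABLE t;
   ```

   Per the classdoc, the suite would still fail at the end as a reminder to remove the --DEBUG marker before committing.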



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

