Pajaraja commented on code in PR #49955:
URL: https://github.com/apache/spark/pull/49955#discussion_r2000904956
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -4537,6 +4537,22 @@ object SQLConf {
     .checkValues(LegacyBehaviorPolicy.values.map(_.toString))
     .createWithDefault(LegacyBehaviorPolicy.CORRECTED.toString)
 
+  val CTE_RECURSION_LEVEL_LIMIT = buildConf("spark.sql.cteRecursionLevelLimit")
+    .doc("Maximum level of recursion that is allowed while executing a recursive CTE definition." +
+      "If a query does not get exhausted before reaching this limit it fails. Use -1 for " +
+      "unlimited.")
+    .version("4.0.0")
+    .intConf
+    .createWithDefault(100)
+
+  val CTE_RECURSION_ROW_LIMIT = buildConf("spark.sql.cteRecursionRowLimit")
+    .doc("Maximum number of rows that can be returned when executing a recursive CTE definition." +
+      "If a query does not get exhausted before reaching this limit it fails. Use -1 for " +
+      "unlimited.")
+    .version("4.0.0")
+    .intConf
+    .createWithDefault(1000)

Review Comment:
   Peter suggested it for the case where the number of rows grows exponentially, which I think makes sense. The default value should probably be bigger, though.
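For illustration only (not part of the PR): a minimal spark-shell-style sketch of how the two limits would interact, assuming recursive CTE support and the config names from the diff above ship as shown. The `WITH RECURSIVE` query, the `doubling` CTE name, and the cross join in the recursive step are hypothetical; the point is that a recursive step which multiplies the row count can exceed any reasonable row budget while staying far below the level limit, which is why a separate row limit is useful.

    // Illustrative sketch only, in spark-shell style: assumes a SparkSession named
    // `spark`, and that recursive CTE support plus the two configs in the diff above
    // land with these names and semantics.

    // Cap the number of recursive iterations (levels).
    spark.conf.set("spark.sql.cteRecursionLevelLimit", "100")

    // Cap the total number of rows the recursion may produce. This is the guard the
    // comment is about: it can trip even when the recursion stays within the level
    // limit but multiplies the number of rows at each step.
    spark.conf.set("spark.sql.cteRecursionRowLimit", "1000")

    // Hypothetical query: the recursive step cross-joins every existing row with a
    // two-row relation, so the row count roughly doubles per level (about 2^k rows
    // after k levels). Such a query would hit the row limit long before the level limit.
    spark.sql("""
      WITH RECURSIVE doubling(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1
        FROM doubling
        CROSS JOIN (SELECT 0 AS c UNION ALL SELECT 1) AS copies
        WHERE n < 50
      )
      SELECT count(*) FROM doubling
    """).show()

With a doubling step like this, roughly 2^k rows exist after k levels, so the 1000-row default would trip around level 10 even though the level limit of 100 is nowhere near exhausted; that is also why the comment argues the default should probably be larger.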