milanisvet commented on code in PR #49571:
URL: https://github.com/apache/spark/pull/49571#discussion_r1922515273

##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -4520,6 +4520,31 @@ object SQLConf {
     .checkValues(LegacyBehaviorPolicy.values.map(_.toString))
     .createWithDefault(LegacyBehaviorPolicy.CORRECTED.toString)

+  val CTE_RECURSION_LEVEL_LIMIT = buildConf("spark.sql.cteRecursionLevelLimit")
+    .internal()
+    .doc("Maximum level of recursion that is allowed while executing a recursive CTE " +
+      "definition. If a query does not get exhausted before reaching this limit it fails. " +
+      "Use -1 for unlimited.")
+    .version("4.0.0")
+    .intConf
+    .createWithDefault(100)
+
+  object CTERecursionCacheMode extends Enumeration {
+    val NONE, REPARTITION, PERSIST, LOCAL_CHECKPOINT, CHECKPOINT = Value
+  }
+
+  val CTE_RECURSION_CACHE_MODE = buildConf("spark.sql.cteRecursionCacheMode")

Review Comment:
   Not really sure which option we should use; I suppose there are cases where one option will be better than another. I probably need to test this and see. But I agree it likely does not have to be exposed to the end user.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
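As a side note on the semantics being configured here: the level limit bounds how many iterative passes a recursive CTE may take before the query is failed, with -1 meaning unlimited. The following is a minimal, hypothetical sketch of that loop shape (the `iterateWithLimit` helper and its error message are illustrative only, not the PR's actual implementation):

```scala
// Hypothetical sketch of a level-limited fixpoint loop, the shape a
// recursive CTE evaluation follows: each pass derives new rows from the
// previous frontier, and the loop fails once `limit` levels are used up
// unless limit == -1 (unlimited), mirroring spark.sql.cteRecursionLevelLimit.
def iterateWithLimit[A](seed: Set[A], step: Set[A] => Set[A], limit: Int): Set[A] = {
  var acc = seed        // all rows produced so far
  var frontier = seed   // rows produced by the most recent pass
  var level = 0
  while (frontier.nonEmpty) {
    if (limit != -1 && level >= limit) {
      throw new IllegalStateException(
        s"Recursion level limit $limit reached before the query was exhausted.")
    }
    // keep only genuinely new rows so the loop can terminate
    frontier = step(frontier).diff(acc)
    acc = acc ++ frontier
    level += 1
  }
  acc
}

// Example: transitive reachability from node 0 over edges 0->1->2->3.
val edges = Map(0 -> Set(1), 1 -> Set(2), 2 -> Set(3))
val reachable = iterateWithLimit[Int](
  Set(0),
  f => f.flatMap(n => edges.getOrElse(n, Set.empty[Int])),
  100)
// reachable == Set(0, 1, 2, 3)
```

With a tight limit (e.g. 1 for the same graph) the helper throws instead of returning, which is the "query does not get exhausted before reaching this limit" case the doc string describes.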
########## sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala: ########## @@ -4520,6 +4520,31 @@ object SQLConf { .checkValues(LegacyBehaviorPolicy.values.map(_.toString)) .createWithDefault(LegacyBehaviorPolicy.CORRECTED.toString) + val CTE_RECURSION_LEVEL_LIMIT = buildConf("spark.sql.cteRecursionLevelLimit") + .internal() + .doc("Maximum level of recursion that is allowed wile executing a recursive CTE definition." + + "If a query does not get exhausted before reaching this limit it fails. Use -1 for " + + "unlimited.") + .version("4.0.0") + .intConf + .createWithDefault(100) + + object CTERecursionCacheMode extends Enumeration { + val NONE, REPARTITION, PERSIST, LOCAL_CHECKPOINT, CHECKPOINT = Value + } + + val CTE_RECURSION_CACHE_MODE = buildConf("spark.sql.cteRecursionCacheMode") Review Comment: Not really sure which option we should use, I suppose there are cases where one option will be better than other. I probably need to test this and see. But I agree it does not have to be exposed to the end user probably. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org