Pajaraja commented on PR #49955:
URL: https://github.com/apache/spark/pull/49955#issuecomment-2706383952

   > A side note: for use cases where the `UnionLoop` is infinite, we should probably introduce a config similar to `spark.sql.cteRecursionLevelLimit`, but one that limits the number of rows returned by the loop, and set it to some reasonably large default. This is because we might not be able to push a limit down into `UnionLoop` if there is a node between the limit and the loop node through which the limit can't be pushed (e.g. a `Filter`).
   
   This makes sense, but I wonder how we would tell apart the case where the recursion is infinite and we return the first k results (with k configurable via the flag) from the case of a finite (but very large) recursion, for which we throw an error.
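
   For concreteness, a minimal sketch of the kind of query being discussed (assuming the `WITH RECURSIVE` syntax from this PR; the default of 100 for `spark.sql.cteRecursionLevelLimit` is an assumption, and the proposed row-count limit config does not exist yet):

   ```sql
   -- Existing guard on recursion depth (the default value here is an assumption).
   SET spark.sql.cteRecursionLevelLimit = 100;

   -- An infinite recursion: the recursive step always produces a new row.
   WITH RECURSIVE nums(n) AS (
     SELECT 1
     UNION ALL
     SELECT n + 1 FROM nums
   )
   -- The WHERE clause becomes a Filter sitting between the Limit and the
   -- UnionLoop, so LIMIT 10 can't simply be pushed into the loop; this is
   -- the case the proposed row-count limit config would have to cover.
   SELECT n FROM nums WHERE n % 2 = 0 LIMIT 10;
   ```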

