cloud-fan commented on code in PR #50973:
URL: https://github.com/apache/spark/pull/50973#discussion_r2101452266


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -3546,31 +3539,32 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
  * The resolved encoders then will be used to deserialize the internal row to Scala value.
  */
 object ResolveEncodersInUDF extends Rule[LogicalPlan] {
-  override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsUpWithPruning(
-    _.containsPattern(SCALA_UDF), ruleId) {
+  override def apply(plan: LogicalPlan): LogicalPlan =
+    plan.resolveOperatorsUpWithSubqueriesAndPruning(_.containsPattern(SCALA_UDF), ruleId) {

Review Comment:
   We usually don't do this in analyzer rules, because we recursively invoke the entire analyzer to resolve subquery expressions. Doesn't that work here?

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
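For context, the distinction the review comment turns on is whether a bottom-up tree transform also descends into the plans nested inside subquery expressions. The toy model below sketches that difference; `Plan`, `transformUp`, and `transformUpWithSubqueries` are illustrative stand-ins, not Spark's actual `LogicalPlan` API, and the real `resolveOperatorsUp*` helpers additionally handle pattern pruning and rule IDs.

```scala
// Toy model (NOT Spark's actual classes): a plan node may carry nested
// subquery plans alongside its ordinary children.
case class Plan(name: String,
                children: Seq[Plan] = Nil,
                subqueries: Seq[Plan] = Nil) {

  // Bottom-up transform over the main operator tree only, analogous to
  // resolveOperatorsUpWithPruning: subquery plans are NOT visited.
  def transformUp(rule: Plan => Plan): Plan =
    rule(copy(children = children.map(_.transformUp(rule))))

  // Variant that also recurses into subquery plans, analogous to
  // resolveOperatorsUpWithSubqueriesAndPruning.
  def transformUpWithSubqueries(rule: Plan => Plan): Plan =
    rule(copy(
      children = children.map(_.transformUpWithSubqueries(rule)),
      subqueries = subqueries.map(_.transformUpWithSubqueries(rule))))
}

// A rule that "marks" every node it visits.
val mark: Plan => Plan = p => p.copy(name = p.name + "!")

val root = Plan("root", subqueries = Seq(Plan("udfInSubquery")))

// Plain bottom-up traversal leaves the subquery plan untouched...
val a = root.transformUp(mark)
// ...while the subquery-aware variant rewrites it as well.
val b = root.transformUpWithSubqueries(mark)
```

The reviewer's point is that the subquery-aware variant is usually unnecessary in analyzer rules, since the analyzer is re-invoked recursively on subquery plans anyway, so a plain traversal eventually covers them.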
########## sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala: ########## @@ -3546,31 +3539,32 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor * The resolved encoders then will be used to deserialize the internal row to Scala value. */ object ResolveEncodersInUDF extends Rule[LogicalPlan] { - override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsUpWithPruning( - _.containsPattern(SCALA_UDF), ruleId) { + override def apply(plan: LogicalPlan): LogicalPlan = + plan.resolveOperatorsUpWithSubqueriesAndPruning(_.containsPattern(SCALA_UDF), ruleId) { Review Comment: we usually don't do this in analyzer rules, because we recursively invoke the entire analyzer to resolve subquery expressions. It doesn't work here? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org