aokolnychyi commented on code in PR #50044:
URL: https://github.com/apache/spark/pull/50044#discussion_r1966030259


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##########
@@ -3534,7 +3534,8 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
         TableOutputResolver.suitableForByNameCheck(v2Write.isByName,
           expected = v2Write.table.output, queryOutput = v2Write.query.output)
         val projection = TableOutputResolver.resolveOutputColumns(
-          v2Write.table.name, v2Write.table.output, v2Write.query, v2Write.isByName, conf)
+          v2Write.table.name, v2Write.table.output, v2Write.query, v2Write.isByName, conf,
+          supportColDefaultValue = true)

Review Comment:
   I don't think there is value in checking whether the catalog declares
   `SUPPORT_COLUMN_DEFAULT_VALUE` in its capabilities during writes. If a
   connector includes default value metadata in its schema, that should be
   enough to fill in default values. The capability exists to gate ALTER and
   CREATE/REPLACE statements; see the sketch below.
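   
   For illustration only, a minimal sketch of the two sides of that argument,
   assuming Spark's DSv2 interfaces; the catalog class, object name, column
   names, and default expression are hypothetical, not from this PR:
   
   ```scala
   import java.util
   
   import org.apache.spark.sql.connector.catalog.{TableCatalog, TableCatalogCapability}
   import org.apache.spark.sql.types.{IntegerType, MetadataBuilder, StructField, StructType}
   
   // Hypothetical catalog: declaring the capability is what gates DEFAULT
   // clauses in CREATE/REPLACE TABLE and ALTER TABLE. (Left abstract here;
   // a real connector implements the remaining TableCatalog methods.)
   abstract class ExampleCatalog extends TableCatalog {
     override def capabilities(): util.Set[TableCatalogCapability] =
       util.EnumSet.of(TableCatalogCapability.SUPPORT_COLUMN_DEFAULT_VALUE)
   }
   
   object DefaultValueSchemaSketch {
     // Hypothetical schema: a connector that exposes the default expression
     // under the "CURRENT_DEFAULT" / "EXISTS_DEFAULT" metadata keys gives
     // TableOutputResolver what it needs to fill a missing column on write,
     // independent of what capabilities() returns.
     val schema: StructType = StructType(Seq(
       StructField("id", IntegerType, nullable = false),
       StructField("status", IntegerType, nullable = true,
         metadata = new MetadataBuilder()
           .putString("CURRENT_DEFAULT", "42")
           .putString("EXISTS_DEFAULT", "42")
           .build())))
   }
   ```
   
   With metadata like that, a write that omits `status` could be resolved to
   the default `42` by the projection above even if the catalog never
   advertised the capability.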



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


