aokolnychyi commented on code in PR #54427:
URL: https://github.com/apache/spark/pull/54427#discussion_r2841917812
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Strategy.scala:
##########
@@ -579,7 +579,11 @@ class DataSourceV2Strategy(session: SparkSession) extends
Strategy with Predicat
val table = a.table.asInstanceOf[ResolvedTable]
ResolveTableConstraints.validateCatalogForTableChange(
a.changes, table.catalog, table.identifier)
- AlterTableExec(table.catalog, table.identifier, a.changes) :: Nil
+ AlterTableExec(
+ table.catalog,
+ table.identifier,
+ a.changes,
+ recacheTable(table, includeTimeTravel = false)) :: Nil
Review Comment:
We may debate whether only some table changes should trigger a refresh, but that approach would be fragile and dangerous. For instance, Iceberg still allows reverting table state by setting a predefined table property (not something I encourage), and that must invalidate the cache in Spark too. To sum up, always recaching is the safest call, but I can be convinced otherwise.
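To make the concern concrete, here is a sketch of the kind of property-only change I have in mind. The catalog/table names and the snapshot ID are made up, and the exact property used for property-based reverts varies by Iceberg version, so treat this as illustrative only:

```sql
-- Looks like a metadata-only change, but it can actually move the
-- table back to an older snapshot, changing the data that Spark
-- may already have cached for this table.
ALTER TABLE cat.db.tbl SET TBLPROPERTIES (
  'cherry-pick-snapshot-id' = '5963116843654035465'
);
```

If Spark skipped recaching for "property-only" changes, the cached plan would keep serving rows from the pre-revert snapshot.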
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]