JiaqiWang18 commented on PR #51644: URL: https://github.com/apache/spark/pull/51644#issuecomment-3337149265
> We were not resetting checkpoint dirs on full refresh.

It seems that currently, if the table `st` lives in `spark-warehouse/st` and I run `spark-pipelines run --conf spark.sql.catalogImplementation=hive --full-refresh-all`, the entire contents of that directory get deleted, along with the checkpoint subdirectories inside it, probably because we call `TRUNCATE TABLE` [here](https://github.com/sryza/spark/blob/dde895c722af503d57ab235907a00373a7935178/sql/pipelines/src/main/scala/org/apache/spark/sql/pipelines/graph/DatasetManager.scala#L292). I guess if we move the checkpoints outside the table's directory, we will need to reset them manually.
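The manual reset mentioned above could look roughly like the sketch below: recursively delete the flow's checkpoint directory before rerunning it on full refresh, so the streaming query starts from scratch. This is a hypothetical helper using plain `java.nio.file` for illustration; the actual pipeline code would resolve the checkpoint path from the flow's metadata and likely go through the Hadoop `FileSystem` API instead.

```scala
import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._

object CheckpointReset {
  // Recursively delete a checkpoint directory (deepest entries first)
  // so a full refresh restarts the flow with no prior streaming state.
  // Hypothetical helper: real code would resolve `checkpointDir` from
  // the pipeline's dataset/flow metadata rather than take it directly.
  def resetCheckpoint(checkpointDir: Path): Unit = {
    if (Files.exists(checkpointDir)) {
      Files.walk(checkpointDir)
        .sorted(java.util.Comparator.reverseOrder[Path]())
        .iterator().asScala
        .foreach(Files.delete)
    }
  }

  def main(args: Array[String]): Unit = {
    // Simulate a checkpoint layout under a temp dir and reset it.
    val dir = Files.createTempDirectory("checkpoints")
    Files.createDirectories(dir.resolve("flow_st/offsets"))
    Files.write(dir.resolve("flow_st/offsets/0"), "0".getBytes)
    resetCheckpoint(dir)
    println(Files.exists(dir))
  }
}
```

Keeping the reset explicit like this (rather than relying on `TRUNCATE TABLE` happening to remove files under the table location) also works when checkpoints live in a separate directory tree from the warehouse.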
