[ https://issues.apache.org/jira/browse/HIVE-26127?focusedWorklogId=805039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-805039 ]
ASF GitHub Bot logged work on HIVE-26127:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 30/Aug/22 23:32
Start Date: 30/Aug/22 23:32
Worklog Time Spent: 10m
Work Description: vihangk-db opened a new pull request, #3560:
URL: https://github.com/apache/hive/pull/3560

### What changes were proposed in this pull request?

This PR backports HIVE-26127 from the master branch to branch-3, since the issue affects branch-3 releases as well.

### Why are the changes needed?

This PR fixes the issue reported in HIVE-26127.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Added a new q file. The original PR adds to an existing insert_overwrite.q, but in branch-3 that qfile does not exist. This PR creates only the subset of the queries from the master branch's insert_overwrite.q that is relevant to this issue.

Issue Time Tracking
-------------------

Worklog Id: (was: 805039)
Time Spent: 50m (was: 40m)

> INSERT OVERWRITE throws FileNotFound when destination partition is deleted
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-26127
>                 URL: https://issues.apache.org/jira/browse/HIVE-26127
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>            Reporter: Yu-Wen Lai
>            Assignee: Yu-Wen Lai
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0-alpha-2
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
> # create external table src (col int) partitioned by (year int);
> # create external table dest (col int) partitioned by (year int);
> # insert into src partition (year=2022) values (1);
> # insert into dest partition (year=2022) values (2);
> # hdfs dfs -rm -r ${hive.metastore.warehouse.external.dir}/dest/year=2022
> # insert overwrite table dest select * from src;
>
> We will get a FileNotFoundException as below.
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Directory file:/home/yuwen/workdir/upstream/hive/itests/qtest/target/localfs/warehouse/ext_part/par=1 could not be cleaned up.
>     at org.apache.hadoop.hive.ql.metadata.Hive.deleteOldPathForReplace(Hive.java:5387)
>     at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:5282)
>     at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:2657)
>     at org.apache.hadoop.hive.ql.metadata.Hive.lambda$loadDynamicPartitions$6(Hive.java:3143)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
>
> This happens because the cleanup path calls listStatus on a directory that no longer exists. INSERT OVERWRITE should not fail when there is nothing to clean up.
>
> {code:java}
> fs.listStatus(path, pathFilter)
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
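The fix described above amounts to guarding the listing call so that a missing destination directory is treated as "nothing to clean up" rather than as an error. A minimal, self-contained sketch of that defensive pattern using the local filesystem (the class name SafeCleanup and the helper listForCleanup are hypothetical illustrations, not Hive's actual code, which operates on Hadoop's FileSystem API):

```java
import java.io.File;
import java.io.FileFilter;

public class SafeCleanup {
    // Return the entries that cleanup should consider. If the directory
    // has already been deleted (as in the HIVE-26127 reproduction), report
    // an empty listing instead of letting a listing call fail.
    static File[] listForCleanup(File dir, FileFilter filter) {
        if (!dir.exists()) {
            // Nothing to clean up; an absent destination is not an error here.
            return new File[0];
        }
        File[] entries = dir.listFiles(filter);
        // listFiles returns null for non-directories or I/O errors; normalize.
        return entries == null ? new File[0] : entries;
    }

    public static void main(String[] args) {
        // Simulate the deleted partition directory from the bug report.
        File missing = new File("/tmp/hive-26127-deleted-partition-example");
        File[] found = listForCleanup(missing, f -> true);
        System.out.println(found.length);
    }
}
```

The same idea applies with Hadoop's FileSystem: check for existence (or catch FileNotFoundException) before iterating the results of listStatus during the overwrite cleanup.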