[ https://issues.apache.org/jira/browse/HIVE-28118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Palakur Eshwitha Sai updated HIVE-28118:
----------------------------------------
    Description: 
With the ViewFS overload scheme enabled in the cluster, a Hive INSERT into S3 fails with a MoveTask error when the HDFS Hive warehouse path is in an encryption zone and one of the partitions resides in S3. The move logic checks whether encryption is enabled using DFSUtilClient; that check is valid for HDFS paths, but S3 does not support encryption zones (see the encryption-zone check sketch below).

The setup has a Hive table with one partition moved to S3 and a ViewFS overload scheme mount point configured for it; the remaining partitions and other HDFS folders are mounted using linkFallback (see the mount configuration sketch below).

Below is the error stack trace:

{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source hdfs://s3cluster/warehouse/tablespace/external/hive/sales_by_state/state=CA/.hive-staging_hive_2024-01-05_05-52-09_785_7084847051463431810-8/-ext-10002 to destination hdfs://s3cluster/warehouse/tablespace/external/hive/sales_by_state/state=CA/.hive-staging_hive_2024-01-05_05-52-09_785_7084847051463431810-8/-ext-10000 (state=08S01,code=1)
Caused by: java.io.FileNotFoundException: No such file or directory: s3a://testhadoop/warehouse/tablespace/external/hive/sales_by_state/state=CA/.hive-staging_hive_2024-01-05_05-52-09_785_7084847051463431810-8/-ext-10000
    at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2344) ~[hadoop-aws-3.2.2.3.2.2.4-8.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2226) ~[hadoop-aws-3.2.2.3.2.2.4-8.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2160) ~[hadoop-aws-3.2.2.3.2.2.4-8.jar:?]
    at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:888) ~[hadoop-common-3.2.2.3.2.2.4-6.jar:?]
{code}
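For context, a minimal sketch of what the described mount setup might look like, using the standard Hadoop ViewFsOverloadScheme properties. The mount table name {{s3cluster}} and the {{s3a://testhadoop}} bucket are taken from the stack trace above; the linkFallback NameNode address is a placeholder, not taken from the issue.

{code:java}
// Hypothetical ViewFS overload scheme configuration matching the described setup.
// The mount table name "s3cluster" and the s3a bucket come from the stack trace;
// the linkFallback NameNode address is a placeholder.
import org.apache.hadoop.conf.Configuration;

public class ViewFsOverloadSchemeSetupSketch {

  static Configuration mountConf() {
    Configuration conf = new Configuration();
    // hdfs:// URIs are served by the ViewFS overload scheme implementation.
    conf.set("fs.hdfs.impl", "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    // Target implementation used when a mount link points back at real HDFS.
    conf.set("fs.viewfs.overload.scheme.target.hdfs.impl",
        "org.apache.hadoop.hdfs.DistributedFileSystem");
    // One partition is mounted to S3...
    conf.set("fs.viewfs.mounttable.s3cluster.link."
            + "/warehouse/tablespace/external/hive/sales_by_state/state=CA",
        "s3a://testhadoop/warehouse/tablespace/external/hive/sales_by_state/state=CA");
    // ...while the remaining partitions and other HDFS folders fall back to HDFS
    // (placeholder NameNode address).
    conf.set("fs.viewfs.mounttable.s3cluster.linkFallback", "hdfs://nn-host:8020/");
    return conf;
  }
}
{code}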
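And a minimal sketch, not the actual Hive MoveTask/shim code, of an encryption-zone check that first resolves the ViewFS path to its backing file system, so that S3-backed partitions are never treated as encrypted HDFS paths. The description mentions DFSUtilClient; this sketch uses the public HdfsAdmin API instead to illustrate the idea, so the class and method names here are assumptions, not the patch.

{code:java}
// Illustrative sketch only: with the ViewFS overload scheme, a path that looks like
// hdfs://s3cluster/... may resolve to an s3a:// target, so the encryption-zone check
// must be applied to the resolved path and skipped when the target is not HDFS.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EncryptionZoneCheckSketch {

  /** True only when the path resolves to an HDFS location inside an encryption zone. */
  static boolean isInEncryptionZone(Path path, Configuration conf) throws IOException {
    FileSystem fs = path.getFileSystem(conf);
    // Follow ViewFS mount links (and linkFallback) to the backing file system path.
    Path resolved = fs.resolvePath(path);
    FileSystem targetFs = resolved.getFileSystem(conf);
    if (!(targetFs instanceof DistributedFileSystem)) {
      return false; // s3a:// and other non-HDFS targets have no encryption zones.
    }
    HdfsAdmin admin = new HdfsAdmin(targetFs.getUri(), conf);
    return admin.getEncryptionZoneForPath(resolved) != null;
  }
}
{code}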
  was:
With the ViewFS overload scheme enabled in the cluster, a Hive INSERT fails with a MoveTask error when the path is not in an encryption zone. The setup has a Hive table with one partition moved to S3 and a ViewFS overload scheme mount point configured for it; the remaining partitions and other HDFS folders are mounted using linkFallback. S3 does not support encryption zones. Below is the error stack trace:

{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source hdfs://s3cluster/warehouse/tablespace/external/hive/sales_by_state/state=CA/.hive-staging_hive_2024-01-05_05-52-09_785_7084847051463431810-8/-ext-10002 to destination hdfs://s3cluster/warehouse/tablespace/external/hive/sales_by_state/state=CA/.hive-staging_hive_2024-01-05_05-52-09_785_7084847051463431810-8/-ext-10000 (state=08S01,code=1)
Caused by: java.io.FileNotFoundException: No such file or directory: s3a://testhadoop/warehouse/tablespace/external/hive/sales_by_state/state=CA/.hive-staging_hive_2024-01-05_05-52-09_785_7084847051463431810-8/-ext-10000
    at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2344) ~[hadoop-aws-3.2.2.3.2.2.4-8.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2226) ~[hadoop-aws-3.2.2.3.2.2.4-8.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2160) ~[hadoop-aws-3.2.2.3.2.2.4-8.jar:?]
    at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:888) ~[hadoop-common-3.2.2.3.2.2.4-6.jar:?]
{code}

A similar issue is observed for Hive INSERTs into HDFS paths outside encryption zones when the ViewFS overload scheme is configured in the cluster.


> Hive insert into S3 with Viewfs overload scheme fails with MoveTask error
> -------------------------------------------------------------------------
>
>                 Key: HIVE-28118
>                 URL: https://issues.apache.org/jira/browse/HIVE-28118
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Palakur Eshwitha Sai
>            Assignee: Palakur Eshwitha Sai
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: code.png
>


--
This message was sent by Atlassian Jira
(v8.20.10#820010)