[ 
https://issues.apache.org/jira/browse/HIVE-27029?focusedWorklogId=844713&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-844713
 ]

ASF GitHub Bot logged work on HIVE-27029:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/Feb/23 08:16
            Start Date: 10/Feb/23 08:16
    Worklog Time Spent: 10m 
      Work Description: veghlaci05 merged PR #4024:
URL: https://github.com/apache/hive/pull/4024




Issue Time Tracking
-------------------

    Worklog Id:     (was: 844713)
    Time Spent: 2h 10m  (was: 2h)

> hive query fails with Filesystem closed error
> ---------------------------------------------
>
>                 Key: HIVE-27029
>                 URL: https://issues.apache.org/jira/browse/HIVE-27029
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Mahesh Raju Somalaraju
>            Assignee: Mahesh Raju Somalaraju
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira is raised to fix the code that was introduced as part of *HIVE-26352*.
>  
> We should remove the finally block, as it is what causes the "Filesystem closed" errors: FileSystem.closeAllForUGI(ugi) closes the cached FileSystem instances for the user, which other parts of the query compilation may still be using.
> {code:java}
> String queueName, String userName) throws IOException, InterruptedException {
>     UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
>     try {
>         ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
>             checkQueueAccessInternal(queueName, userName);
>             return null;
>         });
>     } finally {
>         try {
>             FileSystem.closeAllForUGI(ugi);
>         } catch (IOException exception) {
>             LOG.error("Could not clean up file-system handles for UGI: " + ugi, exception);
>         }
>     }
> }
> {code}
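>  
> For reference, a minimal sketch of how the method body might look after the fix (an illustration only, assuming the finally block is simply dropped and everything else stays the same):
> {code:java}
> UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
> // Run the queue-access check as the calling user. The finally block with
> // FileSystem.closeAllForUGI(ugi) is intentionally gone: those cached
> // FileSystem handles may still be in use by the rest of the query.
> ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
>     checkQueueAccessInternal(queueName, userName);
>     return null;
> });
> {code}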
>  
> {code}
> Caused by: java.io.IOException: Filesystem closed
>     at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:483) ~[hadoop-hdfs-client-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.hdfs.DFSClient.getEZForPath(DFSClient.java:2771) ~[hadoop-hdfs-client-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$54.doCall(DistributedFileSystem.java:2796) ~[hadoop-hdfs-client-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$54.doCall(DistributedFileSystem.java:2793) ~[hadoop-hdfs-client-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getEZForPath(DistributedFileSystem.java:2812) ~[hadoop-hdfs-client-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.hdfs.client.HdfsAdmin.getEncryptionZoneForPath(HdfsAdmin.java:374) ~[hadoop-hdfs-client-3.1.1.7.1.8.11-3.jar:?]
>     at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.getEncryptionZoneForPath(Hadoop23Shims.java:1384) ~[hive-exec-3.1.3000.7.1.8.11-3.jar:3.1.3000.7.1.8.11-3]
>     at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.isPathEncrypted(Hadoop23Shims.java:1379) ~[hive-exec-3.1.3000.7.1.8.11-3.jar:3.1.3000.7.1.8.11-3]
>     at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.isPathEncrypted(SemanticAnalyzer.java:2484) ~[hive-exec-3.1.3000.7.1.8.11-3.jar:3.1.3000.7.1.8.11-3]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
