[ https://issues.apache.org/jira/browse/HIVE-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296924#comment-16296924 ]

Johannes Alberti commented on HIVE-18293:
-----------------------------------------

See https://github.com/apache/hive/pull/277 for a patch. The CI build failed, 
but that looks like general instability, not anything related to the PR itself. 
I also looked at the existing tests to determine whether I could add one for 
this scenario, but could not figure out how best to incorporate it. I tested 
the patch locally, though only against 2.1.1; the compaction code appears 
unchanged since then, so the same issue should still occur on master and 
should therefore be fixed by this patch. I need a reviewer and somebody to 
merge this, please!
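For reviewers, here is a minimal sketch of the failure mode and of the fix the 
patch applies. The class name, path, and proxy-user handling are illustrative, 
not the actual code; the real failure sits in CompactorThread.findUserToRunAs 
(see the trace below). The key point is that the Hadoop FileSystem cache keys 
instances on the calling UGI, so an instance created before the doAs keeps the 
service user's identity.

{noformat}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsFileSystemSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    final Path table = new Path("/hive/non-default.db/some_table"); // illustrative

    // Created outside the doAs: bound to the login user, e.g. "hive".
    final FileSystem serviceFs = table.getFileSystem(conf);

    // Proxy user for the table owner ("nothive" in the anonymized trace).
    UserGroupInformation owner = UserGroupInformation.createProxyUser(
        "nothive", UserGroupInformation.getLoginUser());

    // Buggy pattern: re-using serviceFs inside the doAs. Its DFSClient still
    // authenticates as "hive", so the NameNode denies the traversal exactly
    // as in the stack trace below.
    owner.doAs(new PrivilegedExceptionAction<FileStatus>() {
      public FileStatus run() throws Exception {
        return serviceFs.getFileStatus(table);
      }
    });

    // Fixed pattern: obtain the FileSystem inside the doAs. The FileSystem
    // cache keys on the current UGI, so this instance acts as "nothive".
    owner.doAs(new PrivilegedExceptionAction<FileStatus>() {
      public FileStatus run() throws Exception {
        return table.getFileSystem(conf).getFileStatus(table);
      }
    });
  }
}
{noformat}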

> Hive is failing to compact tables contained within a folder that is not owned 
> by the identity running HiveMetaStore
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-18293
>                 URL: https://issues.apache.org/jira/browse/HIVE-18293
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 2.1.1
>         Environment: Centos6.5/Hadoop2.7.4/Java7
>            Reporter: Johannes Alberti
>            Assignee: Johannes Alberti
>            Priority: Critical
>
> ACID tables are not getting compacted properly due to an 
> AccessControlException; this occurs only for tables contained in a 
> non-default database.
> The root cause is the re-use of an already created DistributedFileSystem 
> instance within a new doAs context. I will attach a patch for this.
> Stack trace below (anonymized):
> {noformat}
> compactor.Worker: Caught an exception in the main loop of compactor worker [[hostname]]-34, org.apache.hadoop.security.AccessControlException: Permission denied: user=hive, access=EXECUTE, inode="/hive/non-default.db":nothive:othergroup:drwxrwx---
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:275)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
>         at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown Source)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2110)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
>         at org.apache.hadoop.hive.ql.txn.compactor.CompactorThread$1.run(CompactorThread.java:172)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
>         at org.apache.hadoop.hive.ql.txn.compactor.CompactorThread.findUserToRunAs(CompactorThread.java:169)
>         at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:151)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hive, access=EXECUTE, inode="/hive/non-default.db":nothive:othergroup:drwxrwx---
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:275)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1476)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1413)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>         at $Proxy34.getFileInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
>         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at $Proxy35.getFileInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
>         ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)