[ https://issues.apache.org/jira/browse/HIVE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342974#comment-14342974 ]
Hive QA commented on HIVE-8626:
-------------------------------

{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701799/HIVE-8626.2.patch

{color:green}SUCCESS:{color} +1 7576 tests passed

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2915/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2915/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2915/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701799 - PreCommit-HIVE-TRUNK-Build

> Extend HDFS super-user checks to dropPartitions
> -----------------------------------------------
>
>                 Key: HIVE-8626
>                 URL: https://issues.apache.org/jira/browse/HIVE-8626
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.14.0
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>         Attachments: HIVE-8626.1.patch, HIVE-8626.2.patch
>
>
> HIVE-6392 takes care of allowing HDFS super-user accounts to register
> partitions in tables whose HDFS paths don't explicitly grant
> write-permissions to the super-user.
> However, the dropPartitions()/dropTable()/dropDatabase() use-cases don't
> handle this at all. That is, an HDFS super-user ({{kal...@dev.grid.myth.net}})
> can't drop the very partitions that were added to a table-directory owned by
> the user ({{mithunr}}). The following error is the result:
> {quote}
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:Table metadata not deleted since
> hdfs://mythcluster-nn1.grid.myth.net:8020/user/mithunr/myth.db/myth_table is
> not writable by kal...@dev.grid.myth.net)
> {quote}
> This is the result of a redundant check in
> {{HiveMetaStore::dropPartitionsAndGetLocations()}}:
> {code:title=HiveMetaStore.java|borderStyle=solid}
> if (!wh.isWritable(partPath.getParent())) {
>   throw new MetaException("Table metadata not deleted since the partition "
>       + Warehouse.makePartName(partitionKeys, part.getValues())
>       + " has parent location " + partPath.getParent()
>       + " which is not writable "
>       + "by " + hiveConf.getUser());
> }
> {code}
> This check is already made in StorageBasedAuthorizationProvider. If the
> argument is that the SBAP isn't guaranteed to be in play, then this check
> shouldn't be made in HMS either. If HDFS permissions need to be checked in
> addition to, say, ACLs, then perhaps a recursively-composed auth-provider
> ought to be used.
> For the moment, I'll get {{Warehouse.isWritable()}} to handle HDFS
> super-users. But I think the {{isWritable()}} checks oughtn't to be in
> HiveMetaStore. (Perhaps fix this in another JIRA?)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
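The fix direction described in the issue (teaching {{Warehouse.isWritable()}} to recognize HDFS super-users) can be sketched roughly as follows. This is a minimal illustration, not Hive's actual code: the class name, constructor, and the way the super-user and supergroup are supplied are all hypothetical stand-ins, since the real check would consult the NameNode's notion of the super-user rather than a locally configured one. The idea is just that a super-user bypasses the permission-bit check, which is what HDFS itself does.

```java
import java.util.Set;

// Illustrative sketch only: stand-in for a super-user-aware isWritable().
// Not Hive's Warehouse class; names and wiring are assumptions.
public class WritabilityCheck {

    private final String superUser;        // hypothetical: the HDFS super-user name
    private final Set<String> superGroups; // hypothetical: groups treated as super-user

    public WritabilityCheck(String superUser, Set<String> superGroups) {
        this.superUser = superUser;
        this.superGroups = superGroups;
    }

    /**
     * Mirrors the shape of the proposed Warehouse.isWritable() change:
     * super-users (and supergroup members) are writable everywhere,
     * ordinary users fall back to the path's permission bits.
     */
    public boolean isWritable(String user, Set<String> userGroups, boolean pathGrantsWrite) {
        if (user.equals(superUser)) {
            return true; // HDFS would allow the super-user regardless of mode bits
        }
        for (String group : userGroups) {
            if (superGroups.contains(group)) {
                return true; // supergroup membership also confers super-user rights
            }
        }
        return pathGrantsWrite; // everyone else: honor the permission check
    }
}
```

With this shape, the redundant check in {{dropPartitionsAndGetLocations()}} would stop rejecting a super-user dropping partitions under a directory owned by another user, which is the failure the issue describes.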