[ https://issues.apache.org/jira/browse/HIVE-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13875377#comment-13875377 ]

Casey Brotherton commented on HIVE-6225:
----------------------------------------

This appears to be a problem with drop_table_core in HiveMetaStore calling 
isWritable on the parent path of a table.

isWritable is a method on the Warehouse object, and performs only an ordinary 
check of write permissions.
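
For illustration, a plain write check of that sort looks roughly like this 
(a paraphrase written for this comment, not the actual Hive source). Note that 
it consults only the permission bits of the path itself, and never the sticky 
bit or the owner of the child being removed:

  import java.io.IOException;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.permission.FsAction;
  import org.apache.hadoop.fs.permission.FsPermission;

  // Roughly what a plain "is this path writable?" check amounts to:
  // pick the owner/group/other bits that apply to the caller and
  // test them against WRITE.
  static boolean plainIsWritable(FileSystem fs, Path path,
      String user, String[] groups) throws IOException {
    FileStatus stat = fs.getFileStatus(path);
    FsPermission perm = stat.getPermission();
    if (user.equals(stat.getOwner())) {
      return perm.getUserAction().implies(FsAction.WRITE);
    }
    for (String group : groups) {
      if (group.equals(stat.getGroup())) {
        return perm.getGroupAction().implies(FsAction.WRITE);
      }
    }
    return perm.getOtherAction().implies(FsAction.WRITE);
  }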

It appears a new method is required: one that is passed the path of the table, 
checks for the sticky bit on the parent, and in that case checks the owner of 
the table path itself. Otherwise it would call isWritable on the parent, as 
today.
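
A minimal sketch of what I have in mind (the name isWritableForDelete is made 
up here for illustration and is not from any patch; the fallback calls the 
existing Warehouse.isWritable):

  import java.io.IOException;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.security.UserGroupInformation;

  // Sketch: may the current user delete tablePath, honoring the
  // sticky bit on its parent directory?
  boolean isWritableForDelete(FileSystem fs, Path tablePath)
      throws IOException {
    Path parent = tablePath.getParent();
    FileStatus parentStat = fs.getFileStatus(parent);
    if (parentStat.getPermission().getStickyBit()) {
      // Sticky bit set: HDFS will only allow the delete if the caller
      // owns the child (or the parent), so check ownership here too.
      String user =
          UserGroupInformation.getCurrentUser().getShortUserName();
      FileStatus tableStat = fs.getFileStatus(tablePath);
      return user.equals(tableStat.getOwner())
          || user.equals(parentStat.getOwner());
    }
    // No sticky bit: the existing write check on the parent remains
    // the right test.
    return isWritable(parent);
  }

The sticky-bit branch mirrors what HDFS itself will enforce at delete time, so 
the metastore's prediction and HDFS's actual answer stay in agreement.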

In addition to drop_table_core, there are other suspect calls to isWritable 
within HiveMetaStore.java. 
 ( Each of the calls below that passes a getParent result should be evaluated 
to see whether a child within that parent is being changed or deleted; in that 
case, the sticky bit may need to be checked. )

 if (!wh.isWritable(path)) { 
 if (!wh.isWritable(tablePath.getParent())) { 
 if (!wh.isWritable(tblPath.getParent())) { 
 if (!wh.isWritable(partPath.getParent())) { 
 if (!wh.isWritable(archiveParentDir.getParent())) { 
 if (!wh.isWritable(partPath.getParent())) { 
 if (!wh.isWritable(tblPath.getParent())) {

How HDFS handles the permissions is by definition correct. It is just that 
hive.metastore.authorization.storage.checks forces Hive to guess how HDFS will 
react before starting the action.
For example, with the current code, suppose /user/hive/warehouse has 
permissions 1777 and a table foo was created by casey.
If you logged in as johndoe and dropped foo, Hive would remove foo from the 
metastore, and then an error would appear in the Hive logs because johndoe was 
not able to remove the directory from HDFS.

( Adapted from logs in the customer's environment to remove customer host 
names. )
2013-09-19 14:33:00,485 INFO hive.metastore.hivemetastoressimpl: deleting 
hdfs://host.com:8020/user/hive/warehouse/foo
2013-09-19 14:33:00,499 WARN org.apache.hadoop.fs.TrashPolicyDefault: Can't 
create trash directory: 
hdfs://host.com:8020/user/johndoe/.Trash/Current/user/hive/warehouse
2013-09-19 14:33:00,602 ERROR hive.log: Got exception: java.io.IOException 
Failed to move to trash: hdfs://host.com:8020/user/hive/warehouse/foo
2013-09-19 14:33:00,603 ERROR hive.log: java.io.IOException: Failed to move to 
trash: hdfs://host.com:8020/user/hive/warehouse/foo
at 
org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:154)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
...
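
For reference, the rule HDFS is enforcing above is, in simplified form, the 
following (a paraphrase of the namenode's permission check, using the same 
org.apache.hadoop.fs classes as the sketches above; the real check also 
exempts the superuser, which is ignored here):

  // Simplified model of the HDFS sticky-bit rule for deleting a child.
  static boolean mayDeleteChild(String user, FileStatus parent,
      FileStatus child) {
    if (!parent.getPermission().getStickyBit()) {
      return true; // only the usual write check on the parent applies
    }
    return user.equals(child.getOwner())
        || user.equals(parent.getOwner());
  }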

The reason I suggest a new method is that knowledge of the table's ownership 
is needed, as well as the permissions of the parent directory.


As an interesting side note: when one sets 
hive.security.metastore.authorization.manager to 
org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
and enables hive.metastore.pre.event.listeners (with 
org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener), 
the StorageBasedAuthorizationProvider attempts to perform similar checks, but 
it seems to always check the current table directory, and never the parent at 
all. In addition, that code does not check the sticky bit of any directory.

> hive.metastore.authorization.storage.checks doesn't respect sticky bit
> ----------------------------------------------------------------------
>
>                 Key: HIVE-6225
>                 URL: https://issues.apache.org/jira/browse/HIVE-6225
>             Project: Hive
>          Issue Type: Bug
>          Components: Authorization
>            Reporter: Casey Brotherton
>
> Sticky bit works on HDFS, preventing a user from removing an item created by 
> another user in a directory with permissions of 1777.
> However, Hive allows the table to be dropped even after setting 
> hive.metastore.authorization.storage.checks to true and /user/hive/warehouse 
> to 1777.


