[ https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Fachuan Bai updated HIVE-25912:
-------------------------------
    Description: 
*new update:*
I tested the master branch and hit the same problem.
----------
ENV:
Hive 3.1.2
HDFS: 3.3.1
OpenLDAP and Ranger enabled.

I created the external Hive table using this command:
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
  `inv_item_sk` int,
  `inv_warehouse_sk` int,
  `inv_quantity_on_hand` int)
PARTITIONED BY (
  `inv_date_sk` int)
STORED AS ORC
LOCATION
  'hdfs://emr-master-1:8020/';
{code}
The table is created successfully, but dropping it throws an NPE:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.NullPointerException) (state=08S01,code=1){code}
The same bug can be reproduced on other object storage file systems, such as S3 or TOS:
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
  `inv_item_sk` int,
  `inv_warehouse_sk` int,
  `inv_quantity_on_hand` int)
PARTITIONED BY (
  `inv_date_sk` int)
STORED AS ORC
LOCATION
  's3a://bucketname/'; // 'tos://bucketname/'{code}
Looking at the source code, I found the cause in common/src/java/org/apache/hadoop/hive/common/FileUtils.java:
{code:java}
// check if sticky bit is set on the parent dir
FileStatus parStatus = fs.getFileStatus(path.getParent());
if (!shims.hasStickyBit(parStatus.getPermission())) {
  // no sticky bit, so write permission on parent dir is sufficient
  // no further checks needed
  return;
}{code}
Because the table location is the HDFS root path (hdfs://emr-master-1:8020/), path.getParent() returns null, which causes the NPE.
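To make the root cause concrete, here is a small standalone demo of Hadoop's Path.getParent() behavior at a filesystem root (the class name is mine, for illustration only):
{code:java}
import org.apache.hadoop.fs.Path;

public class RootParentDemo {
  public static void main(String[] args) {
    // A root location like the table LOCATION above has no parent.
    Path root = new Path("hdfs://emr-master-1:8020/");
    System.out.println(root.getParent());   // prints: null

    // A non-root location has a parent, so the sticky-bit check has
    // a FileStatus to inspect and no NPE occurs.
    Path child = new Path("hdfs://emr-master-1:8020/warehouse/fcbai");
    System.out.println(child.getParent());  // prints: hdfs://emr-master-1:8020/warehouse
  }
}
{code}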
I can think of four ways to fix the bug:
# Modify the create-table path: if the location is a filesystem root, fail the CREATE TABLE.
# Modify FileUtils.checkDeletePermission: check path.getParent() and, if it is null, return so the drop succeeds (see the sketch below).
# Modify the RangerHiveAuthorizer.checkPrivileges function of the Hive Ranger plugin (in the Ranger repo): if the location is a filesystem root, fail the CREATE TABLE.
# Modify the HDFS Path object so that getParent() does not return null for a root URI.

I recommend the first or second method. Any suggestions? Thanks.
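For illustration, a minimal sketch of the second method, assuming a null-parent guard at the top of FileUtils.checkDeletePermission. The signature is simplified and FsPermission.getStickyBit() stands in for the shim call, so this is a sketch of the idea, not the exact patch:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class RootSafeDeleteCheck {
  static void checkDeletePermission(FileSystem fs, Path path) throws IOException {
    Path parent = path.getParent();
    if (parent == null) {
      // path is a filesystem root (hdfs://host:8020/ or s3a://bucket/);
      // there is no parent dir that could carry a sticky bit, so skip
      // the check and let the drop proceed instead of throwing an NPE.
      return;
    }
    // unchanged logic from the snippet quoted above:
    // check if sticky bit is set on the parent dir
    FileStatus parStatus = fs.getFileStatus(parent);
    if (!parStatus.getPermission().getStickyBit()) {
      // no sticky bit, so write permission on parent dir is sufficient
      return;
    }
    // ... in the real method, further owner checks follow here
  }
}
{code}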
> Drop external table at root of s3 bucket throws NPE
> ---------------------------------------------------
>
>                 Key: HIVE-25912
>                 URL: https://issues.apache.org/jira/browse/HIVE-25912
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 3.1.2
>         Environment: Hive version: 3.1.2
>            Reporter: Fachuan Bai
>            Assignee: Fachuan Bai
>            Priority: Major
>              Labels: metastore, pull-request-available
>         Attachments: hive bugs.png, hive-bug-01.png
>
>   Original Estimate: 96h
>          Time Spent: 15h 20m
>  Remaining Estimate: 80h 40m
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)