[ https://issues.apache.org/jira/browse/HIVE-24705?focusedWorklogId=575674&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575674 ]
ASF GitHub Bot logged work on HIVE-24705:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Apr/21 18:40
            Start Date: 01/Apr/21 18:40
    Worklog Time Spent: 10m
      Work Description: saihemanth-cloudera commented on a change in pull request #1960:
URL: https://github.com/apache/hive/pull/1960#discussion_r605869470


##########
File path: ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/metastore/events/AlterTableEvent.java
##########
@@ -101,6 +112,36 @@ private HiveOperationType getOperationType() {
       ret.add(getHivePrivilegeObjectDfsUri(newUri));
     }
+    if(newTable.getParameters().containsKey(hive_metastoreConstants.META_TABLE_STORAGE)) {
+      String storageUri = "";
+      DefaultStorageHandler defaultStorageHandler = null;
+      HiveStorageHandler hiveStorageHandler = null;
+      Configuration conf = new Configuration();
+      Map<String, String> tableProperties = new HashMap<>();
+      tableProperties.putAll(newTable.getSd().getSerdeInfo().getParameters());

Review comment:
       tableProperties is just a local variable; I'm not setting it to the table parameters. The reason I'm combining the table parameters and the SERDE parameters is that a user can supply properties such as the table name, column names, or connection information either in TBLPROPERTIES or in the SERDE properties. So I need to put all of the properties into one map, send that map to the respective storage handler, and extract the relevant table properties there.

##########
File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaStorageHandler.java
##########
@@ -65,13 +68,16 @@
 /**
  * Hive Kafka storage handler to allow user to read and write from/to Kafka message bus.
  */
-@SuppressWarnings("ALL") public class KafkaStorageHandler extends DefaultHiveMetaHook implements HiveStorageHandler {
+@SuppressWarnings("ALL") public class KafkaStorageHandler extends DefaultHiveMetaHook implements HiveStorageHandler, HiveStorageAuthorizationHandler {

   private static final Logger LOG = LoggerFactory.getLogger(KafkaStorageHandler.class);
   private static final String KAFKA_STORAGE_HANDLER = "org.apache.hadoop.hive.kafka.KafkaStorageHandler";

   private Configuration configuration;

+  /** Kafka prefix to form the URI for authentication */
+  private static final String KAFKA_PREFIX = "kafka:";

Review comment:
       This prefix is used by Ranger to determine which storage handler class the URI corresponds to.

##########
File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaStorageHandler.java
##########
@@ -65,13 +68,16 @@
 /**
  * Hive Kafka storage handler to allow user to read and write from/to Kafka message bus.
  */
-@SuppressWarnings("ALL") public class KafkaStorageHandler extends DefaultHiveMetaHook implements HiveStorageHandler {
+@SuppressWarnings("ALL") public class KafkaStorageHandler extends DefaultHiveMetaHook implements HiveStorageHandler, HiveStorageAuthorizationHandler {

Review comment:
       No, these storage handlers are not serializable.
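To make the two comments above concrete, here is a minimal sketch of how the Kafka handler could turn the merged table/SERDE property map into an authorization URI using the "kafka:" prefix. The method name getURIForAuth, the property keys kafka.topic and kafka.bootstrap.servers, and the exact URI layout are illustrative assumptions, not the code under review:

    // Hypothetical sketch: forming an authorization URI from the merged
    // table + SERDE properties; the scheme prefix lets Ranger map the URI
    // back to the Kafka storage handler.
    import java.net.URI;
    import java.net.URISyntaxException;
    import java.util.HashMap;
    import java.util.Map;

    public class KafkaAuthUriSketch {

      private static final String KAFKA_PREFIX = "kafka:";

      /** Assumed shape of the authorization callback; property keys are illustrative. */
      public static URI getURIForAuth(Map<String, String> tableProperties) throws URISyntaxException {
        String topic = tableProperties.getOrDefault("kafka.topic", "");
        String brokers = tableProperties.getOrDefault("kafka.bootstrap.servers", "");
        // e.g. kafka://broker1:9092/clicks
        return new URI(KAFKA_PREFIX + "//" + brokers + "/" + topic);
      }

      public static void main(String[] args) throws URISyntaxException {
        // Table properties and SERDE properties merged into a single map,
        // as described in the review comment on AlterTableEvent.
        Map<String, String> props = new HashMap<>();
        props.put("kafka.topic", "clicks");
        props.put("kafka.bootstrap.servers", "broker1:9092");
        System.out.println(getURIForAuth(props));  // prints kafka://broker1:9092/clicks
      }
    }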
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageAuthorizationHandler.java
##########
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.metadata;
+
+import org.apache.hadoop.hive.common.classification.InterfaceAudience;
+import org.apache.hadoop.hive.common.classification.InterfaceStability;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Map;
+
+/**
+ * HiveStorageAuthorizationHandler defines a pluggable interface for
+ * authorization of storage based tables in Hive. A Storage authorization
+ * handler consists of a bundle of the following:
+ *
+ *<ul>
+ *<li>getURI
+ *</ul>
+ *
+ * Storage authorization handler classes are plugged in using the STORED BY 'classname'
+ * clause in CREATE TABLE.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Stable
+public interface HiveStorageAuthorizationHandler{

Review comment:
       Initially, I was using HiveURIBasedAuthorization as the class name, but Thejas suggested that I use HiveStorageAuthorizationHandler, since this class may evolve in the future to handle other methods of authorization.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 575674)
    Time Spent: 1h  (was: 50m)

> Create/Alter/Drop tables based on storage handlers in HS2 should be authorized by Ranger/Sentry
> ------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-24705
>                 URL: https://issues.apache.org/jira/browse/HIVE-24705
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Sai Hemanth Gantasala
>            Assignee: Sai Hemanth Gantasala
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> With doAs=false in Hive 3.x, whenever a user tries to create a table based on a storage handler over external storage (for example, an HBase table), the end user seen by the external system is "hive", so we cannot enforce the condition on the actual end user in Apache Ranger/Sentry. We therefore need to enforce this condition in Hive itself when tables based on storage handlers are created, altered, or dropped.
> Built-in Hive storage handlers such as HBaseStorageHandler, KafkaStorageHandler, etc. should implement a method getURIForAuthentication() which returns a URI formed from the table properties. This URI can then be sent to Ranger/Sentry for authorization.


-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
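As a rough illustration of the flow described in the issue, the sketch below loads the storage handler class recorded in the table parameters and, if it implements a URI-based authorization interface, asks it for a URI built from the merged table and SERDE properties. The local AuthorizableStorageHandler interface, the getURIForAuth method name, and the "storage_handler" key literal are stand-ins assumed for this sketch rather than the actual Hive API:

    // Illustrative sketch only: how an authorization event might obtain a URI
    // from a storage-handler-backed table before handing it to Ranger/Sentry.
    import java.net.URI;
    import java.util.HashMap;
    import java.util.Map;

    public class StorageHandlerAuthSketch {

      /** Stand-in for the pluggable authorization interface from the PR. */
      public interface AuthorizableStorageHandler {
        URI getURIForAuth(Map<String, String> tableProperties) throws Exception;
      }

      /** Assumed key under which the handler class name is stored (META_TABLE_STORAGE). */
      private static final String STORAGE_HANDLER_KEY = "storage_handler";

      public static URI uriForAuthorization(Map<String, String> tableParams,
                                            Map<String, String> serdeParams) throws Exception {
        String handlerClass = tableParams.get(STORAGE_HANDLER_KEY);
        if (handlerClass == null) {
          return null; // not a storage-handler-backed table; nothing extra to authorize
        }
        Object handler = Class.forName(handlerClass).getDeclaredConstructor().newInstance();
        if (!(handler instanceof AuthorizableStorageHandler)) {
          return null; // handler does not opt in to URI-based authorization
        }
        // Merge table and SERDE properties, as discussed in the review comment on
        // AlterTableEvent, so the handler can find topic/namespace/connection details
        // regardless of where the user supplied them.
        Map<String, String> merged = new HashMap<>(serdeParams);
        merged.putAll(tableParams);
        return ((AuthorizableStorageHandler) handler).getURIForAuth(merged);
      }
    }

The returned URI (for example kafka://broker1:9092/clicks) is what a plugin such as Ranger or Sentry would match against its URI policies for the corresponding storage system.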