[ https://issues.apache.org/jira/browse/HIVE-25621?focusedWorklogId=793567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-793567 ]

ASF GitHub Bot logged work on HIVE-25621:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Jul/22 07:38
            Start Date: 21/Jul/22 07:38
    Worklog Time Spent: 10m 
      Work Description: dengzhhu653 commented on code in PR #2731:
URL: https://github.com/apache/hive/pull/2731#discussion_r926347947


##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/compact/AlterTableCompactAnalyzer.java:
##########
@@ -67,6 +73,17 @@ protected void analyzeCommand(TableName tableName, Map<String, String> partition
     }
 
     AlterTableCompactDesc desc = new AlterTableCompactDesc(tableName, partitionSpec, type, isBlocking, mapProp);
+    Table table = getTable(tableName);
+    WriteEntity.WriteType writeType = null;
+    if (AcidUtils.isTransactionalTable(table)) {
+      setAcidDdlDesc(desc);
+      writeType = WriteType.DDL_EXCLUSIVE;
+    } else {
+      writeType = WriteEntity.determineAlterTableWriteType(AlterTableType.COMPACT);
+    }
+    inputs.add(new ReadEntity(table));

Review Comment:
   should we take care of `partitionSpec` as well?
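
   A rough sketch of one way to do that (not the actual patch): when a partition spec is given, resolve it and register partition-level entities instead of the table-level ones. The helper name `addCompactionEntities` is hypothetical, and the snippet assumes it lives in the analyzer, so the usual `BaseSemanticAnalyzer` members (`db`, `inputs`, `outputs`) and `Hive#getPartition(Table, Map, boolean)` are available:

   ```java
   import java.util.Map;

   import org.apache.hadoop.hive.ql.hooks.ReadEntity;
   import org.apache.hadoop.hive.ql.hooks.WriteEntity;
   import org.apache.hadoop.hive.ql.metadata.HiveException;
   import org.apache.hadoop.hive.ql.metadata.Partition;
   import org.apache.hadoop.hive.ql.metadata.Table;
   import org.apache.hadoop.hive.ql.parse.SemanticException;

   // Hypothetical helper, sketched for discussion only.
   private void addCompactionEntities(Table table, Map<String, String> partitionSpec,
       WriteEntity.WriteType writeType) throws SemanticException {
     try {
       if (partitionSpec == null || partitionSpec.isEmpty()) {
         // No partition spec: authorize/lock at the table level, as the patch does today.
         inputs.add(new ReadEntity(table));
         outputs.add(new WriteEntity(table, writeType));
       } else {
         // Partition spec present: resolve the partition so that authorization
         // (and locking) sees exactly the partition being compacted.
         Partition partition = db.getPartition(table, partitionSpec, false);
         inputs.add(new ReadEntity(partition));
         outputs.add(new WriteEntity(partition, writeType));
       }
     } catch (HiveException e) {
       throw new SemanticException(e);
     }
   }
   ```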



##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/concatenate/AlterTableConcatenateAnalyzer.java:
##########
@@ -95,10 +97,14 @@ protected void analyzeCommand(TableName tableName, Map<String, String> partition
     }
   }
 
-  private void compactAcidTable(TableName tableName, Map<String, String> partitionSpec) throws SemanticException {
+  private void compactAcidTable(TableName tableName, Table table, Map<String, String> partitionSpec) throws SemanticException {
     boolean isBlocking = !HiveConf.getBoolVar(conf, ConfVars.TRANSACTIONAL_CONCATENATE_NOBLOCK, false);
 
     AlterTableCompactDesc desc = new AlterTableCompactDesc(tableName, partitionSpec, "MAJOR", isBlocking, null);
+    WriteEntity.WriteType writeType = WriteEntity.WriteType.DDL_EXCLUSIVE;

Review Comment:
   should we take care of `partitionSpec` as well?



##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/compact/AlterTableCompactAnalyzer.java:
##########
@@ -67,6 +73,17 @@ protected void analyzeCommand(TableName tableName, Map<String, String> partition
     }
 
     AlterTableCompactDesc desc = new AlterTableCompactDesc(tableName, partitionSpec, type, isBlocking, mapProp);
+    Table table = getTable(tableName);
+    WriteEntity.WriteType writeType = null;
+    if (AcidUtils.isTransactionalTable(table)) {
+      setAcidDdlDesc(desc);
+      writeType = WriteType.DDL_EXCLUSIVE;

Review Comment:
   Could you please explain a little bit why we choose DDL_EXCLUSIVE for a transactional table? Does it work the same for insert-only tables?
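
   For reference, the distinction being asked about could be sketched roughly as below. This is not the PR's code: the helper name `chooseCompactionWriteType` is hypothetical and the import paths are assumed from the current ql module layout.

   ```java
   import org.apache.hadoop.hive.ql.ddl.table.AlterTableType;
   import org.apache.hadoop.hive.ql.hooks.WriteEntity;
   import org.apache.hadoop.hive.ql.io.AcidUtils;
   import org.apache.hadoop.hive.ql.metadata.Table;

   // Hypothetical helper, only to make the lock-type question concrete.
   private static WriteEntity.WriteType chooseCompactionWriteType(Table table) {
     if (AcidUtils.isFullAcidTable(table)) {
       // Full ACID table: compaction rewrites delta files, so the patch takes
       // an exclusive DDL lock.
       return WriteEntity.WriteType.DDL_EXCLUSIVE;
     }
     if (AcidUtils.isTransactionalTable(table)) {
       // Insert-only (MM) tables are transactional too but have no delete deltas;
       // whether they need the same exclusive lock is the open question here.
       return WriteEntity.WriteType.DDL_EXCLUSIVE;
     }
     // Non-transactional tables keep the existing behavior.
     return WriteEntity.determineAlterTableWriteType(AlterTableType.COMPACT);
   }
   ```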





Issue Time Tracking
-------------------

    Worklog Id:     (was: 793567)
    Time Spent: 50m  (was: 40m)

> Alter table partition compact/concatenate commands should send 
> HivePrivilegeObjects for Authz
> ---------------------------------------------------------------------------------------------
>
>                 Key: HIVE-25621
>                 URL: https://issues.apache.org/jira/browse/HIVE-25621
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 4.0.0
>            Reporter: Sai Hemanth Gantasala
>            Assignee: Sai Hemanth Gantasala
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> # Run the following queries 
> Create table temp(c0 int) partitioned by (c1 int);
> Insert into temp values(1,1);
> ALTER TABLE temp PARTITION (c1=1) COMPACT 'minor';
> ALTER TABLE temp PARTITION (c1=1) CONCATENATE;
> Insert into temp values(1,1);
>  # The above compact/concatenate commands currently do not send any Hive 
> privilege objects for authorization. Hive needs to send these objects so that 
> unauthorized or malicious users cannot perform these operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
