[ 
https://issues.apache.org/jira/browse/HIVE-25372?focusedWorklogId=632471&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632471
 ]

ASF GitHub Bot logged work on HIVE-25372:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 02/Aug/21 16:44
            Start Date: 02/Aug/21 16:44
    Worklog Time Spent: 10m 
      Work Description: pvary commented on a change in pull request #2524:
URL: https://github.com/apache/hive/pull/2524#discussion_r681121522



##########
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands.java
##########
@@ -432,6 +433,50 @@ public void testAddAndDropConstraintAdvancingWriteIds() throws Exception {
 
   }
 
+  @Test
+  public void testDDLsAdvancingWriteIds() throws Exception {
+
+    String tableName = "alter_table";
+    runStatementOnDriver("drop table if exists " + tableName);
+    runStatementOnDriver(String.format("create table %s (a int, b string, c BIGINT, d INT) " +
+        "PARTITIONED BY (ds STRING)" +
+        "TBLPROPERTIES ('transactional'='true', 'transactional_properties'='insert_only')",
+        tableName));
+    runStatementOnDriver(String.format("insert into %s (a) values (0)", tableName));
+    IMetaStoreClient msClient = new HiveMetaStoreClient(hiveConf);
+    String validWriteIds = msClient.getValidWriteIds("default." + tableName).toString();
+    Assert.assertEquals("default.alter_table:1:9223372036854775807::", validWriteIds);
+
+    runStatementOnDriver(String.format("alter table %s SET OWNER USER user_name", tableName));
+    validWriteIds = msClient.getValidWriteIds("default." + tableName).toString();
+    Assert.assertEquals("default.alter_table:2:9223372036854775807::", validWriteIds);
+
+    runStatementOnDriver(String.format("alter table %s CLUSTERED BY(c) SORTED BY(d) INTO 32 BUCKETS", tableName));
+    validWriteIds = msClient.getValidWriteIds("default." + tableName).toString();
+    Assert.assertEquals("default.alter_table:3:9223372036854775807::", validWriteIds);
+
+    runStatementOnDriver(String.format("ALTER TABLE %s ADD PARTITION (ds='2013-04-05')", tableName));
+    validWriteIds = msClient.getValidWriteIds("default." + tableName).toString();
+    Assert.assertEquals("default.alter_table:4:9223372036854775807::", validWriteIds);
+
+    runStatementOnDriver(String.format("ALTER TABLE %s SET SERDEPROPERTIES ('field.delim'='\\u0001')", tableName));
+    validWriteIds = msClient.getValidWriteIds("default." + tableName).toString();
+    Assert.assertEquals("default.alter_table:5:9223372036854775807::", validWriteIds);
+
+    runStatementOnDriver(String.format("ALTER TABLE %s PARTITION (ds='2013-04-05') SET FILEFORMAT PARQUET", tableName));
+    validWriteIds = msClient.getValidWriteIds("default." + tableName).toString();
+    Assert.assertEquals("default.alter_table:6:9223372036854775807::", validWriteIds);
+
+    runStatementOnDriver(String.format("ALTER TABLE %s PARTITION (ds='2013-04-05') COMPACT 'minor'", tableName));

Review comment:
       Why do we want to increase the writeId for compaction initialization?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 632471)
    Time Spent: 20m  (was: 10m)

> [Hive] Advance write ID for remaining DDLs
> ------------------------------------------
>
>                 Key: HIVE-25372
>                 URL: https://issues.apache.org/jira/browse/HIVE-25372
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: Kishen Das
>            Assignee: Kishen Das
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We guarantee data consistency for table metadata when serving data from the
> HMS cache. The HMS cache relies on Valid Write IDs to decide whether to serve
> from the cache or to refresh from the backing DB and then serve, so we have to
> ensure we advance write IDs during all alter-table flows. Specifically, the
> write ID must be advanced in the DDL flows handled by the analyzers below:
> AlterTableSetOwnerAnalyzer.java
> AlterTableSkewedByAnalyzer.java
> AlterTableSetSerdeAnalyzer.java
> AlterTableSetSerdePropsAnalyzer.java
> AlterTableUnsetSerdePropsAnalyzer.java
> AlterTableSetPartitionSpecAnalyzer.java
> AlterTableClusterSortAnalyzer.java
> AlterTableIntoBucketsAnalyzer.java
> AlterTableConcatenateAnalyzer.java
> AlterTableCompactAnalyzer.java
> AlterTableSetFileFormatAnalyzer.java
> AlterTableSetSkewedLocationAnalyzer.java
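
As background for the assertions in the test above: the serialized write ID list (e.g. `default.alter_table:1:9223372036854775807::`) encodes the table name, the high-water mark, the minimum open write ID (`Long.MAX_VALUE` when no write is open), and the open/aborted ID lists. The sketch below is a hypothetical illustration of the cache-vs-refresh decision the description is talking about; the class and method names are invented for this example and are not Hive's actual CachedStore API.

```java
// Hypothetical sketch (not Hive code): a metadata cache can compare the
// high-water mark of its cached snapshot against the caller's snapshot to
// decide whether the cached entry is still safe to serve.
public class WriteIdCacheCheck {

    // Extract the high-water mark, the second ':'-separated field of the
    // serialized form "db.table:hwm:minOpenWriteId:openIds:abortedIds".
    static long highWaterMark(String validWriteIds) {
        String[] parts = validWriteIds.split(":");
        return Long.parseLong(parts[1]);
    }

    // Serve from cache only if the cached entry is at least as new as the
    // caller's snapshot; otherwise a refresh from the backing DB is needed.
    static boolean canServeFromCache(long cachedHwm, long requestHwm) {
        return cachedHwm >= requestHwm;
    }

    public static void main(String[] args) {
        String ids = "default.alter_table:2:9223372036854775807::";
        System.out.println(highWaterMark(ids));       // 2
        System.out.println(canServeFromCache(2, 2));  // true: cache is current
        System.out.println(canServeFromCache(1, 2));  // false: a DDL advanced
                                                      // the ID; must refresh
    }
}
```

This is why a DDL that does not advance the write ID is dangerous here: the cached high-water mark would still match, and stale metadata could be served.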



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
