[ https://issues.apache.org/jira/browse/HIVE-21529?focusedWorklogId=227452&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-227452 ]
ASF GitHub Bot logged work on HIVE-21529:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 15/Apr/19 06:05
            Start Date: 15/Apr/19 06:05
    Worklog Time Spent: 10m
      Work Description: ashutosh-bapat commented on pull request #581: HIVE-21529 : Bootstrap ACID tables as part of incremental dump.
URL: https://github.com/apache/hive/pull/581#discussion_r275215849

 ##########
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##########
 @@ -397,22 +469,13 @@ private String getValidWriteIdList(String dbName, String tblName, String validTx
     return openTxns;
   }

-  String getValidTxnListForReplDump(Hive hiveDb) throws HiveException {
-    // Key design point for REPL DUMP is to not have any txns older than current txn in which dump runs.
-    // This is needed to ensure that Repl dump doesn't copy any data files written by any open txns
-    // mainly for streaming ingest case where one delta file shall have data from committed/aborted/open txns.
-    // It may also have data inconsistency if the on-going txns doesn't have corresponding open/write
-    // events captured which means, catch-up incremental phase won't be able to replicate those txns.
-    // So, the logic is to wait for configured amount of time to see if all open txns < current txn is
-    // getting aborted/committed. If not, then we forcefully abort those txns just like AcidHouseKeeperService.
-    ValidTxnList validTxnList = getTxnMgr().getValidTxns();
-    long timeoutInMs = HiveConf.getTimeVar(conf,
-        HiveConf.ConfVars.REPL_BOOTSTRAP_DUMP_OPEN_TXN_TIMEOUT, TimeUnit.MILLISECONDS);
-    long waitUntilTime = System.currentTimeMillis() + timeoutInMs;
+  ValidTxnList getValidTxnListForReplDump(Hive hiveDb, ValidTxnList validTxnList,

 Review comment:
   See my explanation above about validTxnList being passed. Earlier the transaction snapshot was obtained within getValidTxnListForReplDump() itself, so returning a String representation was fine. Now that the snapshot is passed into this function, it is easier to return it as is. The caller can then use the snapshot object directly or convert it to a string as required, which gives the caller a bit of flexibility. That's why I changed the return type to the snapshot instead of a string, which would otherwise need to be parsed back into a snapshot object. Since there are only two callers, it is an easy change to make now, and it stays easy later if the number of callers increases.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 227452)
    Time Spent: 5h  (was: 4h 50m)
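A minimal, hypothetical sketch of the point made in the review comment above: with the method returning the ValidTxnList snapshot rather than its String form, each caller decides whether it needs the object or a serialized representation. ValidTxnList (org.apache.hadoop.hive.common) with isTxnValid()/writeToString() is the existing Hive interface; the dumpWithSnapshot() helper and its arguments are illustrative only and are not code from the patch.

    import org.apache.hadoop.hive.common.ValidTxnList;

    class ReplDumpSnapshotUsageSketch {

      // Hypothetical caller: receives the snapshot returned by getValidTxnListForReplDump().
      static String dumpWithSnapshot(ValidTxnList snapshot, long someTxnId) {
        // Option 1: use the snapshot object as is, e.g. to skip data written by
        // transactions that are not committed in this snapshot.
        if (!snapshot.isTxnValid(someTxnId)) {
          // skip this transaction's data
        }
        // Option 2: serialize only where a string form is actually needed (e.g. to stash
        // in a job configuration), instead of parsing a returned string back into a snapshot.
        return snapshot.writeToString();
      }
    }

The trade-off is the one the reviewer names: returning a String forces every caller that wants the object to parse it back, while returning the snapshot lets the two existing callers serialize only where a string is genuinely needed.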
> Hive support bootstrap of ACID/MM tables on an existing policy.
> ----------------------------------------------------------------
>
>                 Key: HIVE-21529
>                 URL: https://issues.apache.org/jira/browse/HIVE-21529
>             Project: Hive
>          Issue Type: Sub-task
>          Components: repl, Transactions
>    Affects Versions: 4.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Ashutosh Bapat
>            Priority: Major
>              Labels: DR, pull-request-available, replication
>         Attachments: HIVE-21529.01.patch, HIVE-21529.02.patch, HIVE-21529.03.patch
>
>          Time Spent: 5h
>  Remaining Estimate: 0h
>
> If ACID/MM tables are to be enabled (hive.repl.dump.include.acid.tables) on an
> existing repl policy, then the bootstrap dump of these tables needs to be combined
> with the ongoing incremental dump.
> Shall add a one-time config "hive.repl.bootstrap.acid.tables" to include the
> bootstrap in the given dump.
> The support for hive.repl.bootstrap.cleanup.type for ACID tables, to clean up
> partially bootstrapped tables in case of retry, is already in place thanks to
> the work done for external tables. Need to test that it actually works.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
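For illustration, a rough sketch, under assumptions, of how the two configs named in the issue description above could drive the decision to bootstrap ACID/MM tables inside an incremental REPL DUMP. The config names come from the issue text; the helper shouldBootstrapAcidTablesDuringIncremental() and its defaults are illustrative, not the actual ReplDumpTask logic.

    import org.apache.hadoop.conf.Configuration;

    class AcidBootstrapDuringIncrementalSketch {

      static boolean shouldBootstrapAcidTablesDuringIncremental(Configuration conf) {
        // ACID/MM tables must be included in the replication policy at all...
        boolean includeAcid = conf.getBoolean("hive.repl.dump.include.acid.tables", false);
        // ...and the one-time switch asks for their bootstrap within this incremental dump.
        boolean bootstrapAcid = conf.getBoolean("hive.repl.bootstrap.acid.tables", false);
        return includeAcid && bootstrapAcid;
      }
    }

Because hive.repl.bootstrap.acid.tables is described as a one-time config, a check of this shape would make the incremental dump carry the ACID bootstrap exactly once, after which the flag is expected to be unset for subsequent dumps.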