[ https://issues.apache.org/jira/browse/HIVE-21197?focusedWorklogId=203074&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-203074 ]
ASF GitHub Bot logged work on HIVE-21197:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Feb/19 19:08
            Start Date: 23/Feb/19 19:08
    Worklog Time Spent: 10m

Work Description: sankarh commented on pull request #541: HIVE-21197 : Hive Replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled
URL: https://github.com/apache/hive/pull/541#discussion_r259589143

########## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/incremental/IncrementalLoadTasksBuilder.java ##########

@@ -289,12 +296,21 @@ private boolean shouldReplayEvent(FileStatus dir, DumpType dumpType, String dbNa
     return updateReplIdTask;
   }

-  private Task<? extends Serializable> dbUpdateReplStateTask(String dbName, String replState,
+  private Task<? extends Serializable> dbUpdateReplStateTask(String dbName, String replState, String incLoadPendFlag,
                                                              Task<? extends Serializable> preCursor) {
     HashMap<String, String> mapProp = new HashMap<>();
-    mapProp.put(ReplicationSpec.KEY.CURR_STATE_ID.toString(), replState);
-    AlterDatabaseDesc alterDbDesc = new AlterDatabaseDesc(dbName, mapProp, new ReplicationSpec(replState, replState));
+    // If the update is for incLoadPendFlag, then send replicationSpec as null to avoid the replacement check.
+    ReplicationSpec replicationSpec = null;
+    if (incLoadPendFlag == null) {
+      mapProp.put(ReplicationSpec.KEY.CURR_STATE_ID.toString(), replState);
+      replicationSpec = new ReplicationSpec(replState, replState);
+    } else {
+      assert replState == null;
+      mapProp.put(ReplUtils.REPL_FIRST_INC_PENDING_FLAG, incLoadPendFlag);

Review comment:
   There is one corner case for A->B->C replication where EVENT_ALTER_DATABASE should skip this additional parameter while dumping. Say bootstrap and the first incremental are done on B, so this flag is false. Now we trigger a bootstrap dump from B->C and, concurrently, an alterDb operation logs an event with this flag as false.
   Now, when we bootstrap load into C, we set the flag there as true. During the first incremental load in C, when we process that AlterDb event, we set it back to false, so that we open up compaction after this event; later events may not guarantee the duplicate check.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 203074)
    Time Spent: 7h 10m  (was: 7h)

> Hive replication can add duplicate data during migration to a target with
> hive.strict.managed.tables enabled
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-21197
>                 URL: https://issues.apache.org/jira/browse/HIVE-21197
>             Project: Hive
>          Issue Type: Task
>          Components: repl
>            Reporter: mahesh kumar behera
>            Assignee: mahesh kumar behera
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21197.01.patch, HIVE-21197.02.patch
>
>          Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> During the bootstrap phase, it may happen that files copied to the target were created by events that are not part of the bootstrap. This is because bootstrap first gets the last event id and then the file list; if some events are added during this window, bootstrap will also include the files created by those events. The same files are then copied again during the first incremental replication just after the bootstrap. In the normal scenario the duplicate copy does not cause any issue, as Hive allows use of the target database only after the first incremental. But in the case of migration, the files at source and target are copied to different locations (based on the write id at the target), so this may lead to duplicate data at the target.
> This can be avoided by adding a check for duplicate files at load time. The check needs to be done only for the first incremental, and the search can be limited to the bootstrap directory (with write id 1); if the file is already present there, the copy is simply skipped.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
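The duplicate-file check the issue description proposes (skip a copy during the first incremental load when the bootstrap directory with write id 1 already holds the same file) can be sketched as below. This is a minimal, self-contained illustration, not Hive's actual implementation: the class name `BootstrapDupCheck`, the method `shouldCopyFile`, and the use of plain file names as keys are all assumptions made for the sketch.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the load-time duplicate check described above.
// During the FIRST incremental load only, a file is copied unless a file
// with the same name was already loaded by the bootstrap (write id 1).
public class BootstrapDupCheck {

    // File names already present from the bootstrap load (write id 1 dir).
    private final Set<String> bootstrapFiles;
    private final boolean firstIncremental;

    public BootstrapDupCheck(Set<String> bootstrapFiles, boolean firstIncremental) {
        this.bootstrapFiles = bootstrapFiles;
        this.firstIncremental = firstIncremental;
    }

    // Returns false when the copy should be skipped because the bootstrap
    // directory already contains a file with the same name.
    public boolean shouldCopyFile(String fileName) {
        if (!firstIncremental) {
            // Only the first incremental can overlap the bootstrap dump,
            // so later incrementals always copy.
            return true;
        }
        return !bootstrapFiles.contains(fileName);
    }

    public static void main(String[] args) {
        Set<String> bootstrap = new HashSet<>(Arrays.asList("000000_0", "000001_0"));
        BootstrapDupCheck check = new BootstrapDupCheck(bootstrap, true);
        System.out.println(check.shouldCopyFile("000000_0")); // already bootstrapped: skip
        System.out.println(check.shouldCopyFile("000002_0")); // new file: copy
    }
}
```

In the scenario from the description, the event that created `000000_0` fired between the bootstrap's last-event-id snapshot and its file listing, so the file arrives twice; the check above drops the second copy instead of landing it under a different write id.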