[ https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Karen Coppage updated HIVE-21266:
---------------------------------
    Summary: Unit test for potential issue with single delta file  (was: Issue with single delta file)

> Unit test for potential issue with single delta file
> ----------------------------------------------------
>
>                 Key: HIVE-21266
>                 URL: https://issues.apache.org/jira/browse/HIVE-21266
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Transactions
>    Affects Versions: 4.0.0
>            Reporter: Eugene Koifman
>            Assignee: Karen Coppage
>            Priority: Major
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) {
>   LOG.debug("Not compacting {}; current base is {} and there are {} deltas and {} originals",
>       sd.getLocation(), dir.getBaseDirectory(), deltaCount, origCount);
>   return;
> }
> {noformat}
> This check is problematic.
> Suppose you have a single delta file from streaming ingest, {{delta_11_20}}, in which {{txnid:13}} was aborted. The code above will not rewrite the delta (the rewrite is what drops anything belonging to the aborted txn), and the compaction will transition to the "ready_for_cleaning" state, which drops the metadata about the aborted txn in {{markCleaned()}}. The aborted data then comes back as committed.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
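A minimal sketch of the failure mode described above. The method names and the {{hasAbortedTxns}} flag are illustrative assumptions, not Hive's actual API; only the arithmetic of the skip condition is copied from the snippet:

```java
// Sketch of the early-return condition from CompactorMR.java and a possible
// stricter variant. Names here are hypothetical, for illustration only.
public class CompactSkipSketch {

    // Mirrors: (deltaCount + (base == null ? 0 : 1)) + origCount <= 1
    static boolean skipsCompaction(int deltaCount, boolean hasBase, int origCount) {
        return (deltaCount + (hasBase ? 1 : 0)) + origCount <= 1;
    }

    // One conceivable fix (an assumption, not the actual patch): never skip
    // when aborted txns are present, because the rewrite performed during
    // compaction is what filters out rows from aborted transactions.
    static boolean skipsCompactionStrict(int deltaCount, boolean hasBase,
                                         int origCount, boolean hasAbortedTxns) {
        if (hasAbortedTxns) {
            return false;
        }
        return skipsCompaction(deltaCount, hasBase, origCount);
    }

    public static void main(String[] args) {
        // Scenario from the issue: a single streaming-ingest delta
        // (delta_11_20) containing an aborted txn, no base, no originals.
        boolean skipped = skipsCompaction(1, false, 0);
        System.out.println("original check skips compaction: " + skipped);

        boolean skippedStrict = skipsCompactionStrict(1, false, 0, true);
        System.out.println("strict check skips compaction: " + skippedStrict);
    }
}
```

With the original condition the single-delta directory is skipped, so the aborted rows are never filtered out before {{markCleaned()}} discards the aborted-txn metadata.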