[ https://issues.apache.org/jira/browse/HIVE-24291?focusedWorklogId=506561&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506561 ]
ASF GitHub Bot logged work on HIVE-24291:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Oct/20 07:47
            Start Date: 30/Oct/20 07:47
    Worklog Time Spent: 10m
      Work Description: pvargacl commented on a change in pull request #1592:
URL: https://github.com/apache/hive/pull/1592#discussion_r514919991

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##########
@@ -281,9 +280,14 @@ public void markCompacted(CompactionInfo info) throws MetaException {
     try {
       dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
       stmt = dbConn.createStatement();
-      String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", " +
-          "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" " +
-          "WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'";
+      /*
+       * By filtering on minOpenTxnWaterMark we only clean up after every transaction that could
+       * still see the uncompacted deltas has committed. This way the cleaner can remove
+       * everything that was made obsolete by this compaction.
+       */
+      long minOpenTxnWaterMark = getMinOpenTxnIdWaterMark(dbConn);

Review comment:
   1. Passing the minOpenTxn as an argument now.
   2. Changed findMinOpenTxnIdForCleaner to use getMinOpenTxnIdWaterMark. The timeout boundary check has been needed since HIVE-23084, because an open txn can appear later with a txnId that is lower than the current minOpen but higher than the timeout boundary. It probably wouldn't cause any problem for the Cleaner, but better safe than sorry: this way it always gives a correct result. This also means the max(cq_next_txnid) check is removed, but I think the only consequence is that any txns aborted after the compaction will be cleaned up as well, which is a good side effect.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 506561)
    Time Spent: 50m  (was: 40m)

> Compaction Cleaner prematurely cleans up deltas
> -----------------------------------------------
>
>                 Key: HIVE-24291
>                 URL: https://issues.apache.org/jira/browse/HIVE-24291
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Peter Varga
>            Assignee: Peter Varga
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Since HIVE-23107 the cleaner can clean up deltas that are still in use by running queries.
> Example:
> * TxnId 1-5 write to a partition; all of them commit
> * Compactor starts with txnId=6
> * A long-running query starts with txnId=7; it sees txnId=6 as open in its snapshot
> * Compaction commits
> * Cleaner runs
> Previously the MIN_HISTORY_LEVEL table would have prevented the Cleaner from deleting deltas 1-5 while txnId=7 is open, but now they are deleted, and the long-running query may fail if it tries to access the files.
> A solution could be to not run the cleaner while any txn is open that was opened before the compaction was committed (CQ_NEXT_TXN_ID).
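A minimal sketch of the cleaner gating this describes, assuming the CQ_NEXT_TXN_ID semantics given above (first txnId allocated after the compaction committed) and a watermark equal to the lowest txnId that may still be open. The method name buildFindReadyToCleanQuery and the exact predicate are illustrative, not the committed patch:

    // Hypothetical sketch, not the actual HIVE-24291 change: select only
    // ready-for-cleaning queue entries whose compaction committed before
    // every currently open txn started.
    private String buildFindReadyToCleanQuery(long minOpenTxnWaterMark) {
      return "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
          + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" "
          + "WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'"
          // If CQ_NEXT_TXN_ID is at or below the min open txn watermark, every
          // txn opened before the compaction committed has ended, so no reader
          // can still need the pre-compaction deltas and cleaning is safe.
          + " AND \"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark;
    }

With this shape of query, the long-running query in the example (txnId=7, opened before the compaction committed at CQ_NEXT_TXN_ID=8 or later) would hold the watermark at 7 and keep the entry out of the cleaner's result set until it finishes.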