[ https://issues.apache.org/jira/browse/HIVE-24291?focusedWorklogId=506627&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506627 ]
ASF GitHub Bot logged work on HIVE-24291:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 30/Oct/20 10:21
            Start Date: 30/Oct/20 10:21
    Worklog Time Spent: 10m
      Work Description: deniskuzZ commented on a change in pull request #1592:
URL: https://github.com/apache/hive/pull/1592#discussion_r514997057

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##########
@@ -2228,38 +2235,45 @@ public void addWriteNotificationLog(AcidWriteEvent acidWriteEvent)
   public void performWriteSetGC() throws MetaException {
     Connection dbConn = null;
     Statement stmt = null;
-    ResultSet rs = null;
     try {
       dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
       stmt = dbConn.createStatement();
-
-      long minOpenTxn;
-      rs = stmt.executeQuery("SELECT MIN(\"TXN_ID\") FROM \"TXNS\" WHERE \"TXN_STATE\"=" + TxnStatus.OPEN);
-      if (!rs.next()) {
-        throw new IllegalStateException("Scalar query returned no rows?!?!!");
-      }
-      minOpenTxn = rs.getLong(1);
-      if (rs.wasNull()) {
-        minOpenTxn = Long.MAX_VALUE;
-      }
-      long lowWaterMark = getOpenTxnTimeoutLowBoundaryTxnId(dbConn);
-      /**
-       * We try to find the highest transactionId below which everything was committed or aborted.
-       * For that we look for the lowest open transaction in TXNS and the TxnMinTimeout boundary,
-       * because it is guaranteed there won't be open transactions below that.
-       */
-      long commitHighWaterMark = Long.min(minOpenTxn, lowWaterMark + 1);
-      LOG.debug("Perform WriteSet GC with minOpenTxn {}, lowWaterMark {}", minOpenTxn, lowWaterMark);
+      long commitHighWaterMark = getMinOpenTxnIdWaterMark(dbConn);

Review comment:
Question about getOpenTxnTimeoutLowBoundaryTxnId: if we need the min open txn, why are we doing
SELECT MAX(TXN_ID) FROM TXNS WHERE TXN_STARTED < (sysdate - openTxnTimeOutMillis)
shouldn't it be MIN()?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 506627)
    Time Spent: 1h 50m  (was: 1h 40m)

> Compaction Cleaner prematurely cleans up deltas
> -----------------------------------------------
>
>                 Key: HIVE-24291
>                 URL: https://issues.apache.org/jira/browse/HIVE-24291
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Peter Varga
>            Assignee: Peter Varga
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Since HIVE-23107 the cleaner can clean up deltas that are still used by running queries.
> Example:
> * TxnIds 1-5 write to a partition; all commit
> * Compactor starts with txnId=6
> * A long-running query starts with txnId=7; it sees txnId=6 as open in its snapshot
> * Compaction commits
> * Cleaner runs
> Previously the min_history_level table would have prevented the Cleaner from deleting deltas 1-5 while txnId=7 was open, but now they will be deleted, and the long-running query may fail if it tries to access the files.
> A solution could be to not run the cleaner while any txn is open that was opened before the compaction was committed (CQ_NEXT_TXN_ID).

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
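The watermark logic questioned in the review comment can be sketched in plain Java. This is a standalone illustration, not Hive's TxnHandler code: `lowBoundaryTxnId` is an in-memory stand-in for the SQL query the comment asks about, and it uses MAX() on purpose, since the boundary wanted is the highest txn id whose start time is already past the open-txn timeout window (no not-yet-visible open txn can exist at or below it). All class and method names here are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

public class WaterMarkSketch {

  record Txn(long id, long startedMillis, boolean open) {}

  // Stand-in for: SELECT MAX(TXN_ID) FROM TXNS WHERE TXN_STARTED < (sysdate - openTxnTimeOutMillis).
  // MAX() is used because every id at or below this boundary was started long
  // enough ago that the open-txn timeout has expired for it.
  static long lowBoundaryTxnId(List<Txn> txns, long nowMillis, long timeoutMillis) {
    return txns.stream()
        .filter(t -> t.startedMillis() < nowMillis - timeoutMillis)
        .mapToLong(Txn::id)
        .max()
        .orElse(0L);
  }

  // Stand-in for: SELECT MIN(TXN_ID) FROM TXNS WHERE TXN_STATE = OPEN;
  // no open txn at all maps to Long.MAX_VALUE, as in the removed code.
  static long minOpenTxnId(List<Txn> txns) {
    return txns.stream()
        .filter(Txn::open)
        .mapToLong(Txn::id)
        .min()
        .orElse(Long.MAX_VALUE);
  }

  // commitHighWaterMark as in the diff: everything strictly below this id
  // is guaranteed committed or aborted.
  static long commitHighWaterMark(List<Txn> txns, long nowMillis, long timeoutMillis) {
    return Long.min(minOpenTxnId(txns), lowBoundaryTxnId(txns, nowMillis, timeoutMillis) + 1);
  }

  public static void main(String[] args) {
    long now = 100_000, timeout = 10_000;
    List<Txn> txns = Arrays.asList(
        new Txn(1, 1_000, false),    // old, committed
        new Txn(2, 2_000, false),    // old, committed
        new Txn(3, 95_000, true),    // recent, still open
        new Txn(4, 96_000, false));  // recent, committed
    System.out.println(lowBoundaryTxnId(txns, now, timeout));     // 2 (MAX of ids started before now - timeout)
    System.out.println(commitHighWaterMark(txns, now, timeout));  // 3 (min(3, 2 + 1))
  }
}
```

Note the asymmetry: MIN() over open txns finds the lowest id that might still write, while MAX() over timed-out txns finds the highest id that is safely in the past; the water mark is the smaller of the two views.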
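The fix proposed at the end of the issue description can likewise be sketched. This is a hypothetical illustration, not Hive's actual API: it assumes the cleaner records the next txn id at compaction commit time (CQ_NEXT_TXN_ID) and gates cleanup on the lowest currently open txn id having advanced past it.

```java
public class CleanerGateSketch {

  // Cleanup is safe only when no txn remains open that was opened before
  // the compaction committed, i.e. the lowest open txn id has reached the
  // txn id that was "next" when the compaction committed.
  static boolean readyToClean(long minOpenTxnId, long cqNextTxnId) {
    return minOpenTxnId >= cqNextTxnId;
  }

  public static void main(String[] args) {
    // Scenario from the description: compactor runs as txnId=6 and commits
    // while the long-running query holds txnId=7 open, so CQ_NEXT_TXN_ID=8.
    System.out.println(readyToClean(7, 8)); // false: txn 7 predates the commit, keep deltas 1-5
    System.out.println(readyToClean(9, 8)); // true: all pre-commit txns are closed, safe to clean
  }
}
```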