[ https://issues.apache.org/jira/browse/HIVE-25943?focusedWorklogId=737421&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-737421 ]

ASF GitHub Bot logged work on HIVE-25943:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Mar/22 08:23
            Start Date: 07/Mar/22 08:23
    Worklog Time Spent: 10m 
      Work Description: veghlaci05 commented on a change in pull request #3034:
URL: https://github.com/apache/hive/pull/3034#discussion_r820470355



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -288,14 +285,30 @@ private void clean(CompactionInfo ci, long minOpenTxnGLB, boolean metricsEnabled
       if (metricsEnabled) {
         Metrics.getOrCreateCounter(MetricsConstants.COMPACTION_CLEANER_FAILURE_COUNTER).inc();
       }
-      txnHandler.markFailed(ci);
-    } finally {
+      handleCleanerAttemptFailure(ci);
+    }  finally {
       if (metricsEnabled) {
         perfLogger.perfLogEnd(CLASS_NAME, cleanerMetric);
       }
     }
   }
 
+  private void handleCleanerAttemptFailure(CompactionInfo ci) throws MetaException {
+    long defaultRetention = getTimeVar(conf, HIVE_COMPACTOR_CLEANER_RETRY_RETENTION_TIME, TimeUnit.MILLISECONDS);
+    int cleanAttempts = 0;
+    if (ci.retryRetention > 0) {
+      cleanAttempts = (int)(Math.log(ci.retryRetention / defaultRetention) / Math.log(2)) + 1;
+    }
+    if (cleanAttempts >= getIntVar(conf, HIVE_COMPACTOR_CLEANER_MAX_RETRY_ATTEMPTS)) {
+      //Mark it as failed if the max attempt threshold is reached.
+      txnHandler.markFailed(ci);
+    } else {
+      //Calculate retry retention time and update record.
+      ci.retryRetention = (long)Math.pow(2, cleanAttempts) * defaultRetention;

Review comment:
       Yes, the plan was to have exponential backoff: the wait time doubles with every iteration (`2^n * 5m`, where `n` is the number of attempts). Originally I wanted to add the backoff value to the time of the last retry attempt, but that would have required one more new field, so to keep it simple I add it to the submit time.
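A minimal, self-contained sketch (not Hive code) of the arithmetic described above, assuming a 5-minute default retention and a hypothetical maximum of 5 attempts; it reconstructs the attempt count from the stored `retryRetention` and doubles the wait on each failure, in the same way the patch does:

```java
// Standalone illustration of the cleaner retry backoff described in this comment.
// Assumptions (not taken from the patch): 5-minute default retention, max 5 attempts.
public class CleanerBackoffSketch {
    public static void main(String[] args) {
        final long defaultRetention = 5 * 60 * 1000L; // 5 minutes in ms
        final int maxRetryAttempts = 5;               // hypothetical threshold
        long retryRetention = 0;                      // 0 before the first failure

        for (int failure = 1; failure <= 7; failure++) {
            // Recover the number of previous attempts from the stored retention
            // (retryRetention is always defaultRetention * 2^(attempts - 1)).
            int cleanAttempts = 0;
            if (retryRetention > 0) {
                cleanAttempts = (int) Math.round(
                        Math.log((double) retryRetention / defaultRetention) / Math.log(2)) + 1;
            }
            if (cleanAttempts >= maxRetryAttempts) {
                System.out.println("failure " + failure + ": threshold reached, mark compaction failed");
                break;
            }
            // Exponential backoff: 2^n * defaultRetention.
            retryRetention = (long) Math.pow(2, cleanAttempts) * defaultRetention;
            System.out.printf("failure %d: retry no sooner than %d minutes after submit time%n",
                    failure, retryRetention / 60_000);
        }
    }
}
```

With the assumed defaults this prints waits of 5, 10, 20, 40 and 80 minutes and then marks the compaction as failed on the sixth failure (`Math.round` is used here only to avoid floating-point truncation in the standalone demo).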




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 737421)
    Time Spent: 5h 20m  (was: 5h 10m)

> Introduce compaction cleaner failed attempts threshold
> ------------------------------------------------------
>
>                 Key: HIVE-25943
>                 URL: https://issues.apache.org/jira/browse/HIVE-25943
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>            Reporter: László Végh
>            Assignee: László Végh
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> If the cleaner fails for some reason, the compaction entity status remains in 
> "ready for cleaning", so the cleaner will keep picking up this entity, 
> resulting in endless retries. The number of failed cleaning attempts should be 
> counted, and once it reaches a certain threshold the cleaner must skip all 
> further cleaning attempts on that compaction entity. 
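Purely as a hypothetical illustration of the skip behaviour described above, combined with the review comment's "I add it to the submit time": an entry in "ready for cleaning" would only become eligible again once its backoff, measured from the submit time, has elapsed. None of the names below come from the Hive code base.

```java
// Hypothetical sketch (not Hive code) of skipping a "ready for cleaning" entry
// until its backoff, added to the submit time, has elapsed.
public class CleanerSkipSketch {

    // Hypothetical stand-in for the compaction queue entry fields involved.
    static class Candidate {
        long submitTimeMs;
        long retryRetentionMs;

        Candidate(long submitTimeMs, long retryRetentionMs) {
            this.submitTimeMs = submitTimeMs;
            this.retryRetentionMs = retryRetentionMs;
        }
    }

    /** The cleaner may pick this entry up only after submit time + backoff. */
    static boolean eligibleForCleaning(Candidate c, long nowMs) {
        return nowMs >= c.submitTimeMs + c.retryRetentionMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        Candidate recent = new Candidate(now - 2 * 60_000, 5 * 60_000);  // 5m backoff, submitted 2m ago
        Candidate overdue = new Candidate(now - 10 * 60_000, 5 * 60_000); // 5m backoff, submitted 10m ago
        System.out.println(eligibleForCleaning(recent, now));  // false: skipped for now
        System.out.println(eligibleForCleaning(overdue, now)); // true: retried
    }
}
```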



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
