[ https://issues.apache.org/jira/browse/FLINK-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336000#comment-15336000 ]

ASF GitHub Bot commented on FLINK-3190:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1954#discussion_r67506447
  
    --- Diff: flink-core/src/main/java/org/apache/flink/api/common/restartstrategy/RestartStrategies.java ---
    @@ -54,6 +55,18 @@ public static RestartStrategyConfiguration fixedDelayRestart(
                return new FixedDelayRestartStrategyConfiguration(restartAttempts, delayBetweenAttempts);
        }
     
    +   /**
    +    * Generates a FailureRateRestartStrategyConfiguration.
    +    *
    +    * @param maxFailureRate Maximum number of restarts in given time unit {@code failureRateUnit} before failing a job
    +    * @param failureRateUnit Time unit for measuring failure rate
    +    * @param delayBetweenRestartAttempts Delay in-between restart attempts
    +    */
    +   public static FailureRateRestartStrategyConfiguration failureRateRestart(
    +                   int maxFailureRate, TimeUnit failureRateUnit, long delayBetweenRestartAttempts) {
    --- End diff --
    
    Maybe we could change the parameter list to `failureRateRestart(int rate, long interval, TimeUnit intervalUnit, long delay, TimeUnit delayUnit)`
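
For reference, a minimal, self-contained sketch of how the suggested parameter list could look and be used. The nested config class, its constructor, and the normalization to milliseconds are illustrative assumptions standing in for Flink's FailureRateRestartStrategyConfiguration; this is not the code under review in the pull request:

    import java.util.concurrent.TimeUnit;

    // Illustrative sketch only: FailureRateRestartStrategyConfig is a stand-in for
    // Flink's FailureRateRestartStrategyConfiguration, with an assumed constructor.
    final class RestartStrategiesSketch {

        static final class FailureRateRestartStrategyConfig {
            final int maxFailuresPerInterval;
            final long failureIntervalMillis;
            final long delayBetweenAttemptsMillis;

            FailureRateRestartStrategyConfig(int maxFailuresPerInterval,
                                             long failureIntervalMillis,
                                             long delayBetweenAttemptsMillis) {
                this.maxFailuresPerInterval = maxFailuresPerInterval;
                this.failureIntervalMillis = failureIntervalMillis;
                this.delayBetweenAttemptsMillis = delayBetweenAttemptsMillis;
            }
        }

        // The parameter list suggested in the review comment: a rate plus an explicit
        // measurement interval and an explicit delay, each with its own TimeUnit.
        static FailureRateRestartStrategyConfig failureRateRestart(
                int rate, long interval, TimeUnit intervalUnit, long delay, TimeUnit delayUnit) {
            return new FailureRateRestartStrategyConfig(
                    rate, intervalUnit.toMillis(interval), delayUnit.toMillis(delay));
        }

        public static void main(String[] args) {
            // e.g. at most 10 restarts per hour, with 10 seconds between attempts
            FailureRateRestartStrategyConfig config =
                    failureRateRestart(10, 1, TimeUnit.HOURS, 10, TimeUnit.SECONDS);
            System.out.println(config.maxFailuresPerInterval + " failures per "
                    + config.failureIntervalMillis + " ms, delay of "
                    + config.delayBetweenAttemptsMillis + " ms between attempts");
        }
    }

Splitting the interval and the delay into separate value/TimeUnit pairs keeps the rate definition explicit (N failures per interval) rather than tying both durations to a single implicit unit.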


> Retry rate limits for DataStream API
> ------------------------------------
>
>                 Key: FLINK-3190
>                 URL: https://issues.apache.org/jira/browse/FLINK-3190
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Sebastian Klemke
>            Assignee: Michał Fijołek
>            Priority: Minor
>
> For a long-running stream processing job, absolute numbers of retries don't 
> make much sense: the job will accumulate transient errors over time and will 
> eventually die when thresholds are exceeded. Rate limits are better suited in 
> this scenario: a job should only die if it fails too often in a given time 
> frame. To better overcome transient errors, retry delays could be used, as 
> suggested in other issues.
> Absolute numbers of retries can still make sense if failing operators don't 
> make any progress at all. We can measure progress by OperatorState changes 
> and by observing output, as long as the operator in question is not a sink. 
> If the operator state changes and/or the operator produces output, we can 
> assume it makes progress.
> As an example, let's say we configured a retry rate limit of 10 retries per 
> hour and a non-sink operator A. If the operator fails once every 10 minutes 
> and produces output between failures, it should not lead to job termination. 
> But if the operator fails 11 times in an hour or does not produce output 
> between 11 consecutive failures, the job should be terminated.
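
To make the rate-limit semantics of the example above concrete, here is a minimal Java sketch of the kind of sliding-window check such a strategy could perform. It is an illustration only, with assumed names and a simplified in-memory deque; it is not the FLINK-3190 implementation and it ignores the progress-detection part:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative sketch only: a sliding-window failure-rate check along the lines
    // described above. Names and structure are assumptions, not Flink's implementation.
    final class FailureRateCheckSketch {

        private final int maxFailuresPerInterval;          // e.g. 10
        private final long failureIntervalMillis;          // e.g. one hour
        private final Deque<Long> failureTimestamps = new ArrayDeque<>();

        FailureRateCheckSketch(int maxFailuresPerInterval, long failureIntervalMillis) {
            this.maxFailuresPerInterval = maxFailuresPerInterval;
            this.failureIntervalMillis = failureIntervalMillis;
        }

        /** Records a failure and returns true if restarting is still allowed. */
        boolean canRestart(long nowMillis) {
            failureTimestamps.addLast(nowMillis);
            // Drop failures that have fallen out of the measurement interval.
            while (!failureTimestamps.isEmpty()
                    && nowMillis - failureTimestamps.peekFirst() > failureIntervalMillis) {
                failureTimestamps.removeFirst();
            }
            // An 11th failure within the interval exceeds a limit of 10 -> terminate.
            return failureTimestamps.size() <= maxFailuresPerInterval;
        }

        public static void main(String[] args) {
            FailureRateCheckSketch check = new FailureRateCheckSketch(10, 60L * 60 * 1000);
            long tenMinutes = 10L * 60 * 1000;
            // One failure every 10 minutes: at most 7 failures fall into any one-hour
            // window, so every restart is allowed and the job keeps running.
            for (int i = 0; i < 20; i++) {
                System.out.println("failure " + (i + 1) + " -> restart allowed: "
                        + check.canRestart(i * tenMinutes));
            }
        }
    }

With one failure every 10 minutes, at most 7 failures ever fall inside a one-hour window, so a limit of 10 is never exceeded; an 11th failure inside the hour would make canRestart return false and the job would be terminated.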



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
