huyuanfeng2018 commented on code in PR #921:
URL: https://github.com/apache/flink-kubernetes-operator/pull/921#discussion_r1875302925
########## flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricEvaluator.java: ##########
@@ -296,8 +297,14 @@ protected static void computeProcessingRateThresholds(
             upperUtilization = 1.0;
             lowerUtilization = 0.0;
         } else {
-            upperUtilization = targetUtilization + utilizationBoundary;
-            lowerUtilization = targetUtilization - utilizationBoundary;
+            upperUtilization =
+                    conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+                            .map(boundary -> targetUtilization + boundary)
+                            .orElseGet(() -> conf.get(UTILIZATION_MAX));
+            lowerUtilization =
+                    conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+                            .map(boundary -> targetUtilization - boundary)
+                            .orElseGet(() -> conf.get(UTILIZATION_MIN));
         }
 
         double scaleUpThreshold =

Review Comment:
   > `DefaultValidator` is a class of the `flink-kubernetes-operator` module. It doesn't work for Autoscaler Standalone or other scenarios. Could we validate autoscaler-related options inside the `flink-autoscaler` module and call that validation in `JobAutoScalerImpl#scale`?

   Currently, Autoscaler Standalone does not have such a mechanism. `flink-kubernetes-operator` can throw an exception before deploying the Flink job, but how should Autoscaler Standalone handle the case where these parameters are unreasonable? By reporting an event? I'm also not sure whether this needs to be resolved in a separate PR.
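   For illustration only, a minimal sketch of what such a module-internal check might look like, assuming the `AutoScalerOptions` keys referenced in the diff above (`TARGET_UTILIZATION`, `UTILIZATION_MIN`, `UTILIZATION_MAX`); the class name and the way errors are surfaced are placeholders, not the actual implementation:

   ```java
   // Hypothetical sketch of a validation that could live in flink-autoscaler itself,
   // so that both the operator and Autoscaler Standalone could reuse it.
   import org.apache.flink.autoscaler.config.AutoScalerOptions;
   import org.apache.flink.configuration.Configuration;

   public final class AutoscalerConfigCheck {

       private AutoscalerConfigCheck() {}

       /**
        * Checks that the utilization options form a consistent range (min <= target <= max).
        * Throws so the caller can decide how to surface the problem: fail fast in the
        * operator, or report an event in Autoscaler Standalone.
        */
       public static void validateUtilizationBounds(Configuration conf) {
           // Option names assumed from the diff above.
           double target = conf.get(AutoScalerOptions.TARGET_UTILIZATION);
           double min = conf.get(AutoScalerOptions.UTILIZATION_MIN);
           double max = conf.get(AutoScalerOptions.UTILIZATION_MAX);

           if (min > target || target > max) {
               throw new IllegalArgumentException(
                       String.format(
                               "Inconsistent utilization bounds: expected min <= target <= max, "
                                       + "but got min=%.2f, target=%.2f, max=%.2f",
                               min, target, max));
           }
       }
   }
   ```

   Such a check could be invoked at the start of `JobAutoScalerImpl#scale`; in the standalone case the thrown exception could then be turned into an event via the autoscaler's event handler instead of aborting a deployment, which is essentially the open question above.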