1996fanrui commented on code in PR #921:
URL: https://github.com/apache/flink-kubernetes-operator/pull/921#discussion_r1872491367


##########
flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java:
##########
@@ -972,8 +990,9 @@ private Map<String, String> getDefaultTestAutoScalerFlinkConfigurationMap() {
         conf.put(AutoScalerOptions.MAX_SCALE_UP_FACTOR.key(), "100000.0");
         conf.put(AutoScalerOptions.MAX_SCALE_DOWN_FACTOR.key(), "0.6");
         conf.put(AutoScalerOptions.SCALING_EFFECTIVENESS_DETECTION_ENABLED.key(), "0.1");
-        conf.put(AutoScalerOptions.TARGET_UTILIZATION.key(), "0.7");
-        conf.put(AutoScalerOptions.TARGET_UTILIZATION_BOUNDARY.key(), "0.4");
+        conf.put(AutoScalerOptions.UTILIZATION_TARGET.key(), "0.7");
+        conf.put(AutoScalerOptions.UTILIZATION_MAX.key(), "1.");
+        conf.put(AutoScalerOptions.UTILIZATION_MIN.key(), "0.1");

Review Comment:
   Why not 0.3? As I understand it, the original logic computes the lower bound as 0.7 - 0.4 = 0.3.



##########
flink-autoscaler/src/test/java/org/apache/flink/autoscaler/ScalingExecutorTest.java:
##########
@@ -206,10 +204,11 @@ op1, evaluated(1, 65, 100),
     @Test
     public void testNoScaleDownOnZeroLowerUtilizationBoundary() throws Exception {
         var conf = context.getConfiguration();
-        // Target utilization and boundary are identical
+        // Utilization min max is set from 0 to 1
         // which will set the scale down boundary to infinity
-        conf.set(AutoScalerOptions.TARGET_UTILIZATION, 0.6);
-        conf.set(AutoScalerOptions.TARGET_UTILIZATION_BOUNDARY, 0.6);
+        conf.set(AutoScalerOptions.UTILIZATION_TARGET, 0.6);
+        conf.set(AutoScalerOptions.UTILIZATION_MAX, 1.);

Review Comment:
   For this test, I think max should be 1.2.
   
   But in the real logic, we could do something like `min(1.0, conf.get(AutoScalerOptions.UTILIZATION_MAX))` to cap the upper bound.
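   
   A minimal sketch of that cap (the helper name is hypothetical, not part of the PR):
   
   ```java
   // Hypothetical helper sketching the suggested cap: never let the effective
   // upper utilization bound exceed 1.0, whatever UTILIZATION_MAX is set to.
   static double cappedUpperUtilization(Configuration conf) {
       return Math.min(1.0, conf.get(AutoScalerOptions.UTILIZATION_MAX));
   }
   ```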



##########
flink-autoscaler/src/test/java/org/apache/flink/autoscaler/ScalingExecutorTest.java:
##########
@@ -160,12 +162,6 @@ op1, evaluated(1, 70, 100),
         assertTrue(
                 ScalingExecutor.allChangedVerticesWithinUtilizationTarget(
                         evaluated, evaluated.keySet()));
-
-        // Test with backlog based scaling
-        evaluated = Map.of(op1, evaluated(1, 70, 100, 15));
-        assertFalse(
-                ScalingExecutor.allChangedVerticesWithinUtilizationTarget(
-                        evaluated, evaluated.keySet()));

Review Comment:
   Why is this test deleted?



##########
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricEvaluator.java:
##########
@@ -296,8 +297,14 @@ protected static void computeProcessingRateThresholds(
             upperUtilization = 1.0;
             lowerUtilization = 0.0;
         } else {
-            upperUtilization = targetUtilization + utilizationBoundary;
-            lowerUtilization = targetUtilization - utilizationBoundary;
+            upperUtilization =
+                    conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+                            .map(boundary -> targetUtilization + boundary)
+                            .orElseGet(() -> conf.get(UTILIZATION_MAX));
+            lowerUtilization =
+                    conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+                            .map(boundary -> targetUtilization - boundary)
+                            .orElseGet(() -> conf.get(UTILIZATION_MIN));

Review Comment:
   IIUC, the compatibility logic is not as expected. `TARGET_UTILIZATION_BOUNDARY` should only be used when `UTILIZATION_MAX` or `UTILIZATION_MIN` is not set by users.
   
   If `UTILIZATION_MAX` or `UTILIZATION_MIN` is set by users, it means they are using the latest options, so we should ignore `TARGET_UTILIZATION_BOUNDARY` even if it is set as well. WDYT?
   
   This behavior should be tested as well; see the sketch below.
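   
   A minimal sketch of that precedence, reusing the identifiers from the diff (that `Configuration#getOptional` returns empty rather than the default value when an option is unset is an assumption about Flink's `Configuration` semantics):
   
   ```java
   // New options win; the deprecated boundary is only consulted when the
   // corresponding new option was not set explicitly by the user.
   upperUtilization =
           conf.getOptional(UTILIZATION_MAX)
                   .orElseGet(
                           () ->
                                   conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
                                           .map(boundary -> targetUtilization + boundary)
                                           // neither set: UTILIZATION_MAX's default
                                           .orElseGet(() -> conf.get(UTILIZATION_MAX)));
   lowerUtilization =
           conf.getOptional(UTILIZATION_MIN)
                   .orElseGet(
                           () ->
                                   conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
                                           .map(boundary -> targetUtilization - boundary)
                                           .orElseGet(() -> conf.get(UTILIZATION_MIN)));
   ```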



##########
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricEvaluator.java:
##########
@@ -296,8 +297,14 @@ protected static void computeProcessingRateThresholds(
             upperUtilization = 1.0;
             lowerUtilization = 0.0;
         } else {
-            upperUtilization = targetUtilization + utilizationBoundary;
-            lowerUtilization = targetUtilization - utilizationBoundary;
+            upperUtilization =
+                    conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+                            .map(boundary -> targetUtilization + boundary)
+                            .orElseGet(() -> conf.get(UTILIZATION_MAX));
+            lowerUtilization =
+                    conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+                            .map(boundary -> targetUtilization - boundary)
+                            .orElseGet(() -> conf.get(UTILIZATION_MIN));
         }
 
         double scaleUpThreshold =

Review Comment:
   In `AutoScalerUtils.getTargetProcessingCapacity`, we clamp `targetUtilization` between 0.0 and 1.0, which is reasonable.
   
   After introducing `UTILIZATION_MAX` and `UTILIZATION_MIN`, it's better to ensure:
   - 1.0 >= UTILIZATION_MAX >= UTILIZATION_TARGET
   - UTILIZATION_TARGET >= UTILIZATION_MIN >= 0.0
   
   Some tests are also needed to check these invariants.
   
   Also, could we unify this logic in this method instead of `AutoScalerUtils.getTargetProcessingCapacity`?
   
   Of course, another solution is to throw an exception when `UTILIZATION_MAX < UTILIZATION_TARGET` or `UTILIZATION_TARGET < UTILIZATION_MIN`; a sketch of such a check follows below.
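   
   A hedged sketch of that exception-based check (the helper name and message wording are made up, not part of the PR):
   
   ```java
   // Hypothetical validation helper enforcing
   // 0.0 <= UTILIZATION_MIN <= UTILIZATION_TARGET <= UTILIZATION_MAX <= 1.0.
   static void validateUtilizationBounds(Configuration conf) {
       double target = conf.get(UTILIZATION_TARGET);
       double max = conf.get(UTILIZATION_MAX);
       double min = conf.get(UTILIZATION_MIN);
       if (max > 1.0 || max < target) {
           throw new IllegalArgumentException(
                   String.format(
                           "%s (%.2f) must be within [%s (%.2f), 1.0]",
                           UTILIZATION_MAX.key(), max, UTILIZATION_TARGET.key(), target));
       }
       if (min < 0.0 || min > target) {
           throw new IllegalArgumentException(
                   String.format(
                           "%s (%.2f) must be within [0.0, %s (%.2f)]",
                           UTILIZATION_MIN.key(), min, UTILIZATION_TARGET.key(), target));
       }
   }
   ```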



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
