noorall commented on code in PR #25552:
URL: https://github.com/apache/flink/pull/25552#discussion_r1899276380


##########
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptivebatch/DefaultVertexParallelismAndInputInfosDeciderTest.java:
##########
@@ -293,21 +295,20 @@ void testHavePointwiseEdges() {
                 parallelismAndInputInfos.getJobVertexInputInfos().get(resultInfo1.getResultId()),
                 Arrays.asList(
                         new IndexRange(0, 1),
-                        new IndexRange(2, 4),
-                        new IndexRange(5, 6),
-                        new IndexRange(7, 9)));
-        checkPointwiseJobVertexInputInfo(
+                        new IndexRange(2, 5),

Review Comment:
   > Why is the result affected? Ideally the result of existing cases should 
remain the same.
   
   In the original algorithm, when AllToAll and Pointwise inputs are decided 
together, balanced partitioning is performed based on the number of 
subpartitions. The new algorithm instead divides the data of the two evenly 
based on the data volume of the subpartitions, so the resulting index ranges 
differ. The results produced by the new algorithm are more balanced and meet 
expectations.
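   To illustrate the difference, here is a minimal, hypothetical sketch (not 
the actual `DefaultVertexParallelismAndInputInfosDecider` code): one splitter 
divides subpartition indices into ranges of equal *count*, the other closes a 
range once its accumulated *bytes* reach the per-subtask average. The class 
and method names are invented for this example, and the byte-based splitter 
is a simplified greedy that assumes `parallelism <= numSubpartitions`.

```java
import java.util.ArrayList;
import java.util.List;

public class BalancedSplitSketch {

    // Count-based split (old behavior): divide subpartition indices into
    // `parallelism` ranges of (nearly) equal length, ignoring data volume.
    // Each int[] is an inclusive [start, end] index range.
    static List<int[]> splitByCount(int numSubpartitions, int parallelism) {
        List<int[]> ranges = new ArrayList<>();
        int base = numSubpartitions / parallelism;
        int rem = numSubpartitions % parallelism;
        int start = 0;
        for (int i = 0; i < parallelism; i++) {
            int size = base + (i < rem ? 1 : 0);
            ranges.add(new int[] {start, start + size - 1});
            start += size;
        }
        return ranges;
    }

    // Data-volume-based split (new behavior, simplified): greedily close a
    // range once its accumulated bytes reach the average bytes-per-subtask.
    static List<int[]> splitByBytes(long[] bytesPerSubpartition, int parallelism) {
        long total = 0;
        for (long b : bytesPerSubpartition) {
            total += b;
        }
        long target = total / parallelism;
        List<int[]> ranges = new ArrayList<>();
        int start = 0;
        long acc = 0;
        for (int i = 0; i < bytesPerSubpartition.length; i++) {
            acc += bytesPerSubpartition[i];
            boolean lastRange = ranges.size() == parallelism - 1;
            if (!lastRange && acc >= target) {
                ranges.add(new int[] {start, i});
                start = i + 1;
                acc = 0;
            }
        }
        // Whatever remains goes into the final range.
        ranges.add(new int[] {start, bytesPerSubpartition.length - 1});
        return ranges;
    }
}
```

With skewed subpartition sizes such as `{10, 10, 10, 30}` and parallelism 2, 
the count-based split yields `[0,1]` and `[2,3]` (20 vs. 40 bytes), while the 
byte-based split yields `[0,2]` and `[3,3]` (30 vs. 30 bytes), which is why a 
test asserting the old count-based ranges sees different results under the 
new algorithm.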



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
