wanglijie95 commented on code in PR #21695:
URL: https://github.com/apache/flink/pull/21695#discussion_r1071905434


##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/PointwiseBlockingResultInfo.java:
##########
@@ -60,4 +61,23 @@ public long getNumBytesProduced() {
                 .flatMapToLong(Arrays::stream)
                 .reduce(0L, Long::sum);
     }
+
+    @Override
+    public long getExecutionVertexInputNumBytes(
+            IndexRange partitionIndexRange, IndexRange subpartitionIndexRange) {
+        checkState(
+                subpartitionBytesByPartitionIndex.size() > partitionIndexRange.getEndIndex(),
+                "Not all partition infos are ready");

Review Comment:
   I think we should check that `subpartitionBytesByPartitionIndex.get(i)` is not null, and then check that `subpartitionIndexRange.getEndIndex() < subpartitionBytesByPartitionIndex.get(i).length`.
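
   A minimal sketch of the combined check, assuming `subpartitionBytesByPartitionIndex` is a `Map<Integer, long[]>` keyed by partition index and that `IndexRange` exposes `getStartIndex()`/`getEndIndex()` as used in this diff:
   ```java
   // Assumed import: import static org.apache.flink.util.Preconditions.checkState;
   for (int i = partitionIndexRange.getStartIndex(); i <= partitionIndexRange.getEndIndex(); ++i) {
       long[] subpartitionBytes = subpartitionBytesByPartitionIndex.get(i);
       // The partition info may not have been recorded yet, so guard against null first.
       checkState(subpartitionBytes != null, "Partition info of partition %s is not ready.", i);
       // Then make sure the requested subpartition range fits the recorded array.
       checkState(
               subpartitionIndexRange.getEndIndex() < subpartitionBytes.length,
               "Subpartition end index %s exceeds the recorded length %s.",
               subpartitionIndexRange.getEndIndex(),
               subpartitionBytes.length);
   }
   ```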



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AllToAllBlockingResultInfo.java:
##########
@@ -119,4 +120,17 @@ public List<Long> getAggregatedSubpartitionBytes() {
         checkState(aggregatedSubpartitionBytes != null, "Not all partition infos are ready");
         return Collections.unmodifiableList(aggregatedSubpartitionBytes);
     }
+
+    @Override
+    public long getExecutionVertexInputNumBytes(
+            IndexRange partitionIndexRange, IndexRange subpartitionIndexRange) {

Review Comment:
   Maybe add a check for the `partitionIndexRange`: for All-To-All edges, the partition range should always be [0, numOfPartitions).
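
   A sketch of what that guard could look like; `numOfPartitions` is an assumed name for the total partition count of this result:
   ```java
   // numOfPartitions is assumed to hold the total number of partitions of this
   // All-To-All result; the field/accessor name here is illustrative.
   checkState(
           partitionIndexRange.getStartIndex() == 0
                   && partitionIndexRange.getEndIndex() == numOfPartitions - 1,
           "For All-To-All edges, the partition range should always be [0, %s), but was [%s, %s].",
           numOfPartitions,
           partitionIndexRange.getStartIndex(),
           partitionIndexRange.getEndIndex());
   ```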



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AdaptiveBatchScheduler.java:
##########
@@ -179,6 +183,7 @@ protected void onTaskFinished(final Execution execution, final IOMetrics ioMetri
         checkNotNull(ioMetrics);
         updateResultPartitionBytesMetrics(ioMetrics.getResultPartitionBytes());
         initializeVerticesIfPossible();
+        tryEnrichInputNumBytesForExecutionVertexInputInfos();

Review Comment:
   How about calculating the `InputNumBytes` only when it's really needed? We can calculate it in `ExecutionVertex#getExecutionVertexInputNumBytes`, and then we don't need `setAggregatedInputNumBytes` and `getAggregatedInputNumBytes`.
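
   A rough sketch of the lazy variant; the `ExecutionVertexInputInfo` accessors and the way the consumed result info is obtained are assumptions, not the exact API:
   ```java
   // Illustrative only: compute the bytes on demand from the recorded result
   // info instead of caching them via setAggregatedInputNumBytes().
   public long getExecutionVertexInputNumBytes(
           BlockingResultInfo resultInfo, ExecutionVertexInputInfo inputInfo) {
       return resultInfo.getExecutionVertexInputNumBytes(
               inputInfo.getPartitionIndexRange(), inputInfo.getSubpartitionIndexRange());
   }
   ```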



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AbstractBlockingResultInfo.java:
##########
@@ -72,4 +73,15 @@ public void resetPartitionInfo(int partitionIndex) {
     int getNumOfRecordedPartitions() {
         return subpartitionBytesByPartitionIndex.size();
     }
+
+    /**
+     * Aggregate the input number bytes of a subtask according to the index range for partition and
+     * subpartition.
+     *
+     * @param partitionIndexRange Range of the index of the consumed partition.
+     * @param subpartitionIndexRange Range of the index of the consumed subpartition.
+     * @return Aggregated input number bytes of current result info.
+     */
+    public abstract long getExecutionVertexInputNumBytes(

Review Comment:
   Maybe rename `getExecutionVertexInputNumBytes` -> `getNumBytes`, because the execution vertex input should be transparent to the blocking result info. And we should add it to the `BlockingResultInfo` interface.
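
   A sketch of how the interface could carry the renamed method; the javadoc is adapted from the abstract class above:
   ```java
   public interface BlockingResultInfo {

       // ... existing methods ...

       /**
        * Aggregates the number of bytes according to the given index ranges.
        *
        * @param partitionIndexRange range of the index of the consumed partition
        * @param subpartitionIndexRange range of the index of the consumed subpartition
        * @return aggregated number of bytes
        */
       long getNumBytes(IndexRange partitionIndexRange, IndexRange subpartitionIndexRange);
   }
   ```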


