azagrebin commented on a change in pull request #7631: [FLINK-11391][shuffle] Introduce PartitionShuffleDescriptor and ShuffleDeploymentDescriptor
URL: https://github.com/apache/flink/pull/7631#discussion_r255611695
 
 

 ##########
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
 ##########
 @@ -813,24 +815,27 @@ TaskDeploymentDescriptor createDeploymentDescriptor(
 		boolean lazyScheduling = getExecutionGraph().getScheduleMode().allowLazyDeployment();

 		for (IntermediateResultPartition partition : resultPartitions.values()) {
-
 			List<List<ExecutionEdge>> consumers = partition.getConsumers();
-
+			int maxParallelism;
 			if (consumers.isEmpty()) {
 				//TODO this case only exists for test, currently there has to be exactly one consumer in real jobs!
-				producedPartitions.add(ResultPartitionDeploymentDescriptor.from(
-						partition,
-						KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM,
-						lazyScheduling));
+				maxParallelism = KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM;
 			} else {
 				Preconditions.checkState(1 == consumers.size(),
-						"Only one consumer supported in the current implementation! Found: " + consumers.size());
+					"Only one consumer supported in the current implementation! Found: " + consumers.size());

 				List<ExecutionEdge> consumer = consumers.get(0);
 				ExecutionJobVertex vertex = consumer.get(0).getTarget().getJobVertex();
-				int maxParallelism = vertex.getMaxParallelism();
-				producedPartitions.add(ResultPartitionDeploymentDescriptor.from(partition, maxParallelism, lazyScheduling));
+				maxParallelism = vertex.getMaxParallelism();
 			}
+
+			PartitionShuffleDescriptor psd = PartitionShuffleDescriptor.from(targetSlot, executionId, partition, maxParallelism);
+
+			producedPartitions.add(ResultPartitionDeploymentDescriptor.fromShuffleDescriptor(psd));
+			getCurrentExecutionAttempt().cachePartitionShuffleDescriptor(partition.getIntermediateResult().getId(), psd);
 
 Review comment:
   Would it work to just cache the complete `TaskDeploymentDescriptor` as a volatile field in `Execution`? Then maybe we would not need any of the three descriptor caches, what do you think?
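   A minimal sketch of that idea, for illustration only: all class and member names below are simplified stand-ins, not Flink's actual API, and the real `TaskDeploymentDescriptor` carries far more state than shown here.

   ```java
   // Hypothetical sketch: instead of three separate descriptor caches,
   // Execution keeps the fully assembled TaskDeploymentDescriptor in a
   // single volatile field.

   // Stand-in for the real descriptor; contents reduced to one field.
   class TaskDeploymentDescriptor {
       final String taskName;
       TaskDeploymentDescriptor(String taskName) { this.taskName = taskName; }
   }

   // Stand-in for Execution, holding the one cached descriptor.
   class Execution {
       // volatile so a reader thread observes the descriptor published
       // by the scheduling thread without further synchronization
       private volatile TaskDeploymentDescriptor cachedDescriptor;

       void cacheDeploymentDescriptor(TaskDeploymentDescriptor tdd) {
           this.cachedDescriptor = tdd;
       }

       TaskDeploymentDescriptor getCachedDeploymentDescriptor() {
           return cachedDescriptor;
       }
   }

   public class VolatileCacheSketch {
       public static void main(String[] args) {
           Execution execution = new Execution();
           execution.cacheDeploymentDescriptor(new TaskDeploymentDescriptor("map-task-1"));
           System.out.println(execution.getCachedDeploymentDescriptor().taskName);
       }
   }
   ```

   The volatile field gives the same visibility guarantee the individual caches would need anyway, while collapsing them into one publication point.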

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
