zhijiangW commented on a change in pull request #7549: [FLINK-11403][network] Remove ResultPartitionConsumableNotifier from ResultPartition
URL: https://github.com/apache/flink/pull/7549#discussion_r267285108
########## File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartition.java ##########
@@ -19,432 +19,105 @@
 package org.apache.flink.runtime.io.network.partition;

 import org.apache.flink.api.common.JobID;
-import org.apache.flink.runtime.executiongraph.IntermediateResultPartition;
-import org.apache.flink.runtime.io.disk.iomanager.IOManager;
 import org.apache.flink.runtime.io.network.api.writer.ResultPartitionWriter;
-import org.apache.flink.runtime.io.network.buffer.Buffer;
 import org.apache.flink.runtime.io.network.buffer.BufferConsumer;
-import org.apache.flink.runtime.io.network.buffer.BufferPool;
-import org.apache.flink.runtime.io.network.buffer.BufferPoolOwner;
 import org.apache.flink.runtime.io.network.buffer.BufferProvider;
-import org.apache.flink.runtime.io.network.partition.consumer.LocalInputChannel;
-import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
-import org.apache.flink.runtime.jobgraph.DistributionPattern;
 import org.apache.flink.runtime.taskmanager.TaskActions;
-import org.apache.flink.runtime.taskmanager.TaskManager;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;

 import java.io.IOException;
-import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicInteger;

-import static org.apache.flink.util.Preconditions.checkArgument;
-import static org.apache.flink.util.Preconditions.checkElementIndex;
 import static org.apache.flink.util.Preconditions.checkNotNull;
-import static org.apache.flink.util.Preconditions.checkState;

 /**
- * A result partition for data produced by a single task.
- *
- * <p>This class is the runtime part of a logical {@link IntermediateResultPartition}. Essentially,
- * a result partition is a collection of {@link Buffer} instances. The buffers are organized in one
- * or more {@link ResultSubpartition} instances, which further partition the data depending on the
- * number of consuming tasks and the data {@link DistributionPattern}.
- *
- * <p>Tasks, which consume a result partition have to request one of its subpartitions. The request
- * happens either remotely (see {@link RemoteInputChannel}) or locally (see {@link LocalInputChannel})
- *
- * <h2>Life-cycle</h2>
- *
- * <p>The life-cycle of each result partition has three (possibly overlapping) phases:
- * <ol>
- * <li><strong>Produce</strong>: </li>
- * <li><strong>Consume</strong>: </li>
- * <li><strong>Release</strong>: </li>
- * </ol>
- *
- * <h2>Lazy deployment and updates of consuming tasks</h2>
- *
- * <p>Before a consuming task can request the result, it has to be deployed. The time of deployment
- * depends on the PIPELINED vs. BLOCKING characteristic of the result partition. With pipelined
- * results, receivers are deployed as soon as the first buffer is added to the result partition.
- * With blocking results on the other hand, receivers are deployed after the partition is finished.
- *
- * <h2>Buffer management</h2>
- *
- * <h2>State management</h2>
+ * A wrapper of result partition writer for handling notification of the consumable
+ * partition which is added a {@link BufferConsumer} or finished.
  */
-public class ResultPartition implements ResultPartitionWriter, BufferPoolOwner {
-
-	private static final Logger LOG = LoggerFactory.getLogger(ResultPartition.class);
-
-	private final String owningTaskName;
+public class ResultPartition implements ResultPartitionWriter {

 	private final TaskActions taskActions;

 	private final JobID jobId;

-	private final ResultPartitionID partitionId;
-
-	/** Type of this partition. Defines the concrete subpartition implementation to use. */
 	private final ResultPartitionType partitionType;

-	/** The subpartitions of this partition. At least one. */
-	private final ResultSubpartition[] subpartitions;
-
-	private final ResultPartitionManager partitionManager;
+	private final ResultPartitionWriter partitionWriter;

Review comment:
It is no problem to let `ResultPartition` also implement `ResultPartitionWriter`, but it might also make sense not to implement the interface. Since `ResultPartition` will eventually be refactored into the task package, implementing the interface would keep it coupled with the network stack. In addition, there is already a `ResultPartitionWriter` component inside `ResultPartition`; if `ResultPartition` itself is also a `ResultPartitionWriter`, that seems a bit duplicated.

The ShuffleService would create a regular `ResultPartitionWriter`, which is then wrapped into a `ResultPartitionWithConsumableNotification` on the task side. That `ResultPartitionWithConsumableNotification` would be passed into `RuntimeEnvironment` and used for creating the `RecordWriter` later. It might also make sense to reference `ResultPartitionWithConsumableNotification` instead of `ResultPartitionWriter` in `RecordWriter` and `RuntimeEnvironment`.

As you said, we could keep the current status, and once the other parts are ready we can further check how it goes. :)
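To make the proposed wrapping a bit more concrete, here is a minimal sketch of the idea. The interfaces, names, and signatures below are simplified, illustrative stand-ins rather than the actual Flink APIs: the wrapper forwards all writes to the inner writer component and fires the "partition consumable" notification exactly once, on the first added buffer for pipelined results or on finish for blocking ones.

```java
// Simplified stand-ins for the real Flink interfaces, only to show the shape of the idea.
interface SimpleResultPartitionWriter {
	void addBufferConsumer(Object bufferConsumer, int subpartitionIndex);
	void finish();
}

interface ConsumableNotifier {
	void notifyPartitionConsumable();
}

/**
 * Sketch of a task-side wrapper that delegates writes to the inner writer and
 * triggers the consumable notification once, depending on the partition type.
 */
class ResultPartitionWithConsumableNotification implements SimpleResultPartitionWriter {

	private final SimpleResultPartitionWriter partitionWriter;
	private final ConsumableNotifier notifier;
	private final boolean pipelined;
	private boolean notified;

	ResultPartitionWithConsumableNotification(
			SimpleResultPartitionWriter partitionWriter,
			ConsumableNotifier notifier,
			boolean pipelined) {
		this.partitionWriter = partitionWriter;
		this.notifier = notifier;
		this.pipelined = pipelined;
	}

	@Override
	public void addBufferConsumer(Object bufferConsumer, int subpartitionIndex) {
		partitionWriter.addBufferConsumer(bufferConsumer, subpartitionIndex);
		if (pipelined) {
			// Pipelined results become consumable as soon as the first data is produced.
			notifyOnce();
		}
	}

	@Override
	public void finish() {
		partitionWriter.finish();
		// Blocking results only become consumable once fully produced.
		notifyOnce();
	}

	private void notifyOnce() {
		if (!notified) {
			notified = true;
			notifier.notifyPartitionConsumable();
		}
	}
}
```

With this shape, the task stack could hand the wrapper to `RuntimeEnvironment` for creating the `RecordWriter`, as described in the comment above, while the plain `ResultPartitionWriter` created by the ShuffleService stays decoupled from the notification concern.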