AHeise commented on a change in pull request #13735:
URL: https://github.com/apache/flink/pull/13735#discussion_r518128784



##########
File path: flink-core/src/main/java/org/apache/flink/api/common/functions/Partitioner.java
##########
@@ -37,4 +40,20 @@
         * @return The partition index.
         */
        int partition(K key, int numPartitions);
+
+       /**
+        * Returns all partitions that need to be read to restore the given new partition. The partitioner is then
+        * applied to the key of each restored record to filter out irrelevant records.
+        *
+        * <p>In particular, to create a partition X after rescaling, all partitions returned by this method are fully
+        * read and the key of each record is then fed into {@link #partition(Object, int)} to check if it belongs to X.
+        *
+        * <p>The default implementation assumes that all partitions need to be scanned; it should be overridden to
+        * improve performance.
+        */
+       @PublicEvolving
+       default int[] rescaleIntersections(int newPartition, int oldNumPartitions, int newNumPartitions) {
+               // any old partition may contain a record that should be in the new partition after rescaling
+               return IntStream.range(0, oldNumPartitions).toArray();
+       }

Review comment:
       Removed from `CustomerPartitioner`. We officially do not support it (and unofficially just use full replication and downstream filtering).
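For illustration only (not part of the PR), here is a minimal sketch of the kind of override the javadoc above asks for. `RangePartitioner` and its `MAX_KEY` key domain are invented for this example; the sketch assumes a monotone, range-based key-to-partition mapping, which is what allows `rescaleIntersections` to return a narrow contiguous range instead of every old partition:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

import org.apache.flink.api.common.functions.Partitioner;

/**
 * Hypothetical partitioner, made up for this sketch: keys in [0, MAX_KEY) are
 * split into numPartitions contiguous ranges, so the key-to-partition mapping
 * is monotone.
 */
public class RangePartitioner implements Partitioner<Integer> {

    // assumed key domain for this example
    private static final long MAX_KEY = 1L << 20;

    @Override
    public int partition(Integer key, int numPartitions) {
        // partition i owns the keys in [ceil(i * MAX_KEY / n), ceil((i + 1) * MAX_KEY / n))
        return (int) ((long) key * numPartitions / MAX_KEY);
    }

    @Override
    public int[] rescaleIntersections(int newPartition, int oldNumPartitions, int newNumPartitions) {
        // smallest and largest key owned by the new partition (ceiling division for the bounds)
        long firstKey = (newPartition * MAX_KEY + newNumPartitions - 1) / newNumPartitions;
        long lastKey = ((newPartition + 1) * MAX_KEY + newNumPartitions - 1) / newNumPartitions - 1;
        // the mapping is monotone, so the intersecting old partitions form a contiguous
        // range, delimited by the old owners of the first and last key
        int firstOld = (int) (firstKey * oldNumPartitions / MAX_KEY);
        int lastOld = (int) (lastKey * oldNumPartitions / MAX_KEY);
        return IntStream.rangeClosed(firstOld, lastOld).toArray();
    }

    public static void main(String[] args) {
        RangePartitioner partitioner = new RangePartitioner();
        // rescaling from 4 to 3 partitions: which old partitions must be read to build new partition 1?
        System.out.println(Arrays.toString(partitioner.rescaleIntersections(1, 4, 3))); // [1, 2]
        // restore-side filter, as described in the javadoc: keep a record only if its key
        // maps to the new partition under the new partition count
        System.out.println(partitioner.partition(500_000, 3) == 1); // true
    }
}
```

The default from the diff stays correct for every partitioner because it over-approximates; an override such as this one only narrows the set of partitions to read, while the filter through `partition(Object, int)` remains the correctness guarantee during restore.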



