Github user twalthr commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2282#discussion_r73290760
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/nodes/dataset/DataSetSort.scala ---
    @@ -71,11 +78,57 @@ class DataSetSort(
          partitionedDs = partitionedDs.sortPartition(fieldCollation._1, fieldCollation._2)
         }
     
    +    val offsetAndFetchDS = if (offset != null) {
    +      val offsetIndex = RexLiteral.intValue(offset)
    +      val fetchIndex = if (fetch != null) {
    +        RexLiteral.intValue(fetch) + offsetIndex
    +      } else {
    +        Int.MaxValue
    +      }
    +      if (currentParallelism != 1) {
    +        val partitionCount = partitionedDs.mapPartition(
    +          new MapPartitionFunction[Any, Int] {
    +            override def mapPartition(value: lang.Iterable[Any], out: Collector[Int]): Unit = {
    +              val iterator = value.iterator()
    +              var elementCount = 0
    +              while (iterator.hasNext) {
    +                elementCount += 1
    +                iterator.next()
    +              }
    +              out.collect(elementCount)
    +            }
    +          }).collect().asScala
    --- End diff --
    
    I agree that the number of elements in every partition is necessary, but
    triggering additional jobs during the plan construction phase (the eager
    `collect()` above) is not what we want. It should be up to the user
    whether a job plan is executed at all. Have you thought about using a
    broadcast variable as a side input to your filter function?

