[ https://issues.apache.org/jira/browse/FLINK-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949432#comment-15949432 ]

ASF GitHub Bot commented on FLINK-6200:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3649#discussion_r108983942
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/UnboundedEventTimeOverProcessFunction.scala ---
    @@ -204,21 +183,145 @@ class UnboundedEventTimeOverProcessFunction(
        * If timestamps arrive in order (as in case of using the RocksDB state backend) this is just
        * an append with O(1).
        */
    -  private def insertToSortedList(recordTimeStamp: Long) = {
    +  private def insertToSortedList(recordTimestamp: Long) = {
         val listIterator = sortedTimestamps.listIterator(sortedTimestamps.size)
         var continue = true
         while (listIterator.hasPrevious && continue) {
           val timestamp = listIterator.previous
    -      if (recordTimeStamp >= timestamp) {
    +      if (recordTimestamp >= timestamp) {
             listIterator.next
    -        listIterator.add(recordTimeStamp)
    +        listIterator.add(recordTimestamp)
             continue = false
           }
         }
     
         if (continue) {
    -      sortedTimestamps.addFirst(recordTimeStamp)
    +      sortedTimestamps.addFirst(recordTimestamp)
         }
       }
     
    +  /**
    +   * Process rows with the same timestamp; the mechanism differs between
    +   * ROWS and RANGE windows.
    +   */
    +  def processElementsWithSameTimestamp(
    +    curRowList: JList[Row],
    +    lastAccumulator: Row,
    +    out: Collector[Row]): Unit
    +
    +}
    +
    +/**
    +  * A ProcessFunction to support unbounded ROWS window.
    +  * With the ROWS option you define on a physical level how many rows are included in your window frame
    --- End diff --
    
    This line violates the 100 character limit of the Scala code style.
    Please run a local build before opening a PR to catch such problems (`mvn clean install`
    inside the `./flink-libraries/flink-table` folder is usually sufficient and takes ~5 mins).
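
    For reference, the insertion routine touched by this diff can be exercised in isolation.
    The following is a self-contained sketch that mirrors the `insertToSortedList` logic shown
    above; the wrapping object, `main` method, and sample timestamps are illustrative and not
    part of the PR:

```scala
import java.util.{LinkedList => JLinkedList}

object SortedInsertSketch {

  // Timestamps kept in ascending order; an in-order arrival only touches the tail (O(1)).
  val sortedTimestamps = new JLinkedList[Long]()

  def insertToSortedList(recordTimestamp: Long): Unit = {
    // Walk backwards from the tail until an element <= recordTimestamp is found.
    val listIterator = sortedTimestamps.listIterator(sortedTimestamps.size)
    var continue = true
    while (listIterator.hasPrevious && continue) {
      val timestamp = listIterator.previous
      if (recordTimestamp >= timestamp) {
        listIterator.next                  // step over the found element
        listIterator.add(recordTimestamp)  // insert right after it
        continue = false
      }
    }
    // Smaller than every stored timestamp (or list empty): prepend.
    if (continue) {
      sortedTimestamps.addFirst(recordTimestamp)
    }
  }

  def main(args: Array[String]): Unit = {
    Seq(3L, 1L, 5L, 4L).foreach(insertToSortedList)
    println(sortedTimestamps) // prints [1, 3, 4, 5]
  }
}
```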


> Add event time OVER RANGE BETWEEN UNBOUNDED PRECEDING aggregation to SQL
> ------------------------------------------------------------------------
>
>                 Key: FLINK-6200
>                 URL: https://issues.apache.org/jira/browse/FLINK-6200
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API & SQL
>            Reporter: sunjincheng
>            Assignee: hongyuhong
>
> The goal of this issue is to add support for OVER RANGE aggregations on event 
> time streams to the SQL interface.
> Queries similar to the following should be supported:
> SELECT
>   a,
>   SUM(b) OVER (PARTITION BY c ORDER BY rowTime()
>     RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY rowTime()
>     RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS minB
> FROM myStream
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in single-threaded execution).
> - The ORDER BY clause may only have rowTime() as parameter. rowTime() is a parameterless 
>   scalar function that just indicates event time mode.
> - Bounded PRECEDING is not supported (see FLINK-5655).
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow-up issues. If we find that some of the 
> restrictions are trivial to address, we can add the functionality in this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER RANGE aggregates.
> - Translation from Calcite's RelNode representation (LogicalProject with RexOver expression).
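
For orientation, a minimal sketch of how the target query from the description could be
submitted through the Table API of that era. This is hypothetical driver code: only the SQL
text comes from the issue, while the environment setup and the prior registration of a table
`myStream` with fields a, b, c are assumptions.

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.TableEnvironment

object UnboundedRangeOverSketch {
  def main(args: Array[String]): Unit = {
    val env  = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // Assumes 'myStream' (fields a, b, c, with an event-time attribute behind
    // rowTime()) has already been registered on tEnv.
    val result = tEnv.sql(
      """
        |SELECT
        |  a,
        |  SUM(b) OVER (PARTITION BY c ORDER BY rowTime()
        |    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS sumB,
        |  MIN(b) OVER (PARTITION BY c ORDER BY rowTime()
        |    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS minB
        |FROM myStream
      """.stripMargin)

    // 'result' would then be converted back to a DataStream and emitted to a sink.
  }
}
```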



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
