[ https://issues.apache.org/jira/browse/HIVE-21217?focusedWorklogId=200686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-200686 ]

ASF GitHub Bot logged work on HIVE-21217:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Feb/19 15:30
            Start Date: 19/Feb/19 15:30
    Worklog Time Spent: 10m 
      Work Description: szlta commented on pull request #538: HIVE-21217: Optimize range calculation for PTF
URL: https://github.com/apache/hive/pull/538#discussion_r258090284
 
 

 ##########
 File path: ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/ValueBoundaryScanner.java
 ##########
 @@ -44,10 +49,207 @@ public ValueBoundaryScanner(BoundaryDef start, BoundaryDef end, boolean nullsLas
     this.nullsLast = nullsLast;
   }
 
+  public abstract Object computeValue(Object row) throws HiveException;
+
+  /**
 +   * Checks if the distance from v2 to v1 is greater than the given amt.
 +   * @return True if v1 - v2 is greater than amt, or if either value is null.
+   */
+  public abstract boolean isDistanceGreater(Object v1, Object v2, int amt);
+
+  /**
 +   * Checks if the values of v1 and v2 are the same.
 +   * @return True if both values are the same, or if both are null.
+   */
+  public abstract boolean isEqual(Object v1, Object v2);
+
   public abstract int computeStart(int rowIdx, PTFPartition p) throws HiveException;
 
   public abstract int computeEnd(int rowIdx, PTFPartition p) throws HiveException;
 
+  /**
 +   * Checks and maintains cache content - keeps the cache window centered
 +   * around the current row, thereby making it follow the current progress.
+   * @param rowIdx current row
+   * @param p current partition for the PTF operator
+   * @throws HiveException
+   */
+  public void handleCache(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null) {
+      return;
+    }
+
+    //Start of partition
+    if (rowIdx == 0) {
+      cache.clear();
+    }
+    if (cache.isComplete()) {
+      return;
+    }
+
+    int cachePos = cache.approxCachePositionOf(rowIdx);
+
+    if (cache.isEmpty()) {
+      fillCacheUntilEndOrFull(rowIdx, p);
+    } else if (cachePos > 50 && cachePos <= 75) {
 
 Review comment:
   We don't know the sizes beforehand. The numbers the user defines in the 
window definition are matched against the values of the orderby column: e.g. 
preceding 2, following 2 might mean a few rows, but can also mean thousands if 
many rows share the same value or if the orderby column is of double type.
   That said, I've updated the cache window moving code so that it is optimized 
to do the minimum number of cache misses, and thereby reads (as discussed 
offline).
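   For illustration only (not part of the patch), here is a minimal sketch of 
why the covered row count is data-dependent, written as if it lived inside 
ValueBoundaryScanner and using the isDistanceGreater contract above 
(findRangeStart and amt are hypothetical names):

       // Hypothetical sketch: resolving "amt preceding" for a RANGE window
       // means scanning backwards until a row's orderby value differs from
       // the current row's value by more than amt. How many rows that covers
       // depends on the data, not on amt.
       int findRangeStart(int rowIdx, PTFPartition p, int amt) throws HiveException {
         Object current = computeValue(p.getAt(rowIdx));
         int start = rowIdx;
         while (start > 0
             && !isDistanceGreater(current, computeValue(p.getAt(start - 1)), amt)) {
           start--; // the previous row is still within the value range
         }
         return start;
       }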
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 200686)
    Time Spent: 50m  (was: 40m)

> Optimize range calculation for PTF
> ----------------------------------
>
>                 Key: HIVE-21217
>                 URL: https://issues.apache.org/jira/browse/HIVE-21217
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Adam Szita
>            Assignee: Adam Szita
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, 
> HIVE-21217.2.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> During window function execution Hive has to iterate over the rows 
> neighbouring the current row to find the beginning and end of the proper 
> range (on which the aggregation will be executed).
> When we're using range-based windows and have many rows with a certain key 
> value, this can take a lot of time. (E.g. with a partition size of 80M in 
> which we have 2 ranges of 40M rows each according to the orderby column: 
> within these 40M-row sets we're doing 40M x 40M/2 steps, which is n^2 time 
> complexity.)
> I propose to introduce a cache that keeps track of already calculated range 
> ends so they can be reused in future scans.
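As a rough illustration of the proposed idea (hypothetical names, not the 
actual patch): memoizing the computed range end per orderby value turns the 
repeated linear scans within a 40M-row value group into one scan plus 
constant-time lookups. A minimal sketch, written as if it lived inside 
ValueBoundaryScanner:

    // Hypothetical sketch: cache the computed range end per orderby value,
    // so that rows sharing the same value reuse one linear scan instead of
    // each repeating it.
    private final java.util.Map<Object, Integer> rangeEndByValue =
        new java.util.HashMap<>();

    int cachedComputeEnd(int rowIdx, PTFPartition p) throws HiveException {
      Object key = computeValue(p.getAt(rowIdx)); // orderby value of this row
      Integer end = rangeEndByValue.get(key);
      if (end == null) {
        end = computeEnd(rowIdx, p);              // the expensive linear scan
        rangeEndByValue.put(key, end);
      }
      return end;
    }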



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
