[ https://issues.apache.org/jira/browse/HIVE-22239?focusedWorklogId=326801&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326801 ]

ASF GitHub Bot logged work on HIVE-22239:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Oct/19 09:00
            Start Date: 11/Oct/19 09:00
    Worklog Time Spent: 10m 
      Work Description: kgyrtkirk commented on pull request #787: HIVE-22239
URL: https://github.com/apache/hive/pull/787#discussion_r333892055
 
 

 ##########
 File path: ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java
 ##########
 @@ -944,7 +948,7 @@ else if(colTypeLowerCase.equals(serdeConstants.SMALLINT_TYPE_NAME)){
     } else if (colTypeLowerCase.equals(serdeConstants.DATE_TYPE_NAME)) {
       cs.setAvgColLen(JavaDataModel.get().lengthOfDate());
       // epoch, days since epoch
-      cs.setRange(0, 25201);
+      cs.setRange(DATE_RANGE_LOWER_LIMIT, DATE_RANGE_UPPER_LIMIT);
 
 Review comment:
   I feel like we should be setting this range to the maximum possible:
   * let's say the user has data from 1920-1940
   * and submits a query with `< 1930`, which would match 1/2 of the rows
   * if the stats are only estimated, Hive will go straight to 0 rows with the
     new uniform estimation logic (earlier it was 1/3 or something)
   
   so I think either this should be set to the whole range, or there should be
   a toggle to change how this is supposed to work... or our archaeologist end
   users will get angry and grab some rusty :dagger: to cut our necks :D
   
   ...or we should tell them to calculate statistics properly?
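
To make that scenario concrete, here is a rough, self-contained sketch
(hypothetical class and helper names, not code from StatsUtils) of how a
uniform estimate for "date_col < c" behaves when the stats range is
hard-coded to start at the epoch instead of reflecting the actual data:

public class UniformDateEstimateSketch {

  // Approximate days-since-epoch for January 1st of a year; illustration only.
  static long days(int year) {
    return Math.round((year - 1970) * 365.25);
  }

  // Uniform-distribution selectivity for "col < boundary" over [min, max].
  static double selectivity(long min, long max, long boundary) {
    if (boundary <= min) {
      return 0.0;
    }
    if (boundary > max) {
      return 1.0;
    }
    return (double) (boundary - min) / (max - min);
  }

  public static void main(String[] args) {
    // With real column stats for 1920-1940 data, "< 1930" matches ~half the rows.
    System.out.println(selectivity(days(1920), days(1940), days(1930))); // ~0.5

    // With a fixed range of 0..25201 days (1970 to ~2038), the whole dataset
    // sits below the assumed minimum, so the estimate collapses to 0 rows:
    // the angry-archaeologist case from the comment above.
    System.out.println(selectivity(0, 25201, days(1930))); // 0.0
  }
}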
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 326801)
    Time Spent: 4h 50m  (was: 4h 40m)

> Scale data size using column value ranges
> -----------------------------------------
>
>                 Key: HIVE-22239
>                 URL: https://issues.apache.org/jira/browse/HIVE-22239
>             Project: Hive
>          Issue Type: Improvement
>          Components: Physical Optimizer
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Jesus Camacho Rodriguez
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-22239.01.patch, HIVE-22239.02.patch, 
> HIVE-22239.03.patch, HIVE-22239.04.patch, HIVE-22239.04.patch, 
> HIVE-22239.05.patch, HIVE-22239.05.patch, HIVE-22239.patch
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently, min/max values for columns are only used to determine whether a 
> range filter falls completely outside the column's value range and thus 
> filters all rows or none at all. If it does not, we just use a heuristic 
> that the condition will filter 1/3 of the input rows. Instead of that 
> heuristic, we can assume that data is uniformly distributed across the 
> column's range and calculate the selectivity of the condition accordingly.
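
In code terms, the proposed estimation amounts to something like the sketch
below (an assumed standalone method, not Hive's actual implementation) for a
predicate of the form "col <= c":

  // Sketch only: uniform-distribution row estimate for "col <= c",
  // replacing the flat 1/3-of-input-rows heuristic described above.
  static long estimateRowsLessEqual(long inputRows, double min, double max, double c) {
    if (c < min) {
      return 0L;            // filter falls below the range: no rows pass
    }
    if (c >= max || max == min) {
      return inputRows;     // filter covers the whole range: all rows pass
    }
    double selectivity = (c - min) / (max - min);
    return Math.round(inputRows * selectivity);
  }

The boundary checks preserve the existing all-or-nothing behavior; only the
middle case changes, replacing the flat 1/3 guess with a proportion derived
from the column's min/max range.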



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
