[ 
https://issues.apache.org/jira/browse/HIVE-23965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176300#comment-17176300
 ] 

Zoltan Haindrich commented on HIVE-23965:
-----------------------------------------

* the description clearly explains that the metastore data is a composition of 
questionable-quality pieces... so we are running our planning tests against some 
weird metastore content....
* I don't think adding more tests will increase test coverage - in this case we 
are talking about queries which are already run twice - I've seen 
people updating q.out files like crazy... so adding an extra 100 q.out files will 
not necessarily increase coverage...
* the independence from a docker setup is a great advantage - the new approach 
uses docker - but if that's a problem we could try to come up with some other 
approach - I'm wondering about using an archived derby database with metastore 
data
* the metastore content loader approach is quite unfortunate - IIRC I once had 
to fix up something in the loader... because I made some changes to the 
column statistics

I think we should remove the old approach... and run the tests against the new, 
more realistic schema.
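For what it's worth, pointing the metastore at an unpacked archived derby database could look roughly like the fragment below (the database path is made up; only the property names are real Hive/DataNucleus settings):

```xml
<!-- hypothetical hive-site.xml fragment: point the metastore at a pre-built,
     unpacked Derby database instead of loading metastore content at startup -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- /path/to/unpacked/metastore_db is an assumption, not a real path -->
  <value>jdbc:derby:;databaseName=/path/to/unpacked/metastore_db;create=false</value>
</property>
```

With create=false the embedded Derby driver would refuse to create a fresh (empty) database, so a missing or misplaced archive fails fast instead of silently running against empty metastore content.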




> Improve plan regression tests using TPCDS30TB metastore dump and custom 
> configs
> -------------------------------------------------------------------------------
>
>                 Key: HIVE-23965
>                 URL: https://issues.apache.org/jira/browse/HIVE-23965
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Stamatis Zampetakis
>            Assignee: Stamatis Zampetakis
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The existing regression tests (HIVE-12586) based on TPC-DS have certain 
> shortcomings:
> The table statistics do not reflect cardinalities from a specific TPC-DS 
> scale factor (SF). Some tables are from a 30TB dataset, others from a 200GB 
> dataset, and others from a 3GB dataset. This mix leads to plans that may 
> never appear when using an actual TPC-DS dataset. 
> The existing statistics do not contain information about partitions, something 
> that can have a big impact on the resulting plans.
> The existing regression tests rely more or less on the default 
> configuration (hive-site.xml). In real-life scenarios, though, some of the 
> configurations differ and may impact the choices of the optimizer.
> This issue aims to address the above shortcomings by using a curated 
> TPCDS30TB metastore dump along with some custom hive configurations. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
