[ 
https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17560605#comment-17560605
 ] 

ASF subversion and git services commented on KUDU-2671:
-------------------------------------------------------

Commit 3c415ea2f4418d60d94ff870154b3128100a7d77 in kudu's branch 
refs/heads/master from Alexey Serbin
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=3c415ea2f ]

KUDU-2671 introduce PartitionPruner::PrepareRangeSet()

PartitionSchema now provides and persists information only on the
ranges with custom hash schemas [1], but PartitionPruner::Init()'s
logic assumed it received information on all existing ranges for
tables with range-specific hash schemas, so that logic needed an update.

This patch does exactly that, adding a new PrepareRangeSet() method to
the PartitionPruner class.  The new method produces the preliminary set
of scanner ranges, each with its proper hash schema, using the provided
information on the table-wide hash schema and the range-specific hash
schemas.  It splits the predicate-based range into sub-ranges and
assigns the corresponding hash schema to each: the hash schemas for the
ranges with custom hash schemas are known, and the remaining sub-ranges
get the table-wide hash schema.

This patch also contains a unit test for the newly introduced method.

I updated the TestHashSchemasPerRangeWithPartialPrimaryKeyRangePruning
and TestInListHashPruningPerRange scenarios of the PartitionPrunerTest
accordingly, since the number of initial ranges for pruning has changed
even though the number of non-pruned ranges to scan stays the same.

This is a follow-up to [1].

[1] https://gerrit.cloudera.org/#/c/18642/

Change-Id: I7f1903a444d47d30bbd7e119977cbb87bf1aa458
Reviewed-on: http://gerrit.cloudera.org:8080/18672
Tested-by: Alexey Serbin <ale...@apache.org>
Reviewed-by: Attila Bukor <abu...@apache.org>


> Change hash number for range partitioning
> -----------------------------------------
>
>                 Key: KUDU-2671
>                 URL: https://issues.apache.org/jira/browse/KUDU-2671
>             Project: Kudu
>          Issue Type: Improvement
>          Components: client, java, master, server
>    Affects Versions: 1.8.0
>            Reporter: yangz
>            Assignee: Mahesh Reddy
>            Priority: Major
>              Labels: feature, roadmap-candidate, scalability
>         Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our table with day-based range partitions such as dt='20181112',
> like a Hive table. But our data size changes a lot from day to day: one day 
> it may be 50G, another day 500G. In this case it is hard to pick the hash 
> bucket count. If it is too big, it is wasteful for most days; if it is too 
> small, there is a performance problem on days with a large amount of data.
>  
> So we suggest a solution where we can change the hash bucket count based on 
> the historical data of a table.
> For example:
>  # We create the schema with an estimated value.
>  # We collect the data size for each day range.
>  # We create the new day-range partition with a bucket count based on the 
>    collected daily size (a rough sizing sketch follows this quoted 
>    description).
> We have used this feature for half a year, and it works well. We hope this 
> feature will be useful for the community. Maybe the solution isn't complete; 
> please help us make it better.
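
As a rough illustration of step 3 in the quoted description above, the
following standalone C++ sketch derives a hash bucket count from the data
volume observed for recent days.  The target size per bucket, the clamping
limits, and the function name are assumptions made for this sketch, not
Kudu defaults.

  #include <algorithm>
  #include <cstdint>
  #include <iostream>

  // Pick the number of hash buckets for the next day's range partition from
  // the data volume observed for recent days.  The target size per bucket
  // (~10 GiB) and the [2, 64] clamping range are illustrative assumptions.
  int PickHashBuckets(int64_t observed_daily_bytes,
                      int64_t target_bytes_per_bucket = 10LL << 30,
                      int min_buckets = 2, int max_buckets = 64) {
    const int64_t needed =
        (observed_daily_bytes + target_bytes_per_bucket - 1) /
        target_bytes_per_bucket;
    return static_cast<int>(
        std::clamp<int64_t>(needed, min_buckets, max_buckets));
  }

  int main() {
    std::cout << PickHashBuckets(50LL << 30) << "\n";   // ~50G day  -> 5
    std::cout << PickHashBuckets(500LL << 30) << "\n";  // ~500G day -> 50
    return 0;
  }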



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
