[ https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565227#comment-17565227 ]

ASF subversion and git services commented on KUDU-2671:
-------------------------------------------------------

Commit e0f96b9c838e33b93690a76f771d1eeaf3a99222 in kudu's branch 
refs/heads/master from Alexey Serbin
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=e0f96b9c8 ]

KUDU-2671 forward-looking provision for AddRangePartition

The way the information on the range-specific hash schema is specified
in AlterTableRequestPB::AddRangePartition introduced by [1] assumes
there should not be an empty custom hash schema for a range when the
table-wide hash schema isn't empty.  As of now, the assumption holds
true since there is an artificial restriction, introduced by changelist
[2], on the variability of the number of hash dimensions across
per-range hash schemas in a table.  However, once the restriction
introduced in [2] is removed, the current type of the
AddRangePartition::custom_hash_schema field makes it impossible to tell
the case of a range-specific hash schema with zero hash dimensions
(a.k.a. an empty hash schema, i.e. no hash bucketing at all) from the
case of using the table-wide hash schema for a newly added range
partition.

This patch fixes the deficiency: it is now possible to call
has_custom_hash_schema() and hasCustomHashSchema() on an
AddRangePartition object in C++ and Java code respectively, instead of
relying on the emptiness of the repeated field representing the set of
hash dimensions of the range-specific hash schema.
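
For illustration, below is a minimal, self-contained C++ sketch of the
presence semantics this relies on.  The types are hypothetical stand-ins,
not the real protobuf-generated Kudu classes; the point is only that
wrapping the hash dimensions in an optional message makes an unset schema
distinguishable from a set-but-empty one, which a bare repeated field
cannot do.

  #include <iostream>
  #include <optional>
  #include <vector>

  // Hypothetical stand-ins for protobuf-generated types (illustrative only).
  struct HashDimensionPB {
    int num_buckets = 0;
  };

  struct HashSchemaPB {
    std::vector<HashDimensionPB> dimensions;  // may legitimately be empty
  };

  struct AddRangePartitionPB {
    // A bare repeated field cannot distinguish "zero hash dimensions" from
    // "not set at all"; an optional wrapper message tracks presence explicitly.
    std::optional<HashSchemaPB> custom_hash_schema;

    bool has_custom_hash_schema() const {
      return custom_hash_schema.has_value();
    }
  };

  int main() {
    AddRangePartitionPB no_hash_bucketing;
    no_hash_bucketing.custom_hash_schema = HashSchemaPB{};  // set, zero dimensions

    AddRangePartitionPB use_table_wide_schema;  // field left unset

    std::cout << no_hash_bucketing.has_custom_hash_schema() << std::endl;     // 1
    std::cout << use_table_wide_schema.has_custom_hash_schema() << std::endl; // 0
    return 0;
  }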

This patch would break backwards compatibility if a version of Kudu had
already been released with the change introduced in changelist [1], but
that's not the case.  So, it was possible to simply change the type of
the AddRangePartition::custom_hash_schema field.

[1] 
https://github.com/apache/kudu/commit/11db3f28b36d92ce1515bcaace51a3586838abcb
[2] 
https://github.com/apache/kudu/commit/6998193e69eeda497f912d1d806470c95b591ad4

Change-Id: I30f654443c7f51a76dea9d980588b399b06c2dd1
Reviewed-on: http://gerrit.cloudera.org:8080/18713
Tested-by: Alexey Serbin <ale...@apache.org>
Reviewed-by: Mahesh Reddy <mre...@cloudera.com>
Reviewed-by: Abhishek Chennaka <achenn...@cloudera.com>
Reviewed-by: Alexey Serbin <ale...@apache.org>


> Change hash number for range partitioning
> -----------------------------------------
>
>                 Key: KUDU-2671
>                 URL: https://issues.apache.org/jira/browse/KUDU-2671
>             Project: Kudu
>          Issue Type: Improvement
>          Components: client, java, master, server
>    Affects Versions: 1.8.0
>            Reporter: yangz
>            Assignee: Mahesh Reddy
>            Priority: Major
>              Labels: feature, roadmap-candidate, scalability
>         Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our tables with day-range partitions, such as dt='20181112', like a
> Hive table. But our data size changes a lot from day to day: one day it may be
> 50G, while another day it may be 500G. This makes it hard to choose the hash
> schema. If the hash number is too big, it is wasteful on most days; if it is
> too small, there is a performance problem on days with a large amount of data.
>  
> So we suggest a solution: change the hash number for each new range partition
> based on the table's historical data.
> For example:
>  # We create the schema with one estimated value.
>  # We collect the data size for each day range.
>  # We create each new day-range partition with a hash number derived from the
> collected daily size.
> We have used this feature for half a year, and it works well. We hope this
> feature will be useful for the community. Maybe the solution isn't complete
> yet. Please help us make it better.
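
The workflow quoted above boils down to choosing a hash-bucket count for
each new day-range partition from the observed size of recent days.  Here
is a minimal C++ sketch of that sizing step, assuming a hypothetical
helper name (ChooseNumHashBuckets) and an illustrative ~10 GiB-per-bucket
target that are not part of Kudu or of the reporter's setup:

  #include <algorithm>
  #include <cstdint>
  #include <iostream>

  // Pick the number of hash buckets for a new day-range partition so that each
  // bucket holds roughly target_bytes_per_bucket, clamped to a sane range.
  // The name and constants are illustrative only.
  int ChooseNumHashBuckets(int64_t observed_daily_bytes,
                           int64_t target_bytes_per_bucket,
                           int64_t min_buckets = 2,
                           int64_t max_buckets = 64) {
    const int64_t wanted = (observed_daily_bytes + target_bytes_per_bucket - 1) /
                           target_bytes_per_bucket;
    return static_cast<int>(std::clamp(wanted, min_buckets, max_buckets));
  }

  int main() {
    constexpr int64_t kGiB = 1024LL * 1024 * 1024;
    // A 50 GiB day vs. a 500 GiB day, targeting ~10 GiB per hash bucket.
    std::cout << ChooseNumHashBuckets(50 * kGiB, 10 * kGiB) << std::endl;   // 5
    std::cout << ChooseNumHashBuckets(500 * kGiB, 10 * kGiB) << std::endl;  // 50
    return 0;
  }

With the work tracked by this ticket, a count chosen this way can be
supplied as the range-specific hash schema when the new partition is added.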



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
