[ https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396990#comment-17396990 ]
ASF subversion and git services commented on KUDU-2671:
-------------------------------------------------------

Commit 607d9d0a7e95e220864f43b88a64644bb6402163 in kudu's branch
refs/heads/master from Alexey Serbin
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=607d9d0 ]

[common] more generic API for IN list predicate pruning

While working on KUDU-2671, I found that the exposed internals of the
PartitionSchema class don't allow updating the implementation of the
partition-related code to include per-range custom hash bucket schemas
in a consistent manner.

This patch introduces a slightly more generic interface for pruning
values of IN list predicates by adding a new PartitionMayContainRow()
method, and removes the following methods from the public API of the
PartitionSchema class:
  * HashPartitionContainsRow()
  * RangePartitionContainsRow()
  * IsColumnSingleRangeSchema()
  * TryGetSingleColumnHashPartitionIndex()

I also added one extra test scenario and updated existing ones to make
the assertion messages more readable when they are triggered.

This is a follow-up to 6a7cadc7e and 83b8caf4f.

Change-Id: I2e2390cc4747864fdac71656dd7125ac3b15bf9d
Reviewed-on: http://gerrit.cloudera.org:8080/17764
Tested-by: Kudu Jenkins
Reviewed-by: Mahesh Reddy <mre...@cloudera.com>
Reviewed-by: Andrew Wong <aw...@cloudera.com>

> Change hash number for range partitioning
> -----------------------------------------
>
>                 Key: KUDU-2671
>                 URL: https://issues.apache.org/jira/browse/KUDU-2671
>             Project: Kudu
>          Issue Type: Improvement
>          Components: client, java, master, server
>    Affects Versions: 1.8.0
>            Reporter: yangz
>            Assignee: Mahesh Reddy
>            Priority: Major
>              Labels: feature, roadmap-candidate, scalability
>         Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
> For our usage, the Kudu schema design isn't flexible enough.
> We create our tables with day-range partitions such as dt='20181112',
> as in a Hive table.
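The idea behind the new PartitionMayContainRow() method can be sketched as follows. This is a conceptual illustration in Python with hypothetical names and a stand-in hash function; Kudu's real implementation is C++ inside the PartitionSchema class and uses its own hashing. The point it shows is that each partition carries its own bucket count, so the pruning check still works when different ranges use different (custom) hash schemas:

```python
import hashlib

def hash_bucket(value, num_buckets):
    # Stand-in for Kudu's internal hash function; only the bucketing
    # structure matters for this sketch, not the exact hash.
    digest = hashlib.sha1(repr(value).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_buckets

class Partition:
    """Hypothetical model of one tablet's partition: a range slice
    plus one hash bucket under that range's own bucket count."""

    def __init__(self, range_start, range_end, bucket, num_buckets):
        self.range_start = range_start  # inclusive lower bound, None = unbounded
        self.range_end = range_end      # exclusive upper bound, None = unbounded
        self.bucket = bucket            # hash bucket this partition covers
        self.num_buckets = num_buckets  # bucket count for this range

    def may_contain_row(self, range_key, hash_key):
        # Range check: the row's range key must fall in [start, end).
        if self.range_start is not None and range_key < self.range_start:
            return False
        if self.range_end is not None and range_key >= self.range_end:
            return False
        # Hash check: the row's hash key must map to this partition's
        # bucket under *this range's* bucket count; keeping num_buckets
        # per partition is what accommodates per-range hash schemas.
        return hash_bucket(hash_key, self.num_buckets) == self.bucket
```

An IN-list predicate can then be pruned by dropping every value for which `may_contain_row()` is false for the partition being scanned.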
> But our data size changes a lot from day to day: one day it will be
> 50G, while another day it will be 500G. This makes it hard to pick a
> hash schema: if the bucket count is too big, it is wasteful in most
> cases, but if it is too small, there is a performance problem on days
> with a large amount of data.
>
> So we suggest a solution: change the hash number based on a table's
> historical data. For example:
> # we create the schema with one estimated value;
> # we collect the data size per day range;
> # we create each new day-range partition with a hash number derived
>   from the collected day sizes.
> We have used this feature for half a year, and it works well. We hope
> it will be useful for the community. The solution may not be complete;
> please help us make it better.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
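The sizing workflow the reporter describes could be sketched like this. All names and the 10 GiB-per-bucket target are illustrative assumptions for this sketch, not Kudu APIs or recommended values:

```python
import math

# Assumed sizing target: roughly how much data one hash bucket should hold.
TARGET_BYTES_PER_BUCKET = 10 * 1024**3

def buckets_for_next_day(recent_day_sizes_bytes, min_buckets=2, max_buckets=64):
    # Estimate the next day's size from recent history (a simple mean;
    # a max or high percentile would be more conservative).
    estimate = sum(recent_day_sizes_bytes) / len(recent_day_sizes_bytes)
    buckets = math.ceil(estimate / TARGET_BYTES_PER_BUCKET)
    # Clamp to sane bounds so outlier days don't produce extreme schemas.
    return max(min_buckets, min(max_buckets, buckets))
```

Under these assumptions, a table averaging 50 GiB per day would get 5 buckets for its next day-range partition, while one averaging 500 GiB per day would get 50, matching the variation described in the report.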