[ https://issues.apache.org/jira/browse/KUDU-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17747276#comment-17747276 ]

ASF subversion and git services commented on KUDU-3476:
-------------------------------------------------------

Commit afc82e323b0527ca7b96b51bad6583f935a34e85 in kudu's branch 
refs/heads/branch-1.17.x from Mahesh Reddy
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=afc82e323 ]

KUDU-3476: Update 1.17 release notes

This patch updates the release notes for the 1.17 release
to include the range-aware replica placement feature.

Change-Id: I430cf540731860ec9209f5c6026cdc4431a3d2bf
Reviewed-on: http://gerrit.cloudera.org:8080/20242
Reviewed-by: Yingchun Lai <laiyingc...@apache.org>
Tested-by: Kudu Jenkins


> Make replica placement range and table aware
> --------------------------------------------
>
>                 Key: KUDU-3476
>                 URL: https://issues.apache.org/jira/browse/KUDU-3476
>             Project: Kudu
>          Issue Type: New Feature
>          Components: master, tserver
>            Reporter: Mahesh Reddy
>            Assignee: Mahesh Reddy
>            Priority: Major
>             Fix For: 1.17.0
>
>
> The current replica placement algorithm uses the power of two choices 
> algorithm: it randomly selects two tservers and places the replica on the 
> tserver with fewer replicas. This can lead to hotspotting, since the 
> algorithm doesn't discriminate by range or table, so many tablets from the 
> same range or table can end up disproportionately concentrated on a few 
> tservers.
> With this new feature, replicas are placed so that the tablets of each 
> range are distributed evenly across the available tservers. If multiple 
> tservers have the same number of replicas for that range, the tserver with 
> fewer replicas of that table is selected. If multiple tservers also have 
> the same number of replicas for that table, the tserver with the fewest 
> total replicas is chosen, as sketched below.
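
A minimal sketch of the selection order described above, written in C++ since
that is Kudu's implementation language. The names here (TServerStats,
PickReplicaLocation) are hypothetical illustrations, not Kudu's actual code:
the sketch keeps the power-of-two-choices random sampling and compares the two
candidates lexicographically by per-range, then per-table, then total replica
counts.

    #include <cstdint>
    #include <random>
    #include <tuple>
    #include <vector>

    // Per-tserver replica counts relevant to placing one new replica
    // (hypothetical struct; in Kudu the counts would come from the
    // master's view of cluster state).
    struct TServerStats {
      int64_t replicas_in_range;  // replicas of this tablet's range on the tserver
      int64_t replicas_in_table;  // replicas of this tablet's table on the tserver
      int64_t total_replicas;     // all replicas hosted by the tserver
    };

    // Picks the index of the preferred tserver among non-empty `candidates`:
    // sample two at random (power of two choices), then prefer fewer
    // replicas in the range, breaking ties by fewer replicas in the table,
    // then by fewer total replicas.
    size_t PickReplicaLocation(const std::vector<TServerStats>& candidates,
                               std::mt19937& rng) {
      std::uniform_int_distribution<size_t> dist(0, candidates.size() - 1);
      size_t a = dist(rng);
      size_t b = dist(rng);
      while (candidates.size() > 1 && b == a) {
        b = dist(rng);  // resample until the two choices are distinct
      }
      // std::tuple compares lexicographically, which encodes the priority order.
      auto key = [&](size_t i) {
        const TServerStats& s = candidates[i];
        return std::make_tuple(s.replicas_in_range, s.replicas_in_table,
                               s.total_replicas);
      };
      return key(a) <= key(b) ? a : b;
    }

The tuple comparison makes the stated priority explicit: the per-range count
dominates, and the table-level and cluster-level counts matter only as
successive tie-breakers.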



