Hi,

For best performance you can start from the node that has the lowest disk
capacity, set its num_tokens to 256, and then size the other nodes in
proportion to that node to find their number of tokens.
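As a minimal sketch of that proportional sizing (node names and disk capacities here are hypothetical, just for illustration):

```python
# Hypothetical disk capacities in TB per node.
capacities = {"node1": 2.0, "node2": 4.0, "node3": 8.0}

# Anchor on the smallest node: give it 256 tokens,
# and scale every other node proportionally to its capacity.
base = min(capacities.values())
num_tokens = {node: round(256 * cap / base) for node, cap in capacities.items()}

# Print the num_tokens value to put in each node's cassandra.yaml.
for node, tokens in sorted(num_tokens.items()):
    print(f"{node}: num_tokens: {tokens}")
```

Each node's value then goes into its own cassandra.yaml before bootstrap.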

> On Aug 31, 2018, at 10:40, Max C. <mc_cassan...@core43.com> wrote:
> 
> Jeff/Kurt/Alex — thanks so much for your feedback on this issue, and thanks 
> for all of the help you guys have lent to people on this list over the years. 
>  :-)
> 
> - Max
> 
>> On Aug 29, 2018, at 11:38 pm, Oleksandr Shulgin 
>> <oleksandr.shul...@zalando.de <mailto:oleksandr.shul...@zalando.de>> wrote:
>> 
>> On Thu, Aug 30, 2018 at 12:05 AM kurt greaves <k...@instaclustr.com 
>> <mailto:k...@instaclustr.com>> wrote:
>> For 10 nodes you probably want to use between 32 and 64. Make sure you use 
>> the token allocation algorithm by specifying allocate_tokens_for_keyspace
>> 
>> We are using 16 tokens with 30 nodes on Cassandra 3.0.  And yes, we have 
>> used allocate_tokens_for_keyspace option to achieve better load distribution 
>> than with the random allocation (which is the default).  Currently we see 
>> the disk usage between 1.5 and 1.7TB, which is acceptable variance for us.
>> 
>> If you're using DSE, you're more lucky because it's easier to bootstrap new 
>> DC with the smart token allocation algorithm.  Simply because the parameter 
>> you need to specify does not depend on any keyspaces being replicated to the 
>> new nodes, you just specify the target replication factor to optimize for.
>> 
>> Cheers,
>> --
>> Alex
>> 
> 
