On Mon, Sep 3, 2018 at 10:41 AM onmstester onmstester <onmstes...@zoho.com> wrote:
> I'm going to add 6 more nodes to my cluster (it already has 4 nodes and
> RF=2), using GossipingPropertyFileSnitch, NetworkTopologyStrategy, and the
> default num_tokens = 256.
> It is recommended to join nodes one by one; although there is < 200 GB on
> each node, I will do so.
> The documentation says that I should run nodetool cleanup after joining a
> new node:
>
> *Run nodetool cleanup on the source node and on neighboring nodes that
> shared the same subrange after the new node is up and running. Failure to
> run this command after adding a node causes Cassandra to include the old
> data to rebalance the load on that node.*
>
> It also mentions:
>
> *Cleanup can be safely postponed for low-usage hours.*
>
> Should I run nodetool cleanup on each node after adding every node?
> (Considering that cleanup should also be done one node at a time, that
> would be a lot of tasks to do!) Is it possible to run cleanup once, after
> all the new nodes have joined the cluster, on all the nodes?

Hi,

It makes a lot of sense to run cleanup once after you have added all the
new nodes.

> I also don't understand the part about
> allocate_tokens_for_local_replication_factor
> <https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html#configCassandra_yaml__allocate_tokens_for_local_replication_factor>.
> I didn't change num_tokens: 256 or anything related to the vnode config in
> the yaml, and the load is already distributed evenly (is this a good
> approach and a good num_tokens, given that I'm using nodes with the same
> spec?). So should I consider this setting
> (allocate_tokens_for_local_replication_factor) while adding new nodes,
> having a single keyspace with RF=2?

I would not recommend touching these while adding nodes to an existing
ring. You might want to have another look if you add a new DC. Then pick a
smaller number of vnodes and use the smart allocation option.

Cheers,

--
Alex
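
To make the one-pass cleanup concrete, here is a minimal sketch that runs
nodetool cleanup on one node at a time once all the new nodes have joined.
The hostnames and the JMX port are placeholders, and it assumes nodetool can
reach each node remotely over JMX; if that is not set up, the same command
can simply be run locally on each node in turn.

    #!/usr/bin/env python3
    # Sketch: run "nodetool cleanup" sequentially on the pre-existing nodes
    # after all new nodes have joined. Hostnames/port below are placeholders.
    import subprocess

    NODES = ["node1.example.com", "node2.example.com",
             "node3.example.com", "node4.example.com"]  # nodes that held data before the expansion
    JMX_PORT = "7199"  # default JMX port used by nodetool

    for host in NODES:
        print(f"Starting cleanup on {host} ...")
        # Cleanup rewrites SSTables and is I/O heavy, so run it on one node
        # at a time, ideally during low-usage hours.
        subprocess.run(["nodetool", "-h", host, "-p", JMX_PORT, "cleanup"],
                       check=True)
        print(f"Cleanup finished on {host}")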
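Purely for reference, if you do later bootstrap a new DC with a smaller vnode
count and the allocation option mentioned above, the relevant cassandra.yaml
settings on the new nodes would look roughly like this. The value 16 is only
an illustration; the option name and the RF of 2 come from the thread above.

    # cassandra.yaml on the nodes of a new DC (illustrative values)
    num_tokens: 16
    # balance token ownership for keyspaces replicated with this factor
    allocate_tokens_for_local_replication_factor: 2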