Hello everyone!

Can someone please explain where we went wrong?

We have a cluster of 4 nodes that uses vnodes (256 per node, the default setting); the snitch on every node is the default, SimpleSnitch.
These four nodes have been in the cluster from the beginning.
The cluster has a keyspace with the following options:
Keyspace: K:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:3]
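
In CQL terms this corresponds to a keyspace created roughly like this (a sketch, not the exact statement we originally ran):

CREATE KEYSPACE K
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
  AND durable_writes = true;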


Everything was fine, and nodetool status K showed that each node owned 75% of the key range (effective). All 4 nodes are located in the same datacenter and share the same first two bytes of their IP addresses (the other bytes differ).
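
This matched our own back-of-the-envelope reasoning (our assumption, not something taken from the docs): with SimpleStrategy and evenly distributed vnodes, each node should effectively own about replication_factor / node_count of the data:

  effective ownership = RF / N = 3 / 4 = 75%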

Then we bought a new server in a different datacenter and added it to the cluster with the same settings as the previous four nodes (the only difference being listen_address), expecting that the effective ownership of each node for this keyspace would be 300/5 = 60%, or close to it. But 3-5 minutes after startup, nodetool status K shows this:
nodetool status K;
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load     Tokens  Owns (effective)  Host ID                               Rack
UN  N1       6,06 GB  256     50.0%             62f295b3-0da6-4854-a53a-f03d6b424b03  rack1
UN  N2       5,89 GB  256     50.0%             af4e4a23-2610-44dd-9061-09c7a6512a54  rack1
UN  N3       6,02 GB  256     50.0%             0f0e4e78-6fb2-479f-ad76-477006f76795  rack1
UN  N4       5,8 GB   256     50.0%             670344c0-9856-48cf-9ec9-1a98f9a89460  rack1
UN  N5       7,51 GB  256     100.0%            82473d14-9e36-4ae7-86d2-a3e526efb53f  rack1


N5 is the newly added node.

Running nodetool repair -pr on N5 doesn't change anything.

nodetool describering K shows that the new node N5 participates in EACH range. This is not what we want at all.

It looks as if Cassandra added the new node to every range because it is located in a different datacenter, yet all our settings and the output above should prevent exactly that.
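
For completeness, the snitch line in cassandra.yaml is the stock default on all five nodes:

  endpoint_snitch: SimpleSnitch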

Another interesting point: although the snitch is defined as SimpleSnitch in every config file, the output of nodetool describecluster is:
Cluster Information:
        Name: Some Cluster Name
        Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
        Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
        Schema versions:
                26b8fa37-e666-31ed-aa3b-85be75f2aa1a: [N1, N2, N3, N4, N5]


We use Cassandra 2.0.6

Questions we have at this moment:
1. How can we rebalance the ring so that all nodes own 60% of the range?
   1a. Is removing the node from the cluster and adding it again a solution?
2. Where might we have made a mistake when adding the new node?
3. If we add a new 6th node to the ring, will it take 50% from N5, or some portion from each node?

Thanks in advance!

--  
With regards,
Vladimir Rudev
vladimir.ru...@gmail.com

