I was assuming my node a1 would always own token 0, but we just added 5 or 6
more nodes and a1 no longer owns that token range.

I have a few questions about the table at the bottom:

 1.  Is it expected that host a1 no longer owns token range 0 (even though that
token is in its cassandra.yaml), and that a2, a8, and a7 own the token range
instead?  (This happened after adding nodes.)
 2.  If I run nodetool repair -pr on node a1, will it run repair for token
range 0 on nodes a2, a7, and a8?  Is that correct?
 3.  I now need all 4 nodes up to run repair instead of just the three nodes,
correct?  Since a1 owns token 0 in cassandra.yaml.
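For context on question 1, here is a minimal sketch of how SimpleStrategy-style placement picks replicas for a token range (assuming RF=3 and a simple sorted ring; this is an illustrative model, not Cassandra's actual code, which also handles vnodes, racks, and datacenters):

```python
from bisect import bisect_left

def replicas(ring, key_token, rf=3):
    """Replica set for a key token under SimpleStrategy-style placement.

    ring is a sorted list of (token, node) pairs. The primary replica is
    the node owning the first token >= key_token (wrapping past the
    highest token back to the lowest); the remaining rf-1 replicas are
    the next distinct nodes clockwise around the ring.
    """
    tokens = [t for t, _ in ring]
    i = bisect_left(tokens, key_token)
    if i == len(ring):          # wrap: key token is past the last token
        i = 0
    chosen = []
    while len(chosen) < rf:
        node = ring[i % len(ring)][1]
        if node not in chosen:  # defensive; node names are unique anyway
            chosen.append(node)
        i += 1
    return chosen

# Toy 4-node ring with RF=3: a key hashing to token 5 lands on the
# nodes at tokens 10, 20, and 30.
ring = [(0, "n1"), (10, "n2"), (20, "n3"), (30, "n4")]
print(replicas(ring, 5))   # -> ['n2', 'n3', 'n4']
print(replicas(ring, 35))  # -> ['n1', 'n2', 'n3']
```

The point being: when new nodes are inserted between existing tokens, the set of nodes clockwise from a given range changes, so the replica set for that range changes even though a1's own token in cassandra.yaml stays 0.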

(Make your window wide enough so this table shows up properly)…

a1  UP Normal 292.24 GB 33.3% 0                                        0  NA 1733.92 7860.00 a2, a8, a7   1.2.2-SNAPSHOT
a7  UP Normal 196.76 GB 25.0% 14178431955039102644307275309657008810   0  NA 4042.77 7860.00 a2, a3, a8   1.2.2-SNAPSHOT
a2  UP Normal 249.04 GB 25.0% 28356863910078205288614550619314017621   0  NA 5690.07 7862.00 a3, a8, a9   1.2.2-SNAPSHOT
a8  UP Normal 114.54 GB 25.0% 42535295865117307932921825928971026432   0  NA 4219.95 7860.00 a3, a4, a9   1.2.2-SNAPSHOT
a3  UP Normal 246.88 GB 25.0% 56713727820156410577229101238628035242   0  NA 4175.78 7862.00 a4, a9, a10  1.2.2-SNAPSHOT
a9  UP Normal 119.94 GB 25.0% 70892159775195513221536376548285044053   0  NA 2981.60 7860.00 a4, a5, a10  1.2.2-SNAPSHOT
a4  UP Normal 232.92 GB 25.0% 85070591730234615865843651857942052863   0  NA 4840.72 7862.00 a5, a10, a6  1.2.2-SNAPSHOT
a10 UP Normal 114.3 GB  25.0% 99249023685273718510150927167599061674   0  NA 0 3682.63 7860.00 a12, a5, a6  1.2.2-SNAPSHOT
a5  UP Normal 259.34 GB 25.0% 113427455640312821154458202477256070484  0  NA 5258.74 7862.00 a12, a1, a6  1.2.2-SNAPSHOT
a6  DO Normal 248.36 GB 33.3% 141784319550391026443072753096570088105  0  NA 4042.77 7860.00 a12, a1, a7  1.2.2-SNAPSHOT
a12 UP Normal 206.63 GB 33.3% 155962751505430129087380028406227096917  0  NA 2 5112.88 7862.00 a2, a1, a7   1.2.2-SNAPSHOT
