Actually, running nodetool ring always shows the current node as owning 99% of the ring.

From db-1a-1:

Address DC Rack Status State Load Effective-Ownership Token
Token(bytes[eaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])
10.0.4.22 us-east 1a Up Normal 77.72 GB 99.89% Token(bytes[00000000000000000000000000000001])
10.0.10.23 us-east 1d Up Normal 82.74 GB 64.13% Token(bytes[15555555555555555555555555555555])
10.0.8.20 us-east 1c Up Normal 81.79 GB 30.55% Token(bytes[2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa])
10.0.4.23 us-east 1a Up Normal 82.66 GB 0.04% Token(bytes[40000000000000000000000000000000])
10.0.10.20 us-east 1d Up Normal 80.21 GB 0.04% Token(bytes[55555555555555555555555555555554])
10.0.8.23 us-east 1c Up Normal 77.07 GB 0.04% Token(bytes[6aaaaaaaaaaaaaaaaaaaaaaaaaaaaaac])
10.0.4.21 us-east 1a Up Normal 81.38 GB 0.04% Token(bytes[80000000000000000000000000000000])
10.0.10.24 us-east 1d Up Normal 83.43 GB 0.04% Token(bytes[95555555555555555555555555555558])
10.0.8.21 us-east 1c Up Normal 84.42 GB 0.04% Token(bytes[aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])
10.0.4.25 us-east 1a Up Normal 80.06 GB 0.04% Token(bytes[c0000000000000000000000000000000])
10.0.10.21 us-east 1d Up Normal 83.49 GB 35.80% Token(bytes[d5555555555555555555555555555558])
10.0.8.24 us-east 1c Up Normal 90.72 GB 69.37% Token(bytes[eaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])


From db-1c-3:

Address DC Rack Status State Load Effective-Ownership Token
Token(bytes[eaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])
10.0.4.22 us-east 1a Up Normal 77.72 GB 0.04% Token(bytes[00000000000000000000000000000001])
10.0.10.23 us-east 1d Up Normal 82.78 GB 0.04% Token(bytes[15555555555555555555555555555555])
10.0.8.20 us-east 1c Up Normal 81.79 GB 0.04% Token(bytes[2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa])
10.0.4.23 us-east 1a Up Normal 82.66 GB 33.84% Token(bytes[40000000000000000000000000000000])
10.0.10.20 us-east 1d Up Normal 80.21 GB 67.51% Token(bytes[55555555555555555555555555555554])
10.0.8.23 us-east 1c Up Normal 77.07 GB 99.89% Token(bytes[6aaaaaaaaaaaaaaaaaaaaaaaaaaaaaac])
10.0.4.21 us-east 1a Up Normal 81.38 GB 66.09% Token(bytes[80000000000000000000000000000000])
10.0.10.24 us-east 1d Up Normal 83.43 GB 32.41% Token(bytes[95555555555555555555555555555558])
10.0.8.21 us-east 1c Up Normal 84.42 GB 0.04% Token(bytes[aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])
10.0.4.25 us-east 1a Up Normal 80.06 GB 0.04% Token(bytes[c0000000000000000000000000000000])
10.0.10.21 us-east 1d Up Normal 83.49 GB 0.04% Token(bytes[d5555555555555555555555555555558])
10.0.8.24 us-east 1c Up Normal 90.72 GB 0.04% Token(bytes[eaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])
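In case it is useful to anyone comparing these, a rough sketch for gathering every node's own view of the ring side by side is a loop like the one below (the addresses are simply the ones listed above; this is just a sketch, not exactly how the output above was captured):

  # Collect each node's own view of the ring so the Effective-Ownership
  # columns can be compared across nodes.
  for h in 10.0.4.22 10.0.10.23 10.0.8.20 10.0.4.23 10.0.10.20 10.0.8.23 \
           10.0.4.21 10.0.10.24 10.0.8.21 10.0.4.25 10.0.10.21 10.0.8.24; do
      echo "== ring as seen from $h =="
      nodetool -h "$h" ring
  done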

Any help would be appreciated; if something is going drastically wrong, we need to restore from backups and revert to 1.1.2.

Thanks,
-Mike

On 2/14/2013 8:32 AM, Mike wrote:
Hello,

We just upgraded from 1.1.2 to 1.1.9. We use the ByteOrderedPartitioner (we generate our own hashes). We have not yet upgraded the sstables.
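For reference, the relevant pieces look roughly like this (the <host> placeholder stands in for each node):

  # cassandra.yaml, identical on every node and unchanged across the upgrade:
  partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner

  # the per-node sstable upgrade step we have not yet run:
  nodetool -h <host> upgradesstables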

Before the upgrade, we had a balanced ring.

After the upgrade, we see:

10.0.4.22 us-east 1a Up Normal 77.66 GB 0.04% Token(bytes[00000000000000000000000000000001])
10.0.10.23 us-east 1d Up Normal 82.74 GB 0.04% Token(bytes[15555555555555555555555555555555])
10.0.8.20 us-east 1c Up Normal 81.79 GB 0.04% Token(bytes[2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa])
10.0.4.23 us-east 1a Up Normal 82.66 GB 33.84% Token(bytes[40000000000000000000000000000000])
10.0.10.20 us-east 1d Up Normal 80.21 GB 67.51% Token(bytes[55555555555555555555555555555554])
10.0.8.23 us-east 1c Up Normal 77.12 GB 99.89% Token(bytes[6aaaaaaaaaaaaaaaaaaaaaaaaaaaaaac])
10.0.4.21 us-east 1a Up Normal 81.38 GB 66.09% Token(bytes[80000000000000000000000000000000])
10.0.10.24 us-east 1d Up Normal 83.43 GB 32.41% Token(bytes[95555555555555555555555555555558])
10.0.8.21 us-east 1c Up Normal 84.42 GB 0.04% Token(bytes[aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])
10.0.4.25 us-east 1a Up Normal 80.06 GB 0.04% Token(bytes[c0000000000000000000000000000000])
10.0.10.21 us-east 1d Up Normal 83.57 GB 0.04% Token(bytes[d5555555555555555555555555555558])
10.0.8.24 us-east 1c Up Normal 90.74 GB 0.04% Token(bytes[eaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8])


Restarting a node essentially changes which node owns 99% of the ring.

Given that we use a replication factor of 3 and LOCAL_QUORUM consistency for everything, and we are not seeing errors, something seems to be working correctly. Any idea what is going on above? Should I be alarmed?
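For context, the keyspace is defined along these lines (shown in cassandra-cli syntax; the keyspace name is only a placeholder), and clients issue all reads and writes at LOCAL_QUORUM, which the cli can mimic with its consistencylevel command:

  create keyspace ExampleKS
    with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
    and strategy_options = {us-east : 3};

  consistencylevel as LOCAL_QUORUM;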

-Mike
