Thanks for the explanation, Kane!
In case anyone is curious: I decommissioned node7 and things rebalanced
themselves automatically: https://i.imgur.com/EOxzJu9.png
(node8 received 422 GiB, while the others received 82-153 GiB,
as reported by "nodetool netstats -H")
Lapo
On 2021-03-03 23:59
Well, that looks like your problem. They are logical racks and they come
into play when NetworkTopologyStrategy is deciding which replicas to put
data on. NTS will ensure a replica goes on the first node in a different
rack when traversing the ring, with the idea of keeping only one set of
replicas
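The rack-aware walk described above can be sketched roughly like this (a simplified model, not Cassandra's actual code; node names, racks, and token values are hypothetical, and the real NetworkTopologyStrategy also falls back to already-used racks when RF exceeds the number of distinct racks):

```javascript
// Simplified model of NTS replica placement: starting at the key's
// position on the ring, walk forward and take the first node from each
// rack not yet holding a replica, until RF replicas are chosen.
const ring = [
  { token: -9000, node: "node1", rack: "rack1" },
  { token: -3000, node: "node7", rack: "rackX" },
  { token:     0, node: "node8", rack: "rackX" }, // same rack as node7
  { token:  3000, node: "node2", rack: "rack2" },
  { token:  9000, node: "node3", rack: "rack3" },
].sort((a, b) => a.token - b.token);

function replicasFor(token, rf) {
  // first ring position whose token is >= the key's token (wrap if none)
  let start = ring.findIndex(e => e.token >= token);
  if (start === -1) start = 0;
  const replicas = [];
  const racksUsed = new Set();
  for (let i = 0; i < ring.length && replicas.length < rf; i++) {
    const e = ring[(start + i) % ring.length];
    if (!racksUsed.has(e.rack)) { // one replica per rack while possible
      racksUsed.add(e.rack);
      replicas.push(e.node);
    }
  }
  return replicas;
}

// node8 is skipped over because it shares node7's rack
console.log(replicasFor(-5000, 3));
```

This is how two nodes in the same logical rack end up with uneven load: whenever the walk reaches node7 first, node8 is passed over entirely for that replica set.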
Hi! The nodes are all in different racks… except for node7 and node8!
That's one more thing that makes them similar (which I didn't notice
at first), besides the timeline of when they were added to the cluster.
About the token ring calculation… I'll retry that in NodeJS instead of
awk as a double check.
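For reference, the ownership calculation the awk script approximated can be redone in NodeJS along these lines (a sketch under the assumption of the Murmur3 partitioner's [-2^63, 2^63) range; the token values below are made-up placeholders, real ones come from "nodetool ring"):

```javascript
// Each token owns the ring interval from the previous token up to
// itself, so a node's share is the sum of those intervals over all of
// its tokens, divided by the total ring size (2^64 for Murmur3).
const RING = 2n ** 64n;

function ownership(tokens /* [{token: BigInt, node: string}] */) {
  const sorted = [...tokens].sort((a, b) => (a.token < b.token ? -1 : 1));
  const share = {};
  for (let i = 0; i < sorted.length; i++) {
    const prev = sorted[(i - 1 + sorted.length) % sorted.length];
    // interval size, wrapping around the ring for the first token
    const span = (sorted[i].token - prev.token + RING) % RING;
    share[sorted[i].node] = (share[sorted[i].node] ?? 0n) + span;
  }
  // convert each node's share to a percentage
  return Object.fromEntries(
    Object.entries(share).map(([n, s]) => [n, Number((s * 10000n) / RING) / 100])
  );
}

// four evenly spaced hypothetical tokens → 25% each
console.log(ownership([
  { token: -9223372036854775808n, node: "node1" },
  { token: -4611686018427387904n, node: "node2" },
  { token: 0n,                    node: "node3" },
  { token: 4611686018427387904n,  node: "node4" },
]));
```

BigInt is used because Murmur3 tokens exceed JavaScript's safe integer range, which is an easy place for an awk version to go wrong silently.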
The load calculation always has issues so I wouldn't count on it, although
in this case it does seem to roughly line up. Are you sure your ring
calculation was accurate? It doesn't really seem to line up with the owns %
for the 33% node, and it is feasible (although unlikely) that you could
roll a
I had a 5-node cluster, then increased to 6, then to 7, then to 8, then
went back to 7. I installed 3.11.6 back when num_tokens defaulted to 256,
so as far as I understand, at the expense of long repairs it should have
an excellent capacity to scale to new nodes, but I get this status:
Status=Up/D