Update again:
The dc1-cass14 node stopped accepting/bootstrapping/streaming early on, and now the log just shows a stream of warnings like:
WARN [OptionalTasks:1] 2022-08-24 13:10:16,761 CassandraRoleManager.java:344 - CassandraRoleManager skipped default role setup: some nodes were not ready
INFO [OptionalTasks:
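(For reference, a rough way to see which nodes the cluster currently considers not ready; this is only a sketch and assumes nodetool is available on the affected host:)

  # Show schema versions and any unreachable nodes as the cluster sees them
  nodetool describecluster

  # Dump gossip state to see how the peers currently view each endpoint
  nodetool gossipinfo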
Update:
I shut the server down and the node finally disappeared from the status. I then restarted the server on the similarly named node (dc1-cass14) and it came up; however, it is showing as UJ (Up/Joining). Was this due to the amount of time it spent unavailable?
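(For reference, a rough way to watch whether a UJ node is actually streaming; a sketch only, assuming nodetool is available on the joining host:)

  # UJ = Up/Joining; confirm the node state and ownership
  nodetool status

  # Show active streams, to see whether the bootstrap is making progress
  nodetool netstats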
M
-Original Message-
From: Marc Hoppins
Also, I just had some changes made to the cass.yml config, so I thought that if I did a rolling restart of the nodes it might help the problem (roughly as sketched below).
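(By rolling restart I mean roughly the following, one node at a time; a sketch only, and the systemd unit name "cassandra" is an assumption that may differ here:)

  # Repeat per node, waiting for UN in 'nodetool status' before moving on
  nodetool drain                    # flush memtables and stop the node accepting requests
  sudo systemctl restart cassandra  # unit name assumed; restart so the new cass.yml is read
  nodetool status                   # confirm the node comes back as UN before the next one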
Now I have a startup problem with an existing node with a similar name:
Original problem node = dc2-cass14
Existing node = dc1-cass14
and am getting:
ERROR [main
Hi all,
I added a node but forgot to specify the correct rack so I stopped the join and
removed it. When I tried adding it again it was taking a LONG time to join. I
tried draining before stopping the service but that failed. I killed the
process and cleared the directories but the cluster st
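(For completeness, a rough sketch of the removal/cleanup steps involved; the host ID is a placeholder and the data paths are the package-install defaults, which may differ here:)

  # Find the host ID of the dead node (it shows as DN in the output)
  nodetool status

  # Remove it from the ring by host ID (placeholder value)
  nodetool removenode <host-id>

  # If the removal hangs, it can be forced
  nodetool removenode force

  # On the removed host, clear old state before bootstrapping again
  rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*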