Hi All,
I am very new to Cassandra. I have a 5-node cluster set up on CentOS
servers for our internal team's testing. A couple of days ago our network
team asked us to stop 3 of the nodes, let's say C1, C2, and C3, for an OS
patching activity. After the activity I started the nodes again, but now
interestingly...
What do the logs on /172.16.20.16:7000 say when the repair failed? It
indicates "validation failed". Can you check system.log on
/172.16.20.16:7000 and see what it says? It looks like you have some issue
with doc/origdoc, probably a corrupt sstable. Try to run the repair for
individual tables and see for which one it fails.
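For example, something like this (assuming doc is the keyspace and origdoc
the table mentioned above, and that system.log is in the default location):

nodetool repair --full doc origdoc
grep -iE 'validation failed|corrupt' /var/log/cassandra/system.log    # look for the underlying cause on the failing node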
Thank you. I've tried:
nodetool repair --full
nodetool repair -pr
Both commands get to 57% on any of the nodes and then fail. Interestingly,
the debug log only has INFO entries - there are no errors.
[2023-08-07 14:02:09,828] Repair command #6 failed with error
Incremental repair session 83dc17d0-354c-11e
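Since the error mentions an incremental repair session, one more thing that
may be worth checking is whether any sessions were left stuck from when C1,
C2, and C3 were down. Assuming this is Cassandra 4.x (where the
repair_admin command is available), something like:

nodetool repair_admin list    # lists incremental repair sessions and their current state

If a session shows up as stuck or failed, nodetool help repair_admin
describes how to cancel it before re-running a full repair.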