To add on to what Bowen already wrote, if you cannot find any reason in the
logs at all, I would retry using different hardware.
In the recent past I have seen two cases where strange Cassandra problems were
actually caused by broken hardware (in both cases, a faulty memory module
caused the issue).
Is it bad to leave the replacement node up and running for hours even
after the cluster has forgotten that it is replacing the old node? I'll
have to set the logging to TRACE; DEBUG produced nothing. I did stop the
service, which produced errors on the other nodes in the datacenter
since they had open connections to it.
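
For the TRACE attempt, I'm assuming something like the following will
raise the streaming-related loggers at runtime without a restart (the
exact logger names are my guess for where the replacement activity gets
logged):

    # raise streaming logging to TRACE on the replacement node
    nodetool setlogginglevel org.apache.cassandra.streaming TRACE
    nodetool setlogginglevel org.apache.cassandra.dht.RangeStreamer TRACE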
In my experience, a failed bootstrap / node replacement always leaves
some traces in the logs. At the very minimum, there will be logs about
streaming sessions failing or aborting. I have never seen it fail or
stop silently without leaving any trace in the log. I can't think of
anything that would fail completely silently.
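
If it helps, something along these lines usually surfaces the
streaming-related entries; the log path below is just the default for
package installs, so adjust it for your setup:

    # pull out streaming / bootstrap / replace messages from the node's log
    grep -iE 'stream|bootstrap|replace' /var/log/cassandra/system.log | tail -n 100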