Hi,
while replacing a node in a cluster I saw this log:
2019-08-27 16:35:31,439 Gossiper.java:995 - InetAddress /10.15.53.27 is now DOWN
It caught my attention because that IP address doesn't exist in the cluster anymore, and hasn't for a long time.
After some reading I ran `nodetool gossipinfo`, and the old address still shows up there.
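Something along these lines should show whether the address is still in the gossip state (the grep context size is arbitrary; the address is the one from the log above):

    # dump the cluster's gossip state and look for the stale address
    nodetool gossipinfo | grep -A5 '/10.15.53.27'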
Based on what you've posted, I assume the instances are not visible in
`nodetool ring` or `nodetool status`, and the only reason you know they're
still in gossipinfo is you see them in the logs? If that's the case, then
yes, I would do `nodetool assassinate`.
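A minimal sketch, assuming the stale address is the 10.15.53.27 from your log (assassinate is a last resort; only run it once you're sure the node is gone from ring/status):

    # double-check the node is absent from the ring first
    nodetool status | grep '10.15.53.27' || echo 'not in status'
    # then purge its endpoint state from gossip
    nodetool assassinate 10.15.53.27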
On Wed, Aug 28, 2019 at 7:33 AM Vincent Rischmann wrote:
So you have deleted the partition. Do not delete the SSTables directly.
By default Cassandra keeps tombstones untouched for 10 days (gc_grace_seconds = 864000).
Once those 10 days have passed (which should be the case by now, since your message was on August 12), a compaction is needed to actually reclaim the space.
You could force a compaction to speed that up.
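For example (keyspace and table names here are placeholders; note that a major compaction with STCS can leave you with one very large SSTable):

    # force a compaction on the affected table so the expired
    # tombstones and the data they shadow are purged
    nodetool compact my_keyspace my_table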
Yep, they're not visible in either `nodetool ring` or `nodetool status`.
On Wed, Aug 28, 2019, at 17:08, Jeff Jirsa wrote:
> Based on what you've posted, I assume the instances are not visible in
> `nodetool ring` or `nodetool status`, and the only reason you know they're
> still in gossipinfo is you see them in the logs?
telnet from node1 -> node2 on ports 7001 (and 7000) works.
However, I can't rule out a JKS keystore/truststore issue. I have tried a
number of configurations and none of them has seemed to help (or emitted any
further error logging). We have a root and an intermediate CA cert, and a
private key + signed CSR.
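In case it helps with debugging: the chain inside the keystore can be inspected directly, and the handshake can be tested from a peer (file names and the password are placeholders):

    # list the private key entry and its certificate chain
    keytool -list -v -keystore keystore.jks -storepass changeit
    # attempt a TLS handshake against the storage port from another node
    openssl s_client -connect node2:7001 </dev/null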
For clarity, for anybody who comes to this thread in the archive: this
might be an issue with Ec2MultiRegionSnitch altogether; not sure. But if
I create a local 3-node cluster using ccm (Cassandra 3.11.4), I can drop
the keystore/truststore JKS files in, flip encryption on, and everything
works.
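Rough repro steps, if anyone wants to try the same thing (the certs directory is hypothetical; the conf paths are ccm's default layout):

    # create a local 3-node 3.11.4 cluster
    ccm create ssltest -v 3.11.4 -n 3
    # drop the same keystore/truststore into each node's conf dir
    for n in node1 node2 node3; do
      cp certs/keystore.jks certs/truststore.jks ~/.ccm/ssltest/$n/conf/
    done
    # enable server_encryption_options in each node's cassandra.yaml, then:
    ccm start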
I've seen something similar if there is a node still referring to that IP as a
seed node in cassandra.yaml. You might want to check that.
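A quick way to check on each node (the path assumes a package install; adjust for your layout):

    # look for the stale address in the seed list
    grep -n 'seeds' /etc/cassandra/cassandra.yaml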
From: Vincent Rischmann
Sent: Wednesday, August 28, 2019 10:10 AM
To: user@cassandra.apache.org
Subject: Re: gossipinfo cont