If I have messed up with the rack thing, I would like to get this node out
of the cluster so that the cluster is functioning again as quickly as
possible, then do some more research and try again. So I am looking for the
safest way to do that.
On Mon, Apr 3, 2023 at 7:27 AM Jeff Jirsa wrote:
> Just run nodetool rebuild on the new node. If you assassinate it now you
> violate consistency for your most recent writes.
Should I use assassinate or removenode, given that there is some data on
the node? Or will that data be found on the other nodes? Sorry for all the
questions but I really don't want to mess up.
On Mon, Apr 3, 2023 at 7:21 AM Carlos Diaz wrote:
> That's what nodetool assassinate will do.
>
> On Sun, Apr 2, 2023 at 10:19 PM David Tinker wrote: …
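For what it's worth, the two commands behave quite differently; a rough
sketch, with an invented host ID and IP (take the real ones from nodetool
status):

    nodetool status          # note the Host ID of the node to remove
    nodetool removenode 11111111-2222-3333-4444-555555555555
                             # for a node that is already down; re-replicates
                             # its ranges by streaming from surviving replicas
    nodetool assassinate 10.0.0.4
                             # removes the node immediately, with no streaming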
Just run nodetool rebuild on the new node. If you assassinate it now you
violate consistency for your most recent writes.
On Apr 2, 2023, at 10:22 PM, Carlos Diaz wrote:
> That's what nodetool assassinate will do.
>
> On Sun, Apr 2, 2023 at 10:19 PM David Tinker wrote:
>
>> Is it possible …
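A minimal sketch of that rebuild, assuming a single datacenter named dc1
(the name is a guess; substitute your own from nodetool status):

    # on the new node:
    nodetool rebuild -- dc1    # stream replicas of this node's ranges from dc1
    nodetool netstats          # watch streaming progress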
That's what nodetool assassinate will do.
On Sun, Apr 2, 2023 at 10:19 PM David Tinker wrote:
> Is it possible for me to remove the node from the cluster i.e. to undo
> this mess and get the cluster operating again?
>
> On Mon, Apr 3, 2023 at 7:13 AM Carlos Diaz wrote:
>
>> You can leave it in the seed list of the other nodes, just make sure it's
>> not included in this node's seed list. However, …
Is it possible for me to remove the node from the cluster i.e. to undo this
mess and get the cluster operating again?
On Mon, Apr 3, 2023 at 7:13 AM Carlos Diaz wrote:
> You can leave it in the seed list of the other nodes, just make sure it's
> not included in this node's seed list. However, if you do decide to fix
> the issue with the racks, first assassinate this node (nodetool assassinate
> <ip>) and update the rack name before you restart.
You can leave it in the seed list of the other nodes, just make sure it's
not included in this node's seed list. However, if you do decide to fix
the issue with the racks, first assassinate this node (nodetool assassinate
<ip>) and update the rack name before you restart.
On Sun, Apr 2, 2023 at 10:06 PM David Tinker wrote: …
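Roughly, that recovery path might look like this (the IP, dc and rack names
are made up, and this assumes the GossipingPropertyFileSnitch):

    # from one of the healthy nodes:
    nodetool assassinate 10.0.0.4

    # on the new node, edit /etc/cassandra/cassandra-rackdc.properties
    # before restarting:
    dc=dc1
    rack=rack1    # an existing rack rather than rack4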
I should add that the new node does have some data.
On Mon, Apr 3, 2023 at 7:04 AM David Tinker wrote:
> It is also in the seeds list for the other nodes. Should I remove it from
> those, restart them one at a time, then restart it?
>
> /etc/cassandra # grep -i bootstrap *
> doesn't show anything so I don't think I have auto_bootstrap false.
It is also in the seeds list for the other nodes. Should I remove it from
those, restart them one at a time, then restart it?
/etc/cassandra # grep -i bootstrap *
doesn't show anything so I don't think I have auto_bootstrap false.
Thanks very much for the help.
On Mon, Apr 3, 2023 at 7:01 AM Carlos Diaz wrote: …
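(That empty grep is expected: auto_bootstrap does not appear in the stock
cassandra.yaml at all, and when the setting is absent it defaults to true.
You could add the line

    auto_bootstrap: true

explicitly, but the default already behaves that way.)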
Just remove it from the seed list in the cassandra.yaml file and restart
the node. Make sure that auto_bootstrap is set to true first though.
On Sun, Apr 2, 2023 at 9:59 PM David Tinker wrote:
> So likely because I made it a seed node when I added it to the cluster it
> didn't do the bootstrap process. How can I recover this?
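The relevant pieces of cassandra.yaml would look something like this (the
addresses are made up; the point is that the node's own IP must not appear
in its own seeds entry):

    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.0.0.1,10.0.0.2"    # existing nodes only
    auto_bootstrap: true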
So likely because I made it a seed node when I added it to the cluster it
didn't do the bootstrap process. How can I recover this?
On Mon, Apr 3, 2023 at 6:41 AM David Tinker wrote:
> Yes replication factor is 3.
>
> I ran nodetool repair -pr on all the nodes (one at a time) and am still
> having issues getting data back from queries.
Yes replication factor is 3.
I ran nodetool repair -pr on all the nodes (one at a time) and am still
having issues getting data back from queries.
I did make the new node a seed node.
Re "rack4": I assumed that was just an indication as to the physical
location of the server for redundancy. This …
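One hedge on the repair form used here: -pr repairs only the token ranges a
node owns as primary, so it covers the full ring only when run on every node
in turn, e.g.:

    nodetool repair -pr    # repeat on each node, one at a time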
I'm assuming that your replication factor is 3. If that's the case, did
you intentionally put this node in rack 4? Typically, you want to add
nodes in multiples of your replication factor in order to keep the "racks"
balanced. In other words, this node should have been added to rack 1, 2 or
3.
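As an illustration of what an unbalanced rack layout looks like (the
addresses, loads and IDs below are invented; only the shape matters):

    nodetool status
    # Datacenter: dc1
    # --  Address   Load     Tokens  Owns  Host ID  Rack
    # UN  10.0.0.1  150 GiB  16      ...   ...      rack1
    # UN  10.0.0.2  150 GiB  16      ...   ...      rack2
    # UN  10.0.0.3  150 GiB  16      ...   ...      rack3
    # UN  10.0.0.4  2 GiB    16      ...   ...      rack4   <- the odd one out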
Looks like it joined with no data. Did you set auto_bootstrap to false? Or does
the node think it’s a seed?
You want to use “nodetool rebuild” to stream data to that host.
You can potentially end the production outage / incident by taking the host
offline, or making it less likely to be queried …
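A sketch of that mitigation, run on the problem node (both commands exist in
nodetool; whether they are appropriate depends on your traffic):

    nodetool disablebinary   # stop serving client (CQL) connections
    nodetool disablegossip   # appear down to the other nodes, so coordinators
                             # stop picking this node as a read replica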
Hi All
I recently added a node to my 3 node Cassandra 4.0.5 cluster and now many
reads are not returning rows! What do I need to do to fix this? There
weren't any errors in the logs or other problems that I could see. I
expected the cluster to balance itself but this hasn't happened (yet?). The
no…
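One way to check whether the missing rows are a replica-placement problem,
from cqlsh (keyspace, table and key are placeholders):

    cqlsh> CONSISTENCY ALL
    cqlsh> SELECT * FROM my_ks.my_table WHERE id = 123;
    -- if rows come back at ALL but not at ONE/LOCAL_ONE, the new node is
    -- serving reads for ranges it never received data for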