It is not necessary, but it is recommended to run repair before adding nodes.
That's because deleted data may be resurrected if the time between two
repair runs is longer than gc_grace_seconds, and adding nodes can
take a lot of time.
Running nodetool cleanup is also not required, but recommended.
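For reference, a minimal version of that sequence (just a sketch; the exact order and timing are the operator's call, and -pr limits each run to the node's primary token ranges):

On each existing node, one at a time, before the new node joins:
# nodetool repair -pr

After the new node has finished joining, on each pre-existing node, one at a time:
# nodetool cleanup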
The Datastax doc says to run cleanup one node at a time after bootstrapping
has completed. The myadventuresincoding post says to run a repair on each
node first. Is it necessary to run the repairs first? Thanks.
On Tue, Apr 4, 2023 at 1:11 PM Bowen Song via user <
user@cassandra.apache.org> wrote:
Thanks. I also found this useful:
https://myadventuresincoding.wordpress.com/2020/08/03/cassandra-how-to-add-a-new-node-to-an-existing-cluster/
The node seems to be joining fine and is streaming in lots of data. Cluster
is still operating normally.
On Tue, Apr 4, 2023 at 1:11 PM Bowen Song via user <user@cassandra.apache.org> wrote:
Perhaps have a read here?
https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/operations/opsAddNodeToCluster.html
On 04/04/2023 06:41, David Tinker wrote:
Ok. Have to psych myself up to the add node task a bit. Didn't go well
the first time round!
Tasks
- Make sure the new node is not in seeds list!
Ok. Have to psych myself up to the add node task a bit. Didn't go well the
first time round!
Tasks
- Make sure the new node is not in seeds list!
- Check cluster name, listen address, rpc address
- Give it its own rack in cassandra-rackdc.properties
- Delete cassandra-topology.properties if it exists (see the settings sketch below)
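A minimal sketch of the settings that checklist refers to, on the new node (the cluster name and IP addresses below are placeholders, and GossipingPropertyFileSnitch is assumed; dc1/rack4 are taken from elsewhere in the thread):

cassandra.yaml:
cluster_name: 'MyCluster'
listen_address: 10.0.0.14
rpc_address: 10.0.0.14
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.11,10.0.0.12"    # existing nodes only, never the joining node

cassandra-rackdc.properties:
dc=dc1
rack=rack4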
Because executing “removenode” streamed extra data from live nodes to the “gaining” replica.
Oversimplified (if you had one token per node): If you start with A B C, then add D, D should bootstrap a range from each of A, B and C, but at the end, some of the data that was A B C becomes B C D. When you remove D, that range has to be streamed back to the replicas that take over for it, on top of the data they already hold, which is why the surviving nodes end up carrying more.
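If the extra load is mostly copies of data the remaining replicas already held, routine compaction should gradually merge it back down over time; that is an expectation, not something measured in this thread. Compaction activity can be watched with:
# nodetool compactionstats -H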
Looks like the remove has sorted things out. Thanks.
One thing I am wondering about is why the nodes are carrying a lot more
data? The loads were about 2.7T before, now 3.4T.
# nodetool status
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address
That's correct. nodetool removenode is strongly preferred when your node
is already down. If the node is still functional, use nodetool
decommission on the node instead.
On 03/04/2023 16:32, Jeff Jirsa wrote:
FWIW, `nodetool decommission` is strongly preferred. `nodetool
removenode` is designed to be run when a host is offline.
FWIW, `nodetool decommission` is strongly preferred. `nodetool removenode`
is designed to be run when a host is offline. Only decommission is
guaranteed to maintain consistency / correctness, and removenode probably
streams a lot more data around than decommission.
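As a quick reference for the two commands (the host ID below is a placeholder; take the real one from nodetool status):

On the node that is leaving, while it is still up:
# nodetool decommission

If the node is already dead, from any other live node:
# nodetool removenode <host-id>
# nodetool removenode status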
On Mon, Apr 3, 2023 at 6:47 AM
>
> I just asked that question on this list and the answer was that adding the
> new nodes as rack4, rack5 and rack6 is fine. They are all on
> separate physical racks. Is that ok?
>
Yes, Jeff is right, all 6 nodes each on their own rack will work just fine.
Should I do a full repair first or is
Thanks. Yes my big screwup here was to make the new node a seed node so it
didn't get any data. I am going to add 3 more nodes, one at a time when the
cluster has finished with the remove and everything seems stable. Should I
do a full repair first or is the remove node operation basically doing th
The time it takes to stream data off of a node varies by network, cloud
region, and other factors. So it's not unheard of for it to take a bit to
finish.
Just thought I'd mention that auto_bootstrap is true by default. So if
you're not setting it, the node should bootstrap as long as it's not a seed node.
Thanks. Hmm, the remove has been busy for hours but seems to be progressing.
I have been running this on the nodes to monitor progress:
# nodetool netstats | grep Already
Receiving 92 files, 843934103369 bytes total. Already received 82
files (89.13%), 590204687299 bytes total (69.93%)
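A slightly wider view of the same thing, as a sketch (assumes a POSIX shell on the node; the 60-second interval is arbitrary):
# while true; do date; nodetool netstats | grep -E 'Receiving|Already received'; sleep 60; done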
Using nodetool removenode is strongly preferred in most circumstances; only
resort to assassinate if you do not care about data consistency or
you know there won't be any consistency issue (e.g. no new writes and
nodetool cleanup has not been run).
Since the size of data on the new node is small,
If I have messed up with the rack thing I would like to get this node out
of the cluster so the cluster is functioning as quickly as possible. Then
do some more research and try again. So I am looking for the safest way to
do that.
On Mon, Apr 3, 2023 at 7:27 AM Jeff Jirsa wrote:
> Just run nodetool rebuild on the new node
Should I use assassinate or removenode? Given that there is some data on
the node. Or will that be found on the other nodes? Sorry for all the
questions but I really don't want to mess up.
On Mon, Apr 3, 2023 at 7:21 AM Carlos Diaz wrote:
> That's what nodetool assassinate will do.
>
> On Sun, Ap
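For completeness, the two options being weighed (the host ID and IP are placeholders):

Preferred, run from any live node, so the cluster re-replicates the removed node's ranges:
# nodetool removenode <host-id-of-the-new-node>

Last resort, which simply forgets the endpoint without re-replicating anything:
# nodetool assassinate <ip-of-the-new-node>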
Just run nodetool rebuild on the new node. If you assassinate it now you violate consistency for your most recent writes.
On Apr 2, 2023, at 10:22 PM, Carlos Diaz wrote:
That's what nodetool assassinate will do.
On Sun, Apr 2, 2023 at 10:19 PM David Tinker wrote:
Is it possible for me to remove the node from the cluster i.e. to undo this mess and get the cluster operating again?
That's what nodetool assassinate will do.
On Sun, Apr 2, 2023 at 10:19 PM David Tinker wrote:
> Is it possible for me to remove the node from the cluster i.e. to undo
> this mess and get the cluster operating again?
>
> On Mon, Apr 3, 2023 at 7:13 AM Carlos Diaz wrote:
>
>> You can leave it in the seed list of the other nodes, just make sure it's not included in this node's seed list.
Is it possible for me to remove the node from the cluster i.e. to undo this
mess and get the cluster operating again?
On Mon, Apr 3, 2023 at 7:13 AM Carlos Diaz wrote:
> You can leave it in the seed list of the other nodes, just make sure it's
> not included in this node's seed list. However, if you do decide to fix the issue with the racks first, assassinate this node (nodetool assassinate), and update the rack name before you restart.
You can leave it in the seed list of the other nodes, just make sure it's
not included in this node's seed list. However, if you do decide to fix
the issue with the racks first, assassinate this node (nodetool assassinate), and update the rack name before you restart.
On Sun, Apr 2, 2023 at 10:06
I should add that the new node does have some data.
On Mon, Apr 3, 2023 at 7:04 AM David Tinker wrote:
> It is also in the seeds list for the other nodes. Should I remove it from
> those, restart them one at a time, then restart it?
>
> /etc/cassandra # grep -i bootstrap *
> doesn't show anything so I don't think I have auto_bootstrap false.
It is also in the seeds list for the other nodes. Should I remove it from
those, restart them one at a time, then restart it?
/etc/cassandra # grep -i bootstrap *
doesn't show anything so I don't think I have auto_bootstrap false.
Thanks very much for the help.
On Mon, Apr 3, 2023 at 7:01 AM Carlos Diaz wrote:
Just remove it from the seed list in the cassandra.yaml file and restart
the node. Make sure that auto_bootstrap is set to true first though.
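Roughly, the checks being described, assuming the /etc/cassandra layout seen elsewhere in the thread and a systemd service named cassandra:
# grep -E 'seeds|auto_bootstrap' /etc/cassandra/cassandra.yaml
# systemctl restart cassandra
Note that a node which has already joined the ring will not bootstrap again just from a restart, which is why nodetool rebuild is suggested further down the thread.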
On Sun, Apr 2, 2023 at 9:59 PM David Tinker wrote:
> So likely because I made it a seed node when I added it to the cluster it
> didn't do the bootstrap process. How can I recover this?
So likely because I made it a seed node when I added it to the cluster it
didn't do the bootstrap process. How can I recover this?
On Mon, Apr 3, 2023 at 6:41 AM David Tinker wrote:
> Yes replication factor is 3.
>
> I ran nodetool repair -pr on all the nodes (one at a time) and am still
> having issues getting data back from queries.
Yes replication factor is 3.
I ran nodetool repair -pr on all the nodes (one at a time) and am still
having issues getting data back from queries.
I did make the new node a seed node.
Re "rack4": I assumed that was just an indication as to the physical
location of the server for redundancy. This
I'm assuming that your replication factor is 3. If that's the case, did
you intentionally put this node in rack 4? Typically, you want to add
nodes in multiples of your replication factor in order to keep the "racks"
balanced. In other words, this node should have been added to rack 1, 2 or
3.
Looks like it joined with no data. Did you set auto_bootstrap to false? Or does
the node think it’s a seed?
You want to use “nodetool rebuild” to stream data to that host.
You can potentially end the production outage / incident by taking the host
offline, or making it less likely to be queried.
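As a sketch of that rebuild step, run on the new node itself (dc1 comes from the nodetool status output elsewhere in the thread):
# nodetool rebuild -- dc1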
Hi All
I recently added a node to my 3 node Cassandra 4.0.5 cluster and now many
reads are not returning rows! What do I need to do to fix this? There
weren't any errors in the logs or other problems that I could see. I
expected the cluster to balance itself but this hasn't happened (yet?). The
no