In AWS we do OS upgrades by standing up an equivalent set of nodes; then,
in a rolling fashion, we move the EBS mounts, selectively sync the
necessary Cassandra settings, and start up the new node.
The downtime is fairly minimal since the EBS detach/attach is pretty quick.
The original question was about OS (and/or JDK) upgrades, and for those: "do it
in QA first", bounce non-replicas, and let them come up before proceeding.
If you're doing Cassandra itself, it's a REALLY REALLY REALLY good idea to
try the upgrade on a backup of your production cluster first.
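The "bounce non-replicas" idea can be sketched in code. Assuming a simplified ring of single-token nodes where each token range is replicated on RF consecutive nodes (a real cluster with vnodes would need `nodetool ring` or driver metadata instead), two nodes share a replica set only if they sit within RF - 1 ring positions of each other, so nodes farther apart can be bounced in the same batch:

```python
# Hypothetical sketch: group ring positions into batches that can be
# restarted together without taking down two replicas of any range.
# Assumes RF consecutive replicas on a single-token-per-node ring.

def ring_distance(a, b, num_nodes):
    """Shortest distance between two positions on the ring."""
    return min((a - b) % num_nodes, (b - a) % num_nodes)

def restart_batches(num_nodes, rf):
    """Greedily batch nodes so no two nodes in a batch are within
    rf - 1 ring positions of each other (i.e. share no replica set)."""
    remaining = list(range(num_nodes))
    batches = []
    while remaining:
        batch = []
        for node in remaining:
            if all(ring_distance(node, other, num_nodes) >= rf
                   for other in batch):
                batch.append(node)
        remaining = [n for n in remaining if n not in batch]
        batches.append(batch)
    return batches
```

For example, `restart_batches(9, 3)` yields `[[0, 3, 6], [1, 4, 7], [2, 5, 8]]`: each batch's replica sets are disjoint, so every range keeps all but one replica up while a batch bounces.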
Another thing I'll add, since I don't think any of the other responders brought
it up.
This all assumes that you already believe that the update is safe. If you have
any kind of test cluster, I'd evaluate the change there first.
While I haven't hit it with C* specifically, I have seen it bite other
databases.
That is some good info. To add just a little more: knowing which security
updates are pending for your nodes helps you decide what to do afterwards.
Read the security update notes from your vendor.
Java or Cassandra update? Of course the service needs to be restarted: do a
rolling upgrade, restarting the nodes one at a time.
There is no need to shut down the application, because you should be able to
carry out the operating system upgrade without an outage to the database,
particularly since you have a lot of nodes in your cluster.
Provided your cluster has sufficient capacity, you might even be able to
upgrade several nodes at a time.
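A minimal, hypothetical sketch of that approach for a multi-DC cluster: redirect traffic away from one DC, patch its nodes one at a time, restore traffic, and move on. The DC and node names below are made up, and each step string only names the real commands (e.g. `nodetool drain`) that an operator or automation would actually run:

```python
# Hypothetical sketch of a DC-at-a-time OS patch plan. Produces an
# ordered step list; in practice each step maps to real commands such
# as `nodetool drain`, the distro's package update, a reboot, and
# `nodetool status` to confirm the node is back Up/Normal (UN).

def patch_plan(datacenters):
    """Return an ordered list of patch steps, one DC at a time.
    `datacenters` maps a DC name to its list of node names."""
    steps = []
    for dc, nodes in datacenters.items():
        steps.append(f"redirect client traffic away from {dc}")
        for node in nodes:
            # One node at a time so the DC keeps full replica coverage.
            steps.append(f"{node}: nodetool drain")
            steps.append(f"{node}: stop cassandra, apply OS patches, reboot")
            steps.append(f"{node}: start cassandra, wait for UN in nodetool status")
        steps.append(f"restore client traffic to {dc}")
    return steps
```

Driving the loop from a plan like this (rather than ad hoc per-node scripts) makes it easy to pause, resume, or review the rollout across 1000 nodes.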
Hi Team,
What is the best way to patch the OS of a 1000-node, multi-DC Cassandra
cluster where we cannot suspend application traffic (we can redirect traffic
to one DC)?
Please suggest any best practices around this.
--
Cheers,
Anshu V