Hi,
it's been years since I last upgraded a non-cephadm cluster, but what
I remember for MDS is: as soon as you have an upgraded MDS, it will
take over as active. So your last Reef MDS will immediately become a
standby. At least that's how it was up to Nautilus; after that we
migrated all clusters to cephadm, so it might work differently now.
But I wanted to mention it anyway, just so you're aware of that
possibility.
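If you want to keep an eye on that during the upgrade, something like
this (just a rough sketch, substitute your filesystem name) shows
which MDS is currently active and which version each daemon reports:

   ceph fs status <fs_name>
   ceph versions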
Regards,
Eugen
Quoting Christopher Durham <caduceu...@aol.com>:
Hi,
The non-cephadm upgrade procedure from Reef to Squid (for rpm-based
clusters) here:
https://docs.ceph.com/en/latest/releases/squid/#upgrading-non-cephadm-clusters
suggests that your monitors/mds/radosgw/osds are all on separate
servers. While perhaps they should be, that is not possible for me at
the moment.
If, for example, I have a mon and an mds on a single server, I really
don't want to update just the mon, as the mds and mon may use common
libraries.
As such, I am thinking that to do the upgrade in this scenario I
would do the following (a command sketch follows this list):
set noout
set max_mds to 1 for my filesystem, and note the remaining active mds
disable standby_replay
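Roughly, assuming a filesystem named 'cephfs' (adjust to your
filesystem name), that would be something like:

   ceph osd set noout
   ceph fs set cephfs max_mds 1
   ceph fs set cephfs allow_standby_replay false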
Then, for each of the servers running an mds (but not the remaining
active mds):
1. Stop all ceph daemons (mon, mds, radosgw, osd), do the update,
then reboot and/or restart the mon and mds (see the per-host sketch
after this list)
2. Proceed to all the other mds servers except the last one that is
still on Reef
3. Finish all other remaining mon updates as in #1
4. When the last server with a Reef mds is left, stop it, let a
standby Squid mds take over, then do the update and either restart
all the daemons or reboot
5. If there are other osd-only nodes, update them and restart the osds
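On each host, I am picturing something like this (only a sketch; the
exact package spec, and whether you reboot or just restart the
daemons, will vary):

   systemctl stop ceph.target
   dnf update 'ceph*'
   reboot   # or: systemctl start ceph.target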
Afterwards, re-enable standby replay, reset max_mds to what it was
before, unset noout, and require squid for osds.
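For that wrap-up, again assuming the filesystem is named 'cephfs' and
max_mds was 2 before (both just placeholders), roughly:

   ceph fs set cephfs allow_standby_replay true
   ceph fs set cephfs max_mds 2
   ceph osd unset noout
   ceph osd require-osd-release squid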
Does this make sense? Am I forgetting something? I know setting
max_mds to 1 is important, but I want to be sure I have not forgotten
anything.
Thanks.
-Chris
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io