Everything is correct except for shutting down the VMs.  There is no need for
downtime during this upgrade.  As long as your cluster comes back to HEALTH_OK
(or shows only that the noout flag is set and nothing else), you are free to
move on to the next node.
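
Something like this between nodes would do the trick (a rough sketch; the exact
"HEALTH_WARN noout flag(s) set" wording of "ceph health" varies by release, so
adjust the string to whatever yours prints):

    # Wait until the cluster is healthy again, or the only warning left is
    # the noout flag that was set on purpose, before touching the next node.
    while true; do
        status=$(ceph health)
        if [ "$status" = "HEALTH_OK" ] || [ "$status" = "HEALTH_WARN noout flag(s) set" ]; then
            break
        fi
        echo "still settling: $status"
        sleep 10
    done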

________________________________

David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________


________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Mike 
Jacobacci [mi...@flowjo.com]
Sent: Tuesday, November 29, 2016 11:41 AM
To: ceph-users
Subject: [ceph-users] Ceph Maintenance

Hello,

I would like to install OS updates on the Ceph cluster and activate a second
10 Gb port on the OSD nodes, so I wanted to verify the correct steps to perform
maintenance on the cluster.  We are only using RBD to back our XenServer VMs
at this point, and our cluster consists of 3 OSD nodes, 3 mon nodes and 1 admin
node...  So would these be the correct steps (a rough shell sketch follows the list):

1. Shut down VMs?
2. Run "ceph osd set noout" on the admin node
3. Install updates on each monitor node and reboot them one at a time.
4. Install updates on the OSD nodes and activate the second 10 Gb port, rebooting
one OSD node at a time.
5. Once all nodes are back up, run "ceph osd unset noout"
6. Bring VMs back online
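
A rough shell sketch of steps 2-5 (the host names mon1-mon3 / osd1-osd3 and the
yum commands are only placeholders for your own hosts and package manager):

    ceph osd set noout                 # step 2, from the admin node
    for host in mon1 mon2 mon3 osd1 osd2 osd3; do
        ssh "$host" 'sudo yum update -y && sudo reboot'
        # Wait until "ceph health" is clean again (or shows only the noout
        # warning) before moving on to the next host.
        read -p "Press Enter once $host is back and the cluster has settled... "
    done
    ceph osd unset noout               # step 5, once every node is back up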

Does this sound correct?


Cheers,
Mike

