[ceph-users] Re: Patching Ceph cluster

2024-06-14 Thread Sake Ceph
Edit: someone made some changes which broke some tasks when selecting the cephadm host to use. Just keep in mind it's an example. > On 14-06-2024 10:28 CEST, Sake Ceph wrote: > I needed to do some cleaning before I could share this :) > Maybe you or someone else can use it. > Kind regards…

[ceph-users] Re: Patching Ceph cluster

2024-06-14 Thread Sake Ceph
…doing this. > -- Michael > From: Sake Ceph > Sent: Thursday, June 13, 2024 4:05 PM > To: ceph-users@ceph.io > Subject: [ceph-users] Re: Patching Ceph cluster > This is an external email. Please take care when clicking links or opening attachments. When in doubt,…

[ceph-users] Re: Patching Ceph cluster

2024-06-13 Thread Michael Worsham
I'd love to see what your playbook(s) look like for doing this. -- Michael From: Sake Ceph Sent: Thursday, June 13, 2024 4:05 PM To: ceph-users@ceph.io Subject: [ceph-users] Re: Patching Ceph cluster This is an external email. Please take care when clicking links or opening attachments…

[ceph-users] Re: Patching Ceph cluster

2024-06-13 Thread Sake Ceph
Yeah, we fully automated this with Ansible. In short we do the following:
1. Check if the cluster is healthy before continuing (via the REST API); only HEALTH_OK is good
2. Disable scrub and deep-scrub
3. Update all applications on all the hosts in the cluster
4. For every host, one by one, do the following: …
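A minimal CLI sketch of steps 1 and 2 above; this is an assumed plain-shell equivalent, not the poster's actual Ansible tasks (which query the REST API):

  # 1. Gate on overall health; anything other than HEALTH_OK aborts
  [ "$(ceph health)" = "HEALTH_OK" ] || exit 1
  # 2. Pause scrubbing for the maintenance window
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # ... steps 3 and 4: patch packages, then handle hosts one by one ...
  # afterwards, re-enable scrubbing
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub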

[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Daniel Brown
I have two Ansible roles, one for enter and one for exit. There are likely better ways to do this, and I'll not be surprised if someone here lets me know. They use orch commands via the cephadm shell. I'm using Ansible for other configuration management in my environment as well, including s…
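The roles themselves aren't shown, but a minimal sketch of the orch commands such an enter/exit pair can wrap, run through the cephadm shell as described (host01 is a placeholder):

  # enter role: put the host into maintenance before patching
  cephadm shell -- ceph orch host maintenance enter host01
  # ... patch and reboot host01 ...
  # exit role: bring the host back once it is up again
  cephadm shell -- ceph orch host maintenance exit host01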

[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Eugen Block
There’s also a maintenance mode available for the orchestrator: https://docs.ceph.com/en/reef/cephadm/host-management/#maintenance-mode There’s some more information about that in the dev section: https://docs.ceph.com/en/reef/dev/cephadm/host-maintenance/ Quoting Anthony D'Atri: That's just…
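As a quick sanity check (a sketch, assuming the Reef-era CLI), a host's maintenance state is visible in the orchestrator's host listing:

  ceph orch host ls
  # the STATUS column reads "maintenance" for hosts currently in that mode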

[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Anthony D'Atri
That's just setting noout, norebalance, etc. > On Jun 12, 2024, at 11:28, Michael Worsham wrote: > Interesting. How do you set this "maintenance mode"? If you have a series of documented steps that you have to do and could provide as an example, that would be beneficial for my efforts…
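Concretely, that means setting the flags before taking a node down and clearing them once it is back and recovered; a minimal sketch:

  ceph osd set noout        # don't mark stopped OSDs out and trigger backfill
  ceph osd set norebalance  # don't shuffle data while the node is down
  # ... patch and reboot the node ...
  ceph osd unset norebalance
  ceph osd unset noout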

[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Michael Worsham
Interesting. How do you set this "maintenance mode"? If you have a series of documented steps that you have to do and could provide as an example, that would be beneficial for my efforts. We are in the process of standing up both a dev-test environment consisting of 3 Ceph servers (strictly for…

[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Daniel Brown
There’s also a maintenance mode that you can set for each server as you’re doing updates, so that the cluster doesn’t try to move data off the affected OSDs while the server being updated is offline or down. I’ve worked some on automating this with Ansible, but have found my process (and/or my…

[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Anthony D'Atri
Do you mean patching the OS? If so, easy -- one node at a time, then after it comes back up, wait until all PGs are active+clean and the mon quorum is complete before proceeding. > On Jun 12, 2024, at 07:56, Michael Worsham wrote: > What is the proper way to patch a Ceph cluster and reb…
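A hedged sketch of that per-node gate as a shell loop; it assumes GNU grep, jq, the human-readable `ceph pg stat` output ("N pgs: N active+clean; ..."), and a 3-monitor cluster:

  MON_COUNT=3   # number of monitors in the cluster, an assumption here
  # proceed only when every PG is active+clean and all mons are in quorum
  until ceph pg stat | grep -q '\([0-9]\+\) pgs: \1 active+clean' && \
        [ "$(ceph quorum_status | jq '.quorum | length')" -eq "$MON_COUNT" ]; do
      sleep 30
  done

Parsing the JSON form (`ceph pg stat -f json`) would be more robust than the grep backreference, but the above keeps the sketch short.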