Edit: someone made some changes that broke some tasks when selecting the
cephadm host to use. Just keep in mind it's an example.
> On 14-06-2024 at 10:28 CEST, Sake Ceph wrote:
>
>
> I needed to do some cleaning before I could share this :)
> Maybe you or someone else can use it.
>
> Kind regards
>
> I'd love to see what your playbook(s) looks like for doing this.
>
> -- Michael
>
> From: Sake Ceph
> Sent: Thursday, June 13, 2024 4:05 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Patching Ceph cluster
>
> This is an external email. Please take care when clicking links or opening
> attachments. When in doubt,
I'd love to see what your playbook(s) looks like for doing this.
-- Michael
From: Sake Ceph
Sent: Thursday, June 13, 2024 4:05 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Patching Ceph cluster
Yeah, we fully automated this with Ansible. In short we do the following:
1. Check that the cluster is healthy before continuing (via the REST API);
only HEALTH_OK is good.
2. Disable scrub and deep-scrub.
3. Update all applications on all the hosts in the cluster.
4. For every host, one by one, do the following:
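A minimal Ansible sketch of steps 1, 2, and 4 might look like the following. The group names (`cephadm_admin`, `ceph_hosts`) are assumptions, and it uses the `ceph health` CLI rather than the REST API mentioned above:

```yaml
# Sketch only; group names and the CLI-instead-of-REST health check
# are assumptions, not taken from the poster's playbook.
- hosts: cephadm_admin        # a host holding a cephadm admin keyring
  gather_facts: false
  tasks:
    - name: Refuse to continue unless the cluster reports HEALTH_OK (step 1)
      ansible.builtin.command: cephadm shell -- ceph health
      register: health
      changed_when: false
      failed_when: "'HEALTH_OK' not in health.stdout"

    - name: Disable scrub and deep-scrub for the duration (step 2)
      ansible.builtin.command: "cephadm shell -- ceph osd set {{ item }}"
      loop:
        - noscrub
        - nodeep-scrub

- hosts: ceph_hosts
  serial: 1                   # step 4: strictly one host at a time
  tasks:
    - name: Per-host patching steps would go here
      ansible.builtin.debug:
        msg: "patch and reboot {{ inventory_hostname }}"
```

Remember to `ceph osd unset noscrub` and `ceph osd unset nodeep-scrub` once every host is done.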
I have two Ansible roles, one for enter, one for exit. There are likely better
ways to do this, and I won't be surprised if someone here lets me know.
They're using orch commands via the cephadm shell. I'm using Ansible for other
configuration management in my environment as well.
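A sketch of what such enter/exit role tasks might look like, using the orchestrator's maintenance mode via the cephadm shell; the file paths, `cephadm_admin_host` variable, and delegation pattern are illustrative assumptions, not the poster's actual roles:

```yaml
# roles/ceph_maintenance_enter/tasks/main.yml (illustrative)
- name: Put the target host into orchestrator maintenance mode
  ansible.builtin.command: >-
    cephadm shell -- ceph orch host maintenance enter {{ inventory_hostname }}
  delegate_to: "{{ cephadm_admin_host }}"

# roles/ceph_maintenance_exit/tasks/main.yml (illustrative)
- name: Bring the target host back out of maintenance mode
  ansible.builtin.command: >-
    cephadm shell -- ceph orch host maintenance exit {{ inventory_hostname }}
  delegate_to: "{{ cephadm_admin_host }}"
```

Delegating to the admin host lets the play iterate over cluster hosts while the orch commands run where the keyring lives.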
There’s also a maintenance mode available for the orchestrator:
https://docs.ceph.com/en/reef/cephadm/host-management/#maintenance-mode
There’s some more information about that in the dev section:
https://docs.ceph.com/en/reef/dev/cephadm/host-maintenance/
Quoting Anthony D'Atri:
That's just setting noout, norebalance, etc.
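For comparison, setting those flags by hand could be wrapped in a task like this sketch (the `cephadm_admin_host` variable is an assumption; unset the flags with `ceph osd unset` afterwards):

```yaml
- name: Set cluster flags before taking a node down
  ansible.builtin.command: "cephadm shell -- ceph osd set {{ item }}"
  delegate_to: "{{ cephadm_admin_host }}"   # assumed admin-host variable
  loop:
    - noout          # don't mark down OSDs out and trigger backfill
    - norebalance    # don't move data around while the node is offline
```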
> On Jun 12, 2024, at 11:28, Michael Worsham
> wrote:
>
> Interesting. How do you set this "maintenance mode"? If you have a series of
> documented steps that you have to do and could provide as an example, that
> would be beneficial for my efforts.
Interesting. How do you set this "maintenance mode"? If you have a series of
documented steps that you have to do and could provide as an example, that
would be beneficial for my efforts.
We are in the process of standing up both a dev-test environment consisting of
3 Ceph servers (strictly for
There’s also a maintenance mode that you can set for each server as you’re
doing updates, so that the cluster doesn’t try to move data from affected
OSDs while the server being updated is offline or down. I’ve worked some on
automating this with Ansible.
Do you mean patching the OS?
If so, easy -- one node at a time, then after it comes back up, wait until all
PGs are active+clean and the mon quorum is complete before proceeding.
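That wait can be automated. A sketch, assuming an admin host with `jq` available in its shell; note that with flags like noout set, `ceph health` reports HEALTH_WARN, so this checks the PG states directly:

```yaml
- name: Wait until every PG is active+clean before moving on
  ansible.builtin.shell: >-
    cephadm shell -- ceph status --format json |
    jq -e '.pgmap.pgs_by_state | length == 1
           and .[0].state_name == "active+clean"'
  register: pgs
  changed_when: false
  until: pgs.rc == 0
  retries: 60
  delay: 30

- name: Fail unless the monitor quorum is complete
  ansible.builtin.command: cephadm shell -- ceph quorum_status --format json
  register: quorum
  changed_when: false
  failed_when: >-
    (quorum.stdout | from_json).quorum_names | length
    != (quorum.stdout | from_json).monmap.mons | length
```

The quorum check simply compares the members currently in quorum against the full monitor map.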
> On Jun 12, 2024, at 07:56, Michael Worsham
> wrote:
>
> What is the proper way to patch a Ceph cluster and reb