I replaced another disk, and this time everything worked as expected,
following this procedure (see the sketch after the list):
1) Drain and destroy the OSD:
ceph orch osd rm <osd_id> --replace
2) Replace the disk.
3) Zap the new disk:
ceph orch device zap <host> /dev/sdX --force
4) Manually create the new OSD:
ceph orch daemon add osd <host>:<device>
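
Put together, that looks roughly like this (OSD id 12, host ceph01 and device
/dev/sdk are only example values, not taken from the thread):

ceph orch osd rm 12 --replace                 # drain osd.12, then mark it "destroyed" so its ID stays reserved
ceph orch osd rm status                       # wait here until the drain/removal has finished
# ... physically swap the disk ...
ceph orch device zap ceph01 /dev/sdk --force  # wipe the replacement disk
ceph orch daemon add osd ceph01:/dev/sdk      # recreate the OSD; it should come back as osd.12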
Ceph uses the next free ID available, so IDs definitely will be reused
if you free them up at some point. I'm not sure why
'--all-available-devices' would suddenly choose a different ID than
the OSD had when you marked it as "destroyed". But I also don't use
that '--all-available-devices' flag; instead I do something like:
ceph-volume lvm list ...
ceph-volume lvm activate --bluestore <osd-id> <osd-fsid>
...
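
A quick way to sanity-check the ID bookkeeping around that step (these are
just the standard status commands, not something shown in the original mail):

ceph osd tree | grep destroyed   # a destroyed OSD keeps its ID reserved until something reuses it
ceph-volume lvm list             # read the "osd id" and "osd fsid" of the freshly prepared volume
ceph osd tree                    # after activation the OSD should be "up" again under its old ID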
up to now that seems to work fine
cheers, toBias
From: "Nicola Mori"
To: "Janne Johansson"
Cc: "ceph-users"
Sent: Wednesday, 11 December, 2024 11:44:57
Subject: [ceph-users] Re: Correct w
Thanks for your insight. So if I remove an OSD without --replace, its ID
won't be reused when I e.g. add a new host with new disks? Even if I
completely remove it from the cluster? I'm asking because I maintain a
failure log per OSD and I'd like to avoid an OSD ID previously used in host
A migrating to a different host.
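
For that kind of per-OSD bookkeeping, one way to record which host a given
ID currently sits on (the id 12 below is just an example):

ceph osd find 12                       # prints the CRUSH location / host of osd.12
ceph osd metadata 12 | grep hostname   # the same host name from the OSD metadata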
>
> Dear Ceph users,
> I'm struggling to understand what is the correct way to remove a
> working disk and replace it (e.g. for a disk upgrade) while keeping the
> same OSD ID. I did this several times following this procedure:
>
> https://docs.ceph.com/en/reef/cephadm/services/osd/#replacing-an-osd
>
> eve
There may or may not be good guides for reaching this goal, but as a
long time ceph user I can only say that you should no