The only obvious difference I found between 19.2.2 and 19.2.3 was this commit:

https://github.com/ceph/ceph/commit/32fae5ca4942f80a9604c6aa123442449594284e

cephadm: check "ceph_device_lvm" field instead of "ceph_device" during zap

It's the only line that differs between those two versions [0],[1]. I've created a tracker for this [2].

[0] https://github.com/ceph/ceph/blob/v19.2.2/src/cephadm/cephadm.py#L4273
[1] https://github.com/ceph/ceph/blob/v19.2.3/src/cephadm/cephadm.py#L4273
[2] https://tracker.ceph.com/issues/72513
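
For context, the zap path in cephadm roughly works like this (a simplified sketch from my reading of cephadm.py, not the exact code): it parses the "ceph-volume inventory --format json" output and only zaps devices whose flag field is truthy. If the inventory never reports "ceph_device_lvm", the loop selects nothing and the VGs/LVs survive, which would match what Eugen is seeing.

    # Simplified sketch of the zap-osds selection logic (assumption, not the
    # literal cephadm.py code):
    import json

    def select_devices_to_zap(inventory_json: str) -> list:
        devices = json.loads(inventory_json)
        to_zap = []
        for dev in devices:
            # 19.2.2 checked dev.get('ceph_device'); 19.2.3 checks
            # dev.get('ceph_device_lvm'). If ceph-volume's inventory does not
            # set that field, no device is ever selected for zapping.
            if dev.get('ceph_device_lvm'):
                to_zap.append(dev['path'])
        return to_zap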

Quoting Eugen Block <ebl...@nde.ag>:

Hi *,

I have a VM which I frequently use to test cephadm bootstrap operations as well as upgrades; it's a single node with a few devices attached. After successfully testing the upgrade to 19.2.3, I wanted to test the bootstrap again, but removing the cluster with the --zap-osds flag doesn't actually remove the VGs/LVs. This used to work just fine up to 19.2.2.
This is the command I used:

ceph:~ # cephadm --image myregistry/ceph_v19.2.3 rm-cluster --fsid {FSID} --zap-osds --force

There's not much to see in the cephadm.log after lines like this one:

2025-08-01 09:04:02,103 7f3bf5206b80 DEBUG systemctl: Removed "/etc/systemd/system/ceph-6b501d0a-6ea3-11f0-a251-fa163e2ad8c5.target.wants/ceph-6b501d0a-6ea3-11f0-a251-fa163e2ad8c5@osd.0.service".

What follows looks like the inventory output in JSON format; it's quite lengthy, so I'll spare you the output here. I can add it to a tracker, though, if there isn't one yet. Has this already been reported?

Thanks,
Eugen

