Hi all,

I just updated a Ceph cluster from Nautilus to Octopus and followed the 
documentation in order to migrate from the original ceph-ansible setup to 
cephadm.

Overall, this worked great, but there's one part I haven't been able to figure 
out yet, and it doesn't seem to be documented: how do I migrate the existing 
OSDs to the new managed approach using service specifications?

Currently, "ceph orch ps" shows me each OSD and "ceph orch ls" lists them as 
"osd.2", with "9/0" running with unmanaged placement (iirc osd.2 was the first 
one I adopted so that's probably where the name comes from).
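
For reference, the listing looks roughly like this (columns trimmed, and the 
exact format may differ between Octopus point releases):

    $ ceph orch ls
    NAME   RUNNING  REFRESHED  AGE  PLACEMENT
    osd.2  9/0      5m ago     -    <unmanaged>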

I tried writing a service specification that should match the current 
deployment and applied it, but the new service entries just sit there at 0/3 
running.
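
For concreteness, the spec I applied looked roughly like this (the service_id 
and host pattern are simplified placeholders for my actual layout):

    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: 'ceph-osd-*'
    data_devices:
      all: true

applied with:

    ceph orch apply osd -i osd_spec.yml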

For node-exporter, I solved this by removing the old containers and services 
manually and waiting for Ceph to recreate new ones, but for OSDs that approach 
doesn't really seem practical (unless it's just a matter of stopping/removing 
the old container, but that didn't do the trick in my tests).
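
In case it helps, the node-exporter workaround was essentially the following 
on each host (the old unit name is whatever ceph-ansible created in your 
setup; "node_exporter" is what I had):

    systemctl stop node_exporter
    systemctl disable node_exporter
    ceph orch apply node-exporter '*'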

Is there a proper way to do this? Or is the cluster just stuck with unmanaged 
OSDs if it was created without cephadm?

Thanks,
Lukas