I was [poorly] following the instructions for migrating the WAL/DB to an SSD

https://docs.clyso.com/blog/ceph-volume-create-wal-db-on-separate-device-for-existing-osd/

and I didn't add the '--no-systemd' flag when I ran the 'ceph-volume lvm 
activate' command (3 f***ing times). The result is that I've "twinned" 3 of 
my OSDs: there's a container version managed by cephadm, and there's an 
instantiated systemd unit that runs directly on the host. Surprisingly, this 
has not done a lot of damage, but it does result in the dashboard reporting 
3 failed cephadm daemons whenever the "native" OSDs start before the 
containerized ones.

I've disabled the systemd units for ceph-osd@9.service, ceph-osd@11.service, 
and ceph-osd@25.service, but I'd like to remove them completely. I will 
eventually badger The Google into giving me an answer, but could someone tell 
me what I need to do? (I've sketched my best guess below; please tear it 
apart.) The semester starts soon and I don't really have the bandwidth for 
this right now.
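
Here's that guess at the cleanup, with my OSD IDs. The 'ceph-volume@' unit 
names and the legacy paths are my assumptions from poking around ('<osd-fsid>' 
is again a per-OSD placeholder), so correct anything that's off:

    # Stop and disable the host-level units (the disable part is done):
    systemctl disable --now ceph-osd@9.service ceph-osd@11.service \
        ceph-osd@25.service

    # ceph-volume also enabled a boot-time activation unit per OSD;
    # the fsid in the unit name comes from 'ceph-volume lvm list':
    systemctl disable ceph-volume@lvm-9-<osd-fsid>.service
    # ...and likewise for OSDs 11 and 25.

    # Clear any lingering failed-unit state so the dashboard and
    # systemd stop complaining:
    systemctl reset-failed 'ceph-osd@*'

    # The legacy activation mounted a tmpfs at /var/lib/ceph/osd/ceph-<id>;
    # unmount and remove those directories (the actual data lives on the
    # LVs, so this should only remove the runtime mountpoint):
    for id in 9 11 25; do
        umount /var/lib/ceph/osd/ceph-$id 2>/dev/null || true
        rm -rf /var/lib/ceph/osd/ceph-$id
    done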

Thanks in advance. I will forever be in your debt. (Seriously, I'm ready to 
give you a kidney, if you need it.)