Hello,

I am running a Ceph cluster installed with cephadm.  Version is 18.2.2 reef 
(stable).

I am moving the DB/WAL from my HDDs to an SSD, and it has been going fine on all 
the OSDs until I got to one in particular (osd.14).

From the cephadm shell, when I run 'ceph orch daemon stop osd.14', nothing 
happens: it does not get marked as Down.  If I mark it as Down in the GUI, it 
does show as Down, then a few seconds later it gets marked as Up again.
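
In case it helps anyone suggest next steps, the checks I am planning to run next 
look roughly like this; I have not tried them yet, and <fsid> is just a 
placeholder for my cluster's fsid:

    # does the orchestrator think the daemon is still running?
    ceph orch ps | grep osd.14

    # on the host that carries osd.14, check the systemd unit directly
    # (assuming the usual cephadm unit naming of ceph-<fsid>@osd.14.service)
    systemctl status ceph-<fsid>@osd.14.service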

I was running 'journalctl -xf | grep osd.14' to see if any errors came up, but 
nothing did.
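
My next idea (also untried) is to follow the daemon's own systemd unit on the 
OSD's host instead of grepping the whole journal, again assuming the standard 
cephadm unit naming with <fsid> as a placeholder:

    journalctl -u ceph-<fsid>@osd.14.service -f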

I'm not sure where to check next to sort this out. Any suggestions?