Running on Octopus:

While attempting to install a batch of new OSDs across multiple hosts, I ran a 
couple of ceph orchestrator commands to create them, such as:

ceph orch apply osd --all-available-devices
ceph orch apply osd -i HDD_drive_group.yaml

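For reference, HDD_drive_group.yaml was a simple drive group spec roughly along 
these lines (illustrative only; the host pattern and device filter in the real 
file may have differed):

service_type: osd
service_id: HDD_drive_group
placement:
  host_pattern: 'ceph*.iri.columbia.edu'   # placeholder pattern for my OSD hosts
data_devices:
  rotational: 1                            # select spinning disks only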
I assumed these were just short-lived helper processes.  In fact, they didn’t 
actually create any OSDs, and I ended up adding each drive by hand like this:
ceph orch daemon add osd ceph4.iri.columbia.edu:/dev/sdag
ceph orch daemon add osd ceph4.iri.columbia.edu:/dev/sdag
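That was one command per device, so for a whole host it amounted to a loop along 
these lines (the device names here are just placeholders):

# add each listed device on this host as an OSD, one daemon at a time
for dev in sdag sdah sdai; do
    ceph orch daemon add osd ceph4.iri.columbia.edu:/dev/${dev}
done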

However, now I have these services running:
# ceph orch ls --service-type=osd
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT                   IMAGE NAME               IMAGE ID
osd.HDD_drive_group            2/2  7m ago     6w   ceph[456].iri.columbia.edu  docker.io/ceph/ceph:v15  2cf504fded39
osd.None                      54/0  7m ago     -    <unmanaged>                 docker.io/ceph/ceph:v15  2cf504fded39
osd.all-available-devices      1/0  7m ago     -    <unmanaged>                 docker.io/ceph/ceph:v15  2cf504fded39

I’m certain none of these actually created any of my running OSD daemons, but 
I’m not sure if it’s ok to remove them.

For example:
ceph orch daemon rm osd.all-available-devices
ceph orch daemon rm osd.HDD_drive_group
ceph orch daemon rm osd.None
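(Before removing anything I would dump the current specs for reference, e.g. with 
ceph orch ls --service-type=osd --export, assuming --export is available on this 
Octopus release.)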

Does anyone have any insight into this?  I could just leave them there, since they 
don’t seem to be doing anything, but on the other hand I don’t want any new devices 
to be picked up and added as OSDs automatically, or any other unintended 
consequences from these services.
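(I’ve seen ceph orch apply osd --all-available-devices --unmanaged=true mentioned 
as a way to stop the orchestrator from automatically picking up new devices, but 
I’m not sure whether that is the right fix here or whether the services should 
simply be removed.)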

Thanks for any guidance,


Jeff Turmelle
International Research Institute for Climate & Society <https://iri.columbia.edu/>
The Climate School <https://climate.columbia.edu/> at Columbia University <https://columbia.edu/>