Hi,

We currently remove drives without --zap when we do not want them to be
automatically re-added. After full removal from the cluster, or when
adding new drives, we run `ceph orch pause` so we can work on the drives
without Ceph interfering. Once the drives are ready to be added, we resume
the background orchestrator task with `ceph orch resume`.
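Roughly, the sequence looks like this (the OSD id is illustrative; adjust
for your cluster, and zap manually only once you actually want the device
redeployed):

```shell
# Remove the OSD without --zap so the device is not wiped and left
# "available" for the orchestrator to automatically redeploy:
ceph orch osd rm 12

# Pause background orchestrator activity before physical drive work:
ceph orch pause

# ... pull / insert / sanity-check drives without Ceph interfering ...

# Resume the orchestrator; it will pick up matching available devices
# again according to the OSD service spec:
ceph orch resume
```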

Thanks,
David


On Tue, Apr 26, 2022, 10:28 Anthony D'Atri <anthony.da...@gmail.com> wrote:

>
>
> > Was the osd spec responsible for creating this osd set to unmanaged?
> Having
> > it re-pickup available disks is the expected behavior right now (see
> > https://docs.ceph.com/en/latest/cephadm/services/osd/#declarative-state)
> > although we've been considering changing this as it seems like in the
> > majority of cases users want to only pick up the disks available at apply
> > time and not every matching disk forever.
>
> I would vote to change the default.
>
> * Local hands may pull / insert the wrong drive in the wrong place
> * New / replacement drives may have issues; I like to do a sanity check
> before deploying an OSD
> * Drives used for boot volume mirrors
> * etc
>
>
> > But if you have set the service
> > to unmanaged and it's still picking up the disks that's a whole different
> > issue entirely.
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>