On Tue, Nov 5, 2019 at 3:18 AM Paul Emmerich <paul.emmer...@croit.io> wrote:
> could be a new feature, I've only realized this exists/works since Nautilus.
> You seem to be a relatively old version since you still have ceph-disk 
> installed

None of this is using ceph-disk?  It's all done with ceph-volume.

The ceph clusters are all running Luminous 12.2.12, which shouldn't be
*that* old!  (Looking forward to Nautilus but it hasn't been qualified
for production use by our team yet.)

But a couple of our ceph clusters, including the ones at issue here,
originally date back to Firefly, so who knows what artifacts of the
past are still lurking around?

The next approach may be to try stopping udev while ceph-volume lvm zap
is running, something like the sketch below.
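
Roughly what I have in mind (untested, and /dev/sdX is just a
placeholder for whichever device the OSD lives on):

    # pause udev event processing so nothing re-triggers activation mid-zap
    udevadm control --stop-exec-queue

    # wipe the device (placeholder device name)
    ceph-volume lvm zap /dev/sdX

    # resume udev and let any queued events drain
    udevadm control --start-exec-queue
    udevadm settle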

It seems like we have a couple of months to figure this out since
we've moved on to HDD OSDs, and it takes a day or so to drain a single
one. :-/

Thanks!