Hi,

I kept running into the same situation: I couldn't remove an OSD without
some PGs getting permanently stuck in the "active+remapped" state.

But I remembered reading on IRC that, before marking an OSD out, it
can sometimes be a good idea to reweight it to 0. So, instead of
doing [1]:

    ceph osd out 3

I have tried [2]:

    ceph osd crush reweight osd.3 0 # waiting for the rebalancing...
    ceph osd out 3

and it worked. Then I could remove the OSD by following the online documentation:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
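For reference, here is the full sequence as a sketch (osd.3 is just my example
OSD id; adapt the service command to your init system, and the last three
commands are the ones from the manual-removal documentation linked above):

    # Drain the OSD first by setting its CRUSH weight to 0,
    # then wait for rebalancing to finish (watch "ceph -w" / "ceph health").
    ceph osd crush reweight osd.3 0

    # Mark the OSD out and stop the daemon.
    ceph osd out 3
    service ceph stop osd.3       # or: stop ceph-osd id=3, depending on the distro

    # Steps from the manual-removal documentation:
    ceph osd crush remove osd.3   # remove the OSD from the CRUSH map
    ceph auth del osd.3           # delete its authentication key
    ceph osd rm 3                 # remove the OSD from the cluster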

Now the OSD is removed and my cluster is HEALTH_OK. \o/

Now, my question is: why was my cluster permanently stuck in "active+remapped"
with [1] but not with [2]? Personally, I have absolutely no explanation.
If you have one, I'd love to hear it.

Should the "reweight" command be present in the online documentation?
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
If so, I'd be happy to make a pull request on the doc. ;)

Regards.

-- 
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
