> > I have a clean cluster state, with the OSDs that I am going to remove at a
> > reweight of 0. And then after executing 'ceph osd purge 19', I again see
> > remapping+backfilling happening?
> >
> > Is this indeed the correct procedure, or is this old?
> > https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#removing-osds-manual
> 
> When you either 1) purge an OSD, or 2) ceph osd crush reweight it to 0.0,
> you change the total weight of the OSD host. If you instead ceph osd
> reweight an OSD, it will push its PGs to other OSDs on the same host
> and empty itself, but that host then holds more PGs than it really
> should. When you do one of the two steps above, the host weight is
> corrected and the extra PGs move to other OSD hosts. This also changes
> the total weight of the whole subtree, so other PGs might start moving
> as well, on hosts not directly involved, but this is less common.
>

You are right, I did not read my own manual correctly: I applied the
reweight and not the crush reweight.
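
For the archives, a rough sketch of the sequence I should have used
(osd.19 from my example above; the systemctl line assumes a systemd
deployment, adjust to your setup):

    # Drain via CRUSH first: this lowers the host's CRUSH weight, so
    # data moves off the host once instead of twice.
    ceph osd crush reweight osd.19 0

    # Wait until backfill finishes and the cluster is healthy again.
    ceph -s

    # Once the OSD is empty: mark it out, stop the daemon, purge it.
    ceph osd out 19
    systemctl stop ceph-osd@19
    ceph osd purge 19 --yes-i-really-mean-it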
