In that case, I'd set the CRUSH weight to the disk's size in TiB and mark
the OSD out:
ceph osd crush reweight osd.<OSDID> <weight>
ceph osd out <OSDID>
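
For example, with osd.30 from your tree output and its ~2.72 TiB disk
(adjust the weight to whatever your disk actually is), that would be
something like:

ceph osd crush reweight osd.30 2.72
ceph osd out 30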

Then your tree should look like:
-9      2.72               host ithome
30      2.72                     osd.30  up      0
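
To confirm after the change, just re-run the same command (filtering on
your host, ithome, only narrows the output):

ceph osd tree | grep -A1 ithome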



An OSD can be UP and OUT at the same time, which causes Ceph to migrate all
of its data away.
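
If you want to keep an eye on the backfill while those PGs drain off, the
standard commands should be enough (nothing here is specific to your
cluster):

ceph -w              # live stream of cluster events, including recovery/backfill
ceph health detail   # shows degraded/misplaced counts while data moves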



On Thu, Apr 2, 2015 at 10:20 PM, Chris Kitzmiller <ca...@hampshire.edu>
wrote:

> On Apr 3, 2015, at 12:37 AM, LOPEZ Jean-Charles <jelo...@redhat.com>
> wrote:
> >
> > According to your ceph osd tree capture, although the OSD reweight is
> > set to 1, the OSD CRUSH weight is set to 0 (2nd column). You need to
> > assign the OSD a CRUSH weight so that it can be selected by CRUSH:
> > ceph osd crush reweight osd.30 x.y (where 1.0 = 1 TB)
> >
> > Only when this is done will you see if it joins.
>
> I don't really want osd.30 to join my cluster though. It is a purely
> temporary device that I restored just those two PGs to. It should still be
> able to (and be trying to) push out those two PGs with a weight of zero,
> right? I don't want any of my production data to migrate towards osd.30.
