Yes. It's awkward and the whole "two weights" thing needs a bit of UI
reworking, but it's expected behavior.
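
(For anyone hitting this later: the CRUSH weight is the persistent,
capacity-style weight stored in the CRUSH map, while "ceph osd reweight"
sets a temporary 0-1 override -- the value "ceph osd out" drops to 0 and
"ceph osd in" resets to 1. Roughly, with illustrative example weights:

  # persistent CRUSH weight, usually sized to the disk capacity in TB
  ceph osd crush reweight osd.12 1.82
  # temporary override in [0,1]; this is the value that gets reset by out/in
  ceph osd reweight 12 0.93

Only the CRUSH weight survives an out/in cycle.)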
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
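
P.S. For planned maintenance you can avoid the surprise backfill by
recording the override weights first and replaying them once the OSDs are
back in. Untested sketch, assuming a bash shell and the "ceph osd dump"
field layout quoted below:

  # save any non-default override weights (field 5 of "ceph osd dump")
  ceph osd dump | awk '$1 ~ /^osd\./ && $5 != 1 {print $1, $5}' > reweights.txt
  # ...maintenance, OSDs go out and come back in...
  # reapply the saved overrides
  while read osd w; do ceph osd reweight "${osd#osd.}" "$w"; done < reweights.txt

Filtering on $5 != 1 keeps the file down to just the OSDs that actually
have an override set.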


On Thu, Apr 10, 2014 at 3:59 PM, Craig Lewis <cle...@centraldesktop.com> wrote:
> I've got some OSDs that are nearfull.  Hardware is ordered, and I've been
> using ceph osd reweight (not ceph osd crush reweight) to keep the cluster
> healthy until the new hardware arrives.
>
> Is it expected behavior that marking an OSD out and back in resets the ceph
> osd reweight?
>
>
> root@ceph1:~# ceph osd dump | grep ^osd.12
> osd.12 up   in  weight 0.929291 ...
> root@ceph1:~# stop ceph-osd id=12
> ceph-osd stop/waiting
> root@ceph1:~# ceph osd out 12
> marked out osd.12.
> root@ceph1:~# ceph osd dump | grep ^osd.12
> osd.12 down out weight 0
> root@ceph1:~# start ceph-osd id=12
> ceph-osd (ceph/12) start/running, process 8152
> root@ceph1:~# ceph osd in 12
> marked in osd.12.
> root@ceph1:~# ceph osd dump | grep ^osd.12
> osd.12 up   in  weight 1
>
> This bit me when I rebooted a node yesterday.  The primary boot disk was
> bad, and the boot hung.  By the time I got it up, all of that node's OSDs
> were down and out.  Once they were up and in, there was a lot more
> backfilling than I was expecting.
>
>
>
> --
>
> Craig Lewis
> Senior Systems Engineer
> Office +1.714.602.1309
> Email cle...@centraldesktop.com
>
> Central Desktop. Work together in ways you never thought possible.
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
