I've got some OSDs that are nearfull. Hardware is ordered, and I've
been using ceph osd reweight (not ceph osd crush reweight) to keep the
cluster healthy until the new hardware arrives.
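For reference, the override commands look roughly like this (osd.12 and
the weights below are just example values):

# temporary override stored in the OSD map; this is what I've been using
ceph osd reweight 12 0.93

# permanent CRUSH weight change; this is the one I'm *not* using
ceph osd crush reweight osd.12 1.82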
Is it expected behavior that marking an OSD out and back in removes the
ceph osd reweight?
root@ceph1:~# ceph osd dump | grep ^osd.12
osd.12 up in weight 0.929291 ...
root@ceph1:~# stop ceph-osd id=12
ceph-osd stop/waiting
root@ceph1:~# ceph osd out 12
marked out osd.12.
root@ceph1:~# ceph osd dump | grep ^osd.12
osd.12 down out weight 0
root@ceph1:~# start ceph-osd id=12
ceph-osd (ceph/12) start/running, process 8152
root@ceph1:~# ceph osd in 12
marked in osd.12.
root@ceph1:~# ceph osd dump | grep ^osd.12
osd.12 up in weight 1
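A workaround, I suppose, is to note the override before the OSD goes
out and re-apply it by hand once it's back in, something like (using the
value from the dump above):

# before stopping the OSD, record the current override
ceph osd dump | grep ^osd.12

# after it's back up and in, put the override back
ceph osd reweight 12 0.929291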
This bit me when I rebooted a node yesterday. The primary boot disk was
bad, and the boot hung. By the time I got it up, all of that node's
OSDs were down and out. Once they were up and in, there was a lot more
backfilling than I was expecting.
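For a planned reboot, I assume setting the noout flag first would avoid
this, since the OSDs would only be marked down, never out:

# before rebooting the node
ceph osd set noout

# after its OSDs are back up
ceph osd unset noout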
--
*Craig Lewis*
Senior Systems Engineer