I've seen that before (over 100%) but I forget the cause.  At any rate, the way 
I replace disks is to first set the OSD weight to 0, wait for the data to 
rebalance, then mark the OSD down and out.  I don't think Ceph does any reads 
from a disk once you've marked it out, so hopefully there are other copies.
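
Roughly, as a sketch (assuming OSD id 17 as in your case, and that by "weight" 
I mean the CRUSH weight):

ceph osd crush reweight osd.17 0   # drain: data starts migrating off the disk
ceph -s                            # watch until recovery/rebalance finishes
ceph osd out 17
systemctl stop ceph-osd@17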

Mike Kuriger
Sr. Unix Systems Engineer
T: 818-649-7235 M: 818-434-6195

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Drew 
Weaver
Sent: Monday, December 04, 2017 8:39 AM
To: 'ceph-us...@ceph.com'
Subject: [ceph-users] Replaced a disk, first time. Quick question

Howdy,

I replaced a disk today because it was marked as Predicted failure. These were 
the steps I took:

ceph osd out osd.17
ceph -w   # waited for it to get done
systemctl stop ceph-osd@17
ceph osd purge osd.17 --yes-i-really-mean-it
umount /var/lib/ceph/osd/ceph-osdX
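
(In hindsight, one check that may be worth running before the purge, assuming 
Luminous or later: the built-in safety check, which refuses until every PG with 
data on that OSD has recovered elsewhere.)

ceph osd safe-to-destroy osd.17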

I noticed that after I ran the 'osd out' command it started moving data 
around.

19446/16764 objects degraded (115.999%) <-- I noticed that number seems odd

So then I replaced the disk, created a new label on it, and ran:

ceph-deploy osd prepare OSD5:sdd
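
(If the new OSD doesn't start on its own after the prepare step, my 
understanding is that older ceph-deploy releases want a separate activate 
step; the sdd1 partition name here is a guess:)

ceph-deploy osd activate OSD5:sdd1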

THIS time, it started rebuilding

40795/16764 objects degraded (243.349%) <-- Now I'm really concerned.

Perhaps I don't quite understand what the numbers are telling me, but is it 
normal for it to be rebuilding more objects than exist?

Thanks,
-Drew

