SOLUTION FOUND!
Set the OSD's CRUSH weight to 0, then set it back to where it belongs.
ceph osd crush reweight osd.0 0.0
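Then restore the original weight (the 3.63899 below is only a placeholder, not my real value; note the OSD's actual CRUSH weight from 'ceph osd tree' before zeroing it and put that same value back):
ceph osd crush reweight osd.0 3.63899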

Original
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 4.03434 sec at 254 MiB/sec 63 IOPS

After reweight of osd.0
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 1.54555 sec at 663 MiB/sec 165 IOPS
ceph tell osd.1 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 3.54652 sec at 289 MiB/sec 72 IOPS

After reweight of osd.1
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 0.948457 sec at 1.1 GiB/sec 269 IOPS
ceph tell osd.1 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 0.949384 sec at 1.1 GiB/sec 269 IOPS
ceph tell osd.2 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 3.56726 sec at 287 MiB/sec 71 IOPS

I have finished the reweight procedure on OSD node 1 and all 6 OSDs are back 
where they belong, but I still have 4 more nodes to go (roughly the loop sketched 
below, run per node). It looks like this should fix it.
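
A rough sketch of what I am running on each node -- the OSD ids and the restore weight are placeholders; read each OSD's real CRUSH weight from 'ceph osd tree' before zeroing it:

for id in 0 1 2 3 4 5; do
    ceph osd crush reweight osd.$id 0.0       # drop the OSD's CRUSH weight to 0
    ceph osd crush reweight osd.$id 3.63899   # restore the recorded weight (placeholder value)
done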
If anyone has an alternative method for getting around this, I am all ears.

Dave, I would be interested to hear if this works for you.

-Jim