What does 'ceph osd tree' look like for this cluster? Also, have you done
anything special with your CRUSH rules?
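In case it's useful, these are the commands I mean (the rule dump is just
the easiest way to spot custom placement logic):

    # Show the CRUSH hierarchy with per-OSD weight and reweight values
    ceph osd tree

    # Dump the CRUSH rules so any customizations are visible
    ceph osd crush rule dump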
In my experience this is usually caused by adjusting OSD weights a bit too
aggressively.
As for the inconsistent PG, you should be able to run 'ceph pg repair' on
it:
http://docs.ceph.com/docs
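A minimal sequence, assuming the PG still shows as inconsistent in the
health output (2.1f below is a placeholder; substitute your actual PG id):

    # Find the id of the inconsistent PG
    ceph health detail | grep inconsistent

    # Ask the OSDs to repair that PG
    ceph pg repair 2.1f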
Hi,
Are there any tips and tricks for getting rid of misplaced objects? I did
check the list archive but didn’t find anything.
Right now my cluster looks like this:
pgmap v43288593: 16384 pgs, 4 pools, 45439 GB data, 10383 kobjects
109 TB used, 349 TB / 458 TB avail
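That status is from 'ceph -s'. To see which PGs actually hold the
misplaced objects, I believe something like this works (pgs_brief keeps
the output manageable):

    # List PGs stuck in a non-clean state
    ceph pg dump_stuck unclean

    # Per-PG state summary; misplaced objects sit in remapped PGs
    ceph pg dump pgs_brief | grep remapped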