Hi Dave,
We have checked the hardware and it seems fine.
The same OSDs host numerous other PGs which are unaffected by this issue.
All of the PGs reported as inconsistent/failed_repair belong to the
same metadata pool.
We did run a `ceph pg repair` initially, which is when the "Too many
repaired reads" warning first appeared.
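For anyone following along, this is roughly the inspection sequence we used; the pool name and PG id below are placeholders, not our actual ones:

```shell
# Show the health warning and which PGs/OSDs it names
ceph health detail

# List inconsistent PGs in the affected pool (placeholder pool name)
rados list-inconsistent-pg cephfs_metadata

# Inspect the inconsistent objects in one PG (placeholder PG id)
rados list-inconsistent-obj 2.1a --format=json-pretty

# Attempt a repair of that PG
ceph pg repair 2.1a
```

These need a running cluster with the right caps, so adjust to your environment.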
Hi,
I can't comment on the CephFS side, but "Too many repaired reads on 2
OSDs" prompts me to suggest you check the hardware -- when I've seen
that recently it was due to failing HDDs. I say "failing" rather than
"failed" because the disks were returning errors on a few sectors while
most I/O was working OK, so neit
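To catch that kind of partially failing disk, a quick sketch of the per-device checks I'd run on each OSD host (the device name is a placeholder):

```shell
# SMART overall health plus the attributes that reveal bad sectors
# (placeholder device; use the OSD's backing disk)
smartctl -a /dev/sdb | egrep -i 'overall-health|Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'

# Kernel-level I/O errors logged against the same device
dmesg -T | grep -i 'sdb'
```

Non-zero pending/reallocated sector counts with a "PASSED" overall status is exactly the failing-not-failed pattern: a few bad sectors, most I/O still succeeding.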