As part of the repair operation, Ceph runs a deep-scrub on the PG. If the PG
showed active+clean after the repair and deep-scrub finished, then the next
scrub of that PG shouldn't change its status at all.
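
Roughly the sequence described above, as a sketch (pg id 6.20 taken from the
report below, adjust for your cluster):

    ceph pg repair 6.20        # the repair includes a deep-scrub of the pg
    ceph pg 6.20 query         # "state" should settle at active+clean
    ceph pg deep-scrub 6.20    # a further deep-scrub shouldn't change that
    ceph health detail         # the pg should no longer be reported inconsistent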
On Wed, Jun 6, 2018 at 8:57 PM Adrian wrote:
Update to this.
The affected pg didn't seem inconsistent:
[root@admin-ceph1-qh2 ~]# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 6.20 is active+clean+inconsistent, act
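
To see exactly which objects the scrub flagged, something like this should
work on luminous (a sketch, run from a node with an admin keyring):

    rados list-inconsistent-obj 6.20 --format=json-pretty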
Hi Cephers,
We recently upgraded one of our clusters from hammer to jewel and then to
luminous (12.2.5, 5 mons/mgr, 21 storage nodes * 9 OSDs). After some
deep-scrubs we have an inconsistent pg with a log message we've not seen
before:
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
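
In case it helps, the matching scrub error lines can usually be pulled from
the primary OSD's log around the time of the deep-scrub, e.g. (the osd id here
is a placeholder):

    grep '6.20' /var/log/ceph/ceph-osd.<primary-osd-id>.log | grep -i err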