Hi,

You should check for the root cause of the inconsistency first:
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/#pgs-inconsistent
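
For example, something along these lines can help you see what scrub actually found (a rough sketch; the PG id 7.12 below is only a placeholder, substitute your inconsistent PG):

  # list inconsistent PGs and the objects scrub flagged in them
  ceph health detail
  rados list-inconsistent-pg default.rgw.buckets.index
  rados list-inconsistent-obj 7.12 --format=json-pretty

  # only once the cause is understood, trigger a repair
  ceph pg repair 7.12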
 
 
-
Etienne Menguy
etienne.men...@croit.io




> On 20 Oct 2021, at 09:21, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> 
> wrote:
> 
> Have you tried to repair the PG?
> 
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> ---------------------------------------------------
> 
> On 2021. Oct 20., at 9:04, Glaza <gla...@wp.pl> wrote:
> 
> 
> Hi Everyone,
> 
> I am in the process of upgrading Nautilus (14.2.22) to Octopus (15.2.14) on CentOS 7 (Mon/Mgr were additionally migrated to CentOS 8 beforehand). Each day I upgraded one host, and after all OSDs were up I manually compacted them one by one.
> 
> Today (8 hosts upgraded, 7 still to go) I started getting errors like "Possible data damage: 1 pg inconsistent". The first time it was "acting [56,58,62]", but I thought "OK": the osd.62 logs contain many lines like "osd.62 39892 class rgw_gc open got (1) Operation not permitted". Maybe rgw did not clean up some omaps properly and Ceph did not notice it until a scrub happened. But now I have got "acting [56,57,58]", and none of these OSDs have those rgw_gc errors in their logs. All affected OSDs are Octopus 15.2.14 on NVMe, hosting the default.rgw.buckets.index pool.
> 
> Does anyone have experience with this problem? Any help appreciated.
> 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
