Hi Frank,
It would just be great to have confirmation or a "no, it's critical".
Unfortunately, I'm not able to confirm that; I hope someone else can.
By the way, I have these on more than one rank, so it is probably
not a fall-out of the recent recovery efforts.
In that case I would defini
Hi Eugen,
my hypothesis is that these recursive counters are not critical and are, in fact,
updated when the dir/file is modified/accessed. Attributes like ceph.dir.rbytes
will show somewhat incorrect values, but these are approximate anyway (updates
are propagated asynchronously).
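As a rough illustration (the path is a placeholder, assuming a kernel-mounted
CephFS under /mnt/cephfs), the recursive counters can be read as virtual xattrs
and compared against a plain directory walk; a small discrepancy is just the
asynchronous propagation mentioned above:

    import os

    path = "/mnt/cephfs/some/dir"  # placeholder, adjust to your mount

    def rstat(p, name):
        # CephFS exposes the recursive stats as virtual xattrs; the value
        # comes back as bytes, e.g. b"12345"
        return int(os.getxattr(p, name).decode().strip("\x00"))

    rbytes = rstat(path, "ceph.dir.rbytes")
    rfiles = rstat(path, "ceph.dir.rfiles")

    # Sum the actual file sizes for comparison
    walked = sum(
        os.lstat(os.path.join(root, f)).st_size
        for root, _, files in os.walk(path)
        for f in files
    )

    print("ceph.dir.rbytes =", rbytes, "rfiles =", rfiles, "walked =", walked)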
It would just be great to have confirmation or a "no, it's critical".
I think we will hold off on another forward scrub for a while.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Eugen Block
Sent: Friday, January 24, 2025 11:40 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: unmatched rstat rbytes on single dirfrag
Hi, a quick search [0] shows the same messages. A scrub with repair
seems to fix that. But wasn’t scrubbing causing the recent issue in
the first place?
[0] https://silvenga.com/posts/notes-on-cephfs-metadata-recovery/
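For completeness, a scrub with repair is typically started through the MDS
tell interface; a minimal sketch (the file system name "cephfs", rank 0 and
the path "/" are assumptions, adjust to your cluster):

    import subprocess

    # Kick off a recursive scrub with repair on rank 0 of file system "cephfs"
    cmd = ["ceph", "tell", "mds.cephfs:0", "scrub", "start", "/", "recursive,repair"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)

    # Progress can be checked afterwards with:
    #   ceph tell mds.cephfs:0 scrub status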
Quoting Frank Schilder:
Hi all,
I see error messages like these in