We saw this just last week on 14.2.22; that customer is currently
rebuilding their OSD nodes to migrate to LVM.
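For anyone curious, the per-OSD rebuild is the usual drain-and-recreate
procedure; the OSD id and device below are placeholders, not the
customer's actual values:

  # Drain the old (non-LVM) OSD and let the cluster rebalance
  ceph osd out <id>
  # ... wait until all PGs are active+clean again ...
  systemctl stop ceph-osd@<id>
  ceph osd purge <id> --yes-i-really-mean-it

  # Recreate it as an LVM-backed OSD, reusing the same id
  ceph-volume lvm create --osd-id <id> --data /dev/sdX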
Quoting Stefan Kooman:
On 8/25/22 20:56, Eugen Block wrote:
Hi,
I’ve seen this many times in older clusters, mostly Nautilus (can’t
say much about Octopus or later). Apparently the root cause hasn’t
been fixed yet, but it should resolve after the recovery has finished.
This was seen today in Pacific 16.2.9.
From: Stefan Kooman
Sent: Thursday, August 25, 2022 3:17 PM
To: Eugen Block; Wyll Ingersoll
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: backfillfull osd - but it is only at 68% capacity
On 8/25/22 20:56, Eugen Block wrote:
That problem seems to have cleared up. We are in the middle of a massive
rebalancing effort on a 700-OSD, 10 PB cluster that is wildly out of whack
(because it got too full), and we occasionally see lots of strange numbers reported.
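In case it helps anyone else, we mostly just watch the rebalance with the
standard commands (nothing cluster-specific here):

  # Overall recovery/backfill progress and any *full warnings
  ceph -s
  ceph health detail

  # Per-OSD fill levels, to spot outliers while data moves around
  ceph osd df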
From: Eugen Block
Sent: Thursday
Hi,
I’ve seen this many times in older clusters, mostly Nautilus (can’t
say much about Octopus or later). Apparently the root cause hasn’t
been fixed yet, but it should resolve after the recovery has finished.
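If you want to double-check where the warning comes from, compare the
actual OSD utilization against the configured thresholds; the 0.91 below
is only an example value, not a recommendation:

  # Reported per-OSD usage (see the %USE column)
  ceph osd df tree

  # The ratios the backfillfull warning is evaluated against
  ceph osd dump | grep -i ratio

  # Temporarily raising the threshold can unblock backfill
  ceph osd set-backfillfull-ratio 0.91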
Quoting Wyll Ingersoll:
My cluster (Ceph Pacific) is complaining about one of its OSDs being
backfillfull, even though it is only at 68% capacity.