[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Eugen Block
Just last week on 14.2.22; the customer is currently in the process of rebuilding OSD nodes to migrate to LVM. Quoting Stefan Kooman: On 8/25/22 20:56, Eugen Block wrote: Hi, I’ve seen this many times in older clusters, mostly Nautilus (can’t say much about Octopus or later). Apparently …

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Wyll Ingersoll
This was seen today in Pacific 16.2.9. From: Stefan Kooman Sent: Thursday, August 25, 2022 3:17 PM To: Eugen Block; Wyll Ingersoll Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: backfillfull osd - but it is only at 68% capacity On 8/25/22 20:56 …

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Wyll Ingersoll
That problem seems to have cleared up. We are in the middle of a massive rebalancing effort for a 700-OSD, 10 PB cluster that is wildly out of whack (because it got too full), and we occasionally see lots of strange numbers reported. From: Eugen Block Sent: Thurs…
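For readers in a similar situation, the following is a minimal sketch (not taken from the thread) of standard Ceph commands commonly used when rebalancing an overfull cluster; it assumes a Luminous-or-later release with the built-in balancer module available.

```shell
# Check per-OSD utilization and variance first:
ceph osd df tree

# Let the built-in balancer even out PG placement via pg-upmap:
ceph balancer mode upmap
ceph balancer on
ceph balancer status

# Throttle backfill so recovery does not overwhelm client I/O:
ceph config set osd osd_max_backfills 1
```

The upmap balancer is generally preferred over manual `ceph osd reweight` on modern clusters because it moves individual PGs rather than shifting CRUSH weights.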

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Eugen Block
Hi, I’ve seen this many times in older clusters, mostly Nautilus (can’t say much about Octopus or later). Apparently the root cause hasn’t been fixed yet, but it should resolve after the recovery has finished. Quoting Wyll Ingersoll: My cluster (Ceph Pacific) is complaining about one of …
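To diagnose a backfillfull warning on an OSD that appears to be well under the threshold, a sketch of the usual inspection commands (standard Ceph CLI, not quoted from the thread; the ratios shown are the Ceph defaults):

```shell
# Show the cluster-wide thresholds (defaults: nearfull 0.85,
# backfillfull 0.90, full 0.95):
ceph osd dump | grep -i ratio

# Compare the thresholds against actual per-OSD usage:
ceph osd df

# If the warning is spurious during heavy recovery, the ratio can be
# raised temporarily (remember to revert it afterwards):
ceph osd set-backfillfull-ratio 0.92
```

As the thread notes, the warning often clears on its own once recovery and backfill finish, so raising the ratio is a stopgap rather than a fix.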