[ceph-users] CephFS snaptrim bug?

2022-02-23 Thread Linkriver Technology
Hello, I have upgraded our Ceph cluster from Nautilus to Octopus (15.2.15) over the weekend. The upgrade went well as far as I can tell. Earlier today, noticing that our CephFS data pool was approaching capacity, I removed some old CephFS snapshots (taken weekly at the root of the filesystem), ke…
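For reference, CephFS snapshots taken at the root of the filesystem are removed by deleting the matching entry under the hidden .snap directory. A minimal sketch, assuming the filesystem is mounted at /mnt/cephfs and using a hypothetical snapshot name:

    # List the snapshots taken at the filesystem root
    ls /mnt/cephfs/.snap
    # Remove an old weekly snapshot (name is hypothetical); this does not
    # free space immediately, it queues the affected PGs for snaptrim on the OSDs
    rmdir /mnt/cephfs/.snap/weekly-2021-01-03

That delayed reclamation is the point of the thread: after removal, the PGs should pass through the snaptrim state until the snapshot's objects are purged.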

[ceph-users] Re: CephFS snaptrim bug?

2022-03-16 Thread Linkriver Technology
…restrict osd_pg_max_concurrent_snap_trims to >= 1. Cheers, Dan. On Wed, Feb 23, 2022 at 9:44 PM Linkriver Technology wrote: Hello, I have upgraded our Ceph cluster from Nautilus to Octopus (15.2.15) over the weekend. The…
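The option named in the reply throttles how many snap trims each PG runs concurrently; Dan's point is that it must stay >= 1, since 0 stops trimming altogether. A sketch of inspecting and adjusting it with the generic ceph config commands (the value 2 is illustrative, and osd.0 is an example daemon):

    # Show the value currently in effect on one OSD
    ceph config get osd.0 osd_pg_max_concurrent_snap_trims
    # Raise the limit for all OSDs so snaptrim makes faster progress;
    # keep it >= 1, because 0 effectively disables trimming
    ceph config set osd osd_pg_max_concurrent_snap_trims 2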

[ceph-users] Re: CephFS snaptrim bug?

2022-03-18 Thread Linkriver Technology
…We have an issue close to yours. Can you tell us if your stray dirs are full? What does this command output for you: ceph tell mds.0 perf dump | grep strays? Does the value change over time? All the best, Arnaud. On Wed, Mar 16, 2022 at 15:35, Linkriver Technology <technol...@linkriver-capital.com…
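Arnaud's diagnostic reads the MDS perf counters; the strays entries (num_strays and related counters) sit under mds_cache and show how many unlinked-but-unpurged entries the MDS is holding. A sketch of watching them over time, assuming rank 0 is addressed as mds.0:

    # One-off dump of the stray counters
    ceph tell mds.0 perf dump | grep strays
    # Re-run every minute to see whether num_strays actually shrinks
    watch -n 60 'ceph tell mds.0 perf dump | grep strays'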

[ceph-users] Re: CephFS snaptrim bug?

2024-09-26 Thread Linkriver Technology
…reading of the code involved suggests that the scrubber in Quincy has acquired the ability to detect and remove the snapshots lost under Octopus, if I understand it correctly. Cheers, Linkriver Technology. On Sat, 2022-06-25 at 19:36, Kári Bertilsson wrote: Hello, I am al…
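If the Quincy scrubber can indeed clean up the snapshots orphaned under Octopus, the natural way to exercise it is a recursive repair scrub from the root. A sketch, assuming rank 0 is addressed as mds.0 (newer releases also accept mds.<fsname>:0), and not a confirmed fix procedure:

    # Start a recursive, repairing scrub at the filesystem root
    ceph tell mds.0 scrub start / recursive,repair,force
    # Poll the scrubber for progress
    ceph tell mds.0 scrub status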