Hello,
I have upgraded our Ceph cluster from Nautilus to Octopus (15.2.15) over the
weekend. The upgrade went well as far as I can tell.
Earlier today, noticing that our CephFS data pool was approaching capacity, I
removed some old CephFS snapshots (taken weekly at the root of the filesystem),
keeping [...]
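(For context, CephFS snapshots are exposed as directories under the hidden
.snap directory, so removing one is a plain rmdir. The mount point and
snapshot name below are made up:)

ls /mnt/cephfs/.snap
rmdir /mnt/cephfs/.snap/weekly-2022-02-13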
[...]

> [...] restrict
> osd_pg_max_concurrent_snap_trims to >= 1.
>
> Cheers, Dan
>
> On Wed, Feb 23, 2022 at 9:44 PM Linkriver Technology
> wrote:
> >
> > Hello,
> >
> > I have upgraded our Ceph cluster from Nautilus to Octopus (15.2.15) over the
> > weekend. The upgrade went well as far as I can tell. [...]
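For anyone hitting the same thing: the option Dan mentions can be checked and
raised at runtime through the central config store. A minimal sketch (the
value 3 is only an illustration):

# what are the OSDs currently running with?
ceph config get osd osd_pg_max_concurrent_snap_trims
# keep it >= 1, as advised above
ceph config set osd osd_pg_max_concurrent_snap_trims 3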
I have an issue close to yours.
Can you tell us if your stray dirs are full?
What does this command output for you?
ceph tell mds.0 perf dump | grep strays
Does the value change over time?
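If you want to watch it over time, something like this works (a rough sketch:
it assumes jq is installed and that the counters sit under "mds_cache", as in
recent releases):

while sleep 60; do
    date
    ceph tell mds.0 perf dump | jq '.mds_cache | {num_strays, strays_created, strays_enqueued}'
done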
All the best
Arnaud
On Wed, Mar 16, 2022 at 15:35, Linkriver Technology
<technol...@linkriver-capital.com> wrote:

[...]
My reading of the code involved suggests that the scrubber in Quincy has
acquired the ability to detect and remove the lost snapshots from Octopus, if
I understand it correctly.
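If that's right, then once on Quincy a recursive repair scrub from the
filesystem root should clean them up. I believe the invocation is along these
lines (<fs_name> standing for the filesystem's name):

# start a recursive scrub with repair from the root
ceph tell mds.<fs_name>:0 scrub start / recursive,repair
# poll until it finishes
ceph tell mds.<fs_name>:0 scrub status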
Cheers,
Linkriver Technology
On Sat, 2022-06-25 at 19:36 +, Kári Bertilsson wrote:
> Hello
>
> I am also [...]