Dear Istvan,
The first thing that stands out:
Ubuntu 20.04 (EOL in April 2025)
and
Ceph v15 Octopus (EOL since 2022)
Is there any possibility of upgrading these?
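If it helps with planning, the running versions can be checked quickly like this (just a sketch; the actual upgrade path depends on how the cluster was deployed):

  ceph versions     # versions of the Ceph daemons actually running
  lsb_release -d    # OS release on each host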
Best regards
Gunnar
--- Original Message ---
Subject: [ceph-users] Snaptriming speed degrade with pg increase
From: "Szabo,
Hi Björn,
have a look at the CPU type "max". With that, the Proxmox cluster should
choose the maximum CPU type that still allows for live migration, and then
Ceph should run with v18.2.4 images.
https://forum.proxmox.com/threads/max-cpu-type.94736/
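If it helps, a rough sketch of how that could be applied to a guest (the VMID 101 is only an example; whether "max" is the right choice depends on the CPUs in your cluster):

  # set the CPU type of VM 101 to "max"
  qm set 101 --cpu max
  # equivalently, set "cpu: max" in /etc/pve/qemu-server/101.conf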
Best regards,
Gunnar
--- Original Message ---
Subject: [
Hi Torkil,
Maybe I'm overlooking something, but how about just renaming the
datacenter buckets?
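Something along these lines, with placeholder bucket names:

  # rename a datacenter bucket in place in the CRUSH map
  ceph osd crush rename-bucket <old-dc-name> <new-dc-name>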
Best regards,
Gunnar
--- Original Message ---
Subject: [ceph-users] Re: Safe to move misplaced hosts between
failure domains in the crush tree?
From: "Torkil Svensgaard"
To: "Matthias Grandl"
CC:
Hi Erich,
I'm not sure about this specific error message, but "ceph fs status"
sometimes failed on me at the end of last year/beginning of this year.
Restarting ALL mon, mgr AND mds daemons fixed it at the time.
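Roughly what I ran back then, assuming a cephadm-managed cluster (the MDS service name depends on your filesystem):

  ceph orch restart mon
  ceph orch restart mgr
  ceph orch restart mds.<fs_name>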
Best regards,
Gunnar
===
Gunnar Bande
> [quoted ceph pg dump excerpt: a PG deep scrubbing for 306667s,
> state active+clean+scrubbing+deep since 2024-03-12T08:10:39]
Hi,
I just wanted to mention that I am running a cluster with Reef 18.2.1
with the same issue.
4 PGs have been starting to deep-scrub but not finishing since mid-February.
In the pg dump they are shown as scheduled for deep scrub. They sometimes
change their status from active+clean to active+clean+scrubbing+deep.
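For anyone comparing notes, this is roughly how I look at it (the PG id is a placeholder, not one of my actual PGs):

  # list PGs currently reported as deep scrubbing
  ceph pg dump pgs 2>/dev/null | grep 'scrubbing+deep'
  # manually trigger a deep scrub on one of the affected PGs
  ceph pg deep-scrub <pgid>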
Hi Eugen,
thank you for your contribution. I will definitely think about
leaving a number of spare hosts, very good point.
My main problem remains the health warning "too few PGs". This
implies that the PG number in the pool is too low, and I can't increase
it with an erasure profile.
I al
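(For context, the commands I am going back and forth with; the pool name is a placeholder:)

  ceph osd pool autoscale-status
  ceph osd pool get <pool> pg_num
  ceph osd pool set <pool> pg_num <new_pg_num>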