[ceph-users] Re: Snaptriming speed degrade with pg increase

2024-11-28 Thread Bandelow, Gunnar
Dear Istvan, the first thing that stands out: Ubuntu 20.04 (EOL in April 2025) and Ceph v15 Octopus (EOL since 2022). Is there a possibility to upgrade these? Best regards Gunnar --- Original Message --- Subject: [ceph-users] Snaptriming speed degrade with pg increase From: "Szabo,
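
For reference, a minimal sketch of how the versions in question can be confirmed; these are standard Ubuntu and Ceph CLI calls, nothing cluster-specific assumed:

    # OS release on each host
    lsb_release -a        # or: cat /etc/os-release
    # Ceph release reported by every daemon in the cluster
    ceph versions

Ceph supports upgrading at most two releases at a time, so Octopus can typically go straight to Quincy, but no further in a single step.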

[ceph-users] Re: Ceph Upgrade 18.2.2 -> 18.2.4 fails | Fatal glibc error: CPU does not support x86-64-v2 on virtualized hosts

2024-07-25 Thread Bandelow, Gunnar
Hi Björn, have a look at the CPU type "max". With it, the Proxmox cluster should choose the maximum CPU type that still allows live migration, and Ceph should then run with the v18.2.4 images. https://forum.proxmox.com/threads/max-cpu-type.94736/ Best regards, Gunnar --- Original Message --- Subject: [
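
A hedged sketch of how to verify x86-64-v2 support inside the guest and switch the CPU model on the Proxmox host; VM id 100 is only an example, and whether "max" is accepted depends on the Proxmox/QEMU version in use:

    # x86-64-v2 needs, beyond the x86-64 baseline: cx16, lahf_lm, popcnt, sse4_1, sse4_2, ssse3
    grep -o -E 'cx16|lahf_lm|popcnt|sse4_1|sse4_2|ssse3' /proc/cpuinfo | sort -u

    # On the Proxmox host: change the guest's CPU model, then stop/start the VM
    qm set 100 --cpu max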

[ceph-users] Re: Safe to move misplaced hosts between failure domains in the crush tree?

2024-06-13 Thread Bandelow, Gunnar
Hi Torkil, maybe I'm overlooking something, but how about just renaming the datacenter buckets? Best regards, Gunnar --- Original Message --- Subject: [ceph-users] Re: Safe to move misplaced hosts between failure domains in the crush tree? From: "Torkil Svensgaard" To: "Matthias Grandl" CC: 
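
A minimal sketch of such a rename; the bucket names below are made up. Renaming only changes the label on the bucket, so it does not trigger data movement by itself:

    # Swap two datacenter buckets that ended up with the wrong labels
    ceph osd crush rename-bucket dcA tmp
    ceph osd crush rename-bucket dcB dcA
    ceph osd crush rename-bucket tmp dcB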

[ceph-users] Re: 'ceph fs status' no longer works?

2024-05-02 Thread Bandelow, Gunnar
Hi Erich, I'm not sure about this specific error message, but "ceph fs status" did occasionally fail for me at the end of last year / the beginning of this year. Restarting ALL mon, mgr AND mds daemons fixed it at the time. Best regards, Gunnar === Gunnar Bande
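
A minimal sketch of that restart, assuming a cephadm-managed cluster; service names vary, and "cephfs" is only an example filesystem name:

    ceph orch restart mon
    ceph orch restart mgr
    ceph orch restart mds.cephfs

    # then check again
    ceph fs status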

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Bandelow, Gunnar
(Truncated preview: quoted "ceph pg dump" output showing PG 5.47 with 220849 objects / 926233448448 bytes, state active+clean+scrubbing+deep since 2024-03-12T08:10:39, deep scrubbing for 306667s.)

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-20 Thread Bandelow, Gunnar
Hi, I just wanted to mention that I am running a cluster with Reef 18.2.1 and see the same issue. 4 PGs start to deep scrub but haven't finished since mid-February. In the pg dump they are shown as scheduled for deep scrub. They sometimes change their status from active+clean to active+clean+scrubbing+de
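
For anyone comparing notes, a hedged way to list the PGs stuck in a scrubbing state and to re-issue a deep scrub by hand; PG id 5.47 from the dump quoted further up is used only as an example:

    # PGs whose state contains "scrubbing"
    ceph pg dump pgs_brief 2>/dev/null | grep scrubbing

    # manually re-issue a deep scrub for one of them
    ceph pg deep-scrub 5.47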

[ceph-users] Re: Erasure Profile Pool caps at pg_num 1024

2020-02-17 Thread Bandelow, Gunnar
Hi Eugen, thank you for your contribution. I will definitely think about leaving a number of spare hosts, very good point. My main problem remains the health warning "Too few PGs". This implies that my PG number in the pool is too low, and I can't increase it with an erasure profile. I al
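
A minimal sketch of the knobs involved, with a placeholder pool name and values; note that for an erasure-coded pool every PG counts k+m times against the per-OSD limit, so that cap is reached sooner than with a replicated pool:

    # current pg_num and the per-OSD limit that caps increases
    ceph osd pool get ecpool pg_num
    ceph config get mon mon_max_pg_per_osd

    # if that limit is the blocker, raise it (or add OSDs), then grow the pool
    ceph config set mon mon_max_pg_per_osd 300
    ceph osd pool set ecpool pg_num 2048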