[ceph-users] High number of Cephfs Subvolumes compared to Cephfs persistent volumes in K8S environment

2024-10-23 Thread Edouard FAZENDA
a persistent volume in Kubernetes? Thanks in advance for the help. Have a nice day. Best Regards, Edouard Fazenda. Edouard FAZENDA Technical Support <https://www.csti.ch/> www.csti.ch
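A minimal sketch of how one might compare the two counts, assuming ceph-csi with its default "csi" subvolume group and the standard cephfs.csi.ceph.com driver name (both assumptions; the filesystem name is a placeholder):

    # Count CephFS subvolumes in the CSI group ("csi" is the ceph-csi default; adjust if customised)
    ceph fs subvolume ls <fs_name> csi | grep -c '"name"'
    # Count PersistentVolumes provisioned by the CephFS CSI driver
    kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.driver}{"\n"}{end}' | grep -c 'cephfs.csi.ceph.com'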

[ceph-users] MANY_OBJECTS_PER_PG on 1 pool which is cephfs_metadata

2024-03-08 Thread Edouard FAZENDA
width 0 application rgw Why is the autoscaler not acting to increase the pg_num of the pool in warning? As pgcalc is not available for Ceph now, do you think it is a good idea to manually increase the pg_num of cephfs_metadata, and which value should I set? I have 18 OSDs
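For reference, a hedged sketch of how one might inspect the autoscaler's view of the pool and, if needed, raise pg_num by hand (the value 128 is purely illustrative, not a recommendation for this cluster):

    # See what the autoscaler thinks each pool should have (PG_NUM vs NEW PG_NUM columns)
    ceph osd pool autoscale-status
    # Check the current pg_num of the metadata pool
    ceph osd pool get cephfs_metadata pg_num
    # Manually raise pg_num (example value only; pgp_num follows automatically on Nautilus and later)
    ceph osd pool set cephfs_metadata pg_num 128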

[ceph-users] Re: Upgrade from 16.2.1 to 16.2.2 pacific stuck

2024-03-06 Thread Edouard FAZENDA
another issue when having more than two MGRs, maybe you're hitting that (https://tracker.ceph.com/issues/57675, https://github.com/ceph/ceph/pull/48258). I believe my workaround was to set the global config to a newer image (target version) and then deploy a new mgr. Quoting Edouard FA
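A rough sketch of that workaround with cephadm, using the daemon name from this thread; the exact image path and tag are assumptions and may differ:

    # Point the cluster at the target image globally
    ceph config set global container_image quay.io/ceph/ceph:v16.2.2
    # Redeploy the stuck mgr so it comes back up from the new image
    ceph orch daemon redeploy mgr.rke-sh1-2.lxmguj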

[ceph-users] Re: Upgrade from 16.2.1 to 16.2.2 pacific stuck

2024-03-06 Thread Edouard FAZENDA
the mgr is crashlooping on the second node. Thanks for the help. Edouard FAZENDA Technical Support Chemin du Curé-Desclouds 2, CH-1226 THONEX +41 (0)22 869 04 40 www.csti.ch -Original Message- From: Eugen Block Sent: Wednesday, 6 March 2024 10:33 To: ceph-users@ceph.io Subject

[ceph-users] Re: Upgrade from 16.2.1 to 16.2.2 pacific stuck

2024-03-06 Thread Edouard FAZENDA
d[1]: ceph-fcb373ce-7aaa-11eb-984f-e7c6e0038...@mgr.rke-sh1-2.lxmguj.service: Failed with result 'exit-code'. Mar 06 09:27:18 rke-sh1-2 systemd[1]: Stopped Ceph mgr.rke-sh1-2.lxmguj for fcb373ce-7aaa-11eb-984f-e7c6e0038e87. Mar 06 09:27:19 rke-sh1-2 systemd[1]: Started Ceph mgr.rke-sh1
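When a cephadm-managed mgr loops like this, the container logs are usually more informative than the systemd status alone; a sketch using the fsid and daemon name as they appear above:

    # Follow the systemd unit for the failing mgr
    journalctl -u ceph-fcb373ce-7aaa-11eb-984f-e7c6e0038e87@mgr.rke-sh1-2.lxmguj.service -f
    # Or let cephadm locate and print that daemon's logs
    cephadm logs --name mgr.rke-sh1-2.lxmguj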

[ceph-users] Upgrade from 16.2.1 to 16.2.2 pacific stuck

2024-03-06 Thread Edouard FAZENDA
c2d9439499b5cf2", "in_progress": true, "services_complete": [], "progress": "0/35 ceph daemons upgraded", "message": "Currently upgrading mgr daemons" } progress: Upgrade to 16.2.2 (24m) [........
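The JSON above is the output of ceph orch upgrade status; a short sketch of the commands typically used to watch or pause a stuck cephadm upgrade:

    # Show upgrade progress (the JSON quoted above)
    ceph orch upgrade status
    # Watch cephadm's own log channel while it works
    ceph -W cephadm
    # Pause / resume if the upgrade needs manual intervention
    ceph orch upgrade pause
    ceph orch upgrade resume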

[ceph-users] Re: MDS in ReadOnly and 2 MDS behind on trimming

2024-02-23 Thread Edouard FAZENDA
TOTAL 26 TiB 10 TiB 10 TiB 254 MiB 82 GiB 16 TiB 38.59 MIN/MAX VAR: 0.74/1.17 STDDEV: 3.88 Thanks for the help! Best Regards, Edouard FAZENDA Technical Support Chemin du Curé-Desclouds 2, CH-1226 THONEX +41 (0)22 869 04 40 www.csti.ch -Original
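The TOTAL and MIN/MAX VAR / STDDEV figures quoted above are the summary row of the per-OSD utilisation report; for completeness, the commands that produce that view:

    # Per-OSD utilisation, with the TOTAL and MIN/MAX VAR summary at the bottom
    ceph osd df
    # Pool-level view of the same capacity
    ceph df detail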

[ceph-users] Re: MDS in ReadOnly and 2 MDS behind on trimming

2024-02-23 Thread Edouard FAZENDA
The logs of the MDS are at verbose 20; do you want me to provide them as an archive? Is there a way to compact all the logs? Best Regards, Edouard FAZENDA Technical Support Chemin du Curé-Desclouds 2, CH-1226 THONEX +41 (0)22 869 04 40 www.csti.ch -Original Message- Fr
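On the "compact the logs" question, one common approach is simply to compress the MDS log files before sharing them and to lower the debug level once the capture is done; a sketch assuming a cephadm deployment with the default log location (path and fsid are placeholders):

    # Bundle the MDS logs for sharing (path assumes the default cephadm layout)
    tar -czf mds-logs.tar.gz /var/log/ceph/<fsid>/ceph-mds.*.log
    # Return the MDS debug level to something lighter once the capture is complete
    ceph config set mds debug_mds 1/5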

[ceph-users] MDS in ReadOnly and 2 MDS behind on trimming

2024-02-23 Thread Edouard FAZENDA
ea on what could be my next steps to bring the cluster back to a healthy state? Help would be very much appreciated. Thanks a lot for your feedback. Best Regards, Edouard FAZENDA Technical Support Chemin du Curé-Desclouds 2, CH-1226 THONEX +41 (0)22 869 04 40 <https://www.csti.ch/>
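A non-authoritative sketch of the first checks for an MDS that has gone read-only and is behind on trimming; the filesystem name is a placeholder:

    # Overall health detail, including the read-only and trimming warnings
    ceph health detail
    # Rank, state and standby overview for the filesystem
    ceph fs status <fs_name>
    # Ask the active MDS rank whether it has recorded metadata damage
    ceph tell mds.<fs_name>:0 damage ls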