[ceph-users] Number of pgs

2024-03-05 Thread Nikolaos Dandoulakis
Hi all, Pretty sure this is not the first time you've seen a thread like this. Our cluster consists of 12 nodes / 153 OSDs / 1.2 PiB used, 708 TiB of 1.9 PiB avail. The data pool is 2048 PGs, exactly the same number as when the cluster started. We have no issues with the cluster, everything runs as expected an…
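
For reference (none of this is in the original message), the usual starting point for this question is to look at how many PGs each OSD actually carries and what the autoscaler thinks of the pool; a minimal sketch, assuming the pool is simply named "data":

  ceph osd df                          # per-OSD usage, including a PGS column
  ceph osd pool get data pg_num        # current pg_num for the pool
  ceph osd pool autoscale-status       # the autoscaler's view/recommendation, if enabled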

[ceph-users] Re: Number of pgs

2024-03-05 Thread Nikolaos Dandoulakis
Hi Anthony, I should have said, it's replicated (3). Best, Nick. Sent from my phone, apologies for any typos!
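
For context (again, not part of the thread itself), with size-3 replication the per-OSD load is easy to estimate: 2048 PGs × 3 replicas spread over 153 OSDs is roughly 40 PG replicas per OSD, well below the commonly cited target of around 100 per OSD, which is why raising pg_num comes up at all. A trivial check of that arithmetic:

  echo $(( 2048 * 3 / 153 ))   # ≈ 40 PG replicas per OSD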

[ceph-users] Re: Number of pgs

2024-03-05 Thread Nikolaos Dandoulakis
… of more pgs? Best, Nick
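
If the outcome were to raise the PG count, the change itself is a single pool setting; a hedged sketch, assuming the pool is named "data" and a hypothetical target of 4096:

  ceph osd pool set data pg_num 4096
  # On Nautilus and later the mgr ramps pgp_num up to match automatically;
  # on older releases it has to be set explicitly:
  ceph osd pool set data pgp_num 4096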

[ceph-users] Error EPERM: error setting 'osd_op_queue' to 'wpq': (1) Operation not permitted

2023-09-18 Thread Nikolaos Dandoulakis
Hi, After upgrading our cluster to 17.2.6, all OSDs appear to have "osd_op_queue": "mclock_scheduler" (it used to be wpq). As several OSDs are reporting unjustifiably heavy load, we would like to revert this back to "wpq", but any attempt yields the following error: root@store14:~# ceph tell osd.…
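
Not visible in the truncated preview, but the usual explanation for this EPERM is that osd_op_queue is only read when an OSD starts, so it cannot be switched at runtime via ceph tell. A minimal sketch of the commonly suggested workaround, assuming that is indeed the cause: set the value in the central config database and restart the OSDs (the restart unit shown is for a package-based install; adjust for cephadm):

  ceph config set osd osd_op_queue wpq
  systemctl restart ceph-osd.target     # run on each OSD host, one host at a time
  ceph config get osd osd_op_queue      # confirm the new value afterwards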