Hi all,
Pretty sure this is not the first time you've seen a thread like this.
Our cluster consists of 12 nodes / 153 OSDs, 1.2 PiB used, 708 TiB / 1.9 PiB avail.
The data pool has 2048 PGs, exactly the same number as when the cluster started. We have no issues with the cluster; everything runs as expected, and …
Hi Anthony,
I should have said, it’s replicated (3)
Best,
Nick
Sent from my phone, apologies for any typos!
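For context, a quick sanity check with the numbers above (2048 PGs, size-3 replication, 153 OSDs): 2048 × 3 / 153 is roughly 40 PGs per OSD, well below the commonly cited target of around 100 PGs per OSD. A rough sketch of how to confirm the current state on the cluster; the pool name "data" is only a placeholder:

ceph osd pool get data pg_num        # current pg_num of the pool ("data" is a placeholder name)
ceph osd df                          # per-OSD PG counts in the PGS column
ceph osd pool autoscale-status       # what the pg_autoscaler would suggest for pg_num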
From: Anthony D'Atri
Sent: Tuesday, March 5, 2024 7:22:42 PM
To: Nikolaos Dandoulakis
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Number of pgs

…of more pgs?
Best,
Nick
From: Anthony D'Atri
Sent: 05 March 2024 19:54
To: Nikolaos Dandoulakis
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Number of pgs
Hi,
After upgrading our cluster to 17.2.6, all OSDs appear to have "osd_op_queue":
"mclock_scheduler" (it used to be wpq). As several OSDs are reporting
unjustifiably heavy load, we would like to revert this back to "wpq", but any
attempt yields the following error:
root@store14:~# ceph tell osd.
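As far as I can tell, osd_op_queue is not a runtime-changeable option, which is why ceph tell refuses it. The usual route is to set it in the central config database and restart the OSDs. A rough sketch, assuming a package-based (non-cephadm) deployment and using osd.0 only as an example:

ceph config set osd osd_op_queue wpq     # persist the setting for all OSDs in the mon config db
ceph config get osd.0 osd_op_queue       # confirm the stored value for one OSD
systemctl restart ceph-osd.target        # the scheduler only switches back after an OSD restart

On a cephadm cluster the restart would go through the orchestrator instead of systemctl.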