Hello,
We are running a Ceph Pacific (16.2.10) cluster with the balancer module enabled,
but the usage of some OSDs keeps growing and has now reached
mon_osd_nearfull_ratio (we use the default of 85%), so we think the balancer
module should be doing some balancing work.
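For reference, these are the standard commands we use to see how close the OSDs are to the configured ratios (stock Ceph CLI, nothing specific to our setup):

```shell
# Per-OSD utilisation, including %USE and VAR (deviation from the mean).
ceph osd df tree

# The currently configured full / backfillfull / nearfull ratios.
ceph osd dump | grep ratio
```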
So I checked our balancer configuration:
[root@ceph-1 ~]# ceph balancer status
{
"active": true,
"last_optimize_duration": "0:00:00.052548",
"last_optimize_started": "Fri Nov 17 17:09:57 2023",
"mode": "upmap",
We have a volume in our cluster:
[root@ceph-1.lab-a ~]# rbd ls volume-ssd
volume-8a30615b-1c91-4e44-8482-3c7d15026c28
[root@ceph-1.lab-a ~]# rbd rm volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
Removing image: 0% complete...failed.
rbd: error opening image volume-8a30615b-1c91-4e44-8482-3c7d15026c28
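If it helps with diagnosis, these are the stock rbd commands we can run next to gather more detail (nothing below is specific to our setup):

```shell
# Show image metadata; this fails with the same open error if the
# image header object is damaged or missing.
rbd info volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28

# List watchers; a stale client watch is a common reason 'rbd rm' fails.
rbd status volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
```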