How many pools do you have? What does your CRUSH map look like?
Wild guess: it's related to your tiny tiny disks (10 GiB) and the
distribution you are seeing in df is due to uneven db/metadata allocations.
Paul
--
Paul Emmerich
These OSDs are far too small at only 10 GiB for the balancer to do any
work. It's not uncommon for metadata like OSDMaps to exceed that size
in error states, and in any real deployment a single PG will be at
least that large.
There are probably parameters you can tweak to try and make it work
> 2019-05-29 17:06:54.327 7f40cd3e8700  4 mgr get_config get_config key:
> mgr/balancer/max_misplaced
> 2019-05-29 17:06:54.327 7f40cd3e8700  4 mgr[balancer] Mode upmap, max
> misplaced 0.50
> 2019-05-29 17:06:54.327 7f40cd3e8700  4 mgr[balancer] do_upmap
> 2019-05-29 17:06:54.327 7f40cd3e8700  4 mgr get_config get_config key:
> mgr/balancer/upmap_max_iterations
> 2019-05-29 17:06:54.327 7f40cd3e8700  4 mgr get_config get_config key:
> mgr/balancer/upmap_max_deviation
> 2019-05-29 17:06:54.327 7f40cd3e8700  4 mgr[balancer] pools ['rbd']
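For reference, those are the config keys the balancer module reads (they show up in the log above). On a Luminous-era cluster like this they can be set via config-key; the values below are purely illustrative, not recommendations:

```shell
# Illustrative values only; tune with care. On newer releases these live
# under "ceph config set mgr mgr/balancer/<key> <value>" instead.
ceph config-key set mgr/balancer/max_misplaced 0.07
ceph config-key set mgr/balancer/upmap_max_deviation 1
ceph config-key set mgr/balancer/upmap_max_iterations 100

# Reload the module so it picks up the new values.
ceph mgr module disable balancer && ceph mgr module enable balancer
```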
>             "features": "0x3ffddff8ffacfffb",
>             "release": "luminous",
>             "num": 1
>         }
>     ],
>     "mgr": [
>         {
>             "features": "0x3ffddff8ffacfffb",
>             "release": "luminous",
>             "num": 3
>         }
>     ]
> }
From: Oliver Freyermuth
To: ceph-users@lists.ceph.com
Date: 05/29/2019 11:13 AM
Subject: [EXTERNAL] Re: [ceph-users] Balancer: uneven OSDs
Hi Tarek,
what's the output of "ceph balancer status"?
In case you are using "upmap" mode, you must make sure to have a
min-client-compat-level of at least Luminous:
http://docs.ceph.com/docs/mimic/rados/operations/upmap/
Of course, please be aware that your clients must be recent enough.
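Checking that boils down to a few commands from the upmap page linked above (a sketch to run against your own cluster):

```shell
# What do the connected clients and daemons report?
# (cf. the "ceph features" output quoted earlier in the thread)
ceph features

# Current requirement on the cluster side:
ceph osd dump | grep min_compat_client

# upmap requires at least luminous clients; only force this once
# "ceph features" shows nothing older is still connected.
ceph osd set-require-min-compat-client luminous
```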
-----Original Message-----
From: Tarek Zegar [mailto:tze...@us.ibm.com]
Sent: woensdag 29 mei 2019 17:52
To: ceph-users
Subject: [ceph-users] Balancer: uneven OSDs
Can anyone help with this? Why can't I optimize this cluster? The PG
counts and data distribution are way off.
I enabled the balancer plugin and even tried to manually invoke it, but
it won't allow any changes. Looking at ceph osd df, it's not even at all.
Thoughts?
root@hostadmin:~# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE USE AVAIL %USE VAR PGS
 1   hdd 0.00980        0  0 B 0 B   0
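For a quick look at how uneven things are, %USE is the column the balancer tries to even out. A tiny sketch in the same `ceph osd df` layout, with invented numbers purely for illustration (per the advice above, the real fix for 10 GiB OSDs is bigger OSDs):

```shell
# Hypothetical rows in "ceph osd df" column order; values are made up.
# Columns: ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
cat > osd_df.txt <<'EOF'
0 hdd 0.00980 1.00000 10GiB 8.0GiB 2.0GiB 80.0 1.45 170
1 hdd 0.00980 1.00000 10GiB 3.0GiB 7.0GiB 30.0 0.55 64
2 hdd 0.00980 1.00000 10GiB 5.5GiB 4.5GiB 55.0 1.00 110
EOF

# Report the min/max %USE and their spread; the balancer's goal is to
# shrink this gap toward its configured max deviation.
awk '{u=$8+0; if (NR==1) {min=u; max=u}
      if (u<min) min=u; if (u>max) max=u}
     END {printf "min=%.1f max=%.1f spread=%.1f\n", min, max, max-min}' osd_df.txt
```

On a live cluster the same idea applies to `ceph osd df --format json`, which is easier to parse than the table output.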