Yes, I disabled the autoscaler manually because I feared it could
increase the count too much and that my old machines could go OOM.
Anyway, I issued `ceph config set global mon_target_pg_per_osd 200` but
nothing happened. I guess this is expected given that the autoscaler is
off. Should I enable it?
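As far as I understand, the current value and the per-pool autoscaler
state can be checked with something like the following, just to confirm
the setting actually took effect:

ceph config get mon mon_target_pg_per_osd
ceph osd pool autoscale-status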
I would enable it, yes. Historically the autoscaler has arguably aimed too low,
not too high, and I think with BlueStore OSDs the OOM potential is very low.
Did I suggest enabling the bulk flag for all pools (the .mgr pool doesn’t
matter)?
ceph osd pool set <pool> bulk true
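For example, with your data pool (name taken from the quoted output below),
and then checking what the autoscaler would do:

ceph osd pool set wizard_data bulk true
ceph osd pool get wizard_data bulk
ceph osd pool autoscale-status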
> On Jan 3, 2025, at 1:59, Nicola Mori wrote:
> So you suggest running this command:
>
> ceph config set global mon_target_pg_per_osd 200
>
> right? If I understood the meaning of this parameter correctly, it is only
> meaningful when automated PG scaling is on, but it is currently off for
> the data pool:
>
> # ceph osd pool get wizard_data pg_autoscale_mode
Hello, ceph users,
TL;DR: how can I look into ceph cluster write latency issues?
Details: we have an HDD-based cluster (with NVMe for metadata), about 20 hosts,
2 OSDs per host, mostly used as RBD storage for QEMU/KVM virtual machines.
From time to time our users complain about write latency.
First of all, thank you so much again for the time you spend trying to
help me; it's much appreciated.
Then:
- here's the dump of the CRUSH rules:
# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "type": 1,
        "steps": [
>
> First of all, thank you so much again for the time you spend trying to
> help me; it's much appreciated.
Prego. In my book I assert that the community is a core Ceph component, and I
tell people all the time that it’s one of many reasons to choose Ceph over
alternatives. Like Red Gre
I wouldn’t decrease mon_target_pg_per_osd below the default (250); Anthony
is usually someone who recommends the opposite and wants to increase the
default. So I’m not sure what exactly he’s aiming for… 😉
Quoting Nicola Mori:
> So you suggest running this command:
> ceph config set global mon_target_pg_per_osd 200
So you suggest running this command:

ceph config set global mon_target_pg_per_osd 200

right? If I understood the meaning of this parameter correctly, it is only
meaningful when automated PG scaling is on, but it is currently off for
the data pool:

# ceph osd pool get wizard_data pg_autoscale_mode
Default is 100, no? I have a PR open to double it.

The data pool has the autoscaler disabled, so you would need to either
enable it or increase pg_num manually.
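Something along these lines, using your pool name; the pg_num value here is
only an example and should be sized for your cluster:

ceph osd pool set wizard_data pg_autoscale_mode on

or, alternatively:

ceph osd pool set wizard_data pg_num 256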
> On Jan 3, 2025, at 11:03 AM, Eugen Block wrote:
>
> I wouldn’t decrease mon_target_pg_per_osd below the default (250); Anthony
> is
Oh wait, I got confused; I thought you meant the mon_max_pg_per_osd
setting. Please ignore my last comment. 😁
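For the record, as far as I know the two options I mixed up can be
inspected with:

ceph config get mon mon_target_pg_per_osd
ceph config get mon mon_max_pg_per_osd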
Quoting Anthony D'Atri:
> Default is 100, no? I have a PR open to double it.
>
> The data pool has the autoscaler disabled, so you would need to either
> enable it or increase pg_num manually.