Oh wait, I got confused: I thought you meant the mon_max_pg_per_osd setting. Please ignore my last comment. 😁
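
For reference, both values can be checked from the central config database; this is just a quick sketch, assuming a release that supports 'ceph config get' (the defaults in the comments are from memory):

  # target PGs per OSD used by the pg_autoscaler (default 100)
  ceph config get mon mon_target_pg_per_osd
  # per-OSD limit above which the cluster starts warning (default 250)
  ceph config get mon mon_max_pg_per_osd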

Quoting Anthony D'Atri <anthony.da...@gmail.com>:

Default is 100, no? I have a PR open to double it.

The data pool has the autoscaler disabled, so you would need to either enable it or increase pg_num manually.
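
For example (just a sketch; the pool name is from your output below, and the pg_num value is only an illustration, not a sizing recommendation):

  # either turn the autoscaler back on for the pool
  ceph osd pool set wizard_data pg_autoscale_mode on
  # or raise pg_num manually, e.g. to the next power of two
  ceph osd pool set wizard_data pg_num 256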

On Jan 3, 2025, at 11:03 AM, Eugen Block <ebl...@nde.ag> wrote:

I wouldn’t decrease mon_target_pg_per_osd below the default (250); Anthony is usually someone who recommends the opposite and wants to increase the default. So I’m not sure what exactly he’s aiming for… 😉

Quoting Nicola Mori <m...@fi.infn.it>:

So you suggest running this command:

 ceph config set global mon_target_pg_per_osd 200

right? If I understood the meaning of this parameter correctly, it only has an effect when automated PG scaling is on, but that is currently off for the data pool:

 # ceph osd pool get wizard_data pg_autoscale_mode
 pg_autoscale_mode: off

So should I proceed anyway? Sorry to bother you, but I'm not sure I understood your suggestion, and I fear I could make a mistake at this point.



_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
