Hi,

On 4/1/25 10:03, Michel Jouvin wrote:
Hi Burkhard,

Thanks for your answer. Your explanation matches our observations well, in particular the fact that new misplaced objects are added when we fall below something like 0.5% of misplaced objects. What is still not clear to me is that 'ceph osd pool ls detail' for the modified pool does not report the new pg_num target (2048) but the old one (256):

pool 62 'ias-z1.rgw.buckets.data' erasure profile k9_m6_host size 15 min_size 10 crush_rule 3 object_hash rjenkins pg_num 323 pgp_num 307 pg_num_target 256 pgp_num_target 256 autoscale_mode off last_change 439681 lfor 0/439680/439678 flags hashpspool,bulk max_bytes 200000000000000 stripe_width 36864 application rgw

- Is this caused by the fact that the autoscaler was still on when I increased the number of PGs, and that I only disabled it on the pool ~12h after entering the command to increase them?

This seems to be the case. The pg(p)_num_target settings are the number of PGs the pool _should_ have; pg(p)_num is the number of PGs the pool currently has. So the cluster is not splitting PGs, but merging them. If you want 2048, you should increase pg_num again.
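
For example (just a sketch, using the pool name from your output; adjust as needed):

  ceph osd pool set ias-z1.rgw.buckets.data pg_num 2048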

There are also settings for the autoscaler, e.g. 'pg_num_min'. You can use them to prevent the autoscaler from switching the pool back to 256 PGs again.
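
For example (again just a sketch; pg_num_min sets a lower bound the autoscaler will not shrink the pool below):

  ceph osd pool set ias-z1.rgw.buckets.data pg_num_min 2048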


- Or was it a mistake of mine to only increase pg_num and not pgp_num? According to the doc that I just read again, both should be increased at the same time, otherwise it does not produce the expected result. If that is the case, should I just re-enter the command to increase both pg_num and pgp_num? (and wait for the resulting remapping!)

In current Ceph releases only pg_num can be changed; pgp_num is adjusted automatically.
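
You can watch pgp_num catching up with pg_num, e.g. with something like (same pool name assumed as above):

  ceph osd pool get ias-z1.rgw.buckets.data pg_num
  ceph osd pool get ias-z1.rgw.buckets.data pgp_num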


Best regards,

Burkhard Linke

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io