Indeed, I have no explanation for that. I would personally target a pg_num for 
the metadata pool equal to the number of OSDs, rounded up to the next power of two.
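
Something along these lines (an untested sketch; it assumes a bash shell with 
python3 available for the rounding, and uses the wizard_metadata pool from 
your dump below):

  # count the OSDs and round up to the next power of two
  num_osds=$(ceph osd ls | wc -l)
  pg_target=$(python3 -c "import math; n=max(1, $num_osds); print(2 ** math.ceil(math.log2(n)))")

  # pin the metadata pool there; pg_num_min keeps the autoscaler
  # from shrinking it again later
  ceph osd pool set wizard_metadata pg_num "$pg_target"
  ceph osd pool set wizard_metadata pg_num_min "$pg_target"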

The way I believe this normally works is that the autoscaler bumps pg_num for a 
pool, and the mon/mgr gradually increases pgp_num to match.
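
To watch that in progress you can compare the two values, e.g. (again just a 
sketch):

  ceph osd pool autoscale-status
  ceph osd pool get wizard_metadata pg_num
  ceph osd pool get wizard_metadata pgp_num

If pgp_num trails pg_num, the mon/mgr is still catching up; in your dump below 
the two already match, so that part looks fine.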

> On Jan 14, 2025, at 11:53 AM, Nicola Mori <m...@fi.infn.it> wrote:
> 
> Here is it:
> 
> # ceph osd dump | grep pool
> pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins 
> pg_num 1 pgp_num 1 autoscale_mode on last_change 191543 flags hashpspool 
> stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 
> 150.00
> pool 2 'wizard_metadata' replicated size 3 min_size 2 crush_rule 0 
> object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 
> 254279 lfor 0/8092/8090 flags hashpspool,bulk stripe_width 0 
> pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs 
> read_balance_score 7.86
> pool 3 'wizard_data' erasure profile k6_m2_host size 8 min_size 7 crush_rule 
> 1 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode on last_change 
> 266071 lfor 0/0/265366 flags hashpspool,ec_overwrites,bulk stripe_width 24576 
> application cephfs
> 
> 
> According to the documentation:
> 
> NEW PG_NUM (if present) is the value that the system recommends that the 
> pg_num of the pool should be. It is always a power of two, and it is present 
> only if the recommended value varies from the current value by more than the 
> default factor of 3
> 
> So it's not the target, but just a recommendation, if I understand correctly. 
> I find it quite strange that 16384 PGs are recommended for a pool hosting 
> ~1 GB of data.
> 
> Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
