I've been too impatient: after a few minutes the autoscaler kicked in, and the situation is now the following:

# ceph osd pool autoscale-status
POOL             SIZE    TARGET SIZE  RATE                RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             246.1M               3.0                 323.8T        0.0000                                 1.0   1                   on         False
wizard_metadata  1176M                3.0                 323.8T        0.0000                                 4.0   16      16384       on         True
wizard_data      80443G               1.3333333730697632  323.8T        0.3235                                 1.0   2048                on         True
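(If I understand the columns correctly, RATIO is SIZE x RATE / RAW CAPACITY, i.e. the fraction of raw capacity a pool consumes once replication/EC overhead is included: for wizard_data that is 80443G x 1.3333 ≈ 104.7T, and 104.7T / 323.8T ≈ 0.3235, which matches the value above.)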

So it seems that the data pool is increasing its number of PGs from 512 to 2048 (currently there are 711 PGs in total across the three pools).
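For anyone following along, I'm watching the split progress with the standard commands:

# ceph osd pool get wizard_data pg_num
# ceph osd pool get wizard_data pgp_num

and ceph osd pool ls detail should also show the target values while the change is in flight, if I remember correctly.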

I'll report back after the backfill operations finish.
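In the meantime I'm keeping an eye on the recovery with:

# ceph -s
# ceph progress

(the latter needs the progress mgr module, which as far as I know is on by default on recent releases).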

Nicola
