Hello,
We have a cluster with 21 nodes, each with 12 x 18 TB drives and 2 NVMe
devices for DB/WAL.
We need to add more nodes.
The last time we did this, pg_num stayed at 1024, so the number of PGs per
OSD decreased. We are currently at about 43 PGs per OSD.
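
For reference, my rough math (assuming the large EC 8+3 pool mentioned
below holds essentially all of the data, so each PG places k+m = 11 shards,
one shard per OSD):

    osds=$((21 * 12))                             # 252 OSDs in total
    echo $((1024 * 11 / osds))                    # ~44, close to the 43 we see
    # pg_num needed for ~100 PGs/OSD (then rounded to a power of two):
    echo $((100 * osds / 11))                     # ~2290 -> 2048 or 4096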
Does PG auto-scaling work correctly in Ceph 17.2.5 (Quincy)?
Should we increase the number of PGs before adding nodes?
Should we keep PG auto-scaling active?
If we disable auto-scaling, should we increase pg_num to reach roughly
100 PGs per OSD?
For context: this cluster is used mainly for a large EC pool (8+3).
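
To make the last question concrete, this is roughly what we have in mind
(the pool name "ecpool" is a placeholder for our actual EC pool, and 2048
comes from the arithmetic above, giving ~90 PGs/OSD at the current size):

    ceph osd pool autoscale-status                   # what the autoscaler currently recommends
    ceph osd pool set ecpool pg_autoscale_mode off   # keep it from undoing a manual change
    ceph osd pool set ecpool pg_num 2048             # raise pg_num toward ~100 PGs/OSD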
Thank you for your assistance.