Honestly I’ve disabled that average object/pg warning in the past. ymmv.
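If I remember right that skew warning is driven by mon_pg_warn_max_object_skew, and setting it to 0 (or something huge) makes the check go away; I also think it has to be set on the mgr in recent releases, since that's where the health check is generated. Going from memory here, so double-check the option name/scope, but roughly:

ceph config set mgr mon_pg_warn_max_object_skew 0   # 0 should disable the skew check
ceph health mute MANY_OBJECTS_PER_PG                # or just mute the warning instead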
I would bump mon_max_pg_per_osd to 1000 and mon_target_pg_per_osd to 300
(rough commands at the bottom of this mail).

> HEALTH_WARN 1 pools have many more objects per pg than average; too many
> PGs per OSD (261 > max 250)
> [WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
>     pool cephfs.cephfs.data objects per pg (4084905) is more than 161.862
>     times cluster average (25237)
> [WRN] TOO_MANY_PGS: too many PGs per OSD (261 > max 250)
>
> MANY_OBJECTS_PER_PG:
>
> The cephfs.cephfs.data pool stores 449B in 130.72M objects? That is weird?

I get weirder things than that free in my breakfast cereal.

> We don't actually use that pool as far as I know, as we use media specific
> data pools for cephfs, but it is evidently being used for something
> regardless? It was created as part of the original cephfs file system where
> the command created both .meta and .data.

My limited understanding is that HEAD objects or such are always kept in that
first data pool, even if the tails are in a different pool.

> TOO_MANY_PGS:
>
> [root@lazy ~]# ceph config dump | grep mon | grep max
> mon  advanced  mon_max_pg_per_osd  1000
> osd  advanced  mon_max_pg_per_osd  1000
>
> Well above 250.

I'd set it at global scope just to be sure; despite the name I think it's the
mgr that acts on it. Honestly, most things can be set at global scope unless
you have to set them differently for different daemons.

> There used to be a mon_pg_warn_max_per_osd key but that was removed back in
> Luminous

It was replaced by mon_max_pg_per_osd when the limit changed from a warning to
being enforced.

> so what controls this warning?
>
> [root@lazy ~]# ceph config get mon mon_pg_warn_max_per_osd
> Error ENOENT: unrecognized key 'mon_pg_warn_max_per_osd'
>
> Mvh.
>
> Torkil
>
> --
> Torkil Svensgaard
> Sysadmin
> MR-Forskningssektionen, afs. 714
> DRCMR, Danish Research Centre for Magnetic Resonance
> Hvidovre Hospital
> Kettegård Allé 30
> DK-2650 Hvidovre
> Denmark
> Tel: +45 386 22828
> E-mail: tor...@drcmr.dk
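P.S. Untested, but bumping the two options I mentioned at global scope should be
something along the lines of:

ceph config set global mon_max_pg_per_osd 1000
ceph config set global mon_target_pg_per_osd 300

Global scope covers mon, mgr and osd in one go, so you avoid the situation in
your config dump where only the mon and osd sections carry the override while
the daemon actually raising the warning may not see it.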