Hi

Ceph 19.2.2

Got a couple of new warnings when I doubled pg_num for the cephfs.hdd.data pool:
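For reference, the change itself was just a pg_num bump on the pool, roughly like this (the 4096 target is assumed here from "doubled"; the ceph df output below shows the pool mid-split at 3098 PGs):

[root@lazy ~]# ceph osd pool set cephfs.hdd.data pg_num 4096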

[root@lazy ~]# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average; too many PGs per OSD (261 > max 250)
[WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
    pool cephfs.cephfs.data objects per pg (4084905) is more than 161.862 times cluster average (25237)
[WRN] TOO_MANY_PGS: too many PGs per OSD (261 > max 250)

[root@lazy ~]# ceph df
--- RAW STORAGE ---
CLASS        SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd       4.1 PiB  1.6 PiB  2.5 PiB   2.5 PiB      61.12
nvme      210 TiB   58 TiB  152 TiB   152 TiB      72.49
nvmebulk  196 TiB  149 TiB   47 TiB    47 TiB      23.83
ssd        49 TiB   32 TiB   16 TiB    16 TiB      33.66
TOTAL     4.5 PiB  1.8 PiB  2.7 PiB   2.7 PiB      59.77

--- POOLS ---
POOL                ID    PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
rbd                  4   2048  111 TiB   29.39M  288 TiB  25.04    288 TiB
libvirt              5    256  3.3 TiB  872.28k  6.8 TiB  37.73    3.7 TiB
rbd_internal         6   2048  103 TiB   32.71M  243 TiB  21.96    288 TiB
.mgr                 8      1  4.9 GiB    1.26k  2.1 GiB   0.02    5.6 TiB
rbd_ec              10     32  9.0 MiB       27  4.6 MiB      0    3.7 TiB
rbd_ec_data         11  16384  1.1 PiB  284.37M  1.4 PiB  63.10    576 TiB
rbd.nvme            23   2048   95 TiB   25.16M  147 TiB  77.45     21 TiB
.nfs                25     32   20 KiB       73  286 KiB      0    3.7 TiB
cephfs.cephfs.meta  31    128   15 GiB    3.05M   46 GiB   0.40    3.7 TiB
cephfs.cephfs.data  32    512    449 B  130.72M   48 KiB      0    3.7 TiB
cephfs.nvme.data    34     32  977 GiB     250k  122 GiB   0.28     21 TiB
cephfs.ssd.data     35     32  853 GiB    1.10M  1.9 TiB  14.65    3.7 TiB
cephfs.hdd.data     37   3098  208 TiB  175.22M  429 TiB  33.18    384 TiB
rbd.ssd             39     64  1.6 TiB  433.36k  4.3 TiB  27.65    3.7 TiB
rbd.ssd.ec          43     32  2.5 KiB        5   20 KiB      0    3.7 TiB
rbd.ssd.ec.data     44     32  1.0 TiB  269.92k  2.0 TiB  14.89    5.0 TiB
rbd.nvmebulk.ec     47     32  3.0 MiB        6  6.1 MiB      0    8.3 TiB
rbd.nvmebulk.data   48    512   23 TiB    6.00M   46 TiB  64.98     11 TiB

MANY_OBJECTS_PER_PG:

The cephfs.cephfs.data pool stores 449 B in 130.72M objects? That seems weird.

We don't actually use that pool, as far as I know, since we use media-specific data pools for cephfs, but it is evidently being used for something regardless? It was created as part of the original cephfs file system, where the command created both the .meta and .data pools.
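If anyone wants to poke at it, something like this should show which data pools the file system actually has attached and a small sample of the objects in that pool (pool name taken from the ceph df output above; the listing is cut short since the pool holds ~130M objects):

[root@lazy ~]# ceph fs ls
[root@lazy ~]# rados -p cephfs.cephfs.data ls | head -5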

TOO_MANY_PGS:

[root@lazy ~]# ceph config dump | grep mon| grep max
mon  advanced  mon_max_pg_per_osd  1000
osd  advanced  mon_max_pg_per_osd  1000

Well above 250.

There used to be a mon_pg_warn_max_per_osd key, but that was removed back in Luminous, so what controls this warning?

[root@lazy ~]# ceph config get mon mon_pg_warn_max_per_osd
Error ENOENT: unrecognized key 'mon_pg_warn_max_per_osd'
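Since the config dump above only covers the mon and osd sections, maybe the threshold is read from a different section or daemon. Something like this should show what another section and a running daemon actually see (mon.lazy is just an example daemon name for this host):

[root@lazy ~]# ceph config get mgr mon_max_pg_per_osd
[root@lazy ~]# ceph config show mon.lazy | grep mon_max_pg_per_osd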

Best regards,

Torkil

--
Torkil Svensgaard
Sysadmin
MR-Forskningssektionen, afs. 714
DRCMR, Danish Research Centre for Magnetic Resonance
Hvidovre Hospital
Kettegård Allé 30
DK-2650 Hvidovre
Denmark
Tel: +45 386 22828
E-mail: tor...@drcmr.dk
