David, I have 15 pools:

# ceph osd lspools | sed 's/,/\n/g'
0 rbd
1 cephfs_data
2 cephfs_metadata
3 vmimages
14 .rgw.root
15 default.rgw.control
16 default.rgw.data.root
17 default.rgw.gc
18 default.rgw.log
19 default.rgw.users.uid
20 default.rgw.users.keys
21 default.rgw.users.email
22 default.rgw.meta
23 default.rgw.buckets.index
24 default.rgw.buckets.data

# ceph -s | grep -Eo '[0-9]+ pgs'
3520 pgs

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
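For reference, the arithmetic behind the "too many PGs per OSD" warning works out roughly as below (a sketch only; the replica size and OSD count never appear in this thread, so the 3 and 28 here are assumptions that happen to reproduce the reported number):

PGs per OSD ≈ (total pg_num across all pools) * (replica size) / (number of OSDs)

# e.g. with size 3 on every pool and 28 OSDs (both assumed, not taken from this thread):
echo $(( 3520 * 3 / 28 ))    # prints 377, matching "too many PGs per OSD (377 > max 300)"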
________________________________
From: David Turner [mailto:david.tur...@storagecraft.com]
Sent: Thursday, September 22, 2016 8:57 AM
To: Andrus, Brian Contractor <bdand...@nps.edu>; ceph-users@lists.ceph.com
Subject: RE: too many PGs per OSD when pg_num = 256??

Forgot the + for the regex.

ceph -s | grep -Eo '[0-9]+ pgs'

David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________
From: David Turner
Sent: Thursday, September 22, 2016 9:53 AM
To: Andrus, Brian Contractor; ceph-users@lists.ceph.com
Subject: RE: too many PGs per OSD when pg_num = 256??

How many pools do you have? How many pgs does your total cluster have, not just your rbd pool?

ceph osd lspools
ceph -s | grep -Eo '[0-9] pgs'

My guess is that you have other pools with pgs and the cumulative total of pgs per osd is too many.

________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Andrus, Brian Contractor [bdand...@nps.edu]
Sent: Thursday, September 22, 2016 9:33 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] too many PGs per OSD when pg_num = 256??

All,

I am getting a warning:

health HEALTH_WARN
    too many PGs per OSD (377 > max 300)
    pool cephfs_data has many more objects per pg than average (too few pgs?)

yet, when I check the settings:

# ceph osd pool get rbd pg_num
pg_num: 256
# ceph osd pool get rbd pgp_num
pgp_num: 256

How does something like this happen? I did create a radosgw several weeks ago and have put a single file in it for testing, but that is it. It only started giving the warning a couple days ago.

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
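To see where all of those PGs come from, something along these lines should do it (a rough sketch; the awk field handling assumes the usual "pool N 'name' ... pg_num X ..." line format of ceph osd dump, which can vary slightly between releases):

# print each pool's pg_num and the cluster-wide total
ceph osd dump | awk '/^pool/ {
    for (i = 1; i <= NF; i++)
        if ($i == "pg_num") { print $3, $(i+1); sum += $(i+1) }
} END { print "total pg_num:", sum }'

# per-OSD placement-group counts, if your release has it (see the PGS column)
ceph osd df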