A few things to note: it is recommended that your PG count per pool be a power of 2. Also, the number of PGs per OSD is an aggregate across all of your pools, so if you're planning to add 3 more pools for cephfs and other things, you really want to be mindful of how many PGs each pool gets. The math you did for your default number, "(100 * 6) / 2 = 300", is based on 6 OSDs; you show that you currently have 6 OSDs up and in, but you have 12 OSDs in the cluster. You'll need to raise the pg_num and pgp_num settings for your pools when you actually bring those additional OSDs up and in.
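For example, a back-of-the-envelope recalculation once all 12 OSDs are up and in (assuming the same target of ~100 PGs per OSD and replica size 2 from your formula, and that this stays your only pool):

    (100 * 12) / 2 = 600  ->  round to a nearby power of 2, e.g. 512

If you do add more pools later, that total has to be divided among them rather than given to each pool in full.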
BTW, in case you didn't catch it: to actually resolve this problem, you first need to increase pg_num (the total number of PGs) for the pool. Once those PGs have finished creating and the cluster is back to normal, increase pgp_num to match so that the new PGs actually start being used by the cluster. At that point the warning will go away. (A sketch of the commands is below, after the quoted thread.)

On Wed, Jun 14, 2017 at 10:33 AM Jean-Charles LOPEZ <jeanchlo...@mac.com> wrote:
> Hi,
>
> see comments below.
>
> JC
>
> On Jun 14, 2017, at 07:23, Stéphane Klein <cont...@stephane-klein.info> wrote:
>
> Hi,
>
> I have this parameter in my Ansible configuration:
>
>     pool_default_pg_num: 300 # (100 * 6) / 2 = 300
>
> But I have this error:
>
>     # ceph status
>         cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>          health HEALTH_ERR
>                 73 pgs are stuck inactive for more than 300 seconds
>                 22 pgs degraded
>                 9 pgs peering
>                 64 pgs stale
>                 22 pgs stuck degraded
>                 9 pgs stuck inactive
>                 64 pgs stuck stale
>                 31 pgs stuck unclean
>                 22 pgs stuck undersized
>                 22 pgs undersized
>                 too few PGs per OSD (16 < min 30)
>          monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
>                 election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
>          osdmap e41: 12 osds: 6 up, 6 in; 8 remapped pgs
>                 flags sortbitwise,require_jewel_osds
>           pgmap v79: 64 pgs, 1 pools, 0 bytes data, 0 objects
>
> As this line shows, you only have 64 pgs in your cluster so far, hence the warning. This parameter must be set before you deploy your cluster or before you create your first pool.
>
>                 30919 MB used, 22194 GB / 22225 GB avail
>                       33 stale+active+clean
>                       22 stale+active+undersized+degraded
>                        9 stale+peering
>
> I have 2 hosts with 3 partitions, then 3 x 2 OSD ?
>
> Why 16 < min 30 ? I set 300 pg_num
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
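A hedged sketch of the commands for the steps described above, assuming the single pool shown in "ceph status" is the default "rbd" pool (substitute your real pool name and a pg_num sized for your final OSD count):

    # 1) raise the total number of PGs for the pool
    ceph osd pool set rbd pg_num 512

    # 2) wait for the new PGs to finish creating and the cluster to settle
    ceph -s

    # 3) raise pgp_num to match so the new PGs are actually used for placement
    ceph osd pool set rbd pgp_num 512

Once pgp_num matches pg_num and the cluster rebalances, the "too few PGs per OSD" warning should clear.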
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com