Hi Gagan,
You have 30 OSDs with 12 pools and only 6048 PGs, so some of your pools must
have pretty low PG counts. I think the monitors now look for a 'skew' in
those numbers and issue a warning when a pool ends up with 'too many'
objects per placement group.
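If I remember the option names right, the thresholds behind that check are
tunable on the monitors, something like this in ceph.conf (the values shown
are just the defaults as far as I recall, so treat it as a sketch):

[mon]
    mon pg warn max object skew = 10
    mon pg warn min objects = 10000

i.e. once a pool holds enough objects and its objects-per-PG ratio sits more
than that skew factor above the cluster average, it gets flagged.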
Run: ~$ ceph osd dump | grep 'pg_num'
And see the docs:
http://ceph.com/docs/master/rados/operations/placement-groups/
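If you'd rather have the value listed per pool name, a quick loop along these
lines should work too (rough sketch only, it assumes pool names without
spaces):

for p in $(ceph osd lspools | tr ',' '\n' | awk '{print $2}'); do
    echo -n "$p: "; ceph osd pool get $p pg_num
done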
You can currently increase the number of PGs/PGPs of a pool but not
decrease them, so take care if you need to rebalance, as higher
numbers increase CPU load.
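For example, to take a pool to 512 PGs (the pool name and the target number
here are only placeholders, pick what fits your data and OSD count):

ceph osd pool set <poolname> pg_num 512
ceph osd pool set <poolname> pgp_num 512

Set pg_num first and bump pgp_num to match afterwards, otherwise the newly
split PGs won't actually be rebalanced across the OSDs.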
-Michael
However, the OSDs start when I use ceph-osd -c /etc/ceph/ceph.conf -i
<osdnum>, but not through service ceph or /etc/init.d/ceph. After I
started all the OSDs, a Ceph warning came up saying that a
"pool has too few pgs". I deleted that pool as there wasn't any
important data in it, but the same warning now comes up on a different pool.
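To illustrate (osd.0 is just one example, it is the same for every OSD):
starting a daemon by hand with

ceph-osd -c /etc/ceph/ceph.conf -i 0

works, while something like

service ceph start osd.0

does not bring it up. Below is the output of ceph -s before and after
deleting the first pool: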
[root@ceph1 ~]# ceph -s
cluster c0459c67-e2cd-45f7-b580-dec1afc9dea5
health HEALTH_WARN pool vmware-backups has too few pgs
monmap e3: 3 mons at
{a=192.168.6.101:6789/0,b=192.168.6.102:6789/0,c=192.168.6.103:6789/0},
election epoch 17684, quorum 0,1,2 a,b,c
mdsmap e28128: 1/1/1 up {0=a=up:active}, 1 up:standby
osdmap e7053: 30 osds: 30 up, 30 in
pgmap v16242514: 6348 pgs, 12 pools, 9867 GB data, 2543 kobjects
19775 GB used, 58826 GB / 78602 GB avail
6343 active+clean
5 active+clean+scrubbing+deep
client io 0 B/s rd, 617 kB/s wr, 81 op/s
[root@ceph1 ~]# ceph osd pool delete vmware-backups vmware-backups
--yes-i-really-really-mean-it
pool 'vmware-backups' deleted
[root@ceph1 ~]# ceph -s
cluster c0459c67-e2cd-45f7-b580-dec1afc9dea5
health HEALTH_WARN pool centaur-backups has too few pgs
monmap e3: 3 mons at
{a=192.168.6.101:6789/0,b=192.168.6.102:6789/0,c=192.168.6.103:6789/0},
election epoch 17684, quorum 0,1,2 a,b,c
mdsmap e28128: 1/1/1 up {0=a=up:active}, 1 up:standby
osdmap e7054: 30 osds: 30 up, 30 in
pgmap v16243076: 6048 pgs, 12 pools, 4437 GB data, 1181 kobjects
19775 GB used, 58826 GB / 78602 GB avail
6047 active+clean
1 active+clean+scrubbing+deep
client io 54836 kB/s rd, 699 op/s
Regards,
Gagan
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com