Hi list,
I am testing a Ceph cluster with deliberately impractical PG numbers for some
experiments.
However, when I watch the cluster status with ceph -w, the reported PG count is doubled.
From my ceph -w:
root@mon1:~# ceph -w
cluster 1c33bf75-e080-4a70-9fd8-860ff216f595
health HEALTH_WARN
too many PGs per OSD (514 > max 300)
noout,noscrub,nodeep-scrub,sortbitwise flag(s) set
monmap e1: 3 mons at
{mon1=172.20.1.2:6789/0,mon2=172.20.1.3:6789/0,mon3=172.20.1.4:6789/0}
election epoch 634, quorum 0,1,2 mon1,mon2,mon3
osdmap e48791: 420 osds: 420 up, 420 in
flags noout,noscrub,nodeep-scrub,sortbitwise
pgmap v892347: 25600 pgs, 4 pools, 14321 GB data, 3579 kobjects
23442 GB used, 3030 TB / 3053 TB avail
25600 active+clean
2016-12-01 17:26:20.358407 mon.0 [INF] pgmap v892346: 51200 pgs: 51200
active+clean; 16973 GB data, 24609 GB used, 4556 TB / 4580 TB avail
2016-12-01 17:26:22.877765 mon.0 [INF] pgmap v892347: 51200 pgs: 51200
active+clean; 16973 GB data, 24610 GB used, 4556 TB / 4580 TB avail
From my ceph osd pool ls detail:
pool 81 'vms' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 512 pgp_num 512 last_change 48503 flags
hashpspool,nodelete,nopgchange,nosizechange stripe_width 0
pool 82 'images' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 512 pgp_num 512 last_change 48507 flags
hashpspool,nodelete,nopgchange,nosizechange stripe_width 0
pool 85 'objects' erasure size 20 min_size 17 crush_ruleset 1 object_hash
rjenkins pg_num 8192 pgp_num 8192 last_change 48778 flags
hashpspool,nodelete,nopgchange,nosizechange stripe_width 4352
pool 86 'volumes' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 16384 pgp_num 16384 last_change 48786 flags
hashpspool,nodelete,nopgchange,nosizechange stripe_width 0
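For what it's worth, the pg_num values above do sum to the 25600 I expect, and the 514 figure in the health warning looks consistent with counting every replica and every erasure-coded shard per OSD (that is only my own back-of-the-envelope assumption about how the warning is computed, so please correct me if it works differently):

root@mon1:~# echo $(( 512 + 512 + 8192 + 16384 ))
25600
root@mon1:~# echo $(( (512*3 + 512*3 + 8192*20 + 16384*3) / 420 ))
514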
I believe I created 25600 PGs in total, but ceph -s reports either 25600 or 51200,
seemingly at random, while the streaming ceph -w output always reports 51200 in the
latest pgmap lines.
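Is there a more authoritative way to count the PGs than the pgmap summary? I was thinking of cross-checking by counting the PG ids directly, roughly like this (just a sketch on my side, assuming pgs_brief prints one line per PG starting with the pool id):

root@mon1:~# ceph pg dump pgs_brief 2>/dev/null | grep -c '^[0-9]\+\.'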
Is this some kind of bug, or am I doing something wrong? Feel free to let me
know if you need more information.
Thanks.
Sincerely,
Craig Chi