Hi JC,
The cluster already has 1024 PGs on only 15 OSDs, which is already above
what the (100 x #OSDs)/size formula suggests. How large should I make it?
# ceph osd dump | grep Ray
pool 17 'Ray' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 7785 owner 0 f
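For reference, plugging the numbers from the dump above into that formula
(15 OSDs, size 3): (100 x 15) / 3 = 500, or 512 rounded up to the next
power of two, so 1024 is already roughly double that target.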
Hi Eric,
Increase the number of PGs in your pool with
Step 1: ceph osd pool set {pool-name} pg_num {new-pg-num}
Step 2: ceph osd pool set {pool-name} pgp_num {new-pg-num}
You can check the number of PGs in your pool with ceph osd dump | grep ^pool
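For example, with your pool name and a purely illustrative target of 2048
(pick the actual value from the (100 x #OSDs)/size formula for your cluster):
# ceph osd pool set Ray pg_num 2048
# ceph osd pool set Ray pgp_num 2048
# ceph osd dump | grep ^pool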
See the documentation: http://ceph.com/docs/master/rados/operations/pools/
JC
On Jun 11, 20
Hi,
I am seeing the following warning on one of my test clusters:
# ceph health detail
HEALTH_WARN pool Ray has too few pgs
pool Ray objects per pg (24) is more than 12 times cluster average (2)
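(The per-pool object counts behind that ratio can be cross-checked with
# ceph df detail
or # rados df, though the exact output columns vary by release.)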
This is a reported issue and is set to "Won't Fix" at:
http://tracker.ceph.com/issues/8103
My test