Hi! 

I am using Ceph 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable), 
and I have noticed the following: 

When expanding the whole cluster, I updated pg_num on my pools. The commands 
all succeeded, but the cluster status showed: 
  cluster: 
    id:     41ef913c-2351-4794-b9ac-dd340e3fbc75 
    health: HEALTH_WARN 
            3 pools have pg_num > pgp_num 

Then I updated pgp_num as well, and the warning cleared. 
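
For example, the sequence that triggers and then clears the warning looks 
roughly like this (the pool name and PG counts here are just placeholders): 

  ceph osd pool set mypool pg_num 256    # raises pg_num; HEALTH_WARN appears 
  ceph osd pool set mypool pgp_num 256   # raises pgp_num to match; warning clears 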

What confuses me is that when I first created the cluster, I used 
"ceph osd pool create pool_name pg_num", and pgp_num was automatically set 
equal to pg_num. 

But "ceph osd set pool pool_name pg_num" not. 

Why is it designed this way? 

Why isn't pgp_num updated automatically when pg_num is updated? 

Thanks 