Please refer to the standard documentation as much as possible:

http://docs.ceph.com/docs/jewel/rados/operations/placement-groups/#set-the-number-of-placement-groups

Sébastien Han's post is also incomplete, since you need to change 'pgp_num' as well.
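
As a rough sketch (the pool name "rbd" and the target of 256 PGs are just
placeholders here, adjust them for your own pools), the sequence looks like:

    # check the current values first
    ceph osd pool get rbd pg_num
    ceph osd pool get rbd pgp_num

    # raise pg_num first; once the new PGs have been created, raise
    # pgp_num to the same value so the data actually rebalances
    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256

Keep in mind that pg_num can only be increased, never decreased, so pick the
target value carefully (a power of two is the usual recommendation).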

Regards,

Hans

> On Jan 2, 2018, at 4:41 PM, Vladimir Prokofev <v...@prokofev.me> wrote:
> 
> Increased the number of PGs in multiple pools on a production cluster on 12.2.2 
> recently - zero issues.
> Ceph claims that increasing pg_num and pgp_num are safe operations, which are 
> essential for its ability to scale, and that sounds pretty reasonable to me. [1]
> 
> 
> [1] 
> https://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/
> 
> 2018-01-02 18:21 GMT+03:00 Karun Josy <karunjo...@gmail.com>:
> Hi,
> 
> The initial PG count was not properly planned while setting up the cluster, so 
> now there are fewer than 50 PGs per OSD.
> 
> What are the best practices to increase the PG count of a pool?
> We have replicated pools as well as EC pools.
> 
> Or is it better to create a new pool with a higher PG count?
> 
> 
> Karun 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
