https://access.redhat.com/solutions/2457321

It says that increasing the PG count is a very intensive process and can affect cluster performance.

Our version is Luminous 12.2.2.
We are using an erasure-coded pool 'ecpool' with profile k=5 and m=3.
Its current PG number is 256 and it holds about 20 TB of data.

Should I increase it gradually, or set pg_num to 512 in one step?
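
If going gradually, a sketch of what that might look like (the intermediate
step size below is just an illustration, not a recommendation):

    ceph osd pool set ecpool pg_num 384
    ceph osd pool set ecpool pgp_num 384
    # wait until backfill/recovery finishes and 'ceph -s' reports HEALTH_OK,
    # then repeat with the next step until pg_num = pgp_num = 512
    ceph osd pool set ecpool pg_num 512
    ceph osd pool set ecpool pgp_num 512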




Karun Josy

On Tue, Jan 2, 2018 at 9:26 PM, Hans van den Bogert <hansbog...@gmail.com>
wrote:

> Please refer to the standard documentation as much as possible:
>
>     http://docs.ceph.com/docs/jewel/rados/operations/placement-groups/#set-the-number-of-placement-groups
>
> Han’s post is also incomplete, since you need to change ‘pgp_num’ as
> well.
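>
> In other words, after bumping pg_num you would also run something like this
> (a sketch; <pool> stands for the pool name):
>
>     ceph osd pool set <pool> pgp_num <new pg count>
>
> and you can check both values with:
>
>     ceph osd pool get <pool> pg_num
>     ceph osd pool get <pool> pgp_num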
>
> Regards,
>
> Hans
>
> On Jan 2, 2018, at 4:41 PM, Vladimir Prokofev <v...@prokofev.me> wrote:
>
> I increased the number of PGs in multiple pools in a production cluster on
> 12.2.2 recently - zero issues.
> Ceph claims that increasing pg_num and pgp_num are safe operations, which
> are essential for its ability to scale, and that sounds pretty reasonable
> to me. [1]
>
>
> [1] https://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/
>
> 2018-01-02 18:21 GMT+03:00 Karun Josy <karunjo...@gmail.com>:
>
>> Hi,
>>
>>  The initial PG count was not properly planned while setting up the cluster,
>> so now there are fewer than 50 PGs per OSD.
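>>
>> (For reference, the usual sizing rule of thumb in the Ceph docs is roughly
>> (number of OSDs x 100) / pool size - i.e. the replica count, or k+m for EC
>> pools - total PGs, rounded up to a power of two. With, say, a hypothetical
>> 40 OSDs and k+m = 8, that works out to 40 x 100 / 8 = 500, so 512.)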
>>
>> What are the best practices for increasing the PG number of a pool?
>> We have replicated pools as well as EC pools.
>>
>> Or is it better to create a new pool with a higher PG number?
>>
>>
>> Karun
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
