Thanks for your suggestions and help.

Andrei 

> From: "David Turner" <drakonst...@gmail.com>
> To: "Jack" <c...@jack.fr.eu.org>, "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Monday, 2 October, 2017 22:28:33
> Subject: Re: [ceph-users] decreasing number of PGs

> Adding more OSDs or deleting and recreating pools that have too many PGs are
> your only two options to reduce the number of PGs per OSD. PG merging is on
> the Ceph roadmap, but it is not a currently supported feature. You can
> alternatively raise the warning threshold, but it is still a problem you
> should address in your cluster.
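For reference, a rough sketch of both workarounds. The pool name and the
numeric values below are illustrative only, and the threshold option name
applies to pre-Luminous releases; check the documentation for your Ceph
version before running anything.

```shell
# Raise the per-OSD PG warning threshold (pre-Luminous option name;
# newer releases use mon_max_pg_per_osd instead -- verify for your release):
ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'

# Recreate a pool with fewer PGs ("mypool" and the PG counts are examples).
# Note: rados cppool does not preserve snapshots and is not suitable for
# RBD or CephFS pools; use rbd export/import for RBD images instead.
ceph osd pool create mypool-new 128 128
rados cppool mypool mypool-new
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
ceph osd pool rename mypool-new mypool
```

Copying a pool this way means a window where clients must not write to the
old pool, so plan for downtime on anything using it.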

> On Mon, Oct 2, 2017 at 4:02 PM Jack <c...@jack.fr.eu.org> wrote:

>> You cannot.

>> On 02/10/2017 21:43, Andrei Mikhailovsky wrote:
>>> Hello everyone,

>>> what is the safest way to decrease the number of PGs in the cluster.
>>> Currently, I have too many per OSD.

>>> Thanks






_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
