> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *Daniel Hoffman
> *Sent:* Friday, May 08, 2015 4:49 AM
> *Cc:* ceph-users@lists.ceph.com
>
> *Subject:* Re: [ceph-users] "too many PGs per OSD" in Hammer
>
>
>
> Is there a way to shrink/merge PG's on a pool without removing it?
>
> I have a pool with
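
For reference on the shrink/merge question: Hammer can increase pg_num on an
existing pool but cannot decrease it, so shrinking generally means copying the
data into a new pool created with the lower pg_num and swapping the names. A
rough sketch only, with hypothetical pool names and the usual caveats about
snapshots and in-flight writes:

    ceph osd pool create mypool-small 64 64        # new pool with the smaller pg_num
    rados cppool mypool mypool-small               # copy objects across (quiesce clients first)
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph osd pool rename mypool-small mypool
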

From: Somnath Roy
To: Chris Armstrong
Cc: Stuart Longland; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] "too many PGs per OSD" in Hammer

Nope, 16 seems way too low for performance.
How many OSDs do you have? And how many pools are you planning to create?

Thanks & Regards
Somnath

From: Chris Armstrong [mailto:carmstr...@engineyard.com]
Sent: Friday, May 08, 2015 11:29 AM
To: Somnath Roy
Cc: Stuart Longland; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer

We actually have 3 OSDs by default, but some users run 5. Typically we're not
looking

From: Chris Armstrong [mailto:carmstr...@engineyard.com]
Sent: Thursday, May 07, 2015 11:34 PM
To: Somnath Roy
Cc: Stuart Longland; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer

Thanks for the details, Somnath.
So it definitely sounds like 128 pgs per pool is way too many? I lowered ours
to 16 on a new deploy and the warning is gone. I'm not sure if this number is
sufficient, though...
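
A rough check on those numbers, assuming 3x replication on 3 OSDs (so every OSD
holds a copy of every PG) and roughly a dozen pools, which is what the original
1536 figure implies: 12 pools x 16 PGs = 192 PGs per OSD, under the 300-per-OSD
warning threshold, whereas 12 x 128 = 1536 is what tripped it. If 16 does prove
too low, pg_num can still be raised in place later, shown here for a
hypothetical pool named data:

    ceph osd pool set data pg_num 64
    ceph osd pool set data pgp_num 64

though it cannot be lowered again.
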

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Stuart Longland
Sent: Wednesday, May 06, 2015 3:48 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer

On 07/05/15 07:53, Chris Armstrong wrote:
> Thanks for the feedback. That language is confusing to me, then, since
> the first paragraph seems to suggest using a pg_num of 128 in cases
> where we have less than 5 OSDs, as we do here.
>
> The warning below that is: "As the number of OSDs increases, choosing the
> right value for pg_num becomes more important ..."

Here's a little more information on our use case:
https://github.com/deis/deis/issues/3638

On Wed, May 6, 2015 at 2:53 PM, Chris Armstrong wrote:

Thanks for the feedback. That language is confusing to me, then, since the
first paragraph seems to suggest using a pg_num of 128 in cases where we
have less than 5 OSDs, as we do here.
The warning below that is: "As the number of OSDs increases, choosing the
right value for pg_num becomes more important ..."

Hi,
You have too many PGs for too few OSDs.
As the docs you linked said:

When using multiple data pools for storing objects, you need to ensure
that you balance the number of placement groups per pool with the number
of placement groups per OSD so that you arrive at a reasonable total
number of placement groups ...
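
A practical way to see where a cluster stands on that balance is to list the
per-pool pg_num values and sum them. A small sketch, using the stock rbd pool
as the example:

    ceph osd dump | grep pg_num       # shows pg_num / pgp_num for every pool
    ceph osd pool get rbd pg_num      # query a single pool

For replicated pools each PG lands on "size" OSDs, so a rough estimate of the
number the warning checks is:

    PGs per OSD ~= sum over pools of (pg_num x replica size) / number of OSDs
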
Hi folks,
Calling on the collective Ceph knowledge here. Since upgrading to Hammer,
we're now seeing:

    health HEALTH_WARN
           too many PGs per OSD (1536 > max 300)

We have 3 OSDs, so we used a pg_num of 128 based on the suggestion
here: http://ceph.com/docs/master/rados/operat
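
For context on where 1536 comes from: with 3 OSDs and the default 3x
replication, every PG has a copy on every OSD, so the per-OSD count is simply
the total PG count across all pools. Assuming roughly a dozen pools at pg_num
128 each (the pool count is an inference from the numbers, not stated above):

    12 pools x 128 PGs x 3 replicas / 3 OSDs = 1536 PGs per OSD

which is what exceeds the monitor's warning threshold of 300 (the
mon_pg_warn_max_per_osd setting in Hammer).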