Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-11 Thread Chris Armstrong
On Behalf Of Daniel Hoffman > Sent: Friday, May 08, 2015 4:49 AM > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer > > Is there a way to shrink/merge PGs on a pool without removing it? > I have a pool with
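For reference, Hammer has no in-place shrink: pg_num on an existing pool can be raised but never lowered, so reducing PGs means migrating to a new pool. A rough sketch, with "data" as a placeholder pool name (note that rados cppool copies objects but not snapshots, and clients should be stopped first):

    # no in-place pg_num shrink in Hammer; migrate to a smaller pool instead
    ceph osd pool create data_new 16                 # new pool with fewer PGs
    rados cppool data data_new                       # copy objects across
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool rename data_new data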

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-08 Thread Somnath Roy
Longland; ceph-users@lists.ceph.com Subject: RE: [ceph-users] "too many PGs per OSD" in Hammer Nope, 16 seems way too low for performance. How many OSDs do you have? And how many pools are you planning to create? Thanks & Regards Somnath From: C
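Some context on the performance concern being raised here: a pool's data can only ever spread across as many OSDs as it has PG replicas, so a very low per-pool pg_num caps distribution on larger clusters. Illustrative arithmetic, not figures from the thread:

    # a 16-PG pool with 3 replicas has 48 PG replicas in total, so at most
    # 48 OSDs can ever hold that pool's data -- harmless on 3 OSDs, coarse
    # on a big cluster (numbers are illustrative assumptions)
    pg_num=16; replicas=3
    echo $(( pg_num * replicas ))   # 48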

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-08 Thread Somnath Roy
[mailto:carmstr...@engineyard.com] Sent: Friday, May 08, 2015 11:29 AM To: Somnath Roy Cc: Stuart Longland; ceph-users@lists.ceph.com Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer We actually have 3 OSDs by default, but some users run 5. Typically we're not looking

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-08 Thread Daniel Hoffman
'Chris Armstrong' > Cc: Stuart Longland; ceph-users@lists.ceph.com > Subject: RE: [ceph-users] "too many PGs per OSD" in Hammer > > Nope, 16 seems way too low for performance. > How many OSDs do you have? And how many pools are you planning to create?

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-08 Thread Chris Armstrong
Thanks & Regards > Somnath > From: Chris Armstrong [mailto:carmstr...@engineyard.com] > Sent: Thursday, May 07, 2015 11:34 PM > To: Somnath Roy > Cc: Stuart Longland; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] "too many PGs

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-07 Thread Somnath Roy
Longland; ceph-users@lists.ceph.com Subject: RE: [ceph-users] "too many PGs per OSD" in Hammer Nope, 16 seems way too low for performance. How many OSDs do you have? And how many pools are you planning to create? Thanks & Regards Somnath From: Chris Armstrong [mailto:carmstr...@engineyard.com] Sent

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-07 Thread Somnath Roy
ceph-users@lists.ceph.com Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer Thanks for the details, Somnath. So it definitely sounds like 128 PGs per pool is way too many? I lowered ours to 16 on a new deploy and the warning is gone. I'm not sure if this number is sufficient, though...
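If 16 does turn out to be too few, it can be grown later (never shrunk), and pgp_num must follow pg_num for data to actually rebalance. A sketch, with "rbd" as a placeholder pool name:

    ceph osd dump | grep pg_num       # show pg_num/pgp_num for every pool
    ceph osd pool set rbd pg_num 32   # raise the PG count...
    ceph osd pool set rbd pgp_num 32  # ...and let placement follow it
    ceph health detail                # confirm the warning stays clear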

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-07 Thread Chris Armstrong
> Thanks & Regards > Somnath > -----Original Message----- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Stuart Longland > Sent: Wednesday, May 06, 2015 3:48 PM > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] "too

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread Somnath Roy
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Stuart Longland Sent: Wednesday, May 06, 2015 3:48 PM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] "too many PGs per OSD" in Hammer On 07/05/15 07:53, Chris Armstrong wrote: > Thanks for the feedback. That language

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread Stuart Longland
On 07/05/15 07:53, Chris Armstrong wrote: > Thanks for the feedback. That language is confusing to me, then, since > the first paragraph seems to suggest using a pg_num of 128 in cases > where we have less than 5 OSDs, as we do here. > > The warning below that is: "As the number of OSDs increases, choosing the right value for pg_num becomes more important
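The resolution to the confusion: the docs' figure of 128 is per pool, while the warning threshold in Hammer (mon_pg_warn_max_per_osd, default 300) applies to the sum over all pools. Quick arithmetic, assuming 3-way replication on 3 OSDs:

    # with size=3 pools on 3 OSDs every PG has a replica on each OSD, so
    # each 128-PG pool adds 128 to the per-OSD total (replication assumed)
    echo $(( 128 * 3 / 3 ))   # one pool adds 128 PGs per OSD
    echo $(( 300 / 128 ))     # only 2 such pools fit under the 300 cap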

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread Chris Armstrong
Here's a little more information on our use case: https://github.com/deis/deis/issues/3638 On Wed, May 6, 2015 at 2:53 PM, Chris Armstrong wrote: > Thanks for the feedback. That language is confusing to me, then, since the > first paragraph seems to suggest using a pg_num of 128 in cases where we

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread Chris Armstrong
Thanks for the feedback. That language is confusing to me, then, since the first paragraph seems to suggest using a pg_num of 128 in cases where we have less than 5 OSDs, as we do here. The warning below that is: "As the number of OSDs increases, choosing the right value for pg_num becomes more important

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread ceph
Hi, You have too many PGs for too few OSDs. As the docs you linked say: "When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD so that you arrive at a reasonable total number of placement groups"
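That balancing act as arithmetic: the commonly cited target is on the order of 100 PG replicas per OSD across all pools combined, with each pool's pg_num rounded up to a power of two. A sketch; the pool count and replica size here are assumptions, not taken from this excerpt:

    # total PG budget ~= osds * 100 / replicas, split across pools,
    # rounded up to a power of two (all values assumed for illustration)
    osds=3; replicas=3; pools=12; target=100
    total=$(( osds * target / replicas ))        # 100 PGs across all pools
    per_pool=$(( (total + pools - 1) / pools ))  # ceiling division -> 9
    pg=1; while [ "$pg" -lt "$per_pool" ]; do pg=$(( pg * 2 )); done
    echo "$pg"                                   # 16 per pool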

[ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread Chris Armstrong
Hi folks, Calling on the collective Ceph knowledge here. Since upgrading to Hammer, we're now seeing:
health HEALTH_WARN too many PGs per OSD (1536 > max 300)
We have 3 OSDs, so we used a pg_num of 128 based on the suggestion here: http://ceph.com/docs/master/rados/operat
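Working backwards from the warning makes the mismatch clear: as noted above, the 300 ceiling is the mon_pg_warn_max_per_osd default, a check new around Hammer (which is why it only appeared after the upgrade), and 1536 is the per-OSD total across every pool. Assuming 3-way replication:

    # 1536 PGs/OSD * 3 OSDs / (128 PGs * 3 replicas per pool) = 12 pools,
    # i.e. the arithmetic implies a dozen 128-PG pools on this cluster
    echo $(( 1536 * 3 / (128 * 3) ))   # 12
    # if the count were intentional, the threshold itself can be raised
    # (this silences the check only; the per-PG overhead remains):
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1600'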