Thanks Greg.  I appreciate the advice, and the very quick replies too :)

On 18 July 2014 23:35, Gregory Farnum <g...@inktank.com> wrote:

> On Fri, Jul 18, 2014 at 3:29 PM, James Eckersall
> <james.eckers...@fasthosts.com> wrote:
> > Thanks Greg.
> >
> > Can I suggest that the documentation make this much clearer?  It might
> > just be me, but I couldn't glean this from the docs, so I expect I'm not
> > the only one.
> >
> > Also, can I clarify how many PGs you would suggest is a decent number
> > for my setup?
> >
> > 80 OSDs across 4 nodes.  5 pools.
> > I'm averaging 38 PGs per OSD, and from the online docs and older posts
> > on this list I think I should be aiming for between 50 and 100?
> >
> > I'm hoping that having only 38 PGs per OSD is the cause of the uneven
> > distribution, and that it can be fairly easily rectified.
>
> That seems likely. The general formula to get a baseline is
> (100 * OSDs / replication count) when using one pool. It's also generally
> better to err on the side of more PGs than fewer; they have a cost, but
> OSDs can usually scale into the high thousands of PGs, so I personally
> prefer people to go a little higher than that baseline. You'll also want to
> adjust things so that the pools with more data get more PGs than the
> ones with much less, or they won't do you much good.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
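For anyone else working through the same numbers, here is a minimal sketch of the baseline calculation Greg describes. The replication size of 3, the rounding up to a power of two, and the example pool names/weights are assumptions on my part, not from his reply.

import math

def baseline_total_pgs(num_osds, replication_size, target_pgs_per_osd=100):
    """Greg's baseline: (100 * OSDs) / replication count, for a single pool."""
    raw = target_pgs_per_osd * num_osds / replication_size
    # Rounding up to the next power of two is a common convention
    # (an assumption here, not something stated in the thread).
    return 2 ** math.ceil(math.log2(raw))

def split_by_data_weight(total_pgs, pool_weights):
    """Give data-heavy pools proportionally more PGs, as Greg suggests."""
    weight_sum = sum(pool_weights.values())
    return {pool: max(1, round(total_pgs * weight / weight_sum))
            for pool, weight in pool_weights.items()}

total = baseline_total_pgs(num_osds=80, replication_size=3)
print(total)  # 100 * 80 / 3 ~= 2667, rounded up to 4096
# Hypothetical split across the 5 pools mentioned above, weighted by expected data.
print(split_by_data_weight(total, {"rbd": 0.60, "images": 0.20, "volumes": 0.12,
                                   "backups": 0.06, "scratch": 0.02}))

With 80 OSDs and 3x replication that works out to roughly 2667 PGs before rounding, i.e. about 100 PG copies per OSD rather than the 38 I'm seeing now.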