Ah, so I've been doing it wrong all this time (I thought we had to factor
the size multiplier into the PG count ourselves).
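
For anyone else following the math Michael lays out below, here's the rough
sanity check I ran in plain Python (the 36-OSD and 16-PG figures are just the
numbers from this thread, not recommendations):

    # Average PG copies per OSD that a single pool contributes:
    # pg_num * size is the total number of PG copies CRUSH has to place,
    # and dividing by the OSD count gives the average per OSD.
    num_osds = 36

    def pg_copies_per_osd(pg_num, size):
        return (pg_num * size) / float(num_osds)

    print(pg_copies_per_osd(16, 3))  # size=3: 48 copies -> ~1.33 per OSD
    print(pg_copies_per_osd(16, 2))  # size=2: 32 copies -> ~0.89 per OSD

Either way that pool works out to roughly one PG copy per OSD, which matches
the explanation below.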

Thanks!

On Wed, Jan 7, 2015 at 4:25 PM, Michael J. Kidd <michael.k...@inktank.com>
wrote:

> Hello Christopher,
>   Keep in mind that the PGs-per-OSD (and per-pool) calculations take into
> account the replica count (the pool's size= parameter). So, for example, if
> you're using the default of 3 replicas, 16 * 3 = 48 PG copies, which allows
> for at least one PG copy per OSD on that pool. Even with size=2, the 32 PG
> copies still come out to very nearly 1 per OSD. Given that it's such a
> low-utilization pool, this is still sufficient.
>
> Thanks,
> Michael J. Kidd
> Sr. Storage Consultant
> Inktank Professional Services
>  - by Red Hat
>
> On Wed, Jan 7, 2015 at 3:17 PM, Christopher O'Connell <c...@sendfaster.com>
> wrote:
>
>> Hi,
>>
>> I'm playing with this on a modest-sized Ceph cluster (36 x 6TB disks).
>> Based on it, small pools (such as .users) would have just 16 PGs. Is this
>> correct? I've historically given even these small pools at least as many
>> PGs as the next power of 2 above my number of OSDs (64 in this case).
>>
>> All the best,
>>
>> ~ Christopher
>>
>> On Wed, Jan 7, 2015 at 3:08 PM, Michael J. Kidd <michael.k...@inktank.com
>> > wrote:
>>
>>> Hello all,
>>>   Just a quick heads up that we now have a PG calculator to help
>>> determine the proper PG per pool numbers to achieve a target PG per OSD
>>> ratio.
>>>
>>> http://ceph.com/pgcalc
>>>
>>> Please check it out!  Happy to answer any questions, and always welcome
>>> any feedback on the tool / verbiage, etc...
>>>
>>> As an aside, we're also working to update the documentation to reflect
>>> the best practices.  See the ceph.com tracker issue for this at:
>>> http://tracker.ceph.com/issues/9867
>>>
>>> Thanks!
>>> Michael J. Kidd
>>> Sr. Storage Consultant
>>> Inktank Professional Services
>>>  - by Red Hat
>>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
