Although the documentation is not great, and is open to interpretation,
there is a pg calculator here: http://ceph.com/pgcalc/.
With it you should be able to simulate your use case and generate pg_num
values based on your scenario.
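
For reference, here is a minimal sketch in Python of the rule of thumb that
calculator is built around. The ~100 PGs-per-OSD target and the power-of-two
rounding are the usual defaults, not values taken from this thread:

    # Rough sketch: budget ~100 PGs per OSD across the cluster, split that
    # budget between pools, and round each pool's pg_num up to a power of
    # two (power-of-two values are the usual recommendation). The 100
    # PGs/OSD target is an assumption, not something stated in this thread.
    def suggest_pg_num(num_osds, replica_count, num_pools, target_pgs_per_osd=100):
        """Suggest pg_num for one of `num_pools` equally used pools."""
        total_pgs = num_osds * target_pgs_per_osd / replica_count
        per_pool = total_pgs / num_pools
        pg_num = 1
        while pg_num < per_pool:
            pg_num *= 2  # round up to the next power of two
        return pg_num

    # The scenario discussed below: 10 OSDs, size = 3, 3 pools.
    print(suggest_pg_num(num_osds=10, replica_count=3, num_pools=3))  # -> 128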

On Mon, Jan 26, 2015 at 8:00 PM, Italo Santos <okd...@gmail.com> wrote:

>  Thanks for your answer.
>
> But what I’d like to understand is whether these numbers are on a per-pool
> basis or a per-cluster basis. If they are per cluster, then when deploying a
> cluster I’ll plan how many pools I’d like to have on it and their replica
> counts.
>
> Regards.
>
> *Italo Santos*
> http://italosantos.com.br/
>
> On Saturday, January 17, 2015 at 07:04, lidc...@redhat.com wrote:
>
>  Here are a few values commonly used:
>
>    - Fewer than 5 OSDs: set pg_num to 128
>    - Between 5 and 10 OSDs: set pg_num to 512
>    - Between 10 and 50 OSDs: set pg_num to 4096
>    - If you have more than 50 OSDs, you need to understand the tradeoffs
>    and how to calculate the pg_num value yourself
>
> But I think 10 OSDs is too small for a RADOS cluster.
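
A minimal lookup for the buckets quoted above, as a sketch; the list does not
say which side the boundaries at exactly 10 and 50 OSDs fall on, so the
inclusive upper bounds here are an assumption:

    # Map an OSD count to the recommended pg_num from the list above.
    # Boundary handling at exactly 10 and 50 OSDs is an assumption; the
    # original list does not spell it out.
    def pg_num_from_osd_count(num_osds):
        if num_osds < 5:
            return 128
        if num_osds <= 10:
            return 512
        if num_osds <= 50:
            return 4096
        raise ValueError("with more than 50 OSDs, work out pg_num for your workload")

    print(pg_num_from_osd_count(10))  # -> 512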
>
>
> *From:* Italo Santos <okd...@gmail.com>
> *Date:* 2015-01-17 05:00
> *To:* ceph-users <ceph-users@lists.ceph.com>
> *Subject:* [ceph-users] Total number PGs using multiple pools
> Hello,
>
> In the placement groups documentation
> <http://ceph.com/docs/giant/rados/operations/placement-groups/> we have
> the message below:
>
> “*When using multiple data pools for storing objects, you need to ensure
> that you balance the number of placement groups per pool with the number of
> placement groups per OSD so that you arrive at a reasonable total number of
> placement groups that provides reasonably low variance per OSD without
> taxing system resources or making the peering process too slow.*”
>
> Does this mean that, if I have a cluster with 10 OSDs and 3 pools with
> size = 3, each pool can have only ~111 PGs?
>
> Ex.: (100 * 10 OSDs) / 3 replicas ≈ 333 PGs total; 333 / 3 pools = 111 PGs per pool
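
A quick way to sanity-check that arithmetic is to look at the resulting load
per OSD, which is what the quoted documentation is really about. The pool
layout below mirrors the example in this mail; ~100 PG replicas per OSD is the
commonly cited target, not something stated here:

    # Three pools of 111 PGs with size = 3 on 10 OSDs, as in the example above.
    pools = [{"pg_num": 111, "size": 3}] * 3
    num_osds = 10

    # Total PG replicas spread across the OSDs.
    pg_replicas = sum(p["pg_num"] * p["size"] for p in pools)
    print(pg_replicas / num_osds)  # -> 99.9, i.e. roughly 100 PG replicas per OSD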
>
> I don’t know if my reasoning is right… I’d be glad for any help.
>
> Regards.
>
> *Italo Santos*
> http://italosantos.com.br/
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
